1
Kang SK, Kim D, Shin SA, Kim YK, Choi H, Lee JS. Accurate Automated Quantification of Dopamine Transporter PET Without MRI Using Deep Learning-based Spatial Normalization. Nucl Med Mol Imaging 2024;58:354-363. PMID: 39308485; PMCID: PMC11415331; DOI: 10.1007/s13139-024-00869-y.
Abstract
Purpose: Dopamine transporter imaging is crucial for assessing presynaptic dopaminergic neurons in Parkinson's disease (PD) and related parkinsonian disorders. While 18F-FP-CIT PET offers advantages in spatial resolution and sensitivity over 123I-β-CIT or 123I-FP-CIT SPECT imaging, accurate quantification remains essential. This study presents a novel automatic quantification method for 18F-FP-CIT PET images, utilizing an artificial intelligence (AI)-based robust PET spatial normalization (SN) technology that eliminates the need for anatomical images. Methods: The proposed SN engine consists of convolutional neural networks trained on 213 paired datasets of 18F-FP-CIT PET and 3D structural MRI; only PET images are required as input during inference. A cyclic training strategy enables backward deformation from template to individual space. An additional 89 paired 18F-FP-CIT PET and 3D MRI datasets were used to evaluate the accuracy of striatal activity quantification, with MRI-based PET quantification using the FIRST software conducted for comparison. The proposed method was further validated on 135 external datasets. Results: The proposed AI-based method successfully generated spatially normalized 18F-FP-CIT PET images, obviating the need for CT or MRI. Striatal PET activity determined by the proposed PET-only method and by MRI-based quantification using the FIRST algorithm was highly correlated, with R² ranging from 0.96 to 0.99 and slopes from 0.98 to 1.02 across both internal and external datasets. Conclusion: Our AI-based SN method enables accurate automatic quantification of striatal activity in 18F-FP-CIT brain PET images without MRI support. This approach holds promise for evaluating presynaptic dopaminergic function in PD and related parkinsonian disorders.
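Once a PET volume is in template space, striatal quantification reduces to mean-uptake ratios over predefined ROIs. A minimal sketch of a specific binding ratio (SBR) computation on a toy volume follows; the function name, masks, and reference region are illustrative and not the paper's exact pipeline:

```python
import numpy as np

def specific_binding_ratio(pet, striatal_mask, reference_mask):
    """SBR = (mean striatal uptake - mean reference uptake) / mean reference uptake."""
    striatal = pet[striatal_mask].mean()
    reference = pet[reference_mask].mean()
    return (striatal - reference) / reference

# Toy 3D volume: uniform background of 1.0 with a "striatum" at uptake 4.0.
pet = np.ones((16, 16, 16))
striatal_mask = np.zeros(pet.shape, dtype=bool)
striatal_mask[6:10, 6:10, 6:10] = True
pet[striatal_mask] = 4.0
reference_mask = np.zeros(pet.shape, dtype=bool)
reference_mask[0:4, 0:4, 0:4] = True  # stand-in for a non-specific reference region

sbr = specific_binding_ratio(pet, striatal_mask, reference_mask)  # (4-1)/1 = 3.0
```

With template-space ROI masks fixed in advance, the same two mask arrays can be reused for every spatially normalized scan, which is what makes the PET-only workflow fully automatic.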
Affiliation(s)
- Seung Kwan Kang
  - Brightonix Imaging Inc., Seongsu-Yeok SK V1 Tower, 25 Yeonmujang 5Ga-Gil, Seongdong-Gu, Seoul, 04782 Korea
  - Institute of Radiation Medicine, Medical Research Center, Seoul National University College of Medicine, Seoul, Korea
- Daewoon Kim
  - Interdisciplinary Program of Bioengineering, Seoul National University, Seoul, Korea
  - Artificial Intelligence Institute, Seoul National University, Seoul, Korea
- Seong A. Shin
  - Brightonix Imaging Inc., Seongsu-Yeok SK V1 Tower, 25 Yeonmujang 5Ga-Gil, Seongdong-Gu, Seoul, 04782 Korea
- Yu Kyeong Kim
  - Department of Nuclear Medicine, Seoul National University College of Medicine, 103 Daehak-Ro, Jongno-Gu, Seoul, 03080 Korea
  - Department of Nuclear Medicine, Seoul Metropolitan Government Seoul National University Boramae Medical Center, Seoul, Korea
- Hongyoon Choi
  - Institute of Radiation Medicine, Medical Research Center, Seoul National University College of Medicine, Seoul, Korea
  - Department of Nuclear Medicine, Seoul National University College of Medicine, 103 Daehak-Ro, Jongno-Gu, Seoul, 03080 Korea
- Jae Sung Lee
  - Brightonix Imaging Inc., Seongsu-Yeok SK V1 Tower, 25 Yeonmujang 5Ga-Gil, Seongdong-Gu, Seoul, 04782 Korea
  - Institute of Radiation Medicine, Medical Research Center, Seoul National University College of Medicine, Seoul, Korea
  - Interdisciplinary Program of Bioengineering, Seoul National University, Seoul, Korea
  - Artificial Intelligence Institute, Seoul National University, Seoul, Korea
  - Department of Nuclear Medicine, Seoul National University College of Medicine, 103 Daehak-Ro, Jongno-Gu, Seoul, 03080 Korea
2
Rao D, Singh R, Koteshwara P, Vijayananda J. Exploring the Impact of Model Complexity on Laryngeal Cancer Detection. Indian J Otolaryngol Head Neck Surg 2024;76:4036-4042. PMID: 39376269; PMCID: PMC11455748; DOI: 10.1007/s12070-024-04776-8.
Abstract
Background: Laryngeal cancer accounts for a third of all head and neck malignancies, necessitating timely detection for effective treatment and enhanced patient outcomes. Machine learning shows promise in medical diagnostics, but the impact of model complexity on diagnostic efficacy in laryngeal cancer detection remains ambiguous. Methods: We examine the relationship between model sophistication and diagnostic efficacy by evaluating three approaches of increasing complexity: logistic regression, a small neural network with 4 layers of neurons, and a 50-layer convolutional neural network, assessing their efficacy for laryngeal cancer detection on computed tomography images. Results: Logistic regression achieved 82.5% accuracy. The 4-layer neural network reached 87.2% accuracy, while ResNet-50, a deep learning architecture, achieved the highest accuracy at 92.6%; its deep learning capabilities excelled at discerning fine-grained CT image features. Conclusion: Our study highlights the trade-offs involved in selecting a laryngeal cancer detection model. Logistic regression is interpretable but may struggle with complex patterns; the 4-layer network balances complexity and accuracy; ResNet-50 excels in image classification but demands more computational resources. This research advances understanding of the effect that machine learning model complexity has on learning laryngeal tumor features in contrast CT images for disease prediction.
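The first two tiers of such a comparison can be sketched with off-the-shelf scikit-learn estimators. This toy version uses synthetic features rather than CT images, the layer widths are invented, and the ResNet-50 tier is omitted since it needs real image data:

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

# Synthetic stand-in for image-derived features (not the study's data).
X, y = make_classification(n_samples=600, n_features=30, n_informative=10,
                           random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

models = {
    "logistic_regression": LogisticRegression(max_iter=1000),
    # Four hidden layers as a stand-in for the paper's small NN.
    "small_nn": MLPClassifier(hidden_layer_sizes=(32, 16, 8, 4),
                              max_iter=2000, random_state=0),
}
accuracy = {name: m.fit(X_tr, y_tr).score(X_te, y_te)
            for name, m in models.items()}
```

Holding the train/test split fixed while swapping estimators is the essence of a complexity ablation: any accuracy difference is then attributable to model capacity rather than data.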
Affiliation(s)
- Divya Rao
  - Department of Information and Communication Technology, Manipal Institute of Technology, Manipal Academy of Higher Education, Manipal, 576104 India
- Rohit Singh
  - Department of Otorhinolaryngology, Kasturba Medical College, Manipal Academy of Higher Education, Manipal, 576104 India
- Prakashini Koteshwara
  - Department of Radiodiagnosis and Imaging, Kasturba Medical College, Manipal Academy of Higher Education, Manipal, 576104 India
- J. Vijayananda
  - Data Science and Artificial Intelligence, Philips, Bangalore, 560045 India
3
Lee JS, Lee MS. Advancements in Positron Emission Tomography Detectors: From Silicon Photomultiplier Technology to Artificial Intelligence Applications. PET Clin 2024;19:1-24. PMID: 37802675; DOI: 10.1016/j.cpet.2023.06.003.
Abstract
This review article focuses on PET detector technology, the most crucial factor in determining PET image quality. The article highlights the desired properties of PET detectors, including high detection efficiency, spatial resolution, energy resolution, and timing resolution, and discusses recent advancements that improve these properties: silicon photomultiplier technology, progress in depth-of-interaction and time-of-flight PET detectors, and the use of artificial intelligence for detector development. The article thus provides an overview of PET detector technology and of recent advancements that can significantly enhance PET image quality.
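Timing resolution matters because it bounds how precisely a time-of-flight (TOF) detector can localize the annihilation point along the line of response, via Δx = c·Δt/2. A quick back-of-the-envelope helper (not from the review itself):

```python
# Positional uncertainty along the line of response implied by the
# coincidence timing resolution: dx = c * dt / 2.
C = 299_792_458.0  # speed of light, m/s

def tof_localization_fwhm(timing_resolution_ps):
    """FWHM of the TOF position estimate, in metres, for a given
    coincidence timing resolution in picoseconds."""
    dt = timing_resolution_ps * 1e-12
    return C * dt / 2.0

# A 200 ps coincidence timing resolution localizes the event to ~3 cm FWHM.
fwhm_200ps = tof_localization_fwhm(200.0)
```

This is why sub-200 ps silicon photomultiplier detectors are pursued: halving Δt halves the positional uncertainty and directly improves the effective sensitivity of TOF reconstruction.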
Affiliation(s)
- Jae Sung Lee
  - Department of Nuclear Medicine, Seoul National University College of Medicine, Seoul 03080, South Korea
  - Brightonix Imaging Inc., Seoul 04782, South Korea
- Min Sun Lee
  - Environmental Radioactivity Assessment Team, Nuclear Emergency & Environmental Protection Division, Korea Atomic Energy Research Institute, Daejeon 34057, South Korea
4
Xu ZH, Fan DG, Huang JQ, Wang JW, Wang Y, Li YZ. Computer-Aided Diagnosis of Laryngeal Cancer Based on Deep Learning with Laryngoscopic Images. Diagnostics (Basel) 2023;13:3669. PMID: 38132254; PMCID: PMC10743023; DOI: 10.3390/diagnostics13243669.
Abstract
Laryngeal cancer poses a significant global health burden, with late-stage diagnoses contributing to reduced survival rates. This study explores the application of deep convolutional neural networks (DCNNs), specifically the DenseNet201 architecture, to the computer-aided diagnosis of laryngeal cancer using laryngoscopic images. Our dataset comprised images from two medical centers, including benign and malignant cases, and was divided into training, internal validation, and external validation groups. We compared the performance of DenseNet201 with that of other commonly used DCNN models and with clinical assessments by experienced clinicians. DenseNet201 exhibited outstanding performance, with an accuracy of 98.5% in the training cohort, 92.0% in the internal validation cohort, and 86.3% in the external validation cohort. The area under the curve (AUC) values consistently exceeded 92%, signifying robust discriminatory ability. Notably, DenseNet201 achieved high sensitivity (98.9%) and specificity (98.2%) in the training cohort, ensuring accurate detection of both positive and negative cases. In contrast, other DCNN models displayed varying degrees of performance degradation in the external validation cohort, indicating the superiority of DenseNet201. Moreover, DenseNet201's performance was comparable to that of an experienced clinician (Clinician A) and surpassed that of another (Clinician B), particularly in the external validation cohort. Statistical analysis, including the DeLong test, confirmed the significance of these performance differences. Our study demonstrates that DenseNet201 is a highly accurate and reliable tool for the computer-aided diagnosis of laryngeal cancer based on laryngoscopic images. The findings underscore the potential of deep learning as a complementary tool for clinicians and the importance of incorporating advanced technology to improve diagnostic accuracy and patient care in laryngeal cancer diagnosis. Future work will involve expanding the dataset and further optimizing the deep learning model.
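The sensitivity and specificity figures quoted above follow directly from confusion-matrix counts. A small helper makes the definitions concrete; the counts below are illustrative, chosen only to reproduce the reported rates, not taken from the study:

```python
def sensitivity_specificity(tp, fn, tn, fp):
    """Sensitivity = TP / (TP + FN); specificity = TN / (TN + FP)."""
    return tp / (tp + fn), tn / (tn + fp)

# E.g. 98.9% sensitivity and 98.2% specificity would correspond to
# 989/1000 malignant images flagged and 982/1000 benign images cleared.
sens, spec = sensitivity_specificity(tp=989, fn=11, tn=982, fp=18)
```

Reporting both rates matters in screening: accuracy alone can look high on an imbalanced benign/malignant split even when one of the two error rates is clinically unacceptable.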
Affiliation(s)
- Zhi-Hui Xu
  - Department of Otolaryngology, The Second Affiliated Hospital, Fujian Medical University, 950 Donghai Street, Fengze District, Quanzhou 362000, China
- Da-Ge Fan
  - Department of Pathology, The Second Affiliated Hospital, Fujian Medical University, 950 Donghai Street, Fengze District, Quanzhou 362000, China
- Jian-Qiang Huang
  - Department of Otolaryngology, The Second Affiliated Hospital, Fujian Medical University, 950 Donghai Street, Fengze District, Quanzhou 362000, China
- Jia-Wei Wang
  - Department of Emergency, The Second Affiliated Hospital, Fujian Medical University, 950 Donghai Street, Fengze District, Quanzhou 362000, China
- Yi Wang
  - CT/MRI Department, The Second Affiliated Hospital, Fujian Medical University, 950 Donghai Street, Fengze District, Quanzhou 362000, China
- Yuan-Zhe Li
  - CT/MRI Department, The Second Affiliated Hospital, Fujian Medical University, 950 Donghai Street, Fengze District, Quanzhou 362000, China
5
Li L, Tan J, Yu L, Li C, Nan H, Zheng S. LSAM: L2-norm self-attention and latent space feature interaction for automatic 3D multi-modal head and neck tumor segmentation. Phys Med Biol 2023;68:225004. PMID: 37852283; DOI: 10.1088/1361-6560/ad04a8.
Abstract
Objective. Head and neck (H&N) cancers are prevalent globally, and early, accurate detection is crucial for timely and effective treatment. However, segmentation of H&N tumors is challenging because tumors and the surrounding tissues have similar density in CT images. Positron emission tomography (PET) images, by contrast, reflect the metabolic activity of tissue and can distinguish lesion regions from normal tissue, but they are limited by low spatial resolution. To fully leverage the complementary information from PET and CT images, we propose a novel multi-modal segmentation method specifically designed for H&N tumors. Approach. The proposed multi-modal tumor segmentation network (LSAM) consists of two key learning modules, L2-norm self-attention and latent space feature interaction, which exploit the high sensitivity of PET images and the anatomical information of CT images. These two modules are built into a 3D segmentation network based on a U-shaped structure. The segmentation method integrates complementary features from the different modalities at multiple scales, thereby improving feature interaction between modalities. Main results. We evaluated the proposed method on the public HECKTOR PET-CT dataset; the experimental results demonstrate that it outperforms existing H&N tumor segmentation methods on key evaluation metrics, including DSC (0.8457), Jaccard (0.7756), RVD (0.0938), and HD95 (11.75). Significance. The L2-norm-based self-attention mechanism offers scalability and reduces the impact of outliers on model performance, and the latent-space multi-scale feature interaction exploits the encoder-phase learning process to achieve the best complementary effects among the modalities.
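A generic NumPy sketch of attention with L2-normalized queries and keys follows: normalizing bounds the similarity scores to cosine values in [-1, 1], which is one plausible reading of how an L2-norm self-attention damps outliers. The paper's exact LSAM formulation may well differ:

```python
import numpy as np

def l2norm_attention(Q, K, V, eps=1e-8):
    """Attention where raw dot products are replaced by cosine similarities
    between L2-normalized queries and keys, bounding the score range."""
    Qn = Q / (np.linalg.norm(Q, axis=-1, keepdims=True) + eps)
    Kn = K / (np.linalg.norm(K, axis=-1, keepdims=True) + eps)
    scores = Qn @ Kn.T                                        # in [-1, 1]
    weights = np.exp(scores) / np.exp(scores).sum(axis=-1, keepdims=True)
    return weights @ V

rng = np.random.default_rng(0)
Q = rng.normal(size=(4, 8))
K = rng.normal(size=(4, 8))
V = rng.normal(size=(4, 8))
out = l2norm_attention(Q, K, V)   # one attended vector per query row
```

Because the pre-softmax scores are bounded, a single extreme feature vector cannot dominate the attention weights the way an unnormalized dot product can, which is the robustness argument the abstract alludes to.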
Affiliation(s)
- Laquan Li
  - College of Computer Science and Technology, Chongqing University of Posts and Telecommunications, Chongqing, People's Republic of China
  - School of Science, Chongqing University of Posts and Telecommunications, Chongqing, People's Republic of China
- Jiaxin Tan
  - College of Computer Science and Technology, Chongqing University of Posts and Telecommunications, Chongqing, People's Republic of China
- Lei Yu
  - Emergency Department, The Second Affiliated Hospital of Chongqing Medical University, Chongqing, People's Republic of China
- Chunwen Li
  - Emergency Department, The Second Affiliated Hospital of Chongqing Medical University, Chongqing, People's Republic of China
- Hai Nan
  - College of Computer Science and Engineering, Chongqing University of Technology, Chongqing, People's Republic of China
- Shenhai Zheng
  - College of Computer Science and Technology, Chongqing University of Posts and Telecommunications, Chongqing, People's Republic of China
6
Zhong NN, Wang HQ, Huang XY, Li ZZ, Cao LM, Huo FY, Liu B, Bu LL. Enhancing head and neck tumor management with artificial intelligence: Integration and perspectives. Semin Cancer Biol 2023;95:52-74. PMID: 37473825; DOI: 10.1016/j.semcancer.2023.07.002.
Abstract
Head and neck tumors (HNTs) constitute a multifaceted ensemble of pathologies that primarily involve regions such as the oral cavity, pharynx, and nasal cavity. The intricate anatomical structure of these regions poses considerable challenges to efficacious treatment strategies. Despite the availability of myriad treatment modalities, the overall therapeutic efficacy for HNTs continues to remain subdued. In recent years, the deployment of artificial intelligence (AI) in healthcare practices has garnered noteworthy attention. AI modalities, inclusive of machine learning (ML), neural networks (NNs), and deep learning (DL), when amalgamated into the holistic management of HNTs, promise to augment the precision, safety, and efficacy of treatment regimens. The integration of AI within HNT management is intricately intertwined with domains such as medical imaging, bioinformatics, and medical robotics. This article intends to scrutinize the cutting-edge advancements and prospective applications of AI in the realm of HNTs, elucidating AI's indispensable role in prevention, diagnosis, treatment, prognostication, research, and inter-sectoral integration. The overarching objective is to stimulate scholarly discourse and invigorate insights among medical practitioners and researchers to propel further exploration, thereby facilitating superior therapeutic alternatives for patients.
Affiliation(s)
- Nian-Nian Zhong
  - State Key Laboratory of Oral & Maxillofacial Reconstruction and Regeneration, Key Laboratory of Oral Biomedicine Ministry of Education, Hubei Key Laboratory of Stomatology, School & Hospital of Stomatology, Wuhan University, Wuhan 430079, China
- Han-Qi Wang
  - State Key Laboratory of Oral & Maxillofacial Reconstruction and Regeneration, Key Laboratory of Oral Biomedicine Ministry of Education, Hubei Key Laboratory of Stomatology, School & Hospital of Stomatology, Wuhan University, Wuhan 430079, China
- Xin-Yue Huang
  - State Key Laboratory of Oral & Maxillofacial Reconstruction and Regeneration, Key Laboratory of Oral Biomedicine Ministry of Education, Hubei Key Laboratory of Stomatology, School & Hospital of Stomatology, Wuhan University, Wuhan 430079, China
- Zi-Zhan Li
  - State Key Laboratory of Oral & Maxillofacial Reconstruction and Regeneration, Key Laboratory of Oral Biomedicine Ministry of Education, Hubei Key Laboratory of Stomatology, School & Hospital of Stomatology, Wuhan University, Wuhan 430079, China
- Lei-Ming Cao
  - State Key Laboratory of Oral & Maxillofacial Reconstruction and Regeneration, Key Laboratory of Oral Biomedicine Ministry of Education, Hubei Key Laboratory of Stomatology, School & Hospital of Stomatology, Wuhan University, Wuhan 430079, China
- Fang-Yi Huo
  - State Key Laboratory of Oral & Maxillofacial Reconstruction and Regeneration, Key Laboratory of Oral Biomedicine Ministry of Education, Hubei Key Laboratory of Stomatology, School & Hospital of Stomatology, Wuhan University, Wuhan 430079, China
- Bing Liu
  - State Key Laboratory of Oral & Maxillofacial Reconstruction and Regeneration, Key Laboratory of Oral Biomedicine Ministry of Education, Hubei Key Laboratory of Stomatology, School & Hospital of Stomatology, Wuhan University, Wuhan 430079, China
  - Department of Oral & Maxillofacial - Head Neck Oncology, School & Hospital of Stomatology, Wuhan University, Wuhan 430079, China
- Lin-Lin Bu
  - State Key Laboratory of Oral & Maxillofacial Reconstruction and Regeneration, Key Laboratory of Oral Biomedicine Ministry of Education, Hubei Key Laboratory of Stomatology, School & Hospital of Stomatology, Wuhan University, Wuhan 430079, China
  - Department of Oral & Maxillofacial - Head Neck Oncology, School & Hospital of Stomatology, Wuhan University, Wuhan 430079, China
7
Rao D, Koteshwara P, Singh R, Jagannatha V. Exploring Radiomics for Classification of Supraglottic Tumors: A Pilot Study in a Tertiary Care Center. Indian J Otolaryngol Head Neck Surg 2023;75:433-439. PMID: 37275092; PMCID: PMC10235219; DOI: 10.1007/s12070-022-03239-2.
Abstract
Accurate classification of laryngeal cancer is a critical step toward diagnosis and appropriate treatment. Radiomics is a rapidly advancing field in medical image processing that uses various algorithms to extract many quantitative features from radiological images. The high-dimensional features extracted tend to cause overfitting and increase the complexity of the classification model, so feature selection plays an integral part in selecting features relevant to the classification problem. In this study, we investigate whether radiomic features extracted from Computed Tomography (CT) images of laryngeal cancer can predict the histopathological grade and T stage of the tumour. Working with a pilot dataset of 20 images, an experienced radiologist carefully annotated the supraglottic lesions in the three-dimensional plane. Over 280 radiomic features quantifying shape, intensity, and texture were extracted from each image. Machine learning classifiers were built and tested to predict the stage and grade of the malignant tumour from the calculated radiomic features. Of the 280 features extracted from every image in the dataset, 24 were found to be potential classifiers of laryngeal tumour stage and 12 were good classifiers of histopathological grade. The novelty of this work lies in the ability to build these classifiers before the surgical biopsy procedure, giving the clinician valuable, timely information.
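Winnowing ~280 radiomic features down to a small discriminative subset is a standard feature-selection step. A hedged sketch using univariate selection on synthetic stand-in features follows; the study's actual selection method is not specified in the abstract, so `SelectKBest` with an F-test is only one common choice:

```python
from sklearn.datasets import make_classification
from sklearn.feature_selection import SelectKBest, f_classif

# Synthetic stand-in for a 20-image pilot cohort with ~280
# shape/intensity/texture features per image (not the study's data).
X, y = make_classification(n_samples=20, n_features=280, n_informative=12,
                           random_state=0)

# Rank features by a univariate F-test and keep the top 24.
selector = SelectKBest(score_func=f_classif, k=24).fit(X, y)
selected = selector.get_support(indices=True)  # indices of retained features
```

With only 20 samples against 280 features, some dimensionality reduction of this kind is essential; otherwise almost any classifier will overfit, which is exactly the concern the abstract raises.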
Affiliation(s)
- Divya Rao
  - Department of Information and Communication Technology, Manipal Institute of Technology, Manipal Academy of Higher Education, Manipal, 576104 India
  - Department of Otorhinolaryngology, Kasturba Medical College, Manipal Academy of Higher Education, Manipal, 576104 India
- Prakashini Koteshwara
  - Department of Radiodiagnosis and Imaging, Kasturba Medical College, Manipal Academy of Higher Education, Manipal, 576104 India
- Rohit Singh
  - Department of Otorhinolaryngology, Kasturba Medical College, Manipal Academy of Higher Education, Manipal, 576104 India
8
Rao D, Singh R, Prakashini K, Vijayananda J. Investigating Public Sentiment on Laryngeal Cancer in 2022 Using Machine Learning. Indian J Otolaryngol Head Neck Surg 2023:1-7. PMID: 37362133; PMCID: PMC10132422; DOI: 10.1007/s12070-023-03813-2.
Abstract
This study investigates public sentiment on laryngeal cancer expressed on Twitter in 2022 using machine learning. A novel dataset was created by scraping all tweets posted from 1 January 2022 that included the hashtags #throatcancer, #laryngealcancer, #supraglotticcancer, #glotticcancer, or #subglotticcancer. After a fourfold data-cleaning process, the tweets were analyzed using natural language processing and sentiment analysis techniques to classify them as positive, negative, or neutral and to identify common themes and topics related to laryngeal cancer. The resulting corpus comprised 733 tweets. Sentiment analysis revealed that 53% of the tweets were neutral, 34% positive, and 13% negative. The most common themes identified were treatment and therapy, risk factors, symptoms and diagnosis, prevention and awareness, and emotional impact. This study highlights the potential of social media platforms like Twitter as a valuable source of real-time, patient-generated data that can inform healthcare research and practice. The limited number of tweets related to laryngeal cancer suggests that, although Twitter is a popular platform, a better online communication strategy could be developed to raise awareness of laryngeal cancer.
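The positive/negative/neutral split can be illustrated with a minimal lexicon-based scorer. Real pipelines typically use trained sentiment models (e.g. VADER or a transformer); the word lists and example tweets below are invented purely for illustration:

```python
# Toy polarity lexicons -- illustrative only, not from the study.
POSITIVE = {"survivor", "recovered", "hope", "awareness", "grateful"}
NEGATIVE = {"relapse", "pain", "fear", "died", "terminal"}

def classify(tweet):
    """Label a tweet by counting lexicon hits: more positive words than
    negative -> 'positive', fewer -> 'negative', tie -> 'neutral'."""
    words = set(tweet.lower().split())
    score = len(words & POSITIVE) - len(words & NEGATIVE)
    return "positive" if score > 0 else "negative" if score < 0 else "neutral"

labels = [classify(t) for t in [
    "Grateful to be a throat cancer survivor",
    "The pain and fear never really leave you",
    "New study on laryngeal cancer treatment published today",
]]
```

Even this crude scheme reproduces the three-way labeling the study reports percentages for; the machine-learning step replaces the hand-built lexicons with weights learned from labeled examples.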
Affiliation(s)
- Divya Rao
  - Department of Information and Communication Technology, Manipal Institute of Technology, Manipal Academy of Higher Education, Manipal, 576104 India
  - Department of Otorhinolaryngology, Kasturba Medical College, Manipal Academy of Higher Education, Manipal, 576104 India
- Rohit Singh
  - Department of Otorhinolaryngology, Kasturba Medical College, Manipal Academy of Higher Education, Manipal, 576104 India
- K. Prakashini
  - Department of Radiodiagnosis and Imaging, Kasturba Medical College, Manipal Academy of Higher Education, Manipal, 576104 India
- J. Vijayananda
  - Data Science and Artificial Intelligence, Philips, Bangalore, 560045 India
9
Cubero L, Castelli J, Simon A, de Crevoisier R, Acosta O, Pascau J. Deep Learning-Based Segmentation of Head and Neck Organs-at-Risk with Clinical Partially Labeled Data. Entropy (Basel) 2022;24:1661. PMID: 36421515; PMCID: PMC9689629; DOI: 10.3390/e24111661.
Abstract
Radiotherapy is one of the main treatments for localized head and neck (HN) cancer. To design a personalized treatment with reduced radio-induced toxicity, accurate delineation of organs at risk (OARs) is a crucial step. Manual delineation is time-consuming, labor-intensive, and observer-dependent. Deep learning (DL)-based segmentation has proven to overcome some of these limitations, but it requires large databases of homogeneously contoured image sets for robust training. These are not easily obtained from standard clinical protocols, as the OARs delineated may vary with the patient's tumor site and specific treatment plan, resulting in incomplete or partially labeled data. This paper presents a solution for training a robust DL-based automated segmentation tool on a clinical, partially labeled dataset. We propose a two-step workflow for OAR segmentation: first, we developed longitudinal OAR-specific 3D segmentation models for pseudo-contour generation, completing the missing contours for some patients; then, with all OARs available, we trained a multi-class 3D convolutional neural network (nnU-Net) for final OAR segmentation. Results obtained on 44 independent datasets showed superior performance of the proposed methodology for the segmentation of fifteen OARs, with an average Dice similarity coefficient of 80.59% and an average surface Dice similarity coefficient of 88.74%. We demonstrated that the model can be straightforwardly integrated into the clinical workflow for standard and adaptive radiotherapy.
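The Dice scores reported above are computed from binary mask overlap. A minimal sketch on toy 2D masks (real evaluation runs on 3D volumes, and surface Dice additionally restricts the comparison to boundary voxels):

```python
import numpy as np

def dice(pred, gt):
    """Dice similarity coefficient between two binary masks:
    2|A∩B| / (|A| + |B|)."""
    inter = np.logical_and(pred, gt).sum()
    return 2.0 * inter / (pred.sum() + gt.sum())

# Two partially overlapping 4x4 squares (16 voxels each, 9 shared).
pred = np.zeros((10, 10), dtype=bool)
pred[2:6, 2:6] = True
gt = np.zeros((10, 10), dtype=bool)
gt[3:7, 3:7] = True
score = dice(pred, gt)  # 2*9 / (16+16) = 0.5625
```

Averaging this per-OAR score over the fifteen organs and 44 test sets yields the 80.59% figure the abstract reports.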
Affiliation(s)
- Lucía Cubero
  - Departamento de Bioingeniería, Universidad Carlos III de Madrid, 28911 Madrid, Spain
  - Université Rennes, CLCC Eugène Marquis, Inserm, LTSI-UMR 1099, F-35000 Rennes, France
- Joël Castelli
  - Université Rennes, CLCC Eugène Marquis, Inserm, LTSI-UMR 1099, F-35000 Rennes, France
- Antoine Simon
  - Université Rennes, CLCC Eugène Marquis, Inserm, LTSI-UMR 1099, F-35000 Rennes, France
- Renaud de Crevoisier
  - Université Rennes, CLCC Eugène Marquis, Inserm, LTSI-UMR 1099, F-35000 Rennes, France
- Oscar Acosta
  - Université Rennes, CLCC Eugène Marquis, Inserm, LTSI-UMR 1099, F-35000 Rennes, France
- Javier Pascau
  - Departamento de Bioingeniería, Universidad Carlos III de Madrid, 28911 Madrid, Spain
  - Instituto de Investigación Sanitaria Gregorio Marañón, 28007 Madrid, Spain
10
Sahoo PK, Mishra S, Panigrahi R, Bhoi AK, Barsocchi P. An Improvised Deep-Learning-Based Mask R-CNN Model for Laryngeal Cancer Detection Using CT Images. Sensors (Basel) 2022;22:8834. PMID: 36433430; PMCID: PMC9697116; DOI: 10.3390/s22228834.
Abstract
Laryngeal cancer cases have increased drastically across the globe in recent years. Accurate treatment is intricate, especially in the later stages, as this malignancy arises inside the head and neck region. Researchers have developed diverse diagnostic approaches and tools to help clinical experts identify laryngeal cancer effectively. However, existing tools and approaches suffer from performance constraints such as lower accuracy in identifying early-stage laryngeal cancer, high computational complexity, and large time consumption in patient screening. In this paper, the authors present an enhanced deep-learning-based Mask R-CNN model for identifying laryngeal cancer and its related symptoms, utilizing diverse image datasets and CT images in real time. The suggested model can capture and detect minor malignancies of the larynx quickly during real-time screening, saving clinicians time and allowing more patients to be screened each day. The model obtained an accuracy of 98.99%, precision of 98.99%, F1 score of 97.99%, and recall of 96.79% on the ImageNet dataset. There remain ample opportunities for further research into new approaches for laryngeal cancer detection using diverse and larger image datasets.
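The precision, recall, and F1 metrics quoted above are linked by the harmonic-mean identity F1 = 2PR/(P+R). A one-line helper applied to the quoted rates (for illustration only; the study's own evaluation details are not reproduced here):

```python
def f1_score(precision, recall):
    """F1 is the harmonic mean of precision and recall."""
    return 2 * precision * recall / (precision + recall)

# Applying the identity to the reported precision/recall rates.
f1 = f1_score(precision=0.9899, recall=0.9679)
```

The harmonic mean is deliberately pessimistic: it sits close to the smaller of the two rates, so a detector cannot buy a high F1 by inflating precision at recall's expense or vice versa.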
Affiliation(s)
- Pravat Kumar Sahoo
  - School of Computer Engineering, KIIT Deemed to be University, Bhubaneswar 751024, India
- Sushruta Mishra
  - School of Computer Engineering, KIIT Deemed to be University, Bhubaneswar 751024, India
- Ranjit Panigrahi
  - Department of Computer Applications, Sikkim Manipal Institute of Technology, Sikkim Manipal University, Majitar, Rangpo 737136, India
- Akash Kumar Bhoi
  - KIET Group of Institutions, Delhi-NCR, Ghaziabad 201206, India
  - Directorate of Research, Sikkim Manipal University, Gangtok 737102, India
  - Institute of Information Science and Technologies, National Research Council, 56124 Pisa, Italy
- Paolo Barsocchi
  - Institute of Information Science and Technologies, National Research Council, 56124 Pisa, Italy