1
Bekedam NM, Koot EL, de Cuba EMV, van Alphen MJA, van Veen RLP, Karssemakers LHE, Smeele LE, Karakullukcu MB. Clinical validation of the accuracy of an intra-operative assessment tool using 3D ultrasound compared to histopathology in patients with squamous cell carcinoma of the tongue. Eur Arch Otorhinolaryngol 2024. [PMID: 38829555 DOI: 10.1007/s00405-024-08753-3] [Received: 05/03/2024] [Accepted: 05/23/2024] [Indexed: 06/05/2024]
Abstract
BACKGROUND Histopathological analysis often shows close resection margins after surgical removal of tongue squamous cell carcinoma (TSCC). This study aimed to investigate the agreement between intraoperative 3D ultrasound (US) margin assessment and postoperative histopathology of resected TSCC. METHODS Ten patients were prospectively included. Three fiducial cannulas were inserted into each specimen. To acquire a motorized 3D US volume, the resected specimen was submerged in saline, after which images were acquired while the probe moved over the specimen. The US volumes were annotated twice: (1) automatically and (2) manually, with the automatic segmentation as initialization. After standardized histopathological processing, all hematoxylin-eosin whole slide images (WSI) were included for analysis. Corresponding US images were identified based on the known WSI spacing and the fiducials. Blinded observers measured the tumor thickness and the margin in the caudal, deep, and cranial directions on every slide. The anterior and posterior margins were measured per specimen. RESULTS The mean difference across all measurements between manually segmented US and histopathology was 2.34 (SD: ±3.34) mm, and Spearman's rank correlation coefficient was 0.733 (p < 0.001). The smallest mean difference was in tumor thickness, at 0.80 (SD: ±2.44) mm with a correlation of 0.836 (p < 0.001). Limitations were observed in the caudal region, where no correlation was found. CONCLUSION This study shows that 3D US and histopathology have a moderate to strong, statistically significant correlation (r = 0.733; p < 0.001) and a mean difference between the modalities of 2.3 mm (95% CI: -4.2 to 8.9). Future research should focus on patient outcomes regarding resection margins.
Affiliation(s)
- N M Bekedam
- Department of Head and Neck Surgery and Oncology, Netherlands Cancer Institute, Antoni van Leeuwenhoek, Amsterdam, The Netherlands.
- Academic Centre of Dentistry Amsterdam, Vrije Universiteit, Gustav Mahlerlaan 3004, Amsterdam, 1081 LA, The Netherlands.
- Department of Head and Neck Surgery and Oncology, Plesmanlaan 121, 1066 CX, Amsterdam, The Netherlands.
- E L Koot
- Department of Head and Neck Surgery and Oncology, Netherlands Cancer Institute, Antoni van Leeuwenhoek, Amsterdam, The Netherlands
- E M V de Cuba
- Department of Pathology, Netherlands Cancer Institute, Antoni van Leeuwenhoek, Amsterdam, The Netherlands
- M J A van Alphen
- Department of Head and Neck Surgery and Oncology, Cancer Institute, Antoni van Leeuwenhoek, Verwelius 3D Lab, Amsterdam, The Netherlands
- R L P van Veen
- Department of Head and Neck Surgery and Oncology, Cancer Institute, Antoni van Leeuwenhoek, Verwelius 3D Lab, Amsterdam, The Netherlands
- L H E Karssemakers
- Department of Head and Neck Surgery and Oncology, Netherlands Cancer Institute, Antoni van Leeuwenhoek, Amsterdam, The Netherlands
- L E Smeele
- Department of Head and Neck Surgery and Oncology, Netherlands Cancer Institute, Antoni van Leeuwenhoek, Amsterdam, The Netherlands
- M B Karakullukcu
- Department of Head and Neck Surgery and Oncology, Netherlands Cancer Institute, Antoni van Leeuwenhoek, Amsterdam, The Netherlands
2
Fallahpoor M, Nguyen D, Montahaei E, Hosseini A, Nikbakhtian S, Naseri M, Salahshour F, Farzanefar S, Abbasi M. Segmentation of liver and liver lesions using deep learning. Phys Eng Sci Med 2024; 47:611-619. [PMID: 38381270 DOI: 10.1007/s13246-024-01390-4] [Received: 06/13/2022] [Accepted: 01/10/2024] [Indexed: 02/22/2024]
Abstract
Segmentation of organs and lesions can be employed for dosimetry in nuclear medicine, assisted image interpretation, and large-scale image processing studies. Deep learning-based liver and liver lesion segmentation on clinical 3D MRI data has not been fully addressed in previous work. To this end, data were collected from 128 patients, including their T1w and T2w MRI images, and ground truth labels of the liver and liver lesions were generated. The collection of 110 T1w-T2w MRI image sets was divided into 94 for training and 16 for validation; a further 18 datasets were separately allocated as a hold-out test set. The T1w and T2w MRI images were preprocessed into a two-channel format to serve as inputs to a deep learning model based on the Isensee 2017 network. To calculate the final Dice coefficient of the network performance on the test datasets, the binary average of the T1w and T2w predicted images was used. The deep learning model segmented all 18 test cases, with an average Dice coefficient of 88% for the liver and 53% for the liver tumor. Liver segmentation was carried out with rather high accuracy; this could serve liver dosimetry during systemic or selective radiation therapies, as well as attenuation correction in PET/MRI scanners. However, the delineation of liver lesions was not optimal, so tumor detection with the proposed method was not practical on clinical data.
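The evaluation described above rests on two small operations: combining the binary T1w and T2w predictions and scoring the result with the Dice coefficient. A minimal sketch follows; the exact averaging rule is not specified in the abstract, so the toy masks, the 0.5 threshold, and the both-must-agree interpretation of "binary average" are assumptions for illustration only.

```python
import numpy as np

def dice(pred, truth):
    """Dice similarity coefficient between two binary masks."""
    inter = np.logical_and(pred, truth).sum()
    total = pred.sum() + truth.sum()
    return 2.0 * inter / total if total else 1.0

# Invented per-sequence binary predictions on a tiny 3D grid.
t1_pred = np.zeros((4, 4, 4), dtype=bool)
t2_pred = np.zeros((4, 4, 4), dtype=bool)
truth = np.zeros((4, 4, 4), dtype=bool)
t1_pred[1:3, 1:3, 1:3] = True
t2_pred[1:3, 1:3, 1:4] = True
truth[1:3, 1:3, 1:3] = True

# One reading of "binary average": average the two binary predictions and
# re-threshold at 0.5, which with two masks keeps only voxels both agree on.
avg_pred = (t1_pred.astype(float) + t2_pred.astype(float)) / 2.0 > 0.5
score = dice(avg_pred, truth)
```

Here the averaged prediction matches the ground truth exactly, so the Dice score is 1.0, while the larger T2w-only prediction scores lower.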
Affiliation(s)
- Maryam Fallahpoor
- Department of Nuclear Medicine, Vali-Asr Hospital, Tehran University of Medical Sciences, 1419731351, Tehran, Iran
- Dan Nguyen
- Medical Artificial Intelligence and Automation (MAIA) Laboratory, Department of Radiation Oncology, University of Texas Southwestern Medical Center, 75390, Dallas, TX, USA
- Ehsan Montahaei
- Department of Computer Engineering, Sharif University of Technology, Tehran, Iran
- Ali Hosseini
- Department of Nuclear Medicine, Vali-Asr Hospital, Tehran University of Medical Sciences, 1419731351, Tehran, Iran
- Shahram Nikbakhtian
- Department of Artificial Intelligence and Machine Learning, Human Digital Healthcare, London, UK
- Maryam Naseri
- Department of Physics and Astronomy, Louisiana State University, Baton Rouge, LA, USA
- Faeze Salahshour
- Department of Radiology, School of Medicine, Advanced Diagnostic and Interventional Radiology Research Center (ADIR), Imam Khomeini Hospital, Tehran University of Medical Sciences, Tehran, Iran
- Liver Transplantation Research Center, Imam-Khomeini Hospital, Tehran University of Medical Sciences, Tehran, Iran
- Saeed Farzanefar
- Department of Nuclear Medicine, Vali-Asr Hospital, Tehran University of Medical Sciences, 1419731351, Tehran, Iran
- Mehrshad Abbasi
- Department of Nuclear Medicine, Vali-Asr Hospital, Tehran University of Medical Sciences, 1419731351, Tehran, Iran.
3
Bekedam NM, Idzerda LHW, van Alphen MJA, van Veen RLP, Karssemakers LHE, Karakullukcu MB, Smeele LE. Implementing a deep learning model for automatic tongue tumour segmentation in ex-vivo 3-dimensional ultrasound volumes. Br J Oral Maxillofac Surg 2024; 62:284-289. [PMID: 38402068 DOI: 10.1016/j.bjoms.2023.12.017] [Received: 09/06/2023] [Revised: 11/28/2023] [Accepted: 12/27/2023] [Indexed: 02/26/2024]
Abstract
Three-dimensional (3D) ultrasound can assess the margins of resected tongue carcinoma during surgery. Manual segmentation (MS) is time-consuming, labour-intensive, and subject to operator variability. This study aims to investigate the use of a 3D deep learning model for fast intraoperative segmentation of tongue carcinoma in 3D ultrasound volumes, and additionally investigates the clinical effect of automatic segmentation. A 3D no-new-U-Net (nnU-Net) was trained on 113 manually annotated ultrasound volumes of resected tongue carcinoma. The model was implemented on a mobile workstation and clinically validated on 16 prospectively included tongue carcinoma patients. Different prediction settings were investigated. Automatic segmentations with multiple islands were adjusted by selecting the best-representing island. The final margin status (FMS) based on automatic, semi-automatic, and manual segmentation was computed and compared with the histopathological margin. The standard 3D nnU-Net yielded the best-performing automatic segmentation, with a mean (SD) Dice volumetric score of 0.65 (0.30), Dice surface score of 0.73 (0.26), average surface distance of 0.44 (0.61) mm, Hausdorff distance of 6.65 (8.84) mm, and a prediction time of 8 seconds. FMS based on automatic segmentation had a low correlation with histopathology (r = 0.12, p = 0.67); MS showed a moderate but non-significant correlation with histopathology (r = 0.4, p = 0.12, n = 16). Implementing the 3D nnU-Net yielded fast, automatic segmentation of tongue carcinoma in 3D ultrasound volumes. The correlation between FMS and histopathology obtained from these segmentations was lower than the moderate correlation between MS and histopathology.
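The surface-based metrics reported here (average surface distance, Hausdorff distance) can be computed from a pair of binary masks. The sketch below makes simple assumptions (toy unit-spaced masks, surfaces taken as mask-minus-erosion boundary voxels); the study's actual metric implementation is not given in the abstract.

```python
import numpy as np
from scipy import ndimage

def surface_voxels(mask):
    """Boundary voxels of a binary mask: the mask minus its erosion."""
    return mask & ~ndimage.binary_erosion(mask)

def surface_distances(a, b, spacing=(1.0, 1.0, 1.0)):
    """Distance (mm) from each surface voxel of `a` to the surface of `b`."""
    dist_to_b = ndimage.distance_transform_edt(~surface_voxels(b), sampling=spacing)
    return dist_to_b[surface_voxels(a)]

def hd95_and_asd(a, b, spacing=(1.0, 1.0, 1.0)):
    """95th-percentile Hausdorff distance and average symmetric surface distance."""
    d = np.concatenate([surface_distances(a, b, spacing),
                        surface_distances(b, a, spacing)])
    return float(np.percentile(d, 95)), float(d.mean())

# Two toy 3x3x3 cubes, the second shifted by one voxel (1 mm spacing assumed).
a = np.zeros((8, 8, 8), dtype=bool)
b = np.zeros((8, 8, 8), dtype=bool)
a[2:5, 2:5, 2:5] = True
b[3:6, 2:5, 2:5] = True
hd95, asd = hd95_and_asd(a, b)
```

For identical masks both values are zero; for the one-voxel shift the percentile Hausdorff distance is 1 mm and the average surface distance lies between 0 and 1 mm.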
Affiliation(s)
- N M Bekedam
- Department of Head and Neck Surgery and Oncology, Netherlands Cancer Institute, Antoni van Leeuwenhoek, Amsterdam, The Netherlands; Academic Centre of Dentistry Amsterdam, Vrije Universiteit, Gustav Mahlerlaan 3004, 1081 LA Amsterdam, The Netherlands.
- L H W Idzerda
- Department of Head and Neck Surgery and Oncology, Netherlands Cancer Institute, Antoni van Leeuwenhoek, Amsterdam, The Netherlands
- M J A van Alphen
- Department of Head and Neck Surgery and Oncology, Verwelius 3D Lab, Netherlands Cancer Institute, Antoni van Leeuwenhoek, Amsterdam, The Netherlands
- R L P van Veen
- Department of Head and Neck Surgery and Oncology, Verwelius 3D Lab, Netherlands Cancer Institute, Antoni van Leeuwenhoek, Amsterdam, The Netherlands
- L H E Karssemakers
- Department of Head and Neck Surgery and Oncology, Netherlands Cancer Institute, Antoni van Leeuwenhoek, Amsterdam, The Netherlands
- M B Karakullukcu
- Department of Head and Neck Surgery and Oncology, Netherlands Cancer Institute, Antoni van Leeuwenhoek, Amsterdam, The Netherlands
- L E Smeele
- Department of Head and Neck Surgery and Oncology, Netherlands Cancer Institute, Antoni van Leeuwenhoek, Amsterdam, The Netherlands
4
Brosch-Lenz JF, Delker A, Schmidt F, Tran-Gia J. On the Use of Artificial Intelligence for Dosimetry of Radiopharmaceutical Therapies. Nuklearmedizin 2023; 62:379-388. [PMID: 37827503 DOI: 10.1055/a-2179-6872] [Indexed: 10/14/2023]
Abstract
Routine clinical dosimetry alongside radiopharmaceutical therapies is key to future treatment personalization. However, dosimetry is considered complex and time-consuming, with various challenges among the steps of the dosimetry workflow. The general workflow for image-based dosimetry consists of quantitative imaging, segmentation of organs and tumors, fitting of the time-activity curves, and conversion to absorbed dose. This work reviews the potential and advantages of artificial intelligence for improving the speed and accuracy of every step of the dosimetry workflow.
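Two of the workflow steps named above, fitting the time-activity curve and converting to absorbed dose, reduce to a small amount of numerics. A sketch with made-up measurements follows; the mono-exponential model, all activity values, and the S-value remark are illustrative assumptions, not content from the review.

```python
import numpy as np
from scipy.optimize import curve_fit

def mono_exp(t, a0, lam):
    """Mono-exponential time-activity model: A(t) = A0 * exp(-lambda * t)."""
    return a0 * np.exp(-lam * t)

# Hypothetical organ activity measurements (MBq) at post-injection times (h).
t = np.array([4.0, 24.0, 72.0, 168.0])
activity = np.array([94.6, 71.5, 36.5, 9.5])

# Fit A0 and the effective decay constant lambda (1/h).
(a0, lam), _ = curve_fit(mono_exp, t, activity, p0=(100.0, 0.01))

# Time-integrated activity: integral of A0*exp(-lam*t) over [0, inf) = A0 / lam.
# Multiplying by an organ-specific S value would then give the absorbed dose.
tia_mbq_h = a0 / lam
```

With these synthetic points the fit recovers roughly A0 ≈ 100 MBq and λ ≈ 0.014 h⁻¹, giving a time-integrated activity of about 7100 MBq·h.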
Affiliation(s)
- Astrid Delker
- Department of Nuclear Medicine, LMU University Hospital, Munich, Germany
- Fabian Schmidt
- Department of Nuclear Medicine and Clinical Molecular Imaging, University Hospital Tuebingen, Tuebingen, Germany
- Department of Preclinical Imaging and Radiopharmacy, Werner Siemens Imaging Center, Tuebingen, Germany
- Johannes Tran-Gia
- Department of Nuclear Medicine, University Hospital Wuerzburg, Wuerzburg, Germany
5
Hossain MSA, Gul S, Chowdhury MEH, Khan MS, Sumon MSI, Bhuiyan EH, Khandakar A, Hossain M, Sadique A, Al-Hashimi I, Ayari MA, Mahmud S, Alqahtani A. Deep Learning Framework for Liver Segmentation from T1-Weighted MRI Images. Sensors (Basel) 2023; 23:8890. [PMID: 37960589 PMCID: PMC10650219 DOI: 10.3390/s23218890] [Received: 06/23/2023] [Revised: 08/08/2023] [Accepted: 08/15/2023] [Indexed: 11/15/2023]
Abstract
The human liver exhibits variable characteristics and anatomical information that is often ambiguous in radiological images. Machine learning can be of great assistance in automatically segmenting the liver in radiological images, which can be further processed for computer-aided diagnosis. Magnetic resonance imaging (MRI) is preferred by clinicians over volumetric abdominal computerized tomography (CT) scans for liver pathology diagnosis, due to its superior representation of soft tissues. The convenience of Hounsfield unit (HoU)-based preprocessing in CT scans is not available in MRI, making automatic segmentation of MR images challenging. This study investigates multiple state-of-the-art segmentation networks for liver segmentation from volumetric MRI images. T1-weighted (in-phase) scans are investigated using expert-labeled liver masks from a public dataset of 20 patients (647 MR slices) from the Combined Healthy Abdominal Organ Segmentation (CHAOS) grand challenge. T1-weighted images were chosen because they demonstrate brighter fat content, providing enhanced images for the segmentation task. Twenty-four state-of-the-art segmentation networks with varying depths of dense, residual, and inception encoder and decoder backbones were investigated, and a novel cascaded network is proposed to segment axial liver slices. The proposed framework outperforms existing approaches reported in the literature for the liver segmentation task (on the same test set), with a Dice similarity coefficient (DSC) and intersection over union (IoU) of 95.15% and 92.10%, respectively.
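The paper reports both DSC and IoU; for any single pair of masks the two overlap scores are related by the identity DSC = 2·IoU/(1 + IoU) (the identity need not hold for values averaged over many cases). A small sketch on invented toy masks:

```python
import numpy as np

def iou(pred, truth):
    """Intersection over union of two binary masks."""
    inter = np.logical_and(pred, truth).sum()
    union = np.logical_or(pred, truth).sum()
    return inter / union

def dsc(pred, truth):
    """Dice similarity coefficient of two binary masks."""
    inter = np.logical_and(pred, truth).sum()
    return 2.0 * inter / (pred.sum() + truth.sum())

pred = np.zeros((16, 16), dtype=bool)
truth = np.zeros((16, 16), dtype=bool)
pred[4:12, 4:12] = True    # 64 pixels
truth[6:12, 4:12] = True   # 48 pixels, fully inside pred

i = iou(pred, truth)       # 48 / 64 = 0.75
d = dsc(pred, truth)       # 96 / 112, about 0.857
# The identity DSC = 2*IoU / (1 + IoU) holds for any single pair of masks.
```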
Affiliation(s)
- Md. Sakib Abrar Hossain
- NSU Genome Research Institute (NGRI), North South University, Dhaka 1229, Bangladesh
- Department of Electrical Engineering, Qatar University, Doha 2713, Qatar
- Sidra Gul
- Department of Computer Systems Engineering, University of Engineering and Technology Peshawar, Peshawar 25000, Pakistan
- Artificial Intelligence in Healthcare, IIPL, National Center of Artificial Intelligence, Peshawar 25000, Pakistan
- Enamul Haque Bhuiyan
- Center for Magnetic Resonance Research, University of Illinois Chicago, Chicago, IL 60607, USA
- Amith Khandakar
- Department of Electrical Engineering, Qatar University, Doha 2713, Qatar
- Maqsud Hossain
- NSU Genome Research Institute (NGRI), North South University, Dhaka 1229, Bangladesh
- Abdus Sadique
- NSU Genome Research Institute (NGRI), North South University, Dhaka 1229, Bangladesh
- Sakib Mahmud
- Department of Electrical Engineering, Qatar University, Doha 2713, Qatar
- Abdulrahman Alqahtani
- Department of Medical Equipment Technology, College of Applied Medical Science, Majmaah University, Majmaah City 11952, Saudi Arabia
- Department of Biomedical Technology, College of Applied Medical Sciences, Prince Sattam Bin Abdulaziz University, Al-Kharj 11942, Saudi Arabia
6
Li Y, Zou B, Dai P, Liao M, Bai HX, Jiao Z. AC-E Network: Attentive Context-Enhanced Network for Liver Segmentation. IEEE J Biomed Health Inform 2023; 27:4052-4061. [PMID: 37204947 DOI: 10.1109/jbhi.2023.3278079] [Indexed: 05/21/2023]
Abstract
Segmentation of the liver from CT scans is essential in computer-aided liver disease diagnosis and treatment. However, 2D CNNs ignore the 3D context, and 3D CNNs suffer from numerous learnable parameters and high computational cost. To overcome these limitations, we propose an Attentive Context-Enhanced Network (AC-E Network) consisting of 1) an attentive context encoding module (ACEM) that can be integrated into the 2D backbone to extract 3D context without a sharp increase in the number of learnable parameters; and 2) a dual segmentation branch with a complementary loss, making the network attend to both the liver region and its boundary so that the segmented liver surface is delineated with high accuracy. Extensive experiments on the LiTS and 3D-IRCADb datasets demonstrate that our method outperforms existing approaches and is competitive with the state-of-the-art 2D-3D hybrid method in balancing segmentation precision against the number of model parameters.
7
Wang H, Liu X, Song Y, Yin P, Zou J, Shi X, Yin Y, Li Z. Feasibility study of adaptive radiotherapy for esophageal cancer using artificial intelligence autosegmentation based on MR-Linac. Front Oncol 2023; 13:1172135. [PMID: 37361583 PMCID: PMC10289262 DOI: 10.3389/fonc.2023.1172135] [Received: 02/23/2023] [Accepted: 05/24/2023] [Indexed: 06/28/2023]
Abstract
Objective We proposed a scheme for automatic patient-specific segmentation in Magnetic Resonance (MR)-guided online adaptive radiotherapy based on daily updated, small-sample deep learning models, to address the time-consuming delineation of regions of interest (ROIs) in the adapt-to-shape (ATS) workflow. Additionally, we verified its feasibility in adaptive radiation therapy for esophageal cancer (EC). Methods Nine patients with EC who were treated with an MR-Linac were prospectively enrolled. The actual adapt-to-position (ATP) workflow and a simulated ATS workflow were performed, the latter embedded with a deep learning autosegmentation (AS) model. Manual delineations from the first three treatment fractions were used as input data to predict the segmentation of the next fraction, which was modified and then used as training data to update the model daily, forming a cyclic training process. The system was then validated in terms of delineation accuracy, time, and dosimetric benefit. Additionally, the air cavity in the esophagus and the sternum were added to the ATS workflow (producing ATS+), and the dosimetric variations were assessed. Results The mean AS time was 1.40 [1.10-1.78] min. The Dice similarity coefficient (DSC) of the AS model gradually approached 1; after four training sessions, the DSCs of all ROIs reached a mean value of 0.9 or more. Furthermore, the planning target volume (PTV) of the ATS plan showed a smaller heterogeneity index than that of the ATP plan. Additionally, V5 and V10 in the lungs and heart were greater in the ATS+ group than in the ATS group. Conclusion The accuracy and speed of artificial intelligence-based AS in the ATS workflow met the clinical radiation therapy needs of EC. This allowed the ATS workflow to achieve a speed similar to that of the ATP workflow while maintaining its dosimetric advantage. Fast and precise online ATS treatment ensured an adequate dose to the PTV while reducing the dose to the heart and lungs.
Affiliation(s)
- Huadong Wang
- Department of Graduate, Shandong First Medical University (Shandong Academy of Medical Sciences), Jinan, China
- Department of Radiation Oncology Physics and Technology, Shandong Cancer Hospital and Institute, Shandong First Medical University and Shandong Academy of Medical Sciences, Jinan, China
- Xin Liu
- Department of Radiation Oncology Physics and Technology, Shandong Cancer Hospital and Institute, Shandong First Medical University and Shandong Academy of Medical Sciences, Jinan, China
- Department of Clinical Medicine, Southwestern Medical University, Luzhou, China
- Yajun Song
- Department of Graduate, Shandong First Medical University (Shandong Academy of Medical Sciences), Jinan, China
- Department of Radiation Oncology Physics and Technology, Shandong Cancer Hospital and Institute, Shandong First Medical University and Shandong Academy of Medical Sciences, Jinan, China
- Peijun Yin
- Department of Radiation Oncology Physics and Technology, Shandong Cancer Hospital and Institute, Shandong First Medical University and Shandong Academy of Medical Sciences, Jinan, China
- College of Physics and Electronic Science, Shandong Normal University, Jinan, China
- Jingmin Zou
- Department of Graduate, Shandong First Medical University (Shandong Academy of Medical Sciences), Jinan, China
- Department of Radiation Oncology Physics and Technology, Shandong Cancer Hospital and Institute, Shandong First Medical University and Shandong Academy of Medical Sciences, Jinan, China
- Xihua Shi
- Department of Radiation Oncology Physics and Technology, Shandong Cancer Hospital and Institute, Shandong First Medical University and Shandong Academy of Medical Sciences, Jinan, China
- Yong Yin
- Department of Radiation Oncology Physics and Technology, Shandong Cancer Hospital and Institute, Shandong First Medical University and Shandong Academy of Medical Sciences, Jinan, China
- Zhenjiang Li
- Department of Radiation Oncology Physics and Technology, Shandong Cancer Hospital and Institute, Shandong First Medical University and Shandong Academy of Medical Sciences, Jinan, China
8
Luu MH, Mai HS, Pham XL, Le QA, Le QK, van Walsum T, Le NH, Franklin D, Le VH, Moelker A, Chu DT, Trung NL. Quantification of liver-lung shunt fraction on 3D SPECT/CT images for selective internal radiation therapy of liver cancer using CNN-based segmentations and non-rigid registration. Comput Methods Programs Biomed 2023; 233:107453. [PMID: 36921463 DOI: 10.1016/j.cmpb.2023.107453] [Received: 10/13/2022] [Revised: 01/25/2023] [Accepted: 02/27/2023] [Indexed: 06/18/2023]
Abstract
PURPOSE Selective internal radiation therapy (SIRT) has been proven to be an effective treatment for hepatocellular carcinoma (HCC) patients. In clinical practice, treatment planning for SIRT using 90Y microspheres requires estimation of the liver-lung shunt fraction (LSF) to avoid radiation pneumonitis. Currently, the manual method of drawing regions of interest (ROIs) around the liver and lung in 2D planar 99mTc-MAA imaging and 3D SPECT/CT images is inconvenient, time-consuming, and observer-dependent. In this study, we propose and evaluate a nearly automatic method for LSF quantification using 3D SPECT/CT images, offering improved performance compared with the current manual segmentation method. METHODS We retrospectively acquired 3D SPECT with non-contrast-enhanced CT images (nCECT) of 60 HCC patients from a SPECT/CT scanner, along with the corresponding diagnostic contrast-enhanced CT images (CECT). Our approach to LSF quantification uses CNN-based methods for liver and lung segmentation in the nCECT image. We first apply a 3D ResUnet to coarsely segment the liver. If the liver segmentation contains a large error, we dilate the coarse liver segmentation into a liver mask serving as an ROI in the nCECT image. Subsequently, non-rigid registration is applied to deform the liver in the CECT image to fit that obtained in the nCECT image. The final liver segmentation is obtained by segmenting the liver in the deformed CECT image using nnU-Net. In addition, the lung segmentations are obtained using a 2D ResUnet. Finally, LSF quantification is performed based on the number of counts in the SPECT image inside the segmentations. RESULTS To evaluate liver segmentation accuracy, we used the Dice similarity coefficient (DSC), average symmetric surface distance (ASSD), and max surface distance (MSD), and compared the proposed method to five well-known CNN-based methods for liver segmentation. Furthermore, the LSF error obtained by the proposed method was compared to a state-of-the-art method, a modified DeepMedic, and to LSF quantifications obtained by manual segmentation. The proposed method achieved a DSC score for liver segmentation comparable to other state-of-the-art methods, with an average of 0.93, and the highest consistency in segmentation accuracy, yielding a standard deviation of the DSC score of 0.01. The proposed method also obtained the lowest ASSD and MSD scores on average (2.6 mm and 31.5 mm, respectively). Moreover, the proposed method achieved a median LSF error of 0.14%, a statistically significant improvement over the state-of-the-art method (p=0.004), and much smaller than the median error of manual LSF determination by medical experts using 2D planar images (1.74%, p<0.001). CONCLUSIONS A method for LSF quantification using 3D SPECT/CT images based on CNNs and non-rigid registration was proposed, evaluated, and compared to state-of-the-art techniques. The proposed method can quantitatively determine the LSF with high accuracy and has the potential to be applied in clinical practice.
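The final quantification step described above is a ratio of summed SPECT counts inside the two segmentations. A minimal sketch with a synthetic count volume and masks (all arrays below are invented for illustration; real masks would come from the CNN segmentations):

```python
import numpy as np

def lung_shunt_fraction(spect, lung_mask, liver_mask):
    """LSF (%) = lung counts / (lung counts + liver counts) * 100."""
    lung_counts = float(spect[lung_mask].sum())
    liver_counts = float(spect[liver_mask].sum())
    return 100.0 * lung_counts / (lung_counts + liver_counts)

# Synthetic SPECT count volume with disjoint lung and liver masks.
spect = np.zeros((4, 4, 4))
lung_mask = np.zeros((4, 4, 4), dtype=bool)
liver_mask = np.zeros((4, 4, 4), dtype=bool)
lung_mask[0] = True
liver_mask[2:] = True
spect[0] = 50.0 / 16.0    # 50 counts shunted to the lungs
spect[2:] = 950.0 / 32.0  # 950 counts in the liver
lsf = lung_shunt_fraction(spect, lung_mask, liver_mask)  # -> 5.0 (%)
```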
Affiliation(s)
- Manh Ha Luu
- AVITECH, VNU University of Engineering and Technology, Hanoi, Vietnam; FET, VNU University of Engineering and Technology, Hanoi, Vietnam; Department of Radiology and Nuclear Medicine, Erasmus MC, Rotterdam, the Netherlands.
- Hong Son Mai
- Department of Nuclear Medicine, Hospital 108, Hanoi, Vietnam
- Xuan Loc Pham
- FET, VNU University of Engineering and Technology, Hanoi, Vietnam
- Quoc Anh Le
- AVITECH, VNU University of Engineering and Technology, Hanoi, Vietnam
- Quoc Khanh Le
- Department of Nuclear Medicine, Hospital 108, Hanoi, Vietnam
- Theo van Walsum
- Department of Radiology and Nuclear Medicine, Erasmus MC, Rotterdam, the Netherlands
- Ngoc Ha Le
- Department of Nuclear Medicine, Hospital 108, Hanoi, Vietnam
- Daniel Franklin
- School of Electrical and Data Engineering, University of Technology Sydney, Sydney, Australia
- Vu Ha Le
- AVITECH, VNU University of Engineering and Technology, Hanoi, Vietnam; FET, VNU University of Engineering and Technology, Hanoi, Vietnam
- Adriaan Moelker
- Department of Radiology and Nuclear Medicine, Erasmus MC, Rotterdam, the Netherlands
- Duc Trinh Chu
- FET, VNU University of Engineering and Technology, Hanoi, Vietnam
- Nguyen Linh Trung
- AVITECH, VNU University of Engineering and Technology, Hanoi, Vietnam
9
Berbís MA, Paulano Godino F, Royuela del Val J, Alcalá Mata L, Luna A. Clinical impact of artificial intelligence-based solutions on imaging of the pancreas and liver. World J Gastroenterol 2023; 29:1427-1445. [PMID: 36998424 PMCID: PMC10044858 DOI: 10.3748/wjg.v29.i9.1427] [Received: 09/28/2022] [Revised: 01/13/2023] [Accepted: 02/27/2023] [Indexed: 03/07/2023]
Abstract
Artificial intelligence (AI) has experienced substantial progress over the last ten years in many fields of application, including healthcare. In hepatology and pancreatology, major attention to date has been paid to its application to the assisted or even automated interpretation of radiological images, where AI can generate accurate and reproducible imaging diagnoses, reducing the physicians' workload. AI can provide automatic or semi-automatic segmentation and registration of the liver and pancreatic glands and lesions. Furthermore, using radiomics, AI can add to radiological reports new quantitative information that is not visible to the human eye. AI has been applied in the detection and characterization of focal lesions and diffuse diseases of the liver and pancreas, such as neoplasms, chronic hepatic disease, or acute or chronic pancreatitis, among others. These solutions have been applied to different imaging techniques commonly used to diagnose liver and pancreatic diseases, such as ultrasound, endoscopic ultrasonography, computerized tomography (CT), magnetic resonance imaging, and positron emission tomography/CT. However, AI is also applied in this context to many other relevant steps involved in a comprehensive clinical scenario to manage a gastroenterological patient. AI can also be applied to choose the most convenient test prescription, to improve image quality or accelerate its acquisition, and to predict patient prognosis and treatment response. In this review, we summarize the current evidence on the application of AI to hepatic and pancreatic radiology, not only in regard to the interpretation of images, but also to all the steps involved in the radiological workflow in a broader sense. Lastly, we discuss the challenges and future directions of the clinical application of AI methods.
Affiliation(s)
- M Alvaro Berbís
- Department of Radiology, HT Médica, San Juan de Dios Hospital, Córdoba 14960, Spain
- Faculty of Medicine, Autonomous University of Madrid, Madrid 28049, Spain
- Lidia Alcalá Mata
- Department of Radiology, HT Médica, Clínica las Nieves, Jaén 23007, Spain
- Antonio Luna
- Department of Radiology, HT Médica, Clínica las Nieves, Jaén 23007, Spain
10
Vergote VKJ, Verhoef G, Janssens A, Woei-A-Jin FJSH, Laenen A, Tousseyn T, Dierickx D, Deroose CM. [18F]FDG-PET/CT volumetric parameters can predict outcome in untreated mantle cell lymphoma. Leuk Lymphoma 2023; 64:161-170. [PMID: 36223113 DOI: 10.1080/10428194.2022.2131415] [Indexed: 02/07/2023]
Abstract
Several studies have shown a strong predictive value for pretreatment [18F]FDG-PET/CT metabolic parameters in different lymphoma subtypes. However, few publications exist concerning the role of metabolic parameters in mantle cell lymphoma (MCL). We retrospectively investigated the prognostic value of baseline metabolic tumor volume (MTV) and lesion dissemination in untreated MCL, and compared them to currently used prognostic factors such as stage, the mantle cell lymphoma international prognostic index (MIPI), and Ki-67. We report that a higher baseline MTV is a risk factor for worse overall survival (OS), progression-free survival (PFS), and disease-specific survival (DSS) in univariate analysis. In multivariate analysis, MTV was significantly associated with DSS, but not with OS or PFS. We found no correlation between lesion dissemination and outcome. The MIPI score remains the strongest predictor of outcome. These results show that MTV is an important prognostic tool that can improve patient risk stratification at staging of untreated MCL.
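MTV itself is straightforward to compute once lesion voxels have been delineated on the SUV map. The sketch below uses a fixed fraction-of-SUVmax threshold; 41% of SUVmax is one common convention, but the delineation method actually used in the study is not stated in the abstract, and all numbers here are invented.

```python
import numpy as np

def metabolic_tumor_volume(suv, voxel_volume_ml, threshold):
    """MTV in mL: count of voxels at/above the SUV threshold times voxel volume."""
    return float(np.count_nonzero(suv >= threshold) * voxel_volume_ml)

# Hypothetical 3D SUV map with a single hot lesion.
suv = np.zeros((10, 10, 10))
suv[4:6, 4:6, 4:6] = 8.0  # 8 lesion voxels, SUVmax = 8.0
suv[6, 4, 4] = 2.0        # faint uptake below the threshold

voxel_volume_ml = (4.0 ** 3) / 1000.0  # 4 mm isotropic voxels -> 0.064 mL
threshold = 0.41 * suv.max()           # 41%-of-SUVmax convention -> 3.28
mtv = metabolic_tumor_volume(suv, voxel_volume_ml, threshold)  # 8 * 0.064 = 0.512 mL
```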
Affiliation(s)
- Gregor Verhoef
- Hematology, University Hospitals Leuven, Leuven, Belgium
- Ann Janssens
- Hematology, University Hospitals Leuven, Leuven, Belgium
- Annouschka Laenen
- Biostatistics and Statistical Bioinformatics Center, Leuven, Belgium
- Daan Dierickx
- Hematology, University Hospitals Leuven, Leuven, Belgium
11
Li Z, Zhang W, Li B, Zhu J, Peng Y, Li C, Zhu J, Zhou Q, Yin Y. Patient-specific daily updated deep learning auto-segmentation for MRI-guided adaptive radiotherapy. Radiother Oncol 2022; 177:222-230. [PMID: 36375561] [DOI: 10.1016/j.radonc.2022.11.004]
Abstract
BACKGROUND AND PURPOSE Deep learning (DL) techniques have shown great potential but still limited success in online contouring for MR-guided adaptive radiotherapy (MRgART). This study proposed a patient-specific DL auto-segmentation (DLAS) strategy that uses the patient's previous images and contours to update the model and improve segmentation accuracy and efficiency for MRgART. METHODS AND MATERIALS A prototype model was trained for each patient using the first set of MRIs and corresponding contours as inputs. The patient-specific model was updated after each fraction with all the available fractional MRIs/contours, and then used to predict the segmentation for the next fraction. During model training, a variant was fitted under consistency constraints, limiting the differences in volume, length, and centroid between the predictions for the latest MRI to a reasonable range. Model performance was evaluated for both organ-at-risk and tumor auto-segmentation in a total of 6 abdominal/pelvic cases (each with at least 8 sets of MRIs/contours) that underwent MRgART, using the Dice Similarity Coefficient (DSC) and the 95% Hausdorff Distance (HD95), and was compared with deformable image registration (DIR) and a frozen DL model (no updating after pre-training). The contouring time was also recorded and analyzed. RESULTS The proposed model achieved superior performance with a higher mean DSC (0.90, 95% CI: 0.88-0.95), as compared to DIR (0.63, 95% CI: 0.59-0.68) and the frozen DL model (0.74, 95% CI: 0.71-0.79). For tumors, the proposed method yielded a median DSC of 0.95 (95% CI: 0.94-0.97) and a median HD95 of 1.63 mm (95% CI: 1.22-2.06 mm). The contouring time was reduced significantly (p < 0.05) using the proposed method (73.4 ± 6.5 s) compared to the manual process (12-22 min). The online ART time was reduced to 1650 ± 274 s with the proposed method, as compared to 3251.8 ± 447 s with the original workflow.
CONCLUSION The proposed patient-specific DLAS method can significantly improve the segmentation accuracy and efficiency for longitudinal MRIs, thereby facilitating the routine practice of MRgART.
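The two evaluation metrics named above, DSC and HD95, have standard definitions that can be sketched for binary masks as follows; this is a minimal generic implementation for illustration, not the code used in the study:

```python
import numpy as np
from scipy.ndimage import binary_erosion, distance_transform_edt

def dice(a, b):
    """Dice Similarity Coefficient of two binary masks."""
    a, b = a.astype(bool), b.astype(bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

def hd95(a, b, spacing=(1.0, 1.0, 1.0)):
    """95th-percentile symmetric surface distance of two non-empty masks."""
    def surface(m):
        # Surface voxels: mask minus its erosion.
        return m & ~binary_erosion(m)
    sa, sb = surface(a.astype(bool)), surface(b.astype(bool))
    # Distance from each surface voxel to the other mask's surface.
    da = distance_transform_edt(~sb, sampling=spacing)[sa]
    db = distance_transform_edt(~sa, sampling=spacing)[sb]
    return np.percentile(np.concatenate([da, db]), 95)

# Toy example: a 3x3x3 cube versus the same cube shifted by one voxel.
a = np.zeros((10, 10, 10), dtype=bool)
a[2:5, 2:5, 2:5] = True
b = np.roll(a, 1, axis=0)
print(round(dice(a, b), 3), hd95(a, b))  # → 0.667 1.0
```

The `spacing` argument matters in practice: HD95 is reported in millimetres, so anisotropic voxel sizes must be passed to the distance transform.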
Affiliation(s)
- Zhenjiang Li
- Department of Radiation Oncology Physics and Technology, Shandong Cancer Hospital and Institute, Shandong First Medical University and Shandong Academy of Medical Sciences, No.440, Jiyan Road, Jinan 250117, Shandong Province, P.R.China.
- Wei Zhang
- Manteia Technologies Co., Ltd, 1903, B Tower, Zijin Plaza, No. 1811 Huandao East Road, Xiamen, 361001, China
- Baosheng Li
- Department of Radiation Oncology Physics and Technology, Shandong Cancer Hospital and Institute, Shandong First Medical University and Shandong Academy of Medical Sciences, No. 440, Jiyan Road, Jinan 250117, Shandong Province, P.R. China
- Jian Zhu
- Department of Radiation Oncology Physics and Technology, Shandong Cancer Hospital and Institute, Shandong First Medical University and Shandong Academy of Medical Sciences, No. 440, Jiyan Road, Jinan 250117, Shandong Province, P.R. China
- Yinglin Peng
- Department of Radiation Oncology, Sun Yat-sen University Cancer Center, State Key Laboratory of Oncology in South China, Collaborative Innovation Center for Cancer Medicine, Guangdong Key Laboratory of Nasopharyngeal Carcinoma Diagnosis and Therapy, Guangzhou, China
- Chengze Li
- Manteia Technologies Co., Ltd, 1903, B Tower, Zijin Plaza, No. 1811 Huandao East Road, Xiamen, 361001, China
- Jennifer Zhu
- Department of Biochemistry and Molecular Biology, University of British Columbia, Canada, 8 Edenstone View NW, Calgary AB, Canada T3A 3Z2
- Qichao Zhou
- Manteia Technologies Co., Ltd, 1903, B Tower, Zijin Plaza, No. 1811 Huandao East Road, Xiamen, 361001, China
- Yong Yin
- Department of Radiation Oncology Physics and Technology, Shandong Cancer Hospital and Institute, Shandong First Medical University and Shandong Academy of Medical Sciences, No. 440, Jiyan Road, Jinan 250117, Shandong Province, P.R. China
12
nnU-Net Deep Learning Method for Segmenting Parenchyma and Determining Liver Volume From Computed Tomography Images. Ann Surg Open 2022; 3. [PMID: 36275876] [PMCID: PMC9585534] [DOI: 10.1097/as9.0000000000000155]
Abstract
Background Recipient-donor matching in liver transplantation can require precise estimation of liver volume. Currently used demographic-based organ volume estimates are imprecise and nonspecific. Manual organ annotation from medical imaging is effective; however, the process is cumbersome, often taking an undesirable length of time to complete. Additionally, manual organ segmentation and volume measurement incur direct costs to payers for either a clinician or a trained technician to complete. Deep learning-based automatic image segmentation tools are well positioned to address this clinical need. Objectives To build a deep learning model that can accurately estimate liver volumes and create 3D organ renderings from computed tomography (CT) images. Methods We trained a nnU-Net deep learning model to identify liver borders in images of the abdominal cavity, using 151 publicly available CT scans. For each CT scan, a board-certified radiologist annotated the liver margins (ground truth annotations). We split the image dataset into training, validation, and test sets, trained the nnU-Net model on these data to identify liver borders in 3D voxels, and integrated these to reconstruct a total organ volume estimate. Results The nnU-Net model accurately identified the border of the liver with a mean overlap accuracy of 97.5% compared with ground truth annotations. Our calculated volume estimates achieved a mean percent error of 1.92% ± 1.54% on the test set. Conclusions Precise volume estimation of livers from CT scans is accurate using a nnU-Net deep learning architecture. Appropriately deployed, a nnU-Net algorithm is accurate and quick, making it suitable for incorporation into the pretransplant clinical decision-making workflow.
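The step of integrating per-voxel liver predictions into a total organ volume is simple bookkeeping: count segmented voxels and multiply by the physical voxel volume. A hedged generic sketch (the toy mask and spacing below are invented, not the study's data):

```python
import numpy as np

def liver_volume_ml(mask, spacing_mm):
    """Integrate a binary voxel segmentation into an organ volume in ml."""
    voxel_volume_mm3 = float(np.prod(spacing_mm))
    return mask.sum() * voxel_volume_mm3 / 1000.0  # 1 ml = 1000 mm^3

def percent_error(predicted_ml, ground_truth_ml):
    """Absolute volume error as a percentage of the ground truth."""
    return 100.0 * abs(predicted_ml - ground_truth_ml) / ground_truth_ml

mask = np.zeros((100, 100, 100), dtype=bool)
mask[20:80, 20:80, 20:80] = True                        # toy "liver": 60^3 voxels
vol = liver_volume_ml(mask, spacing_mm=(1.5, 1.5, 2.0))  # 4.5 mm^3 per voxel
print(round(vol, 1))  # 216000 voxels * 4.5 mm^3 = 972.0 ml
```

Anisotropic CT spacing (slice thickness differing from in-plane resolution) is the usual pitfall here: the per-axis spacings must be multiplied, not assumed cubic.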
13
Danieli R, Milano A, Gallo S, Veronese I, Lascialfari A, Indovina L, Botta F, Ferrari M, Cicchetti A, Raspanti D, Cremonesi M. Personalized Dosimetry in Targeted Radiation Therapy: A Look to Methods, Tools and Critical Aspects. J Pers Med 2022; 12:205. [PMID: 35207693] [PMCID: PMC8874397] [DOI: 10.3390/jpm12020205]
Abstract
Targeted radiation therapy (TRT) is a strategy increasingly adopted for the treatment of different types of cancer. The urge for optimization, as stated by the European Council Directive (2013/59/EURATOM), requires the implementation of a personalized dosimetric approach, similar to what already happens in external beam radiation therapy (EBRT). The purpose of this paper is to provide a thorough introduction to the field of personalized dosimetry in TRT, explaining its rationale in the context of optimization and describing the currently available methodologies. After listing the main therapies currently employed, the clinical workflow for the absorbed dose calculation is described, based on works of the most experienced authors in the literature and recent guidelines. Moreover, the widespread software packages for internal dosimetry are presented and critical aspects discussed. Overall, a selection of the most important and recent articles about this topic is provided.
Affiliation(s)
- Rachele Danieli
- Dipartimento di Fisica, Università degli Studi di Pavia, Via Bassi 6, 27100 Pavia, Italy;
- Alessia Milano
- Fondazione Policlinico Universitario A. Gemelli IRCCS, Largo F. Vito 1, 00168 Roma, Italy
- Dipartimento di Diagnostica per Immagini, Radioterapia Oncologica ed Ematologia, Università Cattolica del Sacro Cuore, Largo F. Vito 1, 00168 Roma, Italy
- Salvatore Gallo
- Dipartimento di Fisica “Aldo Pontremoli”, Università degli Studi di Milano, Via Celoria 16, 20133 Milano, Italy
- INFN Sezione di Milano, Via Celoria 16, 20133 Milano, Italy
- Ivan Veronese
- Dipartimento di Fisica “Aldo Pontremoli”, Università degli Studi di Milano, Via Celoria 16, 20133 Milano, Italy
- INFN Sezione di Milano, Via Celoria 16, 20133 Milano, Italy
- Alessandro Lascialfari
- INFN-Pavia Unit, Department of Physics, University of Pavia, Via Bassi 6, 27100 Pavia, Italy
- Luca Indovina
- Fondazione Policlinico Universitario A. Gemelli IRCCS, Largo F. Vito 1, 00168 Roma, Italy
- Francesca Botta
- Medical Physics Unit, European Institute of Oncology IRCCS, Via Giuseppe Ripamonti 435, 20141 Milano, Italy
- Mahila Ferrari
- Medical Physics Unit, European Institute of Oncology IRCCS, Via Giuseppe Ripamonti 435, 20141 Milano, Italy
- Alessandro Cicchetti
- Prostate Cancer Program, Fondazione IRCCS Istituto Nazionale dei Tumori, Via Giacomo Venezian 1, 20133 Milano, Italy
- Davide Raspanti
- Temasinergie S.p.A., Via Marcello Malpighi 120, 48018 Faenza, Italy
- Marta Cremonesi
- Radiation Research Unit, European Institute of Oncology IRCCS, Via Giuseppe Ripamonti 435, 20141 Milano, Italy
14
Tang X, Jafargholi Rangraz E, Heeren R, Coudyzer W, Maleux G, Baete K, Verslype C, Gooding MJ, Deroose CM, Nuyts J. Segmentation-guided multi-modal registration of liver images for dose estimation in SIRT. EJNMMI Phys 2022; 9:3. [PMID: 35076801] [PMCID: PMC8790002] [DOI: 10.1186/s40658-022-00432-8]
Abstract
Purpose Selective internal radiation therapy (SIRT) requires a good liver registration of multi-modality images to obtain precise dose prediction and measurement. This study investigated the feasibility of liver registration of CT and MR images, guided by segmentation of the liver and its landmarks. The influence of the resulting lesion registration on dose estimation was evaluated. Methods The liver segmentation was done with a convolutional neural network (CNN), and the landmarks were segmented manually. Our image-based registration software and its liver-segmentation-guided extension (CNN-guided) were tuned and evaluated with 49 CT and 26 MR images from 20 SIRT patients. Each liver registration was evaluated by the root mean square distance (RMSD) of mean surface distance between manually delineated liver contours and mass center distance between manually delineated landmarks (lesions, clips, etc.). The root mean square of RMSDs (RRMSD) was used to evaluate all liver registrations. The CNN-guided registration was further extended by incorporating landmark segmentations (CNN&LM-guided) to assess the value of additional landmark guidance. To evaluate the influence of segmentation-guided registration on dose estimation, mean dose and volume percentages receiving at least 70 Gy (V70) estimated on the 99mTc-labeled macro-aggregated albumin (99mTc-MAA) SPECT were computed, either based on lesions from the reference 99mTc-MAA CT (reference lesions) or from the registered floating CT or MR images (registered lesions) using the CNN- or CNN&LM-guided algorithms. Results The RRMSD decreased for the floating CTs and MRs by 1.0 mm (11%) and 3.4 mm (34%) using CNN guidance for the image-based registration and by 2.1 mm (26%) and 1.4 mm (21%) using landmark guidance for the CNN-guided registration. 
The quartiles for the relative mean dose difference (the V70 difference) between the reference and registered lesions and their correlations [25th, 75th; r] are as follows: [− 5.5% (− 1.3%), 5.6% (3.4%); 0.97 (0.95)] and [− 12.3% (− 2.1%), 14.8% (2.9%); 0.96 (0.97)] for the CNN&LM- and CNN-guided CT to CT registrations, [− 7.7% (− 6.6%), 7.0% (3.1%); 0.97 (0.90)] and [− 15.1% (− 11.3%), 2.4% (2.5%); 0.91 (0.78)] for the CNN&LM- and CNN-guided MR to CT registrations. Conclusion Guidance by CNN liver segmentations and landmarks markedly improves the performance of the image-based registration. The small mean dose change between the reference and registered lesions demonstrates the feasibility of applying the CNN&LM- or CNN-guided registration to volume-level dose prediction. The CNN&LM- and CNN-guided registrations for CTs can be applied to voxel-level dose prediction according to their small V70 change for most lesions. The CNN-guided MR to CT registration still needs to incorporate landmark guidance for smaller change of voxel-level dose estimation.
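Once lesions are registered onto the dose map, the reported metrics (mean dose and V70) reduce to simple mask statistics. A generic sketch with invented toy values; the registration itself, which is the substance of the paper, is not shown:

```python
import numpy as np

def lesion_dose_metrics(dose_gy, lesion_mask):
    """Mean absorbed dose and V70 (% of lesion voxels receiving >= 70 Gy)."""
    voxels = dose_gy[lesion_mask.astype(bool)]
    mean_dose = float(voxels.mean())
    v70 = 100.0 * np.count_nonzero(voxels >= 70.0) / voxels.size
    return mean_dose, v70

# Toy flattened "dose map" and an all-inclusive lesion mask.
dose = np.array([50., 60., 65., 68., 72., 80., 90., 100., 30., 40.])
mask = np.ones_like(dose, dtype=bool)
print(lesion_dose_metrics(dose, mask))  # → (65.5, 40.0)
```

The comparison in the abstract is then between these metrics evaluated on the reference-CT lesion masks versus the same masks propagated through the registration, which is why registration error shows up directly as a mean-dose or V70 difference.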
15
Automated segmentation of magnetic resonance bone marrow signal: a feasibility study. Pediatr Radiol 2022; 52:1104-1114. [PMID: 35107593] [PMCID: PMC9107442] [DOI: 10.1007/s00247-021-05270-x]
Abstract
BACKGROUND Manual assessment of bone marrow signal is time-consuming and requires meticulous standardisation to secure adequate precision of findings. OBJECTIVE We examined the feasibility of using deep learning for automated segmentation of bone marrow signal in children and adolescents. MATERIALS AND METHODS We selected knee images from 95 whole-body MRI examinations of healthy individuals and of children with chronic non-bacterial osteomyelitis, ages 6-18 years, in a longitudinal prospective multi-centre study cohort. Bone marrow signal on T2-weighted Dixon water-only images was divided into three colour-coded intensity levels: 1 = slightly increased; 2 = mildly increased; 3 = moderately to highly increased, up to fluid-like signal. We trained a convolutional neural network on 85 examinations to perform bone marrow segmentation. Four readers manually segmented a test set of 10 examinations and calculated ground truth using simultaneous truth and performance level estimation (STAPLE). We evaluated model and reader performance using the Dice similarity coefficient and by consensus scoring. RESULTS The consensus score of model performance showed acceptable results for all but one examination. Model performance and reader agreement had the highest scores for level-1 signal (median Dice 0.68) and the lowest scores for level-3 signal (median Dice 0.40), particularly in examinations where this signal was sparse. CONCLUSION It is feasible to develop a deep-learning-based model for automated segmentation of bone marrow signal in children and adolescents. Our model performed poorest for the highest signal intensity in examinations where this signal was sparse. Further improvement requires training on larger and more balanced datasets and validation against ground truth, which should be established by radiologists from several institutions in consensus.
16
Gong Z, Guo C, Guo W, Zhao D, Tan W, Zhou W, Zhang G. A hybrid approach based on deep learning and level set formulation for liver segmentation in CT images. J Appl Clin Med Phys 2021; 23:e13482. [PMID: 34873831] [PMCID: PMC8803306] [DOI: 10.1002/acm2.13482]
Abstract
Accurate liver segmentation is essential for radiation therapy planning of hepatocellular carcinoma and for absorbed dose calculation. However, liver segmentation is a challenging task due to the anatomical variability in both shape and size and the low contrast between the liver and its surrounding organs. We therefore propose a convolutional neural network (CNN) for automated liver segmentation. In our method, fractional differential enhancement is first applied for preprocessing. Subsequently, an initial liver segmentation is obtained using a CNN. Finally, accurate liver segmentation is achieved by the evolution of an active contour model. Experimental results show that the proposed method outperforms existing methods. One hundred fifty CT scans were evaluated in the experiments. For liver segmentation, a Dice coefficient of 95.8%, a true positive rate of 95.1%, a positive predictive value of 93.2%, and a volume difference of 7% were obtained. These evaluation measures show that the proposed method provides a precise and robust segmentation estimate, which can also assist the manual liver segmentation task.
Affiliation(s)
- Zhaoxuan Gong
- School of Computer, Shenyang Aerospace University, Shenyang, China
- Key Laboratory of Intelligent Computing in Medical Image, Ministry of Education, Northeastern University, Shenyang, China
- Cui Guo
- School of Computer, Shenyang Aerospace University, Shenyang, China
- Wei Guo
- School of Computer, Shenyang Aerospace University, Shenyang, China
- Key Laboratory of Intelligent Computing in Medical Image, Ministry of Education, Northeastern University, Shenyang, China
- Dazhe Zhao
- Key Laboratory of Intelligent Computing in Medical Image, Ministry of Education, Northeastern University, Shenyang, China
- Wenjun Tan
- Key Laboratory of Intelligent Computing in Medical Image, Ministry of Education, Northeastern University, Shenyang, China
- Wei Zhou
- School of Computer, Shenyang Aerospace University, Shenyang, China
- Guodong Zhang
- School of Computer, Shenyang Aerospace University, Shenyang, China
- Key Laboratory of Intelligent Computing in Medical Image, Ministry of Education, Northeastern University, Shenyang, China
17
Gross M, Spektor M, Jaffe A, Kucukkaya AS, Iseke S, Haider SP, Strazzabosco M, Chapiro J, Onofrey JA. Improved performance and consistency of deep learning 3D liver segmentation with heterogeneous cancer stages in magnetic resonance imaging. PLoS One 2021; 16:e0260630. [PMID: 34852007] [PMCID: PMC8635384] [DOI: 10.1371/journal.pone.0260630]
Abstract
PURPOSE Accurate liver segmentation is key for volumetry assessment to guide treatment decisions. Moreover, it is an important pre-processing step for cancer detection algorithms. Liver segmentation can be especially challenging in patients with cancer-related tissue changes and shape deformation. The aim of this study was to assess the ability of state-of-the-art deep learning 3D liver segmentation algorithms to generalize across all Barcelona Clinic Liver Cancer (BCLC) liver cancer stages. METHODS This retrospective study included patients from an institutional database who had arterial-phase T1-weighted magnetic resonance images with corresponding manual liver segmentations. The data were split 70/15/15% into training/validation/testing, each proportionally equal across BCLC stages. Two 3D convolutional neural networks were trained using identical U-net-derived architectures with equally sized training datasets: one spanning all BCLC stages ("All-Stage-Net": AS-Net), and one limited to early and intermediate BCLC stages ("Early-Intermediate-Stage-Net": EIS-Net). Segmentation accuracy was evaluated by the Dice Similarity Coefficient (DSC) on a dataset spanning all BCLC stages, and a Wilcoxon signed-rank test was used for pairwise comparisons. RESULTS 219 subjects met the inclusion criteria (170 males, 49 females, 62.8±9.1 years) across all BCLC stages. Both networks were trained using 129 subjects: AS-Net training comprised 19, 74, 18, 8, and 10 BCLC 0, A, B, C, and D patients, respectively; EIS-Net training comprised 21, 86, and 22 BCLC 0, A, and B patients, respectively. DSCs (mean±SD) were 0.954±0.018 and 0.946±0.032 for the AS-Net and EIS-Net (p<0.001), respectively. The AS-Net (0.956±0.014) significantly outperformed the EIS-Net (0.941±0.038) on advanced BCLC stages (p<0.001) and yielded similarly good segmentation performance on early and intermediate stages (AS-Net: 0.952±0.021; EIS-Net: 0.949±0.027; p = 0.107).
CONCLUSION To ensure robust segmentation performance across cancer stages that is independent of liver shape deformation and tumor burden, it is critical to train deep learning models on heterogeneous imaging data spanning all BCLC stages.
Affiliation(s)
- Moritz Gross
- Department of Radiology and Biomedical Imaging, Yale University School of Medicine, New Haven, Connecticut, United States of America
- Charité Center for Diagnostic and Interventional Radiology, Charité—Universitätsmedizin Berlin, Berlin, Germany
- Michael Spektor
- Department of Radiology and Biomedical Imaging, Yale University School of Medicine, New Haven, Connecticut, United States of America
- Ariel Jaffe
- Department of Internal Medicine, Yale University School of Medicine, New Haven, Connecticut, United States of America
- Ahmet S. Kucukkaya
- Department of Radiology and Biomedical Imaging, Yale University School of Medicine, New Haven, Connecticut, United States of America
- Charité Center for Diagnostic and Interventional Radiology, Charité—Universitätsmedizin Berlin, Berlin, Germany
- Simon Iseke
- Department of Radiology and Biomedical Imaging, Yale University School of Medicine, New Haven, Connecticut, United States of America
- Department of Diagnostic and Interventional Radiology, Pediatric Radiology and Neuroradiology, Rostock University Medical Center, Rostock, Germany
- Stefan P. Haider
- Department of Radiology and Biomedical Imaging, Yale University School of Medicine, New Haven, Connecticut, United States of America
- Department of Otorhinolaryngology, University Hospital of Ludwig Maximilians Universität München, Munich, Germany
- Mario Strazzabosco
- Department of Internal Medicine, Yale University School of Medicine, New Haven, Connecticut, United States of America
- Julius Chapiro
- Department of Radiology and Biomedical Imaging, Yale University School of Medicine, New Haven, Connecticut, United States of America
- John A. Onofrey
- Department of Radiology and Biomedical Imaging, Yale University School of Medicine, New Haven, Connecticut, United States of America
- Department of Urology, Yale University School of Medicine, New Haven, Connecticut, United States of America
- Department of Biomedical Engineering, Yale University, New Haven, Connecticut, United States of America
18
Xu P, Kim K, Koh J, Wu D, Lee YR, Park SY, Tak WY, Liu H, Li Q. Efficient knowledge distillation for liver CT segmentation using growing assistant network. Phys Med Biol 2021; 66. [PMID: 34768246] [DOI: 10.1088/1361-6560/ac3935]
Abstract
Segmentation is widely used in diagnosis, lesion detection, and surgery planning. Although deep learning (DL)-based segmentation methods currently outperform traditional methods, most DL-based segmentation models are computationally expensive and memory inefficient, making them unsuitable for intervention in liver surgery. A simple way to address this issue is to make the segmentation model very small for fast inference; however, there is a trade-off between model size and performance. In this paper, we propose a DL-based real-time 3D liver CT segmentation method in which knowledge distillation (KD), i.e., knowledge transfer from a teacher to a student model, is used to compress the model while preserving its performance. Because knowledge transfer is known to be inefficient when the disparity between teacher and student model sizes is large, we propose a growing teacher assistant network (GTAN) that gradually learns the knowledge without extra computational cost and can efficiently transfer knowledge even across a large gap in model sizes. In our results, the Dice similarity coefficient of the student model with KD improved by 1.2% (85.9% to 87.1%) compared to the student model without KD, a performance similar to that of the teacher model while using only 8% (100k) of its parameters. Furthermore, with a student model of 2% (30k) of the parameters, the proposed GTAN-based model improved the Dice coefficient by about 2% compared to the student model without KD, with an inference time of 13 ms per 3D image. The proposed method therefore has great potential for intervention in liver surgery as well as many other real-time applications.
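The knowledge-distillation objective this paper builds on (hard-label cross-entropy plus a temperature-softened teacher term, after Hinton et al.) can be sketched as below. This is the generic KD loss, not the GTAN method itself, and the temperature and alpha values are illustrative assumptions:

```python
import numpy as np

def softmax(z, t=1.0):
    """Temperature-scaled softmax over the last axis."""
    z = np.asarray(z, dtype=float) / t
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def distillation_loss(student_logits, teacher_logits, labels,
                      temperature=2.0, alpha=0.5):
    """Hard-label cross-entropy blended with a softened teacher KL term."""
    p_t = softmax(teacher_logits, temperature)
    p_s = softmax(student_logits, temperature)
    # KL(teacher || student) on the softened distributions, per example.
    kl = np.mean(np.sum(p_t * (np.log(p_t) - np.log(p_s)), axis=-1))
    # Standard cross-entropy against the hard labels (temperature 1).
    ce = -np.mean(np.log(softmax(student_logits)[np.arange(len(labels)), labels]))
    # The T^2 factor keeps soft-target gradients comparable across temperatures.
    return alpha * ce + (1 - alpha) * temperature ** 2 * kl
```

For segmentation, the same loss is applied per voxel over the class probability maps; the paper's contribution is interposing gradually grown assistant networks so this transfer stays effective when the teacher-student size gap is large.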
Affiliation(s)
- Pengcheng Xu
- College of Optical Science and Engineering, Zhejiang University, Hangzhou, People's Republic of China
- Massachusetts General Hospital and Harvard Medical School, Radiology Department, 55 Fruit Street, Boston, MA 02114, United States of America
- Kyungsang Kim
- Massachusetts General Hospital and Harvard Medical School, Radiology Department, 55 Fruit Street, Boston, MA 02114, United States of America
- Jeongwan Koh
- Massachusetts General Hospital and Harvard Medical School, Radiology Department, 55 Fruit Street, Boston, MA 02114, United States of America
- Dufan Wu
- Massachusetts General Hospital and Harvard Medical School, Radiology Department, 55 Fruit Street, Boston, MA 02114, United States of America
- Yu Rim Lee
- Department of Internal Medicine, School of Medicine, Kyungpook National University, Daegu, Republic of Korea
- Soo Young Park
- Department of Internal Medicine, School of Medicine, Kyungpook National University, Daegu, Republic of Korea
- Won Young Tak
- Department of Internal Medicine, School of Medicine, Kyungpook National University, Daegu, Republic of Korea
- Huafeng Liu
- College of Optical Science and Engineering, Zhejiang University, Hangzhou, People's Republic of China
- Quanzheng Li
- Massachusetts General Hospital and Harvard Medical School, Radiology Department, 55 Fruit Street, Boston, MA 02114, United States of America
19
Brosch-Lenz J, Yousefirizi F, Zukotynski K, Beauregard JM, Gaudet V, Saboury B, Rahmim A, Uribe C. Role of Artificial Intelligence in Theranostics: Toward Routine Personalized Radiopharmaceutical Therapies. PET Clin 2021; 16:627-641. [PMID: 34537133] [DOI: 10.1016/j.cpet.2021.06.002]
Abstract
We highlight emerging uses of artificial intelligence (AI) in the field of theranostics, focusing on its significant potential to enable routine and reliable personalization of radiopharmaceutical therapies (RPTs). Personalized RPTs require patient-specific dosimetry calculations to accompany therapy. Additionally, we discuss the potential to exploit biological information from diagnostic and therapeutic molecular images to derive biomarkers for absorbed dose and outcome prediction, toward the personalization of therapies. We aim to motivate the nuclear medicine community to expand and align efforts to make routine and reliable personalization of RPTs a reality.
Affiliation(s)
- Julia Brosch-Lenz
- Department of Integrative Oncology, BC Cancer Research Institute, 675 West 10th Avenue, Vancouver, British Columbia V5Z 1L3, Canada
- Fereshteh Yousefirizi
- Department of Integrative Oncology, BC Cancer Research Institute, 675 West 10th Avenue, Vancouver, British Columbia V5Z 1L3, Canada
- Katherine Zukotynski
- Department of Medicine and Radiology, McMaster University, 1200 Main Street West, Hamilton, Ontario L9G 4X5, Canada
- Jean-Mathieu Beauregard
- Department of Radiology and Nuclear Medicine, Cancer Research Centre, Université Laval, 2325 Rue de l'Université, Québec City, Quebec G1V 0A6, Canada; Department of Medical Imaging, Research Center (Oncology Axis), CHU de Québec - Université Laval, 2325 Rue de l'Université, Québec City, Quebec G1V 0A6, Canada
- Vincent Gaudet
- Department of Electrical and Computer Engineering, University of Waterloo, 200 University Avenue West, Waterloo, Ontario N2L 3G1, Canada
- Babak Saboury
- Department of Radiology and Imaging Sciences, Clinical Center, National Institutes of Health, 9000 Rockville Pike, Bethesda, MD 20892, USA; Department of Computer Science and Electrical Engineering, University of Maryland Baltimore County, Baltimore, MD, USA; Department of Radiology, Hospital of the University of Pennsylvania, 3400 Spruce Street, Philadelphia, PA 19104, USA
- Arman Rahmim
- Department of Integrative Oncology, BC Cancer Research Institute, 675 West 10th Avenue, Vancouver, British Columbia V5Z 1L3, Canada; Department of Radiology, University of British Columbia, 11th Floor, 2775 Laurel St, Vancouver, British Columbia V5Z 1M9, Canada; Department of Physics, University of British Columbia, 325 - 6224 Agricultural Road, Vancouver, British Columbia V6T 1Z1, Canada
- Carlos Uribe
- Department of Radiology, University of British Columbia, 11th Floor, 2775 Laurel St, Vancouver, British Columbia V5Z 1M9, Canada; Department of Functional Imaging, BC Cancer, 675 West 10th Avenue, Vancouver, British Columbia V5Z 1L3, Canada
20
Abstract
Artificial intelligence is poised to revolutionize medical imaging. It takes advantage of the high-dimensional quantitative features present in medical images that may not be fully appreciated by humans. Artificial intelligence has the potential to facilitate automatic organ segmentation, disease detection and characterization, and prediction of disease recurrence. This article reviews the current status of artificial intelligence in liver imaging and discusses the opportunities and challenges in clinical implementation.
21
Xiang K, Jiang B, Shang D. The overview of the deep learning integrated into the medical imaging of liver: a review. Hepatol Int 2021; 15:868-880. [PMID: 34264509] [DOI: 10.1007/s12072-021-10229-z]
Abstract
Deep learning (DL) is a recently developed artificial intelligence method that can be integrated into numerous fields. For the imaging diagnosis of liver disease, several remarkable outcomes have already been achieved with DL. This advanced algorithm contributes to various stages of image processing, such as liver segmentation, lesion delineation, disease classification, and process optimization. DL-optimized imaging diagnosis shows broad promise as an alternative to pathological biopsy, given its advantages of convenience, safety, and low cost. In this paper, we review representative published DL-related hepatic imaging work, describe the current state of this emerging technology in medical liver imaging, and explore future directions for DL development.
Affiliation(s)
- Kailai Xiang
- Department of General Surgery, First Affiliated Hospital of Dalian Medical University, Dalian, 116011, Liaoning, China; Clinical Laboratory of Integrative Medicine, First Affiliated Hospital of Dalian Medical University, Dalian, 116011, Liaoning, China
| | - Baihui Jiang
- Department of Ophthalmology, First Affiliated Hospital of Dalian Medical University, Dalian, 116011, Liaoning, China
| | - Dong Shang
- Department of General Surgery, First Affiliated Hospital of Dalian Medical University, Dalian, 116011, Liaoning, China; Clinical Laboratory of Integrative Medicine, First Affiliated Hospital of Dalian Medical University, Dalian, 116011, Liaoning, China.
|
22
|
Kim S, Lee P, Oh KT, Byun MS, Yi D, Lee JH, Kim YK, Ye BS, Yun MJ, Lee DY, Jeong Y. Deep learning-based amyloid PET positivity classification model in the Alzheimer's disease continuum by using 2-[18F]FDG PET. EJNMMI Res 2021; 11:56. [PMID: 34114091 PMCID: PMC8192639 DOI: 10.1186/s13550-021-00798-3] [Citation(s) in RCA: 6] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/11/2021] [Accepted: 06/02/2021] [Indexed: 12/14/2022] Open
Abstract
BACKGROUND Considering the limited accessibility of amyloid positron emission tomography (PET) in patients with dementia, we proposed a deep learning (DL)-based amyloid PET positivity classification model from PET images with 2-deoxy-2-[fluorine-18]fluoro-D-glucose (2-[18F]FDG). METHODS We used 2-[18F]FDG PET datasets from the Alzheimer's Disease Neuroimaging Initiative and the Korean Brain Aging Study for the Early Diagnosis and Prediction of Alzheimer's Disease for model development. Moreover, we used an independent dataset from another hospital. A 2.5D deep learning architecture was constructed using 291 submodules, with images along the three axes as input. We conducted a voxel-wise analysis to assess the regions with substantial differences in glucose metabolism between the amyloid PET-positive and PET-negative participants, which facilitated an understanding of the deep model's classifications. In addition, we compared these regions with the classification probabilities from the submodules. RESULTS There were 686 of 1433 (47.9%) amyloid PET-positive participants in the training and internal validation datasets, and 50 of 100 (50%) in the external validation dataset. Over 50 iterations of model training and validation, the model achieved an AUC of 0.811 (95% confidence interval (CI) 0.803-0.819) and 0.798 (95% CI 0.789-0.807) on the internal and external validation datasets, respectively. The area under the curve (AUC) was 0.860 when the model with the highest internal value (0.864) was tested on the external validation dataset. Moreover, it had 75.0% accuracy, 76.0% sensitivity, 74.0% specificity, and a 75.0% F1-score. We found an overlap with regions within the default mode network, which generated high classification values.
CONCLUSION The proposed model, based on 2-[18F]FDG PET imaging data and a DL framework, might successfully classify amyloid PET positivity in clinical practice without performing amyloid PET, which has limited accessibility.
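The "2.5D" input scheme described above feeds a network three orthogonal 2D planes through a voxel instead of the full 3D volume. The paper's exact architecture is not reproduced here; the following is a minimal sketch of how such three-axis slices might be extracted (function name and interface are illustrative, not from the paper):

```python
import numpy as np

def extract_2p5d_slices(volume, center):
    """Extract the three orthogonal planes (axial, coronal, sagittal)
    through `center` from a 3D volume -- a common 2.5D input scheme.

    volume -- 3D array indexed (z, y, x)
    center -- (z, y, x) voxel coordinates of the point of interest
    Returns a list of three 2D arrays.
    """
    z, y, x = center
    axial = volume[z, :, :]      # x-y plane at depth z
    coronal = volume[:, y, :]    # x-z plane at row y
    sagittal = volume[:, :, x]   # y-z plane at column x
    return [axial, coronal, sagittal]
```

Each plane can then be passed to a 2D submodule, combining 3D spatial context with the efficiency of 2D convolutions.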
Affiliation(s)
- Suhong Kim
- Graduate School of Medical Science and Engineering, Korea Advanced Institute of Science and Technology (KAIST), Daejeon, Republic of Korea
- Korea Advanced Institute of Science and Technology (KAIST), KI for Health Science Technology, Daejeon, Republic of Korea
| | - Peter Lee
- Korea Advanced Institute of Science and Technology (KAIST), KI for Health Science Technology, Daejeon, Republic of Korea
| | - Kyeong Taek Oh
- Department of Medical Engineering, Yonsei University College of Medicine, Seoul, Republic of Korea
| | - Min Soo Byun
- Department of Neuropsychiatry, Seoul National University Bundang Hospital, Seongnam, Republic of Korea
| | - Dahyun Yi
- Institute of Human Behavioral Medicine, Medical Research Center, Seoul National University, Seoul, Republic of Korea
| | - Jun Ho Lee
- Department of Neuropsychiatry, National Center for Mental Health, Seoul, Republic of Korea
| | - Yu Kyeong Kim
- Department of Nuclear Medicine, SMG-SNU Boramae Medical Center, Seoul, Republic of Korea
| | - Byoung Seok Ye
- Department of Neurology, Yonsei University College of Medicine, Seoul, Republic of Korea
| | - Mi Jin Yun
- Department of Nuclear Medicine, Yonsei University College of Medicine, 50-1 Yonsei-ro, Seodaemun-gu, Seoul, 03722, Republic of Korea.
| | - Dong Young Lee
- Department of Neuropsychiatry, National Center for Mental Health, Seoul, Republic of Korea.
- Department of Psychiatry, Seoul National University College of Medicine, 101 Daehak-ro, Joungno-gu, Seoul, 03080, Republic of Korea.
- Department of Neuropsychiatry, Seoul National University Hospital, Seoul, Republic of Korea.
| | - Yong Jeong
- Graduate School of Medical Science and Engineering, Korea Advanced Institute of Science and Technology (KAIST), Daejeon, Republic of Korea.
- Department of Bio and Brain Engineering, Korea Advanced Institute of Science and Technology (KAIST), 291 Daehak-ro, Yuseong-gu, Daejeon, 34141, Republic of Korea.
- Korea Advanced Institute of Science and Technology (KAIST), KI for Health Science Technology, Daejeon, Republic of Korea.
|
23
|
Song L, Wang H, Wang ZJ. Bridging the Gap between 2D and 3D Contexts in CT Volume for Liver and Tumor Segmentation. IEEE J Biomed Health Inform 2021; 25:3450-3459. [PMID: 33905339 DOI: 10.1109/jbhi.2021.3075752] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Abstract] [MESH Headings] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/08/2022]
Abstract
Automatic liver and tumor segmentation remains a challenging task that requires exploring both the 2D and 3D contexts in a CT volume. Existing methods either focus only on the 2D context by treating the CT volume as many independent image slices (ignoring the useful information between adjacent slices), or explore only the 3D context contained in many small voxels (damaging the spatial detail within each slice). Either way, context exploration for automatic liver and tumor segmentation remains inadequate. In this paper, we propose a novel full-context convolutional neural network to bridge the gap between 2D and 3D contexts. The proposed network can utilize the temporal information along the Z axis of the CT volume while retaining the spatial detail within each slice. Specifically, a 2D spatial network for intra-slice feature extraction and a 3D temporal network for inter-slice feature extraction are constructed separately and then guided by a squeeze-and-excitation layer that allows the flow of 2D context and 3D temporal information. To address the severe class imbalance in CT volumes while improving segmentation performance, a loss function combining weighted cross-entropy and the Jaccard distance is proposed. During training, the 2D and 3D contexts are learned jointly in an end-to-end manner. The proposed network achieves competitive results on the Liver Tumor Segmentation Challenge (LiTS) and 3D-IRCADb datasets, offering a promising new paradigm for exploring context in liver and tumor segmentation.
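The loss described above combines weighted cross-entropy (to counter the rarity of tumor voxels) with a Jaccard distance term (to directly optimize overlap). The authors' implementation is not given here; the following is a minimal NumPy sketch of one plausible form of such a combined loss (the foreground weight `w_fg` and the soft-Jaccard formulation are assumptions, not taken from the paper):

```python
import numpy as np

def weighted_ce_jaccard_loss(probs, labels, w_fg=2.0, eps=1e-7):
    """Combined loss: weighted cross-entropy + soft Jaccard distance.

    probs  -- predicted foreground probabilities, shape (N,)
    labels -- binary ground truth, shape (N,)
    w_fg   -- extra weight on foreground voxels to counter class imbalance
    """
    probs = np.clip(probs, eps, 1.0 - eps)
    # Weighted binary cross-entropy: up-weight the rare tumor class.
    weights = np.where(labels == 1, w_fg, 1.0)
    ce = -np.mean(weights * (labels * np.log(probs)
                             + (1.0 - labels) * np.log(1.0 - probs)))
    # Soft Jaccard distance: 1 - |A∩B| / |A∪B|, computed on probabilities.
    inter = np.sum(probs * labels)
    union = np.sum(probs) + np.sum(labels) - inter
    jaccard_dist = 1.0 - (inter + eps) / (union + eps)
    return ce + jaccard_dist
```

The overlap term keeps gradients meaningful even when foreground voxels are a tiny fraction of the volume, which plain cross-entropy handles poorly.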
|
24
|
Arabi H, AkhavanAllaf A, Sanaat A, Shiri I, Zaidi H. The promise of artificial intelligence and deep learning in PET and SPECT imaging. Phys Med 2021; 83:122-137. [DOI: 10.1016/j.ejmp.2021.03.008] [Citation(s) in RCA: 84] [Impact Index Per Article: 28.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 12/27/2020] [Revised: 02/18/2021] [Accepted: 03/03/2021] [Indexed: 02/06/2023] Open
|
25
|
Arabi H, Zaidi H. Applications of artificial intelligence and deep learning in molecular imaging and radiotherapy. Eur J Hybrid Imaging 2020; 4:17. [PMID: 34191161 PMCID: PMC8218135 DOI: 10.1186/s41824-020-00086-8] [Citation(s) in RCA: 42] [Impact Index Per Article: 10.5] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/15/2020] [Accepted: 08/10/2020] [Indexed: 12/22/2022] Open
Abstract
This brief review summarizes the major applications of artificial intelligence (AI), in particular deep learning approaches, in molecular imaging and radiation therapy research. To this end, the applications of AI in five generic fields of molecular imaging and radiation therapy are discussed: PET instrumentation design; PET image reconstruction, quantification, and segmentation; image denoising (low-dose imaging); radiation dosimetry and computer-aided diagnosis; and outcome prediction. The review briefly covers the fundamental concepts of AI and deep learning, followed by a presentation of seminal achievements and the challenges facing their adoption in the clinical setting.
Affiliation(s)
- Hossein Arabi
- Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, CH-1211, Geneva 4, Switzerland
| | - Habib Zaidi
- Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, CH-1211, Geneva 4, Switzerland.
- Geneva University Neurocenter, Geneva University, CH-1205, Geneva, Switzerland.
- Department of Nuclear Medicine and Molecular Imaging, University of Groningen, University Medical Center Groningen, 9700, Groningen, RB, Netherlands.
- Department of Nuclear Medicine, University of Southern Denmark, 500, Odense, Denmark.
|
26
|
Jafargholi Rangraz E, Tang X, Van Laeken C, Maleux G, Dekervel J, Van Cutsem E, Verslype C, Baete K, Nuyts J, Deroose CM. Quantitative comparison of pre-treatment predictive and post-treatment measured dosimetry for selective internal radiation therapy using cone-beam CT for tumor and liver perfusion territory definition. EJNMMI Res 2020; 10:94. [PMID: 32797332 PMCID: PMC7427681 DOI: 10.1186/s13550-020-00675-5] [Citation(s) in RCA: 6] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/05/2020] [Accepted: 07/17/2020] [Indexed: 11/21/2022] Open
Abstract
Background Selective internal radiation therapy (SIRT) is a promising treatment for unresectable hepatic malignancies. Predictive dose calculation based on a pre-treatment simulation using 99mTc-labeled macro-aggregated albumin (99mTc-MAA) is considered a potential tool for patient-specific treatment planning. Post-treatment dose measurement is mainly performed to confirm the planned absorbed dose to the tumor and non-tumor liver volumes. This study compared the predicted and measured absorbed dose distributions. Methods Thirty-one patients (67 tumors) treated by SIRT with resin microspheres were analyzed. Predicted and delivered absorbed doses were calculated using 99mTc-MAA-SPECT and 90Y-TOF-PET imaging, respectively. The voxel-level dose distribution was derived using the local deposition model. Liver perfusion territories and tumors were delineated on contrast-enhanced CBCT images acquired during the 99mTc-MAA work-up. Several dose-volume histogram (DVH) parameters, together with the mean dose to the liver perfusion territories and the non-tumoral and tumoral compartments, were evaluated. Results A strong correlation between the predicted and measured mean dose to the non-tumoral volume was observed (r = 0.937). The ratio of measured to predicted mean dose to this volume had first, second, and third quartiles of 0.83, 1.05, and 1.25, respectively. The difference between the measured and predicted mean dose did not exceed 11 Gy. The correlation between the predicted and measured mean tumor dose was moderate (r = 0.623), with a mean difference of -9.3 Gy. The ratio of measured to predicted mean tumor dose had a median of 1.01, with first and third quartiles of 0.58 and 1.59, respectively. Our results suggest that 99mTc-MAA-based dosimetry could predict under- or over-dosing of the non-tumoral liver parenchyma in almost all cases.
For more than two thirds of the tumors, the predicted absorbed dose correctly indicated either good tumor dose coverage or under-dosing of the tumor. Conclusion Our results highlight the value of 99mTc-MAA-based dose estimation for predicting non-tumoral liver irradiation, which can be applied to prescribe an optimized activity aimed at avoiding liver toxicity. Compared to non-tumoral tissue, poorer agreement between predicted and measured absorbed dose is observed for tumors.
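The agreement statistics reported above (a correlation coefficient plus quartiles of the measured/predicted dose ratio) are standard summaries of paired dosimetry data. The authors' analysis code is not shown here; the following is a minimal sketch of how such summaries might be computed (function name and return format are illustrative assumptions):

```python
import numpy as np

def dose_agreement(predicted, measured):
    """Summarize agreement between predicted and measured mean doses:
    Pearson correlation plus quartiles of the measured/predicted ratio.

    predicted, measured -- paired per-volume mean doses (e.g. in Gy)
    """
    predicted = np.asarray(predicted, dtype=float)
    measured = np.asarray(measured, dtype=float)
    # Pearson correlation between the paired dose estimates.
    r = np.corrcoef(predicted, measured)[0, 1]
    # Ratio > 1 means the delivered dose exceeded the prediction.
    ratio = measured / predicted
    q1, q2, q3 = np.percentile(ratio, [25, 50, 75])
    return {"pearson_r": r, "ratio_quartiles": (q1, q2, q3)}
```

A median ratio near 1 with narrow quartiles indicates the pre-treatment simulation tracks the delivered dose well; wide quartiles (as reported for the tumors) indicate poorer per-case agreement even when the median is unbiased.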
Affiliation(s)
- Esmaeel Jafargholi Rangraz
- Nuclear Medicine, University Hospitals Leuven, Nuclear Medicine and Molecular Imaging, Department of Imaging & Pathology, Leuven, Belgium.
| | - Xikai Tang
- Nuclear Medicine, University Hospitals Leuven, Nuclear Medicine and Molecular Imaging, Department of Imaging & Pathology, Leuven, Belgium
| | - Geert Maleux
- Radiology Section, University Hospitals Leuven, Department of Imaging and Pathology, Leuven, Belgium
| | - Jeroen Dekervel
- Digestive Oncology, University Hospitals Leuven, Leuven, Belgium
| | - Eric Van Cutsem
- Digestive Oncology, University Hospitals Leuven, Leuven, Belgium
| | - Chris Verslype
- Digestive Oncology, University Hospitals Leuven, Leuven, Belgium
| | - Kristof Baete
- Nuclear Medicine, University Hospitals Leuven, Nuclear Medicine and Molecular Imaging, Department of Imaging & Pathology, Leuven, Belgium
| | - Johan Nuyts
- Nuclear Medicine, University Hospitals Leuven, Nuclear Medicine and Molecular Imaging, Department of Imaging & Pathology, Leuven, Belgium
| | - Christophe M Deroose
- Nuclear Medicine, University Hospitals Leuven, Nuclear Medicine and Molecular Imaging, Department of Imaging & Pathology, Leuven, Belgium
|