1. Salimi Y, Mansouri Z, Shiri I, Mainta I, Zaidi H. Deep Learning-Powered CT-Less Multitracer Organ Segmentation From PET Images: A Solution for Unreliable CT Segmentation in PET/CT Imaging. Clin Nucl Med 2025;50:289-300. PMID: 39883026. DOI: 10.1097/rlu.0000000000005685.
Abstract
PURPOSE The common approach to organ segmentation in hybrid imaging relies on coregistered CT (CTAC) images. This method, however, presents several limitations in real clinical workflows, where mismatch between PET and CT images is very common. Moreover, low-dose CTAC images have poor quality, challenging the segmentation task. Recent advances in CT-less PET imaging further highlight the need for an effective PET organ segmentation pipeline that does not rely on CT images. The goal of this study was therefore to develop a CT-less multitracer PET segmentation framework. PATIENTS AND METHODS We collected 2062 PET/CT images from multiple scanners. Patients were injected with either 18F-FDG (1487) or 68Ga-PSMA (575). PET/CT images with any mismatch between the PET and CT components were detected through visual assessment and excluded from the study. Multiple organs were delineated on the CT components using previously trained, in-house developed nnU-Net models. The segmentation masks were resampled to the coregistered PET images and used to train 4 deep learning models with different inputs: noncorrected PET (PET-NC) and attenuation- and scatter-corrected PET (PET-ASC) for 18F-FDG (tasks 1 and 2, respectively, using 22 organs), and PET-NC and PET-ASC for 68Ga tracers (tasks 3 and 4, respectively, using 15 organs). Model performance was evaluated in terms of Dice coefficient, Jaccard index, and segment volume difference. RESULTS The average Dice coefficient over all organs was 0.81 ± 0.15, 0.82 ± 0.14, 0.77 ± 0.17, and 0.79 ± 0.16 for tasks 1, 2, 3, and 4, respectively. PET-ASC models outperformed PET-NC models (P < 0.05) for most organs. The highest Dice values were achieved for the brain (0.93 to 0.96 across all 4 tasks), whereas the lowest values were achieved for small organs such as the adrenal glands. The trained models also showed robust performance on noisy dynamic images.
CONCLUSIONS Deep learning models allow high-performance multiorgan segmentation for 2 popular PET tracers without the use of CT information. These models may overcome the limitations of CT-based segmentation in PET/CT image quantification, kinetic modeling, radiomics analysis, dosimetry, or any other task that requires organ segmentation masks.
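The Dice coefficient and Jaccard index used to evaluate these models can be computed directly from a pair of binary organ masks. A minimal NumPy sketch (the toy masks and function name are illustrative, not from the paper):

```python
import numpy as np

def dice_jaccard(pred: np.ndarray, ref: np.ndarray) -> tuple[float, float]:
    """Dice coefficient and Jaccard index for two binary segmentation masks."""
    pred, ref = pred.astype(bool), ref.astype(bool)
    inter = np.logical_and(pred, ref).sum()
    union = np.logical_or(pred, ref).sum()
    dice = 2.0 * inter / (pred.sum() + ref.sum())
    jaccard = inter / union
    return float(dice), float(jaccard)

# toy 1D "masks": overlap of 2 voxels, 3 voxels in each mask, union of 4
pred = np.array([1, 1, 1, 0, 0])
ref = np.array([0, 1, 1, 1, 0])
d, j = dice_jaccard(pred, ref)
```

The same functions extend unchanged to 3D organ masks, since the reductions operate on flattened arrays.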
Affiliation(s)
- Yazdan Salimi
- Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, Geneva, Switzerland
- Zahra Mansouri
- Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, Geneva, Switzerland
- Isaac Shiri
- Department of Cardiology, Inselspital, Bern University Hospital, University of Bern, Bern, Switzerland
- Ismini Mainta
- Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, Geneva, Switzerland
2. Alblas D, Suk J, Brune C, Yeung KK, Wolterink JM. SIRE: Scale-invariant, rotation-equivariant estimation of artery orientations using graph neural networks. Med Image Anal 2025;101:103467. PMID: 39842325. DOI: 10.1016/j.media.2025.103467.
Abstract
The orientation of a blood vessel as visualized in 3D medical images is an important descriptor of its geometry that can be used for centerline extraction and subsequent segmentation, labeling, and visualization. Blood vessels appear at multiple scales and levels of tortuosity, and determining the exact orientation of a vessel is a challenging problem. Recent works have used 3D convolutional neural networks (CNNs) for this purpose, but CNNs are sensitive to variations in vessel size and orientation. We present SIRE: a scale-invariant, rotation-equivariant estimator for local vessel orientation. SIRE is modular and has strongly generalizing properties due to its symmetry preservation. SIRE consists of a gauge-equivariant mesh CNN (GEM-CNN) that operates in parallel on multiple nested spherical meshes of different sizes. The features on each mesh are a projection of the image intensities within the corresponding sphere. These features are intrinsic to the sphere and, in combination with the gauge-equivariant properties of the GEM-CNN, lead to SO(3) rotation equivariance. Approximate scale invariance is achieved by weight sharing and the use of a symmetric maximum aggregation function to combine predictions at multiple scales. Hence, SIRE can be trained with arbitrarily oriented vessels of varying radii and generalize to vessels with a wide range of calibres and tortuosity. We demonstrate the efficacy of SIRE using three datasets containing vessels of varying scales: the vascular model repository (VMR), the ASOCA coronary artery set, and an in-house set of abdominal aortic aneurysms (AAAs). We embed SIRE in a centerline tracker which accurately tracks large-calibre AAAs, regardless of the data SIRE is trained with. Moreover, a tracker can use SIRE to track small-calibre tortuous coronary arteries, even when trained only with large-calibre, non-tortuous AAAs. Additional experiments verify the rotation-equivariant and scale-invariant properties of SIRE.
In conclusion, by incorporating SO(3) and scale symmetries, SIRE can determine orientations of vessels outside the training domain, offering a robust and data-efficient solution for geometric analysis of blood vessels in 3D medical images.
Affiliation(s)
- Dieuwertje Alblas
- Department of Applied Mathematics, Technical Medical Centre, University of Twente, Drienerlolaan 5, 7522 NB Enschede, The Netherlands
- Julian Suk
- Department of Applied Mathematics, Technical Medical Centre, University of Twente, Drienerlolaan 5, 7522 NB Enschede, The Netherlands
- Christoph Brune
- Department of Applied Mathematics, Technical Medical Centre, University of Twente, Drienerlolaan 5, 7522 NB Enschede, The Netherlands
- Kak Khee Yeung
- Amsterdam UMC location Vrije Universiteit Amsterdam, Department of Surgery, De Boelelaan 1117, 1081 HV Amsterdam, The Netherlands; Amsterdam Cardiovascular Sciences, Microcirculation, Amsterdam, The Netherlands
- Jelmer M Wolterink
- Department of Applied Mathematics, Technical Medical Centre, University of Twente, Drienerlolaan 5, 7522 NB Enschede, The Netherlands
3. Zhou J, Shanbhag AD, Han D, Marcinkiewicz AM, Buchwald M, Miller RJH, Killekar A, Manral N, Grodecki K, Geers J, Pieszko K, Yi J, Zhang W, Waechter P, Gransar H, Dey D, Berman DS, Slomka PJ. Automated proximal coronary artery calcium identification using artificial intelligence: advancing cardiovascular risk assessment. Eur Heart J Cardiovasc Imaging 2025;26:471-480. PMID: 39821011. DOI: 10.1093/ehjci/jeaf007.
Abstract
AIMS Identification of proximal coronary artery calcium (CAC) may improve prediction of major adverse cardiac events (MACE) beyond the CAC score, particularly in patients with a low CAC burden. We investigated whether proximal CAC can be detected on gated cardiac CT and whether it carries prognostic significance when identified with artificial intelligence (AI). METHODS AND RESULTS A total of 2016 asymptomatic adults with baseline CAC CT scans from a single site were followed up for MACE for 14 years. An AI algorithm classifying CAC as proximal or non-proximal was created using expert annotations of total and proximal CAC and AI-derived cardiac structures. The algorithm was evaluated for prognostic significance on AI-derived CAC segmentations. In 303 subjects with expert annotations, the classification of proximal vs. non-proximal CAC reached an area under the receiver operating characteristic curve of 0.93 [95% confidence interval (CI) 0.91-0.95]. For prognostic evaluation, in an additional 588 subjects with mild AI-derived CAC scores (CAC score 1-99), AI-detected proximal involvement was associated with worse MACE-free survival (P = 0.008) and a higher risk of MACE when adjusting for CAC score alone [hazard ratio (HR) 2.28, 95% CI 1.16-4.48, P = 0.02] or for CAC score and clinical risk factors (HR 2.12, 95% CI 1.03-4.36, P = 0.04). CONCLUSION The AI algorithm could identify proximal CAC on CAC CT. The proximal location had modest prognostic significance in subjects with mild CAC scores. AI identification of proximal CAC can be integrated into automatic CAC scoring and improves the risk prediction of CAC CT.
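The area under the receiver operating characteristic curve reported for the proximal-vs-non-proximal classifier can be computed as the Mann-Whitney statistic over positive/negative score pairs. A small self-contained sketch (the scores and labels are made-up toy data):

```python
import numpy as np

def auc(scores, labels):
    """Area under the ROC curve via the Mann-Whitney U statistic:
    the fraction of (positive, negative) pairs ranked correctly, ties at 0.5."""
    scores = np.asarray(scores, dtype=float)
    labels = np.asarray(labels, dtype=int)
    pos, neg = scores[labels == 1], scores[labels == 0]
    diff = pos[:, None] - neg[None, :]  # all pairwise score differences
    return float((diff > 0).mean() + 0.5 * (diff == 0).mean())

labels = [1, 1, 0, 0, 1, 0]
scores = [0.9, 0.8, 0.7, 0.3, 0.6, 0.2]
a = auc(scores, labels)  # 8 of 9 pairs correctly ranked
```

This pairwise formulation is equivalent to integrating the empirical ROC curve and is convenient for quick sanity checks of reported AUC values.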
Affiliation(s)
- Jianhang Zhou
- Departments of Medicine (Division of Artificial Intelligence in Medicine), Biomedical Sciences, and Imaging, Cedars-Sinai Medical Center, 6500 Wilshire Boulevard, Los Angeles, CA 90048, USA
- Aakash D Shanbhag
- Departments of Medicine (Division of Artificial Intelligence in Medicine), Biomedical Sciences, and Imaging, Cedars-Sinai Medical Center, 6500 Wilshire Boulevard, Los Angeles, CA 90048, USA
- Signal and Image Processing Institute, Ming Hsieh Department of Electrical and Computer Engineering, University of Southern California, Los Angeles, CA, USA
- Donghee Han
- Departments of Medicine (Division of Artificial Intelligence in Medicine), Biomedical Sciences, and Imaging, Cedars-Sinai Medical Center, 6500 Wilshire Boulevard, Los Angeles, CA 90048, USA
- Anna M Marcinkiewicz
- Departments of Medicine (Division of Artificial Intelligence in Medicine), Biomedical Sciences, and Imaging, Cedars-Sinai Medical Center, 6500 Wilshire Boulevard, Los Angeles, CA 90048, USA
- Mikolaj Buchwald
- Departments of Medicine (Division of Artificial Intelligence in Medicine), Biomedical Sciences, and Imaging, Cedars-Sinai Medical Center, 6500 Wilshire Boulevard, Los Angeles, CA 90048, USA
- Robert J H Miller
- Departments of Medicine (Division of Artificial Intelligence in Medicine), Biomedical Sciences, and Imaging, Cedars-Sinai Medical Center, 6500 Wilshire Boulevard, Los Angeles, CA 90048, USA
- Department of Cardiac Sciences, University of Calgary, Calgary, Alberta, Canada
- Aditya Killekar
- Departments of Medicine (Division of Artificial Intelligence in Medicine), Biomedical Sciences, and Imaging, Cedars-Sinai Medical Center, 6500 Wilshire Boulevard, Los Angeles, CA 90048, USA
- Nipun Manral
- Departments of Medicine (Division of Artificial Intelligence in Medicine), Biomedical Sciences, and Imaging, Cedars-Sinai Medical Center, 6500 Wilshire Boulevard, Los Angeles, CA 90048, USA
- Kajetan Grodecki
- Departments of Medicine (Division of Artificial Intelligence in Medicine), Biomedical Sciences, and Imaging, Cedars-Sinai Medical Center, 6500 Wilshire Boulevard, Los Angeles, CA 90048, USA
- 1st Department of Cardiology, Medical University of Warsaw, Warsaw, Poland
- Jolien Geers
- Departments of Medicine (Division of Artificial Intelligence in Medicine), Biomedical Sciences, and Imaging, Cedars-Sinai Medical Center, 6500 Wilshire Boulevard, Los Angeles, CA 90048, USA
- Department of Cardiology, Centrum voor Hart- en Vaatziekten, Universitair Ziekenhuis Brussel, Vrije Universiteit Brussel, Brussels, Belgium
- Konrad Pieszko
- Departments of Medicine (Division of Artificial Intelligence in Medicine), Biomedical Sciences, and Imaging, Cedars-Sinai Medical Center, 6500 Wilshire Boulevard, Los Angeles, CA 90048, USA
- Department of Interventional Cardiology and Cardiac Surgery, Collegium Medicum, University of Zielona Góra, Zielona Góra, Poland
- Jirong Yi
- Departments of Medicine (Division of Artificial Intelligence in Medicine), Biomedical Sciences, and Imaging, Cedars-Sinai Medical Center, 6500 Wilshire Boulevard, Los Angeles, CA 90048, USA
- Wenhao Zhang
- Departments of Medicine (Division of Artificial Intelligence in Medicine), Biomedical Sciences, and Imaging, Cedars-Sinai Medical Center, 6500 Wilshire Boulevard, Los Angeles, CA 90048, USA
- Parker Waechter
- Departments of Medicine (Division of Artificial Intelligence in Medicine), Biomedical Sciences, and Imaging, Cedars-Sinai Medical Center, 6500 Wilshire Boulevard, Los Angeles, CA 90048, USA
- Heidi Gransar
- Departments of Medicine (Division of Artificial Intelligence in Medicine), Biomedical Sciences, and Imaging, Cedars-Sinai Medical Center, 6500 Wilshire Boulevard, Los Angeles, CA 90048, USA
- Damini Dey
- Departments of Medicine (Division of Artificial Intelligence in Medicine), Biomedical Sciences, and Imaging, Cedars-Sinai Medical Center, 6500 Wilshire Boulevard, Los Angeles, CA 90048, USA
- Daniel S Berman
- Departments of Medicine (Division of Artificial Intelligence in Medicine), Biomedical Sciences, and Imaging, Cedars-Sinai Medical Center, 6500 Wilshire Boulevard, Los Angeles, CA 90048, USA
- Piotr J Slomka
- Departments of Medicine (Division of Artificial Intelligence in Medicine), Biomedical Sciences, and Imaging, Cedars-Sinai Medical Center, 6500 Wilshire Boulevard, Los Angeles, CA 90048, USA
4. Ding W, Wang H, Qiao X, Li B, Huang Q. A deep learning method for total-body dynamic PET imaging with dual-time-window protocols. Eur J Nucl Med Mol Imaging 2025;52:1448-1459. PMID: 39688700. DOI: 10.1007/s00259-024-07012-1.
Abstract
PURPOSE Prolonged scanning durations are one of the primary barriers to the widespread clinical adoption of dynamic positron emission tomography (PET). In this paper, we developed a deep learning algorithm capable of predicting dynamic images from dual-time-window protocols, thereby shortening the scanning time. METHODS This study includes 70 patients (mean age ± standard deviation, 53.61 ± 13.53 years; 32 males) diagnosed with pulmonary or breast nodules between 2022 and 2024. Each patient underwent a 65-min dynamic total-body [18F]FDG PET/CT scan. Acquisitions using early-stop protocols and dual-time-window protocols were simulated to reduce the scanning time. To predict the missing frames, we developed a bidirectional sequence-to-sequence model with an attention mechanism (Bi-AT-Seq2Seq) and compared it with unidirectional and non-attentional models in terms of mean absolute error (MAE), bias, peak signal-to-noise ratio (PSNR), and structural similarity (SSIM) of the predicted frames. Furthermore, we compared the concordance correlation coefficients (CCCs) of the kinetic parameters between the proposed method and traditional methods. RESULTS Bi-AT-Seq2Seq significantly outperformed the unidirectional and non-attentional models in terms of MAE, bias, PSNR, and SSIM. Using a dual-time-window protocol consisting of a 10-min early scan followed by a 5-min late scan improved the four metrics of the predicted dynamic images by 37.31%, 36.24%, 7.10%, and 0.014%, respectively, compared to the early-stop protocol with a 15-min acquisition. The CCCs of tumor kinetic parameters estimated from recovered full time-activity curves (TACs) were higher than those from abbreviated TACs. CONCLUSION The proposed algorithm can accurately generate a complete dynamic acquisition (65 min) from dual-time-window protocols (10 + 5 min).
Affiliation(s)
- Wenxiang Ding
- School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai, 200240, China
- Department of Nuclear Medicine, Ruijin Hospital, Shanghai Jiao Tong University, Shanghai, 200240, China
- Institute of Natural Sciences, Shanghai Jiao Tong University, Shanghai, 200240, China
- Hanzhong Wang
- School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai, 200240, China
- Department of Nuclear Medicine, Ruijin Hospital, Shanghai Jiao Tong University, Shanghai, 200240, China
- Institute for Medical Imaging Technology, Ruijin Hospital, Shanghai Jiao Tong University, Shanghai, 200240, China
- Xiaoya Qiao
- School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai, 200240, China
- Department of Nuclear Medicine, Ruijin Hospital, Shanghai Jiao Tong University, Shanghai, 200240, China
- Biao Li
- Department of Nuclear Medicine, Ruijin Hospital, Shanghai Jiao Tong University, Shanghai, 200240, China
- Institute for Medical Imaging Technology, Ruijin Hospital, Shanghai Jiao Tong University, Shanghai, 200240, China
- Qiu Huang
- School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai, 200240, China
- Department of Nuclear Medicine, Ruijin Hospital, Shanghai Jiao Tong University, Shanghai, 200240, China
5. Sturla F, Caimi A, Giugno L, Pasqualin G, Tissir K, Secchi F, Redaelli A, Carminati M, Votta E. Planning transcatheter pulmonary valve implantation in the dysfunctional native RVOT: A semi-automated pipeline for dynamic analysis based on 4D-CT imaging. Comput Methods Programs Biomed 2025;260:108569. PMID: 39721125. DOI: 10.1016/j.cmpb.2024.108569.
Abstract
BACKGROUND AND OBJECTIVE Dysfunction of the right ventricular outflow tract (RVOT) is a common long-term complication following surgical repair in patients with congenital heart disease. Transcatheter pulmonary valve implantation (TPVI) offers a viable alternative to surgical pulmonary valve replacement (SPVR) for treating pulmonary regurgitation, but not all RVOT anatomies are suitable for TPVI. To identify a suitable landing zone (LZ) for TPVI, three-dimensional multiphase (4D) computed tomography (CT) is used to evaluate the size, shape, and dynamic behavior of the RVOT throughout the cardiac cycle. However, manually extracting measurements from multiplanar CT reformats is operator-dependent and time-consuming. Leveraging an optical-flow (OF) algorithm, we proposed a novel semi-automated pipeline for dynamic and comprehensive geometrical analysis of RVOT anatomy. METHODS Once the 4D-CT is available, the patient-specific anatomy is semi-automatically segmented at a pre-defined reference time-point to generate the corresponding three-dimensional surface, which is navigated through a graphical user interface to define the mid-section of the potential LZ. Based on the axial length of the intended device, the proximal and distal LZ cross-sections are automatically identified. An OF-based algorithm tracks the three LZ cross-sections frame by frame throughout the cardiac cycle, taking RVOT out-of-plane motion into account to update the RVOT contours on each cross-section and to quantify LZ geometrical changes. Finally, time-dependent LZ geometrical features are quantified and extracted. RESULTS The pipeline was successfully applied to a retrospective cohort of patients, with OF-based tracking showing excellent agreement (r2 = 0.99) with manual processing and a bias < 1% for both LZ area and perimeter, while also significantly improving time efficiency. CT-derived measurements extracted from the LZ mid-section were the most influential covariates affecting the likelihood of TPVI feasibility. Among these, the minimum perimeter outperformed all other geometric LZ parameters in classifying patients as suitable for either TPVI or SPVR, achieving the highest area under the curve (0.99), with accuracy and precision of 0.93 and 0.92, respectively. CONCLUSIONS Dynamic OF-based quantification of key RVOT geometric parameters can enhance and expedite the selection process for TPVI candidates and guide optimal valve sizing during TPVI planning.
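For a single tracked cross-section, the per-frame LZ area and perimeter reduce to standard polygon formulas over the contour points. A minimal sketch using the shoelace formula (the unit-square contour is a toy stand-in for an RVOT cross-section):

```python
import numpy as np

def contour_metrics(points) -> tuple[float, float]:
    """Perimeter and enclosed area of a closed planar contour.

    Consecutive points are joined, the last point closes back to the first,
    and the area comes from the shoelace formula."""
    pts = np.asarray(points, dtype=float)
    nxt = np.roll(pts, -1, axis=0)  # each point's successor (wraps around)
    perimeter = float(np.linalg.norm(nxt - pts, axis=1).sum())
    area = 0.5 * abs(float(np.sum(pts[:, 0] * nxt[:, 1] - nxt[:, 0] * pts[:, 1])))
    return perimeter, area

# unit square as a toy cross-section contour
square = [(0, 0), (1, 0), (1, 1), (0, 1)]
per, area = contour_metrics(square)
```

Evaluating this at every cardiac phase yields the time-dependent LZ curves from which minima such as the minimum perimeter can be read off.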
Affiliation(s)
- Francesco Sturla
- 3D and Computer Simulation Laboratory, IRCCS Policlinico San Donato, San Donato Milanese, Italy; Department of Electronics, Information and Bioengineering, Politecnico di Milano, Milano, Italy
- Alessandro Caimi
- Department of Civil Engineering and Architecture, Università degli Studi di Pavia, Pavia, Italy
- Luca Giugno
- Department of Pediatric and Adult Congenital Cardiology, IRCCS Policlinico San Donato, San Donato Milanese, Italy
- Giulia Pasqualin
- Department of Pediatric and Adult Congenital Cardiology, IRCCS Policlinico San Donato, San Donato Milanese, Italy
- Karima Tissir
- Unit of Radiology, IRCCS Policlinico San Donato, San Donato Milanese, Italy
- Francesco Secchi
- Unit of Cardiovascular Imaging, IRCCS Multimedica, Sesto San Giovanni, Italy; Department of Biomedical Sciences for Health, Università degli Studi di Milano, Milano, Italy
- Alberto Redaelli
- Department of Electronics, Information and Bioengineering, Politecnico di Milano, Milano, Italy
- Mario Carminati
- Department of Pediatric and Adult Congenital Cardiology, IRCCS Policlinico San Donato, San Donato Milanese, Italy
- Emiliano Votta
- Department of Electronics, Information and Bioengineering, Politecnico di Milano, Milano, Italy
6. Zhao H, Zhang X, Huang B, Shi X, Xiao L, Li Z. Application of machine learning methods for predicting esophageal variceal bleeding in patients with cirrhosis. Eur Radiol 2025;35:1440-1450. PMID: 39708084. DOI: 10.1007/s00330-024-11311-4.
Abstract
OBJECTIVE To develop and compare machine learning models based on CT morphology features, serum biomarkers, and basic physical conditions to predict esophageal variceal bleeding. MATERIALS AND METHODS Two hundred twenty-four cirrhotic patients with and without esophageal variceal bleeding were included in this retrospective study. Clinical and serum biomarkers were used, and an open-access segmentation model was used to generate segmentation masks of the liver and spleen. Four machine learning models based on the selected features were used to build prediction models, and diagnostic performance was measured using receiver operating characteristic analysis. RESULTS The cohort comprised 112 patients with bleeding (mean age 52.8 ± 11.5 years, range 18-80 years) and 112 without bleeding (mean age 57.3 ± 10.5 years, range 34-85 years). The two groups showed significant differences in standardized spleen volume, fibrinogen, alanine aminotransferase, aspartate aminotransferase, D-dimer, platelet count, and age. The training-to-test ratio was 8:2, and 5-fold cross-validation was used. The AUCs of linear regression, random forest, support vector machine, and adaptive boosting were 0.742, 0.854, 0.719, and 0.821, respectively, in the training set and 0.763, 0.818, 0.648, and 0.804, respectively, in the test set. CONCLUSIONS Our study used CT morphological measurements, serum biomarkers, and age to build machine learning models; random forest and adaptive boosting showed potential added value in predictive model construction. KEY POINTS Question Esophageal variceal bleeding is an intractable complication of liver cirrhosis; its early prediction and prevention are important for patients with cirrhosis.
Findings It was feasible and clinically meaningful to construct machine learning models based on CT morphology features, serum biomarkers, and physical conditions to predict variceal bleeding. Clinical relevance This study may provide a promising tool with which clinicians can make therapeutic decisions involving fewer invasive procedures when predicting esophageal variceal bleeding.
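The 8:2 split with 5-fold cross-validation described above amounts to shuffling patient indices and partitioning them into folds, each fold serving once as the validation set. A minimal index-generation sketch (the fold count matches the abstract; the seed and function name are illustrative):

```python
import numpy as np

def kfold_indices(n: int, k: int = 5, seed: int = 0) -> list:
    """Shuffle n sample indices and split them into k folds for cross-validation."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(n)
    return np.array_split(idx, k)

folds = kfold_indices(224, k=5)  # 224 patients, as in the study cohort
# each (train, validation) pair: one fold held out, the rest concatenated
splits = [(np.concatenate(folds[:i] + folds[i + 1:]), folds[i]) for i in range(5)]
```

Each classifier (e.g., random forest or adaptive boosting) would then be fit on every training index set and scored on the corresponding held-out fold.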
Affiliation(s)
- Haichen Zhao
- Department of Radiology, The Affiliated Hospital of Qingdao University, Qingdao, China
- Xiaoya Zhang
- College of Computer Science and Technology of Qingdao University, Qingdao, China
- Baoxiang Huang
- College of Computer Science and Technology of Qingdao University, Qingdao, China
- Xiaojuan Shi
- Department of Radiology, The Affiliated Hospital of Qingdao University, Qingdao, China
- Longyang Xiao
- Department of Radiology, The Affiliated Hospital of Qingdao University, Qingdao, China
- Zhiming Li
- Department of Radiology, The Affiliated Hospital of Qingdao University, Qingdao, China
7. Weber T, Dexl J, Rügamer D, Ingrisch M. Post-Training Network Compression for 3D Medical Image Segmentation: Reducing Computational Efforts via Tucker Decomposition. Radiol Artif Intell 2025;7:e240353. PMID: 39812583. DOI: 10.1148/ryai.240353.
Abstract
Purpose To investigate whether the computational effort of three-dimensional CT-based multiorgan segmentation with TotalSegmentator can be reduced via Tucker decomposition-based network compression. Materials and Methods In this retrospective study, Tucker decomposition was applied to the convolutional kernels of the TotalSegmentator model, an nnU-Net model trained on a comprehensive CT dataset for automatic segmentation of 117 anatomic structures. The proposed approach reduced the floating-point operations and memory required during inference, offering an adjustable trade-off between computational efficiency and segmentation quality. This study used the publicly available TotalSegmentator dataset containing 1228 segmented CT scans and a test subset of 89 CT scans and used various downsampling factors to explore the relationship between model size, inference speed, and segmentation accuracy. Segmentation performance was evaluated using the Dice score. Results The application of Tucker decomposition to the TotalSegmentator model substantially reduced the model parameters and floating-point operations across various compression ratios, with limited loss in segmentation accuracy. Up to 88.17% of the model's parameters were removed, with no evidence of differences in performance compared with the original model for 113 of 117 classes after fine-tuning. Practical benefits varied across different graphics processing unit architectures, with more distinct speedups on less powerful hardware. Conclusion The study demonstrated that post hoc network compression via Tucker decomposition presents a viable strategy for reducing the computational demand of medical image segmentation models without substantially impacting model accuracy. Keywords: Deep Learning, Segmentation, Network Compression, Convolution, Tucker Decomposition Supplemental material is available for this article. © RSNA, 2025.
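Tucker compression of a convolutional kernel, applied post hoc as described above, can be sketched as a truncated higher-order SVD over the two channel modes (a Tucker-2 variant; the ranks, kernel sizes, and NumPy implementation are illustrative, not the paper's code):

```python
import numpy as np

def tucker2_conv(kernel: np.ndarray, r_out: int, r_in: int):
    """Tucker-2 decomposition of a conv kernel (C_out, C_in, kh, kw):
    truncated SVD factors along the two channel modes, spatial modes untouched."""
    c_out, c_in, kh, kw = kernel.shape
    # left singular vectors of the mode-0 (output-channel) unfolding
    u0, _, _ = np.linalg.svd(kernel.reshape(c_out, -1), full_matrices=False)
    # left singular vectors of the mode-1 (input-channel) unfolding
    m1 = np.moveaxis(kernel, 1, 0).reshape(c_in, -1)
    u1, _, _ = np.linalg.svd(m1, full_matrices=False)
    u0, u1 = u0[:, :r_out], u1[:, :r_in]
    # core = kernel contracted with the transposed factor matrices
    core = np.einsum('oihw,or,is->rshw', kernel, u0, u1)
    return core, u0, u1

def tucker2_reconstruct(core, u0, u1):
    return np.einsum('rshw,or,is->oihw', core, u0, u1)

rng = np.random.default_rng(0)
# build a kernel with output-channel rank 4, so full-rank factors recover it exactly
w = np.einsum('or,rihw->oihw', rng.normal(size=(16, 4)),
              rng.normal(size=(4, 8, 3, 3)))
core, u0, u1 = tucker2_conv(w, r_out=4, r_in=8)
err = float(np.abs(w - tucker2_reconstruct(core, u0, u1)).max())
```

Here the channel modes compress 16·8·9 = 1152 kernel weights to a 4·8·9 core plus 16·4 and 8·8 factor matrices (416 values); at inference the factored kernel corresponds to a 1×1 / spatial / 1×1 convolution chain, which is where the floating-point savings come from.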
Affiliation(s)
- Tobias Weber
- Department of Radiology, University Hospital, LMU Munich, Marchioninistr 15, 81377 Munich, Germany (T.W., J.D., M.I.); Department of Statistics, LMU Munich, Munich, Germany (T.W., D.R.); and Munich Center for Machine Learning, Munich, Germany (T.W., J.D., D.R., M.I.)
- Jakob Dexl
- Department of Radiology, University Hospital, LMU Munich, Marchioninistr 15, 81377 Munich, Germany (T.W., J.D., M.I.); Department of Statistics, LMU Munich, Munich, Germany (T.W., D.R.); and Munich Center for Machine Learning, Munich, Germany (T.W., J.D., D.R., M.I.)
- David Rügamer
- Department of Radiology, University Hospital, LMU Munich, Marchioninistr 15, 81377 Munich, Germany (T.W., J.D., M.I.); Department of Statistics, LMU Munich, Munich, Germany (T.W., D.R.); and Munich Center for Machine Learning, Munich, Germany (T.W., J.D., D.R., M.I.)
- Michael Ingrisch
- Department of Radiology, University Hospital, LMU Munich, Marchioninistr 15, 81377 Munich, Germany (T.W., J.D., M.I.); Department of Statistics, LMU Munich, Munich, Germany (T.W., D.R.); and Munich Center for Machine Learning, Munich, Germany (T.W., J.D., D.R., M.I.)
8. Weiss J, Bernatz S, Johnson J, Thiriveedhi V, Mak RH, Fedorov A, Lu MT, Aerts HJWL. Opportunistic assessment of steatotic liver disease in lung cancer screening eligible individuals. J Intern Med 2025;297:276-288. PMID: 39868889. PMCID: PMC11846076. DOI: 10.1111/joim.20053.
Abstract
BACKGROUND Steatotic liver disease (SLD) is a potentially reversible condition but often goes unnoticed, with a risk of progression to end-stage liver disease. PURPOSE To opportunistically estimate SLD on lung-screening chest computed tomography (CT) and investigate its prognostic value in heavy smokers participating in the National Lung Screening Trial (NLST). MATERIAL AND METHODS We used a deep learning model to segment the liver on non-contrast-enhanced chest CT scans of 19,774 NLST participants (age 61.4 ± 5.0 years; 41.2% female) at baseline and on the 1-year follow-up scan if no cancer was detected. SLD was defined as a hepatic fat fraction (HFF) ≥5% derived from Hounsfield unit measures of the segmented liver. Participants with SLD were categorized as lean (body mass index [BMI] < 25 kg/m2) or overweight/obese (BMI ≥ 25 kg/m2). The primary outcome was all-cause mortality. Cox proportional hazards regression assessed (1) the association between SLD and mortality at baseline and (2) the association between a change in HFF and mortality within 1 year. RESULTS There were 5.1% (1000/19,760) all-cause deaths over a median follow-up of 6 (range, 0.8-6) years. At baseline, SLD was associated with increased mortality in lean but not in overweight/obese participants compared to participants without SLD (hazard ratio [HR] adjusted for risk factors: 1.93 [95% confidence interval 1.52-2.45]; p = 0.001). Individuals with an increase in HFF within 1 year had a significantly worse outcome than participants with stable HFF (HR adjusted for risk factors: 1.29 [1.01-1.65]; p = 0.04). CONCLUSION SLD is an independent predictor of long-term mortality in heavy smokers beyond known clinical risk factors.
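The SLD definition and BMI grouping used in the analysis reduce to two thresholds applied per participant. A trivial sketch (the threshold values are those stated in the abstract; the function name and labels are illustrative):

```python
def sld_category(hff_percent: float, bmi: float) -> str:
    """Flag steatotic liver disease (HFF >= 5%) and split SLD cases
    into lean (BMI < 25 kg/m^2) vs overweight/obese (BMI >= 25 kg/m^2)."""
    if hff_percent < 5.0:
        return "no SLD"
    return "lean SLD" if bmi < 25.0 else "overweight/obese SLD"

# toy participants: (HFF %, BMI)
cases = [(3.2, 22.0), (7.5, 22.0), (7.5, 31.0)]
labels = [sld_category(h, b) for h, b in cases]
```

In the study, the HFF input would come from Hounsfield unit measures within the deep-learning liver segmentation; the derivation of HFF from HU is not reproduced here.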
Collapse
Affiliation(s)
- Jakob Weiss
- Artificial Intelligence in Medicine (AIM) Program, Mass General Brigham, Harvard Medical School, Harvard Institutes of Medicine (HIM), Boston, Massachusetts, USA
- Department of Radiation Oncology, Brigham and Women's Hospital, Dana-Farber Cancer Institute, Harvard Medical School, Boston, Massachusetts, USA
- Department of Radiology, Brigham and Women's Hospital, Dana-Farber Cancer Institute, Harvard Medical School, Boston, Massachusetts, USA
- Department of Diagnostic and Interventional Radiology, Faculty of Medicine, University Medical Center Freiburg, University of Freiburg, Freiburg, Germany
- Simon Bernatz
- Artificial Intelligence in Medicine (AIM) Program, Mass General Brigham, Harvard Medical School, Harvard Institutes of Medicine (HIM), Boston, Massachusetts, USA
- Department of Radiation Oncology, Brigham and Women's Hospital, Dana-Farber Cancer Institute, Harvard Medical School, Boston, Massachusetts, USA
- Radiology and Nuclear Medicine, CARIM & GROW, Maastricht University, Maastricht, The Netherlands
- Justin Johnson
- Artificial Intelligence in Medicine (AIM) Program, Mass General Brigham, Harvard Medical School, Harvard Institutes of Medicine (HIM), Boston, Massachusetts, USA
- Department of Radiation Oncology, Brigham and Women's Hospital, Dana-Farber Cancer Institute, Harvard Medical School, Boston, Massachusetts, USA
- Vamsi Thiriveedhi
- Department of Radiology, Brigham and Women's Hospital, Dana-Farber Cancer Institute, Harvard Medical School, Boston, Massachusetts, USA
- Raymond H. Mak
- Artificial Intelligence in Medicine (AIM) Program, Mass General Brigham, Harvard Medical School, Harvard Institutes of Medicine (HIM), Boston, Massachusetts, USA
- Department of Radiation Oncology, Brigham and Women's Hospital, Dana-Farber Cancer Institute, Harvard Medical School, Boston, Massachusetts, USA
- Andriy Fedorov
- Department of Radiology, Brigham and Women's Hospital, Dana-Farber Cancer Institute, Harvard Medical School, Boston, Massachusetts, USA
- Michael T. Lu
- Artificial Intelligence in Medicine (AIM) Program, Mass General Brigham, Harvard Medical School, Harvard Institutes of Medicine (HIM), Boston, Massachusetts, USA
- Cardiovascular Imaging Research Center, Massachusetts General Hospital, Harvard Medical School, Boston, Massachusetts, USA
- Hugo J. W. L. Aerts
- Artificial Intelligence in Medicine (AIM) Program, Mass General Brigham, Harvard Medical School, Harvard Institutes of Medicine (HIM), Boston, Massachusetts, USA
- Department of Radiation Oncology, Brigham and Women's Hospital, Dana-Farber Cancer Institute, Harvard Medical School, Boston, Massachusetts, USA
- Department of Radiology, Brigham and Women's Hospital, Dana-Farber Cancer Institute, Harvard Medical School, Boston, Massachusetts, USA
- Radiology and Nuclear Medicine, CARIM & GROW, Maastricht University, Maastricht, The Netherlands
9
Erdur AC, Rusche D, Scholz D, Kiechle J, Fischer S, Llorián-Salvador Ó, Buchner JA, Nguyen MQ, Etzel L, Weidner J, Metz MC, Wiestler B, Schnabel J, Rueckert D, Combs SE, Peeken JC. Deep learning for autosegmentation for radiotherapy treatment planning: State-of-the-art and novel perspectives. Strahlenther Onkol 2025; 201:236-254. [PMID: 39105745 PMCID: PMC11839850 DOI: 10.1007/s00066-024-02262-2] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/21/2024] [Accepted: 06/13/2024] [Indexed: 08/07/2024]
Abstract
Artificial intelligence (AI) has developed rapidly and gained importance, with many tools already entering our daily lives. The medical field of radiation oncology is also subject to this development, with AI entering all steps of the patient journey. In this review article, we summarize contemporary AI techniques and explore the clinical applications of AI-based automated segmentation models in radiotherapy planning, focusing on delineation of organs at risk (OARs), the gross tumor volume (GTV), and the clinical target volume (CTV). Emphasizing the need for precise and individualized plans, we review various commercial and freeware segmentation tools as well as state-of-the-art approaches. Through our own findings and based on the literature, we demonstrate improved efficiency and consistency as well as time savings in different clinical scenarios. Despite challenges in clinical implementation, such as domain shifts, the potential benefits for personalized treatment planning are substantial. The integration of mathematical tumor growth models and AI-based tumor detection further enhances the possibilities for refining target volumes. As advancements continue, the prospect of one-stop-shop segmentation and radiotherapy planning represents an exciting frontier in radiotherapy, potentially enabling fast treatment with enhanced precision and individualization.
Affiliation(s)
- Ayhan Can Erdur
- Institute for Artificial Intelligence and Informatics in Medicine, Klinikum rechts der Isar, Technical University of Munich, Ismaninger Str., 81675, Munich, Bavaria, Germany.
- Department of Radiation Oncology, TUM School of Medicine and Health, Klinikum rechts der Isar, Technical University of Munich, Ismaninger Str., 81675, Munich, Bavaria, Germany.
- Daniel Rusche
- Department of Radiation Oncology, TUM School of Medicine and Health, Klinikum rechts der Isar, Technical University of Munich, Ismaninger Str., 81675, Munich, Bavaria, Germany
- Daniel Scholz
- Institute for Artificial Intelligence and Informatics in Medicine, Klinikum rechts der Isar, Technical University of Munich, Ismaninger Str., 81675, Munich, Bavaria, Germany
- Department of Neuroradiology, TUM School of Medicine and Health, Klinikum rechts der Isar, Technical University of Munich, Ismaninger Str., 81675, Munich, Bavaria, Germany
- Johannes Kiechle
- Department of Radiation Oncology, TUM School of Medicine and Health, Klinikum rechts der Isar, Technical University of Munich, Ismaninger Str., 81675, Munich, Bavaria, Germany
- Institute for Computational Imaging and AI in Medicine, Technical University of Munich, Lichtenberg Str. 2a, 85748, Garching, Bavaria, Germany
- Munich Center for Machine Learning (MCML), Technical University of Munich, Arcisstraße 21, 80333, Munich, Bavaria, Germany
- Konrad Zuse School of Excellence in Reliable AI (relAI), Technical University of Munich, Walther-von-Dyck-Straße 10, 85748, Garching, Bavaria, Germany
- Stefan Fischer
- Department of Radiation Oncology, TUM School of Medicine and Health, Klinikum rechts der Isar, Technical University of Munich, Ismaninger Str., 81675, Munich, Bavaria, Germany
- Institute for Computational Imaging and AI in Medicine, Technical University of Munich, Lichtenberg Str. 2a, 85748, Garching, Bavaria, Germany
- Munich Center for Machine Learning (MCML), Technical University of Munich, Arcisstraße 21, 80333, Munich, Bavaria, Germany
- Óscar Llorián-Salvador
- Department of Radiation Oncology, TUM School of Medicine and Health, Klinikum rechts der Isar, Technical University of Munich, Ismaninger Str., 81675, Munich, Bavaria, Germany
- Department for Bioinformatics and Computational Biology - i12, Technical University of Munich, Boltzmannstraße 3, 85748, Garching, Bavaria, Germany
- Institute of Organismic and Molecular Evolution, Johannes Gutenberg University Mainz (JGU), Hüsch-Weg 15, 55128, Mainz, Rhineland-Palatinate, Germany
- Josef A Buchner
- Department of Radiation Oncology, TUM School of Medicine and Health, Klinikum rechts der Isar, Technical University of Munich, Ismaninger Str., 81675, Munich, Bavaria, Germany
- Mai Q Nguyen
- Department of Radiation Oncology, TUM School of Medicine and Health, Klinikum rechts der Isar, Technical University of Munich, Ismaninger Str., 81675, Munich, Bavaria, Germany
- Lucas Etzel
- Department of Radiation Oncology, TUM School of Medicine and Health, Klinikum rechts der Isar, Technical University of Munich, Ismaninger Str., 81675, Munich, Bavaria, Germany
- Institute of Radiation Medicine (IRM), Helmholtz Zentrum, Ingolstädter Landstraße 1, 85764, Oberschleißheim, Bavaria, Germany
- Jonas Weidner
- Institute for Artificial Intelligence and Informatics in Medicine, Klinikum rechts der Isar, Technical University of Munich, Ismaninger Str., 81675, Munich, Bavaria, Germany
- Department of Neuroradiology, TUM School of Medicine and Health, Klinikum rechts der Isar, Technical University of Munich, Ismaninger Str., 81675, Munich, Bavaria, Germany
- Marie-Christin Metz
- Department of Neuroradiology, TUM School of Medicine and Health, Klinikum rechts der Isar, Technical University of Munich, Ismaninger Str., 81675, Munich, Bavaria, Germany
- Benedikt Wiestler
- Department of Neuroradiology, TUM School of Medicine and Health, Klinikum rechts der Isar, Technical University of Munich, Ismaninger Str., 81675, Munich, Bavaria, Germany
- Julia Schnabel
- Institute for Computational Imaging and AI in Medicine, Technical University of Munich, Lichtenberg Str. 2a, 85748, Garching, Bavaria, Germany
- Munich Center for Machine Learning (MCML), Technical University of Munich, Arcisstraße 21, 80333, Munich, Bavaria, Germany
- Konrad Zuse School of Excellence in Reliable AI (relAI), Technical University of Munich, Walther-von-Dyck-Straße 10, 85748, Garching, Bavaria, Germany
- Institute of Machine Learning in Biomedical Imaging, Helmholtz Munich, Ingolstädter Landstraße 1, 85764, Neuherberg, Bavaria, Germany
- School of Biomedical Engineering & Imaging Sciences, King's College London, Strand, WC2R 2LS, London, London, UK
- Daniel Rueckert
- Institute for Artificial Intelligence and Informatics in Medicine, Klinikum rechts der Isar, Technical University of Munich, Ismaninger Str., 81675, Munich, Bavaria, Germany
- Faculty of Engineering, Department of Computing, Imperial College London, Exhibition Rd, SW7 2BX, London, London, UK
- Stephanie E Combs
- Department of Radiation Oncology, TUM School of Medicine and Health, Klinikum rechts der Isar, Technical University of Munich, Ismaninger Str., 81675, Munich, Bavaria, Germany
- Institute of Radiation Medicine (IRM), Helmholtz Zentrum, Ingolstädter Landstraße 1, 85764, Oberschleißheim, Bavaria, Germany
- Partner Site Munich, German Consortium for Translational Cancer Research (DKTK), Munich, Bavaria, Germany
- Jan C Peeken
- Department of Radiation Oncology, TUM School of Medicine and Health, Klinikum rechts der Isar, Technical University of Munich, Ismaninger Str., 81675, Munich, Bavaria, Germany
- Institute of Radiation Medicine (IRM), Helmholtz Zentrum, Ingolstädter Landstraße 1, 85764, Oberschleißheim, Bavaria, Germany
- Partner Site Munich, German Consortium for Translational Cancer Research (DKTK), Munich, Bavaria, Germany
10
Möller H, Graf R, Schmitt J, Keinert B, Schön H, Atad M, Sekuboyina A, Streckenbach F, Kofler F, Kroencke T, Bette S, Willich SN, Keil T, Niendorf T, Pischon T, Endemann B, Menze B, Rueckert D, Kirschke JS. SPINEPS-automatic whole spine segmentation of T2-weighted MR images using a two-phase approach to multi-class semantic and instance segmentation. Eur Radiol 2025; 35:1178-1189. [PMID: 39470797 PMCID: PMC11836161 DOI: 10.1007/s00330-024-11155-y] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/25/2024] [Revised: 09/02/2024] [Accepted: 10/14/2024] [Indexed: 11/01/2024]
Abstract
OBJECTIVES Introducing SPINEPS, a deep learning method for semantic and instance segmentation of 14 spinal structures (ten vertebra substructures, intervertebral discs, spinal cord, spinal canal, and sacrum) in whole-body sagittal T2-weighted turbo spin echo images. MATERIAL AND METHODS This local ethics committee-approved study utilized a public dataset (train/test 179/39 subjects, 137 female), a German National Cohort (NAKO) subset (train/test 1412/65 subjects, mean age 53, 694 female), and an in-house dataset (test 10 subjects, mean age 70, 5 female). SPINEPS applies a semantic segmentation model first, followed by a sliding-window approach in which a second model creates instance masks from the semantic ones. Segmentation evaluation metrics included the Dice score and average symmetrical surface distance (ASSD). Statistical significance was assessed using the Wilcoxon signed-rank test. RESULTS On the public dataset, SPINEPS outperformed an nnU-Net baseline on every structure and metric (e.g., averaged over vertebra instances: Dice 0.933 vs 0.911, p < 0.001; ASSD 0.21 vs 0.435, p < 0.001). SPINEPS trained on automated annotations of the NAKO achieves an average global Dice score of 0.918 on the combined NAKO and in-house test split. Adding the training data from the public dataset improves on this (average instance-wise Dice score over the vertebra substructures 0.803 vs 0.778; average global Dice score 0.931 vs 0.918). CONCLUSION SPINEPS offers segmentation of 14 spinal structures in sagittal T2w images. It provides a semantic mask and an instance mask separating the vertebrae and intervertebral discs. This is the first publicly available algorithm to enable this segmentation. KEY POINTS Question No publicly available automatic approach can yield semantic and instance segmentation masks for the whole spine (including posterior elements) in T2-weighted sagittal TSE images. Findings Segmenting semantically first and then instance-wise outperforms a baseline trained directly on instance segmentation. The developed model produces high-resolution MRI segmentations for the whole spine. Clinical relevance This study introduces an automatic approach to whole-spine segmentation, including posterior elements, in arbitrary-field-of-view sagittal T2w MR images, enabling easy biomarker extraction, automatic localization of pathologies and degenerative diseases, and quantitative analyses as downstream research.
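The Dice score reported throughout these evaluations compares a predicted mask against a reference mask; a self-contained sketch (not the authors' evaluation code):

```python
import numpy as np

def dice_score(pred, gt):
    """Dice coefficient 2|A∩B| / (|A| + |B|) between two binary masks."""
    pred = np.asarray(pred, dtype=bool)
    gt = np.asarray(gt, dtype=bool)
    denom = pred.sum() + gt.sum()
    if denom == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return 2.0 * np.logical_and(pred, gt).sum() / denom
```

For example, masks overlapping in one of two labeled voxels each score 0.5.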
Affiliation(s)
- Hendrik Möller
- Department of Diagnostic and Interventional Neuroradiology, School of Medicine and Health, Technical University of Munich, Munich, Germany.
- Institut Für KI Und Informatik in Der Medizin, Klinikum Rechts Der Isar, Technical University of Munich, Munich, Germany.
- Robert Graf
- Department of Diagnostic and Interventional Neuroradiology, School of Medicine and Health, Technical University of Munich, Munich, Germany
- Institut Für KI Und Informatik in Der Medizin, Klinikum Rechts Der Isar, Technical University of Munich, Munich, Germany
- Joachim Schmitt
- Department of Diagnostic and Interventional Neuroradiology, School of Medicine and Health, Technical University of Munich, Munich, Germany
- Benjamin Keinert
- Department of Diagnostic and Interventional Neuroradiology, School of Medicine and Health, Technical University of Munich, Munich, Germany
- Hanna Schön
- Department of Diagnostic and Interventional Neuroradiology, School of Medicine and Health, Technical University of Munich, Munich, Germany
- Department of Diagnostic and Interventional Radiology, Pediatric Radiology and Neuroradiology, University Medical Center Rostock, Rostock, Germany
- Matan Atad
- Department of Diagnostic and Interventional Neuroradiology, School of Medicine and Health, Technical University of Munich, Munich, Germany
- Institut Für KI Und Informatik in Der Medizin, Klinikum Rechts Der Isar, Technical University of Munich, Munich, Germany
- Anjany Sekuboyina
- Department of Diagnostic and Interventional Neuroradiology, School of Medicine and Health, Technical University of Munich, Munich, Germany
- Department of Quantitative Biomedicine, University of Zurich, Zurich, Switzerland
- Felix Streckenbach
- Department of Diagnostic and Interventional Radiology, Pediatric Radiology and Neuroradiology, University Medical Center Rostock, Rostock, Germany
- Florian Kofler
- Department of Diagnostic and Interventional Neuroradiology, School of Medicine and Health, Technical University of Munich, Munich, Germany
- Institut Für KI Und Informatik in Der Medizin, Klinikum Rechts Der Isar, Technical University of Munich, Munich, Germany
- Helmholtz AI, Helmholtz Munich, Neuherberg, Germany
- TranslaTUM-Central Institute for Translational Cancer Research, Technical University of Munich, Munich, Germany
- Thomas Kroencke
- Department of Diagnostic and Interventional Radiology, University Hospital Augsburg, Augsburg, Germany
- Centre for Advanced Analytics and Predictive Sciences, Augsburg University, Augsburg, Germany
- Stefanie Bette
- Department of Diagnostic and Interventional Radiology, University Hospital Augsburg, Augsburg, Germany
- Stefan N Willich
- Institute of Social Medicine, Epidemiology and Health Economics, Charité-Universitätsmedizin Berlin, Berlin, Germany
- Thomas Keil
- Institute of Social Medicine, Epidemiology and Health Economics, Charité-Universitätsmedizin Berlin, Berlin, Germany
- Institute of Clinical Epidemiology and Biometry, University of Würzburg, Würzburg, Germany
- State Institute of Health I, Bavarian Health and Food Safety Authority, Erlangen, Germany
- Thoralf Niendorf
- Max Delbrück Center for Molecular Medicine in the Helmholtz Association, Berlin, Germany
- Tobias Pischon
- Molecular Epidemiology Research Group, Max-Delbrueck-Center for Molecular Medicine in the Helmholtz Association (MDC), Berlin, Germany
- Biobank Technology Platform, Max-Delbrueck-Center for Molecular Medicine in the Helmholtz Association (MDC), Berlin, Germany
- Charité-Universitätsmedizin Berlin, Corporate member of Freie Universität Berlin and Humboldt-Universität zu Berlin, Berlin, Germany
- Beate Endemann
- Max Delbrück Center for Molecular Medicine in the Helmholtz Association, Berlin, Germany
- Bjoern Menze
- Department of Quantitative Biomedicine, University of Zurich, Zurich, Switzerland
- Daniel Rueckert
- Institut Für KI Und Informatik in Der Medizin, Klinikum Rechts Der Isar, Technical University of Munich, Munich, Germany
- Department of Computing, Imperial College London, London, UK
- Jan S Kirschke
- Department of Diagnostic and Interventional Neuroradiology, School of Medicine and Health, Technical University of Munich, Munich, Germany
11
Jiang S, Xu J, Wang W, Tao B, Wu Y, Chen X. NURBS curve shape prior-guided multiscale attention network for automatic segmentation of the inferior alveolar nerve. Comput Med Imaging Graph 2025; 120:102485. [PMID: 39793528 DOI: 10.1016/j.compmedimag.2024.102485] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/23/2024] [Revised: 12/09/2024] [Accepted: 12/30/2024] [Indexed: 01/13/2025]
Abstract
Accurate segmentation of the inferior alveolar nerve (IAN) within cone-beam computed tomography (CBCT) images is critical for the precise planning of oral and maxillofacial surgeries, especially to avoid IAN damage. Existing methods often fail due to the low contrast of the IAN and the presence of artifacts, which can cause segmentation discontinuities. To address these challenges, this paper proposes a novel approach that incorporates Non-Uniform Rational B-Spline (NURBS) curve shape priors into a multiscale attention network for automatic segmentation of the IAN. First, an automatic method for generating NURBS shape priors is proposed and introduced into the segmentation network, significantly enhancing the continuity and accuracy of IAN segmentation. Then, a multiscale attention segmentation network incorporating a dilation selective attention module is developed to improve the network's feature extraction capacity. The proposed approach is validated on both in-house and public datasets, showing superior performance compared to established benchmarks: on the in-house dataset it achieves a Dice coefficient (Dice) of 80.29±11.04% and an intersection over union (IoU) of 68.14±12.06%, with a 95% Hausdorff distance (95HD) of 1.61±6.14 mm and a mean surface distance (MSD) of 0.64±2.16 mm. On the public dataset, Dice reaches 80.69±4.93%, IoU 67.86±6.73%, 95HD 1.04±0.95 mm, and MSD 0.42±0.34 mm. Compared to state-of-the-art networks, the proposed approach performs better in both voxel accuracy and surface distance. It offers significant potential to improve clinicians' efficiency in segmentation tasks and holds promise for applications in dental surgery planning. The source codes are available at https://github.com/SJTUjsl/NURBS_IAN.git.
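A NURBS curve, the basis of the shape prior above, is a rational B-spline: a B-spline of weighted control points divided by a B-spline of the weights. A minimal evaluation sketch using SciPy (illustrative only; the paper's prior-generation method is more involved):

```python
import numpy as np
from scipy.interpolate import BSpline

def nurbs_curve(ctrl_pts, weights, knots, degree, t):
    """Evaluate a NURBS curve at parameter values t.

    Works in homogeneous coordinates: the numerator is the B-spline of
    the weighted control points, the denominator the B-spline of the
    weights. With all weights equal, this reduces to a plain B-spline.
    """
    ctrl = np.asarray(ctrl_pts, dtype=float)        # shape (n, dim)
    w = np.asarray(weights, dtype=float)[:, None]   # shape (n, 1)
    numerator = BSpline(knots, ctrl * w, degree)(t)
    denominator = BSpline(knots, w, degree)(t)
    return numerator / denominator
```

Unequal weights pull the curve toward or away from individual control points, which is what lets NURBS represent conic arcs exactly, a useful property for smooth, continuous nerve-canal centerlines.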
Affiliation(s)
- Shuanglin Jiang
- Institute of Biomedical Manufacturing and Life Quality Engineering, School of Mechanical Engineering, Shanghai Jiao Tong University, Shanghai, 200240, China
- Jiangchang Xu
- Institute of Biomedical Manufacturing and Life Quality Engineering, School of Mechanical Engineering, Shanghai Jiao Tong University, Shanghai, 200240, China; Department of Computer Science and Engineering, The Chinese University of Hong Kong, 999077, Hong Kong, China
- Wenyin Wang
- Department of Second Dental Center, Ninth People's Hospital Affiliated with Shanghai Jiao Tong University School of Medicine, Shanghai Key Laboratory of Stomatology, National Clinical Research Center of Stomatology, Shanghai, 200240, China
- Baoxin Tao
- Department of Second Dental Center, Ninth People's Hospital Affiliated with Shanghai Jiao Tong University School of Medicine, Shanghai Key Laboratory of Stomatology, National Clinical Research Center of Stomatology, Shanghai, 200240, China
- Yiqun Wu
- Department of Second Dental Center, Ninth People's Hospital Affiliated with Shanghai Jiao Tong University School of Medicine, Shanghai Key Laboratory of Stomatology, National Clinical Research Center of Stomatology, Shanghai, 200240, China.
- Xiaojun Chen
- Institute of Biomedical Manufacturing and Life Quality Engineering, School of Mechanical Engineering, Shanghai Jiao Tong University, Shanghai, 200240, China.
12
Li Y, Wynne JF, Wu Y, Qiu RLJ, Tian S, Wang T, Patel PR, Yu DS, Yang X. Automatic medical imaging segmentation via self-supervising large-scale convolutional neural networks. Radiother Oncol 2025; 204:110711. [PMID: 39798701 DOI: 10.1016/j.radonc.2025.110711] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/11/2024] [Revised: 12/02/2024] [Accepted: 01/04/2025] [Indexed: 01/15/2025]
Abstract
PURPOSE This study aims to develop a robust, large-scale deep learning model for medical image segmentation, leveraging self-supervised learning to overcome the limitations of supervised learning and data variability in clinical settings. METHODS AND MATERIALS We curated a substantial multi-center CT dataset for self-supervised pre-training using masked image modeling with sparse submanifold convolution. We designed a series of Sparse Submanifold U-Nets (SS-UNets) of varying sizes and performed self-supervised pre-training. We fine-tuned the SS-UNets on the TotalSegmentator dataset. The evaluation encompassed robustness tests on four unseen datasets and transferability assessments on three additional datasets. RESULTS Our SS-UNets outperformed state-of-the-art self-supervised methods, achieving higher Dice Similarity Coefficient (DSC) and Surface Dice Coefficient (SDC) metrics. SS-UNet-B achieved 84.3% DSC and 88.0% SDC on TotalSegmentator. We further demonstrated the scalability of our networks, with segmentation performance increasing with model size from 58 million to 1.4 billion parameters: a 4.6% DSC and 3.2% SDC improvement on TotalSegmentator from SS-UNet-B to SS-UNet-H. CONCLUSIONS We demonstrate the efficacy of self-supervised learning for medical image segmentation in the CT, MRI, and PET domains. Our approach significantly reduces reliance on extensively labeled data, mitigates risks of overfitting, and enhances model generalizability. Future applications may allow accurate segmentation of organs and lesions across several imaging domains, potentially streamlining cancer detection and radiotherapy treatment planning.
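Masked image modeling, the pre-training objective used here, hides a fraction of non-overlapping patches and trains the network to reconstruct them. A minimal masking sketch; the patch size and mask ratio are illustrative, not the study's settings:

```python
import numpy as np

def random_patch_mask(volume, patch=16, mask_ratio=0.6, seed=None):
    """Zero out a random fraction of non-overlapping 3D patches.

    Returns the masked volume and the per-patch boolean mask; a masked
    image modeling loss would be computed only on the masked patches.
    Assumes volume dimensions are divisible by the patch size.
    """
    rng = np.random.default_rng(seed)
    grid = tuple(s // patch for s in volume.shape)
    mask = rng.random(grid) < mask_ratio  # True = patch is masked
    voxel_mask = mask
    for axis in range(3):  # expand the patch grid back to voxel resolution
        voxel_mask = np.repeat(voxel_mask, patch, axis=axis)
    masked = volume.copy()
    masked[voxel_mask] = 0.0
    return masked, mask
```

In the study, the sparse submanifold convolutions then operate only on the visible (unmasked) voxels, which is what makes pre-training at this scale tractable.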
Affiliation(s)
- Yuheng Li
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA 30322, USA; Department of Biomedical Engineering, Emory University and Georgia Institute of Technology Atlanta, GA 30308, USA
- Jacob F Wynne
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA 30322, USA
- Yizhou Wu
- Department of Electrical and Computer Engineering, Georgia Institute of Technology, Atlanta, GA 30332, USA
- Richard L J Qiu
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA 30322, USA
- Sibo Tian
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA 30322, USA
- Tonghe Wang
- Department of Medical Physics, Memorial Sloan Kettering Cancer Center, New York, NY 10065, USA
- Pretesh R Patel
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA 30322, USA
- David S Yu
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA 30322, USA
- Xiaofeng Yang
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA 30322, USA; Department of Biomedical Engineering, Emory University and Georgia Institute of Technology Atlanta, GA 30308, USA.
13
Borges MG, Gruenwaldt J, Barsanelli DM, Ishikawa KE, Stuart SR. Automatic segmentation of cardiac structures can change the way we evaluate dose limits for radiotherapy in the left breast. J Med Imaging Radiat Sci 2025; 56:101844. [PMID: 39740303 DOI: 10.1016/j.jmir.2024.101844] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/15/2024] [Revised: 12/03/2024] [Accepted: 12/12/2024] [Indexed: 01/02/2025]
Abstract
PURPOSE Radiotherapy is a crucial part of breast cancer treatment, and precision in dose assessment is essential to minimize side effects. Traditionally, anatomical structures are delineated manually, a time-consuming process subject to variability. Automatic segmentation, including methods based on multiple atlases and deep learning, offers a promising alternative. For radiotherapy of the left breast, the RTOG 1005 protocol highlights the importance of cardiac delineation and the need to minimize cardiac exposure to radiation. Our study aims to evaluate dose distribution in auto-segmented cardiac substructures and establish models that correlate them with dose in the cardiac area. METHODS AND MATERIALS Anatomical structures were auto-segmented using TotalSegmentator and Limbus AI. The relationship between the volume of the cardiac area and that of organs at risk was assessed using log-linear regressions. RESULTS The mean dose was considerable for the LAD (left anterior descending coronary artery), heart, and left ventricle. The volumetric distribution of organs at risk was evaluated for specific RTOG 1005 isodoses, with greater variability observed in the absolute volumetric evaluation. Log-linear regression models are presented to estimate dose constraint parameters; a greater number of highly correlated comparisons were found for absolute dose-volume assessment. CONCLUSIONS Dose-volume assessment protocols in patients with left breast cancer often neglect cardiac substructures, but automatic tools can overcome these technical difficulties. In this study, we correlated the dose in the cardiac area with the doses in specific substructures and suggested limits for planning evaluation. Our data also indicate that statistical models could be applied in the assessment of those substructures where an automatic segmentation tool is not available, and show a benefit in reporting absolute dose-volume thresholds for future cause-effect assessments.
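The log-linear regressions used here relate whole-cardiac-area metrics to substructure metrics via a power law, y = a·x^b, i.e., a straight line in log-log space. A sketch with hypothetical variables (not the study's fitted coefficients):

```python
import numpy as np

def fit_log_linear(x, y):
    """Fit log(y) = log(a) + b*log(x) by least squares; return (a, b)."""
    b, log_a = np.polyfit(np.log(x), np.log(y), 1)
    return np.exp(log_a), b

def predict(a, b, x):
    """Evaluate the fitted power law y = a * x**b."""
    return a * np.asarray(x, dtype=float) ** b
```

Once (a, b) have been fitted on a set of plans, such a model could, for example, estimate a mean LAD dose from the mean dose to the whole cardiac area when no LAD contour is available.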
Affiliation(s)
- Murilo Guimarães Borges
- Department of Medical Physics, Centre for Biomedical Engineering (CEB), University of Campinas, Rua Alexander Fleming, 163, Cidade Universitária, 13083-881 Campinas, SP, Brazil; Hospital da Mulher Prof. Dr. José Aristodemo Pinotti (CAISM), University of Campinas, R. Alexander Fleming, 101, Cidade Universitária, 13083-881 Campinas, SP, Brazil.
- Joyce Gruenwaldt
- Centro de Oncologia Campinas (COC), R. Alberto de Salvo, 311, Barão Geraldo, 13084-759 Campinas, SP, Brazil; Department of Radiotherapy, Hospital das Clínicas, University of Campinas, R. Vital Brasil, 251, Cidade Universitária, 13083-888 Campinas, SP, Brazil
- Danilo Matheus Barsanelli
- Centro de Oncologia Campinas (COC), R. Alberto de Salvo, 311, Barão Geraldo, 13084-759 Campinas, SP, Brazil
- Karina Emy Ishikawa
- Centro de Oncologia Campinas (COC), R. Alberto de Salvo, 311, Barão Geraldo, 13084-759 Campinas, SP, Brazil
- Silvia Radwanski Stuart
- Department of Radiotherapy, Instituto Brasileiro de Controle do Câncer (IBCC), Avenida Alcântara Machado, 2576, Mooca, 03102-002 São Paulo, SP, Brazil; Department of Radiotherapy, Instituto de Radiologia do Hospital das Clínicas - HCFMUSP (InRad), Hospital das Clínicas, University of São Paulo, Rua Doutor Ovídio Pires de Campos, 75, Portaria 1, Cerqueira César, 05403-010 São Paulo, SP, Brazil
14
Lopez-Ramirez F, Yasrab M, Tixier F, Kawamoto S, Fishman EK, Chu LC. The Role of AI in the Evaluation of Neuroendocrine Tumors: Current State of the Art. Semin Nucl Med 2025:S0001-2998(25)00012-1. [PMID: 40023682 DOI: 10.1053/j.semnuclmed.2025.02.003] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/03/2025] [Accepted: 02/07/2025] [Indexed: 03/04/2025]
Abstract
Advancements in Artificial Intelligence (AI) are driving a paradigm shift in the field of medical diagnostics, integrating new developments into various aspects of the clinical workflow. Neuroendocrine neoplasms are a diverse and heterogeneous group of tumors that pose significant diagnostic and management challenges due to their variable clinical presentations and biological behavior. Innovative approaches are essential to overcome these challenges and improve the current standard of care. AI-driven applications, particularly in imaging workflows, hold promise for enhancing tumor detection, classification, and grading by leveraging advanced radiomics and deep learning techniques. This article reviews the current and emerging applications of AI computer vision in the care of neuroendocrine neoplasms, focusing on its integration into imaging workflows, diagnostics, prognostic modeling, and therapeutic planning.
Affiliation(s)
- Felipe Lopez-Ramirez
- The Russell H. Morgan Department of Radiology and Radiological Science, Johns Hopkins University School of Medicine, Baltimore, Maryland
- Mohammad Yasrab
- The Russell H. Morgan Department of Radiology and Radiological Science, Johns Hopkins University School of Medicine, Baltimore, Maryland
- Florent Tixier
- The Russell H. Morgan Department of Radiology and Radiological Science, Johns Hopkins University School of Medicine, Baltimore, Maryland
- Satomi Kawamoto
- The Russell H. Morgan Department of Radiology and Radiological Science, Johns Hopkins University School of Medicine, Baltimore, Maryland
- Elliot K Fishman
- The Russell H. Morgan Department of Radiology and Radiological Science, Johns Hopkins University School of Medicine, Baltimore, Maryland
- Linda C Chu
- The Russell H. Morgan Department of Radiology and Radiological Science, Johns Hopkins University School of Medicine, Baltimore, Maryland.
15
Li J, Zhou Z, Yang J, Pepe A, Gsaxner C, Luijten G, Qu C, Zhang T, Chen X, Li W, Wodzinski M, Friedrich P, Xie K, Jin Y, Ambigapathy N, Nasca E, Solak N, Melito GM, Vu VD, Memon AR, Schlachta C, De Ribaupierre S, Patel R, Eagleson R, Chen X, Mächler H, Kirschke JS, de la Rosa E, Christ PF, Li HB, Ellis DG, Aizenberg MR, Gatidis S, Küstner T, Shusharina N, Heller N, Andrearczyk V, Depeursinge A, Hatt M, Sekuboyina A, Löffler MT, Liebl H, Dorent R, Vercauteren T, Shapey J, Kujawa A, Cornelissen S, Langenhuizen P, Ben-Hamadou A, Rekik A, Pujades S, Boyer E, Bolelli F, Grana C, Lumetti L, Salehi H, Ma J, Zhang Y, Gharleghi R, Beier S, Sowmya A, Garza-Villarreal EA, Balducci T, Angeles-Valdez D, Souza R, Rittner L, Frayne R, Ji Y, Ferrari V, Chatterjee S, Dubost F, Schreiber S, Mattern H, Speck O, Haehn D, John C, Nürnberger A, Pedrosa J, Ferreira C, Aresta G, Cunha A, Campilho A, Suter Y, Garcia J, Lalande A, Vandenbossche V, Van Oevelen A, Duquesne K, Mekhzoum H, Vandemeulebroucke J, Audenaert E, Krebs C, van Leeuwen T, Vereecke E, Heidemeyer H, Röhrig R, Hölzle F, Badeli V, Krieger K, Gunzer M, Chen J, van Meegdenburg T, Dada A, Balzer M, Fragemann J, Jonske F, Rempe M, Malorodov S, Bahnsen FH, Seibold C, Jaus A, Marinov Z, Jaeger PF, Stiefelhagen R, Santos AS, Lindo M, Ferreira A, Alves V, Kamp M, Abourayya A, Nensa F, Hörst F, Brehmer A, Heine L, Hanusrichter Y, Weßling M, Dudda M, Podleska LE, Fink MA, Keyl J, Tserpes K, Kim MS, Elhabian S, Lamecker H, Zukić D, Paniagua B, Wachinger C, Urschler M, Duong L, Wasserthal J, Hoyer PF, Basu O, Maal T, Witjes MJH, Schiele G, Chang TC, Ahmadi SA, Luo P, Menze B, Reyes M, Deserno TM, Davatzikos C, Puladi B, Fua P, Yuille AL, Kleesiek J, Egger J. MedShapeNet - a large-scale dataset of 3D medical shapes for computer vision. BIOMED ENG-BIOMED TE 2025; 70:71-90. 
[PMID: 39733351 DOI: 10.1515/bmt-2024-0396] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/22/2024] [Accepted: 09/21/2024] [Indexed: 12/31/2024]
Abstract
OBJECTIVES Shape is commonly used to describe objects. State-of-the-art algorithms in medical imaging, however, predominantly diverge from computer vision, where voxel grids, meshes, point clouds, and implicit surface models are widely used, as evidenced by the growing popularity of ShapeNet (51,300 models) and Princeton ModelNet (127,915 models). A comparably large collection of anatomical shapes (e.g., bones, organs, vessels) and 3D models of surgical instruments is missing. METHODS We present MedShapeNet to translate data-driven vision algorithms to medical applications and to adapt state-of-the-art vision algorithms to medical problems. As a unique feature, the majority of shapes are modeled directly on the imaging data of real patients. We present use cases in brain tumor classification, skull reconstruction, multi-class anatomy completion, education, and 3D printing. RESULTS To date, MedShapeNet includes 23 datasets with more than 100,000 shapes paired with annotations (ground truth). The data are freely accessible via a web interface and a Python application programming interface and can be used for discriminative, reconstructive, and variational benchmarks, as well as for applications in virtual, augmented, or mixed reality and 3D printing. CONCLUSIONS MedShapeNet contains medical shapes of anatomy and surgical instruments and will continue to collect data for benchmarks and applications. The project page is: https://medshapenet.ikim.nrw/.
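The abstract contrasts the shape representations common in vision pipelines (voxel grids, meshes, point clouds, implicit surfaces). As an illustration only, and not MedShapeNet's actual API, a minimal NumPy sketch of converting a voxel occupancy grid to a point cloud and back:

```python
import numpy as np

# Toy 3D occupancy grid standing in for an anatomical shape
# (illustrative only; MedShapeNet itself is accessed via its
# web interface / Python API, which is not reproduced here).
grid = np.zeros((8, 8, 8), dtype=bool)
grid[2:5, 2:5, 2:5] = True  # a 3x3x3 cube "organ"

def voxels_to_points(occupancy: np.ndarray) -> np.ndarray:
    """Return an (N, 3) point cloud of occupied voxel indices."""
    return np.argwhere(occupancy)

def points_to_voxels(points: np.ndarray, shape: tuple) -> np.ndarray:
    """Rasterize integer points back into an occupancy grid."""
    occupancy = np.zeros(shape, dtype=bool)
    occupancy[tuple(points.T)] = True
    return occupancy

points = voxels_to_points(grid)        # 27 points for the cube
restored = points_to_voxels(points, grid.shape)
```

The round trip is lossless for integer-aligned voxel data, which is why voxel grids and point clouds are often interchangeable for segmentation-derived shapes.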
Affiliation(s)
- Jianning Li
- Institute for Artificial Intelligence in Medicine (IKIM), University Hospital Essen (AöR), Essen, Germany
- Institute of Computer Graphics and Vision (ICG), Graz University of Technology, Graz, Austria
- Computer Algorithms for Medicine Laboratory (Cafe), Graz, Austria
| | - Zongwei Zhou
- Department of Computer Science, Johns Hopkins University, Baltimore, MD, USA
| | - Jiancheng Yang
- Computer Vision Laboratory, Swiss Federal Institute of Technology Lausanne (EPFL), Lausanne, Switzerland
| | - Antonio Pepe
- Institute of Computer Graphics and Vision (ICG), Graz University of Technology, Graz, Austria
- Computer Algorithms for Medicine Laboratory (Cafe), Graz, Austria
| | - Christina Gsaxner
- Institute of Computer Graphics and Vision (ICG), Graz University of Technology, Graz, Austria
- Computer Algorithms for Medicine Laboratory (Cafe), Graz, Austria
- Department of Oral and Maxillofacial Surgery, University Hospital RWTH Aachen, Aachen, Germany
| | - Gijs Luijten
- Institute for Artificial Intelligence in Medicine (IKIM), University Hospital Essen (AöR), Essen, Germany
- Institute of Computer Graphics and Vision (ICG), Graz University of Technology, Graz, Austria
- Computer Algorithms for Medicine Laboratory (Cafe), Graz, Austria
- Center for Virtual and Extended Reality in Medicine (ZvRM), University Hospital Essen, University Medicine Essen, Essen, Germany
| | - Chongyu Qu
- Department of Computer Science, Johns Hopkins University, Baltimore, MD, USA
| | - Tiezheng Zhang
- Department of Computer Science, Johns Hopkins University, Baltimore, MD, USA
| | - Xiaoxi Chen
- Department of Radiology, Renji Hospital, School of Medicine, Shanghai Jiao Tong University, Shanghai, China
| | - Wenxuan Li
- Department of Computer Science, Johns Hopkins University, Baltimore, MD, USA
| | - Marek Wodzinski
- Department of Measurement and Electronics, AGH University of Science and Technology, Krakow, Poland
- Information Systems Institute, University of Applied Sciences Western Switzerland (HES-SO Valais), Sierre, Switzerland
| | - Paul Friedrich
- Center for Medical Image Analysis & Navigation (CIAN), Department of Biomedical Engineering, University of Basel, Allschwil, Switzerland
| | - Kangxian Xie
- Department of Computer Science and Engineering, University at Buffalo, SUNY, NY, 14260, USA
| | - Yuan Jin
- Institute of Computer Graphics and Vision (ICG), Graz University of Technology, Graz, Austria
- Computer Algorithms for Medicine Laboratory (Cafe), Graz, Austria
- Research Center for Connected Healthcare Big Data, ZhejiangLab, Hangzhou, Zhejiang, China
| | - Narmada Ambigapathy
- Institute for Artificial Intelligence in Medicine (IKIM), University Hospital Essen (AöR), Essen, Germany
| | - Enrico Nasca
- Institute for Artificial Intelligence in Medicine (IKIM), University Hospital Essen (AöR), Essen, Germany
| | - Naida Solak
- Institute of Computer Graphics and Vision (ICG), Graz University of Technology, Graz, Austria
- Computer Algorithms for Medicine Laboratory (Cafe), Graz, Austria
| | - Gian Marco Melito
- Institute of Mechanics, Graz University of Technology, Graz, Austria
| | - Viet Duc Vu
- Department of Diagnostic and Interventional Radiology, University Hospital Giessen, Justus-Liebig-University Giessen, Giessen, Germany
| | - Afaque R Memon
- Department of Mechanical Engineering, Mehran University of Engineering and Technology, Jamshoro, Sindh, Pakistan
- Institute of Medical Robotics, Shanghai Jiao Tong University, Shanghai, People's Republic of China
| | - Christopher Schlachta
- Canadian Surgical Technologies & Advanced Robotics (CSTAR), University Hospital, London, Canada
| | - Sandrine De Ribaupierre
- Canadian Surgical Technologies & Advanced Robotics (CSTAR), University Hospital, London, Canada
| | - Rajnikant Patel
- Canadian Surgical Technologies & Advanced Robotics (CSTAR), University Hospital, London, Canada
| | - Roy Eagleson
- Canadian Surgical Technologies & Advanced Robotics (CSTAR), University Hospital, London, Canada
| | - Xiaojun Chen
- State Key Laboratory of Mechanical System and Vibration, School of Mechanical Engineering, Institute of Biomedical Manufacturing and Life Quality Engineering, Shanghai Jiao Tong University, Shanghai, People's Republic of China
- Institute of Medical Robotics, Shanghai Jiao Tong University, Shanghai, People's Republic of China
| | - Heinrich Mächler
- Department of Cardiac Surgery, Medical University Graz, Graz, Austria
| | - Jan Stefan Kirschke
- Department of Interventional and Diagnostic Neuroradiology, University Hospital of the Technical University of Munich, Munich, Germany
| | - Ezequiel de la Rosa
- icometrix, Leuven, Belgium
- Department of Informatics, Technical University of Munich, Garching bei München, Germany
| | | | - Hongwei Bran Li
- Department of Quantitative Biomedicine, University of Zurich, Zurich, Switzerland
| | - David G Ellis
- Department of Neurosurgery, University of Nebraska Medical Center, Omaha, NE, USA
| | - Michele R Aizenberg
- Department of Neurosurgery, University of Nebraska Medical Center, Omaha, NE, USA
| | - Sergios Gatidis
- Medical Image and Data Analysis (MIDAS.lab), Department of Diagnostic and Interventional Radiology, University Hospital of Tuebingen, Tuebingen, Germany
| | - Thomas Küstner
- Medical Image and Data Analysis (MIDAS.lab), Department of Diagnostic and Interventional Radiology, University Hospital of Tuebingen, Tuebingen, Germany
| | - Nadya Shusharina
- Division of Radiation Biophysics, Department of Radiation Oncology, Massachusetts General Hospital and Harvard Medical School, Boston, MA, USA
| | | | - Vincent Andrearczyk
- Institute of Informatics, HES-SO Valais-Wallis University of Applied Sciences and Arts Western Switzerland, Sierre, Switzerland
| | - Adrien Depeursinge
- Institute of Informatics, HES-SO Valais-Wallis University of Applied Sciences and Arts Western Switzerland, Sierre, Switzerland
- Department of Nuclear Medicine and Molecular Imaging, Lausanne University Hospital (CHUV), Lausanne, Switzerland
| | - Mathieu Hatt
- LaTIM, INSERM UMR 1101, Univ Brest, Brest, France
| | - Anjany Sekuboyina
- Department of Informatics, Technical University of Munich, Garching bei München, Germany
| | | | - Hans Liebl
- Department of Neuroradiology, Klinikum Rechts der Isar, Munich, Germany
| | - Reuben Dorent
- King's College London, Strand, London, UK
- Department of Neurosurgery, Brigham and Women's Hospital, Harvard Medical School, Boston, MA, USA
| | | | | | | | - Stefan Cornelissen
- Elisabeth-TweeSteden Hospital, Tilburg, Netherlands
- Video Coding & Architectures Research Group, Department of Electrical Engineering, Eindhoven University of Technology, Eindhoven, Netherlands
| | - Patrick Langenhuizen
- Elisabeth-TweeSteden Hospital, Tilburg, Netherlands
- Video Coding & Architectures Research Group, Department of Electrical Engineering, Eindhoven University of Technology, Eindhoven, Netherlands
| | - Achraf Ben-Hamadou
- Centre de Recherche en Numérique de Sfax, Laboratory of Signals, Systems, Artificial Intelligence and Networks, Sfax, Tunisia
- Udini, Aix-en-Provence, France
| | - Ahmed Rekik
- Centre de Recherche en Numérique de Sfax, Laboratory of Signals, Systems, Artificial Intelligence and Networks, Sfax, Tunisia
- Udini, Aix-en-Provence, France
| | - Sergi Pujades
- Inria, Université Grenoble Alpes, CNRS, Grenoble, France
| | - Edmond Boyer
- Inria, Université Grenoble Alpes, CNRS, Grenoble, France
| | - Federico Bolelli
- "Enzo Ferrari" Department of Engineering, University of Modena and Reggio Emilia, Modena, Italy
| | - Costantino Grana
- "Enzo Ferrari" Department of Engineering, University of Modena and Reggio Emilia, Modena, Italy
| | - Luca Lumetti
- "Enzo Ferrari" Department of Engineering, University of Modena and Reggio Emilia, Modena, Italy
| | - Hamidreza Salehi
- Department of Artificial Intelligence in Medical Sciences, Faculty of Advanced Technologies in Medicine, Iran University of Medical Sciences, Tehran, Iran
| | - Jun Ma
- Department of Laboratory Medicine and Pathobiology, University of Toronto, Toronto, ON, Canada
- Peter Munk Cardiac Centre, University Health Network, Toronto, ON, Canada
- Vector Institute, Toronto, ON, Canada
| | - Yao Zhang
- Shanghai AI Laboratory, Shanghai, People's Republic of China
| | - Ramtin Gharleghi
- School of Mechanical and Manufacturing Engineering, UNSW, Sydney, NSW, Australia
| | - Susann Beier
- School of Mechanical and Manufacturing Engineering, UNSW, Sydney, NSW, Australia
| | - Arcot Sowmya
- School of Computer Science and Engineering, UNSW, Sydney, NSW, Australia
| | | | - Thania Balducci
- Institute of Neurobiology, Universidad Nacional Autónoma de México Campus Juriquilla, Querétaro, Mexico
| | - Diego Angeles-Valdez
- Institute of Neurobiology, Universidad Nacional Autónoma de México Campus Juriquilla, Querétaro, Mexico
- Department of Biomedical Sciences of Cells and Systems, Cognitive Neuroscience Center, University Medical Center Groningen, University of Groningen, Groningen, Netherlands
| | - Roberto Souza
- Advanced Imaging and Artificial Intelligence Lab, Electrical and Software Engineering Department, The Hotchkiss Brain Institute, University of Calgary, Calgary, Canada
| | - Leticia Rittner
- Medical Image Computing Lab, School of Electrical and Computer Engineering (FEEC), University of Campinas, Campinas, Brazil
| | - Richard Frayne
- Radiology and Clinical Neurosciences Departments, The Hotchkiss Brain Institute, University of Calgary, Calgary, Canada
- Seaman Family MR Research Centre, Foothills Medical Center, Calgary, Canada
| | - Yuanfeng Ji
- University of Hong Kong, Pok Fu Lam, Hong Kong, People's Republic of China
| | - Vincenzo Ferrari
- Dipartimento di Ingegneria dell'Informazione, University of Pisa, Pisa, Italy
- EndoCAS Center, Department of Translational Research and of New Surgical and Medical Technologies, University of Pisa, Pisa, Italy
| | - Soumick Chatterjee
- Data and Knowledge Engineering Group, Faculty of Computer Science, Otto von Guericke University Magdeburg, Magdeburg, Germany
- Genomics Research Centre, Human Technopole, Milan, Italy
| | | | - Stefanie Schreiber
- German Centre for Neurodegenerative Disease, Magdeburg, Germany
- Centre for Behavioural Brain Sciences, Magdeburg, Germany
- Department of Neurology, Medical Faculty, University Hospital of Magdeburg, Magdeburg, Germany
| | - Hendrik Mattern
- German Centre for Neurodegenerative Disease, Magdeburg, Germany
- Centre for Behavioural Brain Sciences, Magdeburg, Germany
- Department of Biomedical Magnetic Resonance, Otto von Guericke University Magdeburg, Magdeburg, Germany
| | - Oliver Speck
- German Centre for Neurodegenerative Disease, Magdeburg, Germany
- Centre for Behavioural Brain Sciences, Magdeburg, Germany
- Department of Biomedical Magnetic Resonance, Otto von Guericke University Magdeburg, Magdeburg, Germany
| | - Daniel Haehn
- University of Massachusetts Boston, Boston, MA, USA
| | | | - Andreas Nürnberger
- Centre for Behavioural Brain Sciences, Magdeburg, Germany
- Data and Knowledge Engineering Group, Faculty of Computer Science, Otto von Guericke University Magdeburg, Magdeburg, Germany
| | - João Pedrosa
- Institute for Systems and Computer Engineering, Technology and Science (INESC TEC), Porto, Portugal
- Faculty of Engineering, University of Porto (FEUP), Porto, Portugal
| | - Carlos Ferreira
- Institute for Systems and Computer Engineering, Technology and Science (INESC TEC), Porto, Portugal
- Faculty of Engineering, University of Porto (FEUP), Porto, Portugal
| | - Guilherme Aresta
- Christian Doppler Lab for Artificial Intelligence in Retina, Department of Ophthalmology and Optometry, Medical University of Vienna, Vienna, Austria
| | - António Cunha
- Institute for Systems and Computer Engineering, Technology and Science (INESC TEC), Porto, Portugal
- Universidade of Trás-os-Montes and Alto Douro (UTAD), Vila Real, Portugal
| | - Aurélio Campilho
- Institute for Systems and Computer Engineering, Technology and Science (INESC TEC), Porto, Portugal
- Faculty of Engineering, University of Porto (FEUP), Porto, Portugal
| | - Yannick Suter
- ARTORG Center for Biomedical Engineering Research, University of Bern, Bern, Switzerland
| | - Jose Garcia
- Center for Biomedical Image Computing and Analytics (CBICA), Perelman School of Medicine, University of Pennsylvania, Philadelphia, USA
| | - Alain Lalande
- ICMUB Laboratory, Faculty of Medicine, CNRS UMR 6302, University of Burgundy, Dijon, France
- Medical Imaging Department, University Hospital of Dijon, Dijon, France
| | | | - Aline Van Oevelen
- Department of Human Structure and Repair, Ghent University, Ghent, Belgium
| | - Kate Duquesne
- Department of Human Structure and Repair, Ghent University, Ghent, Belgium
| | - Hamza Mekhzoum
- Department of Electronics and Informatics (ETRO), Vrije Universiteit Brussel, Brussels, Belgium
| | - Jef Vandemeulebroucke
- Department of Electronics and Informatics (ETRO), Vrije Universiteit Brussel, Brussels, Belgium
| | - Emmanuel Audenaert
- Department of Human Structure and Repair, Ghent University, Ghent, Belgium
| | - Claudia Krebs
- Department of Cellular and Physiological Sciences, Life Sciences Centre, University of British Columbia, Vancouver, BC, Canada
| | - Timo van Leeuwen
- Department of Development & Regeneration, KU Leuven Campus Kulak, Kortrijk, Belgium
| | - Evie Vereecke
- Department of Development & Regeneration, KU Leuven Campus Kulak, Kortrijk, Belgium
| | - Hauke Heidemeyer
- Institute of Medical Informatics, University Hospital RWTH Aachen, Aachen, Germany
| | - Rainer Röhrig
- Institute of Medical Informatics, University Hospital RWTH Aachen, Aachen, Germany
| | - Frank Hölzle
- Department of Oral and Maxillofacial Surgery, University Hospital RWTH Aachen, Aachen, Germany
| | - Vahid Badeli
- Institute of Fundamentals and Theory in Electrical Engineering, Graz University of Technology, Graz, Austria
| | - Kathrin Krieger
- Leibniz-Institut für Analytische Wissenschaften-ISAS-e.V., Dortmund, Germany
| | - Matthias Gunzer
- Leibniz-Institut für Analytische Wissenschaften-ISAS-e.V., Dortmund, Germany
- Institute for Experimental Immunology and Imaging, University Hospital, University Duisburg-Essen, Essen, Germany
| | - Jianxu Chen
- Leibniz-Institut für Analytische Wissenschaften-ISAS-e.V., Dortmund, Germany
| | - Timo van Meegdenburg
- Institute for Artificial Intelligence in Medicine (IKIM), University Hospital Essen (AöR), Essen, Germany
- Faculty of Statistics, Technical University Dortmund, Dortmund, Germany
| | - Amin Dada
- Institute for Artificial Intelligence in Medicine (IKIM), University Hospital Essen (AöR), Essen, Germany
| | - Miriam Balzer
- Institute for Artificial Intelligence in Medicine (IKIM), University Hospital Essen (AöR), Essen, Germany
| | - Jana Fragemann
- Institute for Artificial Intelligence in Medicine (IKIM), University Hospital Essen (AöR), Essen, Germany
| | - Frederic Jonske
- Institute for Artificial Intelligence in Medicine (IKIM), University Hospital Essen (AöR), Essen, Germany
| | - Moritz Rempe
- Institute for Artificial Intelligence in Medicine (IKIM), University Hospital Essen (AöR), Essen, Germany
| | - Stanislav Malorodov
- Institute for Artificial Intelligence in Medicine (IKIM), University Hospital Essen (AöR), Essen, Germany
| | - Fin H Bahnsen
- Institute for Artificial Intelligence in Medicine (IKIM), University Hospital Essen (AöR), Essen, Germany
| | - Constantin Seibold
- Institute for Artificial Intelligence in Medicine (IKIM), University Hospital Essen (AöR), Essen, Germany
| | - Alexander Jaus
- Computer Vision for Human-Computer Interaction Lab, Department of Informatics, Karlsruhe Institute of Technology, Karlsruhe, Germany
| | - Zdravko Marinov
- Computer Vision for Human-Computer Interaction Lab, Department of Informatics, Karlsruhe Institute of Technology, Karlsruhe, Germany
| | - Paul F Jaeger
- German Cancer Research Center (DKFZ) Heidelberg, Interactive Machine Learning Group, Heidelberg, Germany
- Helmholtz Imaging, DKFZ Heidelberg, Heidelberg, Germany
| | - Rainer Stiefelhagen
- Computer Vision for Human-Computer Interaction Lab, Department of Informatics, Karlsruhe Institute of Technology, Karlsruhe, Germany
| | - Ana Sofia Santos
- Institute for Artificial Intelligence in Medicine (IKIM), University Hospital Essen (AöR), Essen, Germany
- Center Algoritmi, LASI, University of Minho, Braga, Portugal
| | - Mariana Lindo
- Institute for Artificial Intelligence in Medicine (IKIM), University Hospital Essen (AöR), Essen, Germany
- Center Algoritmi, LASI, University of Minho, Braga, Portugal
| | - André Ferreira
- Institute for Artificial Intelligence in Medicine (IKIM), University Hospital Essen (AöR), Essen, Germany
- Center Algoritmi, LASI, University of Minho, Braga, Portugal
| | - Victor Alves
- Center Algoritmi, LASI, University of Minho, Braga, Portugal
| | - Michael Kamp
- Institute for Artificial Intelligence in Medicine (IKIM), University Hospital Essen (AöR), Essen, Germany
- Cancer Research Center Cologne Essen (CCCE), University Medicine Essen (AöR), Essen, Germany
- Institute for Neuroinformatics, Ruhr University Bochum, Bochum, Germany
- Department of Data Science & AI, Monash University, Clayton, VIC, Australia
| | - Amr Abourayya
- Institute for Artificial Intelligence in Medicine (IKIM), University Hospital Essen (AöR), Essen, Germany
- Institute for Neuroinformatics, Ruhr University Bochum, Bochum, Germany
| | - Felix Nensa
- Institute for Artificial Intelligence in Medicine (IKIM), University Hospital Essen (AöR), Essen, Germany
- Institute of Diagnostic and Interventional Radiology and Neuroradiology, University Hospital Essen (AöR), Essen, Germany
| | - Fabian Hörst
- Institute for Artificial Intelligence in Medicine (IKIM), University Hospital Essen (AöR), Essen, Germany
- Cancer Research Center Cologne Essen (CCCE), University Medicine Essen (AöR), Essen, Germany
| | - Alexander Brehmer
- Institute for Artificial Intelligence in Medicine (IKIM), University Hospital Essen (AöR), Essen, Germany
| | - Lukas Heine
- Institute for Artificial Intelligence in Medicine (IKIM), University Hospital Essen (AöR), Essen, Germany
- Cancer Research Center Cologne Essen (CCCE), University Medicine Essen (AöR), Essen, Germany
| | - Yannik Hanusrichter
- Department of Tumour Orthopaedics and Revision Arthroplasty, Orthopaedic Hospital Volmarstein, Wetter, Germany
- Center for Musculoskeletal Surgery, University Hospital of Essen, Essen, Germany
| | - Martin Weßling
- Department of Tumour Orthopaedics and Revision Arthroplasty, Orthopaedic Hospital Volmarstein, Wetter, Germany
- Center for Musculoskeletal Surgery, University Hospital of Essen, Essen, Germany
| | - Marcel Dudda
- Department of Trauma, Hand and Reconstructive Surgery, University Hospital Essen, Essen, Germany
- Department of Orthopaedics and Trauma Surgery, BG-Klinikum Duisburg, University of Duisburg-Essen, Essen, Germany
| | - Lars E Podleska
- Department of Tumor Orthopedics and Sarcoma Surgery, University Hospital Essen (AöR), Essen, Germany
| | - Matthias A Fink
- Clinic for Diagnostic and Interventional Radiology, University Hospital Heidelberg, Heidelberg, Germany
| | - Julius Keyl
- Institute for Artificial Intelligence in Medicine (IKIM), University Hospital Essen (AöR), Essen, Germany
| | - Konstantinos Tserpes
- Department of Informatics and Telematics, Harokopio University of Athens, Tavros, Greece
| | - Moon-Sung Kim
- Institute for Artificial Intelligence in Medicine (IKIM), University Hospital Essen (AöR), Essen, Germany
- Institute of Diagnostic and Interventional Radiology and Neuroradiology, University Hospital Essen (AöR), Essen, Germany
- Cancer Research Center Cologne Essen (CCCE), University Medicine Essen (AöR), Essen, Germany
| | - Shireen Elhabian
- Scientific Computing and Imaging Institute, University of Utah, Salt Lake City, USA
| | | | - Dženan Zukić
- Medical Computing, Kitware Inc., Carrboro, NC, USA
| | | | - Christian Wachinger
- Lab for Artificial Intelligence in Medical Imaging, Department of Radiology, Technical University Munich, Munich, Germany
| | - Martin Urschler
- Institute for Medical Informatics, Statistics and Documentation, Medical University Graz, Graz, Austria
| | - Luc Duong
- Department of Software and IT Engineering, Ecole de Technologie Superieure, Montreal, Quebec, Canada
| | - Jakob Wasserthal
- Clinic of Radiology & Nuclear Medicine, University Hospital Basel, Basel, Switzerland
| | - Peter F Hoyer
- Pediatric Clinic II, University Children's Hospital Essen, University Duisburg-Essen, Essen, Germany
| | - Oliver Basu
- Pediatric Clinic III, University Children's Hospital Essen, University Duisburg-Essen, Essen, Germany
- Center for Virtual and Extended Reality in Medicine (ZvRM), University Hospital Essen, University Medicine Essen, Essen, Germany
| | - Thomas Maal
- Radboudumc 3D-Lab, Department of Oral and Maxillofacial Surgery, Radboud University Nijmegen Medical Centre, Nijmegen, The Netherlands
| | - Max J H Witjes
- 3D Lab, Department of Oral and Maxillofacial Surgery, University Medical Center Groningen, Groningen, the Netherlands
| | - Gregor Schiele
- Intelligent Embedded Systems Lab, University of Duisburg-Essen, Bismarckstraße 90, 47057 Duisburg, Germany
| | | | | | - Ping Luo
- University of Hong Kong, Pok Fu Lam, Hong Kong, People's Republic of China
| | - Bjoern Menze
- Department of Quantitative Biomedicine, University of Zurich, Zurich, Switzerland
| | - Mauricio Reyes
- ARTORG Center for Biomedical Engineering Research, University of Bern, Bern, Switzerland
- Department of Radiation Oncology, University Hospital Bern, University of Bern, Bern, Switzerland
| | - Thomas M Deserno
- Peter L. Reichertz Institute for Medical Informatics of TU Braunschweig and Hannover Medical School, Braunschweig, Germany
| | - Christos Davatzikos
- Center for Biomedical Image Computing and Analytics, Penn Neurodegeneration Genomics Center, University of Pennsylvania, Philadelphia, PA, USA; and Center for AI and Data Science for Integrated Diagnostics, University of Pennsylvania, Philadelphia, PA, USA
| | - Behrus Puladi
- Department of Oral and Maxillofacial Surgery, University Hospital RWTH Aachen, Aachen, Germany
- Institute of Medical Informatics, University Hospital RWTH Aachen, Aachen, Germany
| | - Pascal Fua
- Computer Vision Laboratory, Swiss Federal Institute of Technology Lausanne (EPFL), Lausanne, Switzerland
| | - Alan L Yuille
- Department of Computer Science, Johns Hopkins University, Baltimore, MD, USA
| | - Jens Kleesiek
- Institute for Artificial Intelligence in Medicine (IKIM), University Hospital Essen (AöR), Essen, Germany
- German Cancer Consortium (DKTK), Partner Site Essen, Essen, Germany
- Department of Physics, TU Dortmund University, Dortmund, Germany
- Cancer Research Center Cologne Essen (CCCE), University Medicine Essen (AöR), Essen, Germany
| | - Jan Egger
- Institute for Artificial Intelligence in Medicine (IKIM), University Hospital Essen (AöR), Essen, Germany
- Institute of Computer Graphics and Vision (ICG), Graz University of Technology, Graz, Austria
- Computer Algorithms for Medicine Laboratory (Cafe), Graz, Austria
- Cancer Research Center Cologne Essen (CCCE), University Medicine Essen (AöR), Essen, Germany
- Center for Virtual and Extended Reality in Medicine (ZvRM), University Hospital Essen, University Medicine Essen, Essen, Germany
16
Lee HI, Son J, Cho B, Goh Y, Jung J, Park JH, Chie EK, Kim KS, Kim YH, Kang HC, Yoon SM. Development and validation of a prediction model for cardiac events in patients with hepatocellular carcinoma undergoing stereotactic body radiation therapy. Int J Radiat Oncol Biol Phys 2025:S0360-3016(25)00155-5. [PMID: 39993541 DOI: 10.1016/j.ijrobp.2025.02.013] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/20/2024] [Revised: 01/23/2025] [Accepted: 02/12/2025] [Indexed: 02/26/2025]
Abstract
PURPOSE To develop and validate a prediction model for major adverse cardiac events (MACE) in hepatocellular carcinoma (HCC) patients treated with stereotactic body radiation therapy (SBRT). METHODS AND MATERIALS We retrospectively identified 1893 HCC patients who received SBRT at two institutions, with one serving as the development cohort (n=1473) and the other as the validation cohort (n=420). MACE was defined as any cardiac event classified as grade 3 or higher, according to the Common Terminology Criteria for Adverse Events, version 5.0. We evaluated 15 clinical and 88 dosimetric parameters using bootstrapped forward selection and area under the curve (AUC) to identify significant predictors for MACE. Based on these factors, we constructed the Cardiac Event Index (CEI) model, categorizing patients into distinct risk groups. Model performance was assessed for discrimination, efficiency, and calibration. RESULTS The occurrence rate of MACE was 5.8% in the development cohort and 6.7% in the validation cohort. Five parameters were selected for predicting MACE and were incorporated into the CEI model using the following equation: CEI = age score + hypertension + current smoking + (2 × history of cardiac disease) + (0.05 × heart-V5 [%]), which yielded an AUC of 0.770 for MACE and 0.750 for coronary artery disease. The CEI model stratified patients into low-, intermediate-, and high-risk groups that had MACE incidence rates of 0.4%, 4.9%, and 22.8%, respectively. The impact of heart-V5 on MACE was minimal in low- and intermediate-risk groups but pronounced in the high-risk group. In the validation cohort, the CEI model yielded an AUC of 0.809 for MACE and 0.793 for coronary artery disease. CONCLUSIONS The CEI model demonstrated robust performance in predicting MACE, revealing the significant influence of clinical factors and the minimal impact of SBRT. This model can inform evidence-based decisions regarding cardiac dose optimization in SBRT planning.
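The CEI equation given above can be computed directly. The sketch below encodes the clinical factors as 0/1 indicators and treats the age score as a precomputed input, since the abstract does not define the age-score mapping (both choices are assumptions, not the paper's implementation):

```python
def cardiac_event_index(age_score: float,
                        hypertension: bool,
                        current_smoking: bool,
                        cardiac_history: bool,
                        heart_v5_pct: float) -> float:
    """CEI = age score + hypertension + current smoking
             + 2 * history of cardiac disease + 0.05 * heart-V5 [%].
    Risk-group cutoffs for low/intermediate/high are reported in the
    paper and are not reproduced here."""
    return (age_score
            + int(hypertension)
            + int(current_smoking)
            + 2 * int(cardiac_history)
            + 0.05 * heart_v5_pct)

# Hypothetical patient: age score 1, hypertensive, non-smoker,
# prior cardiac disease, heart-V5 of 20%.
cei = cardiac_event_index(1, True, False, True, 20.0)
# 1 + 1 + 0 + 2 + 1.0 = 5.0
```

Note how the 0.05 weight keeps heart-V5's contribution small relative to the clinical factors, consistent with the abstract's finding that dosimetry mattered mainly in the high-risk group.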
Affiliation(s)
- Hye In Lee
- Department of Radiation Oncology, Asan Medical Center, University of Ulsan College of Medicine, Seoul, Republic of Korea.
| | - Jaeman Son
- Department of Radiation Oncology, Seoul National University Hospital, Seoul, Republic of Korea
| | - Byungchul Cho
- Department of Radiation Oncology, Asan Medical Center, University of Ulsan College of Medicine, Seoul, Republic of Korea
| | - Youngmoon Goh
- Department of Radiation Oncology, Asan Medical Center, University of Ulsan College of Medicine, Seoul, Republic of Korea
| | - Jinhong Jung
- Department of Radiation Oncology, Asan Medical Center, University of Ulsan College of Medicine, Seoul, Republic of Korea
| | - Jin-Hong Park
- Department of Radiation Oncology, Asan Medical Center, University of Ulsan College of Medicine, Seoul, Republic of Korea
| | - Eui Kyu Chie
- Department of Radiation Oncology, Seoul National University Hospital, Seoul, Republic of Korea; Department of Radiation Oncology, Seoul National University College of Medicine, Seoul, Republic of Korea
| | - Kyung Su Kim
- Department of Radiation Oncology, Seoul National University Hospital, Seoul, Republic of Korea; Department of Radiation Oncology, Seoul National University College of Medicine, Seoul, Republic of Korea
| | - Young-Hak Kim
- Division of Cardiology, Heart Institute, Asan Medical Center, University of Ulsan College of Medicine, Seoul, Republic of Korea
| | - Hyun-Cheol Kang
- Department of Radiation Oncology, Seoul National University Hospital, Seoul, Republic of Korea; Department of Radiation Oncology, Seoul National University College of Medicine, Seoul, Republic of Korea.
| | - Sang Min Yoon
- Department of Radiation Oncology, Asan Medical Center, University of Ulsan College of Medicine, Seoul, Republic of Korea.
17
Kulkarni AM, Kruse D, Harper K, Lam E, Osman H, Ansari DH, Sivanesan U, Bashir MR, Costa AF, McInnes M, van der Pol CB. Current State of Evidence for Use of MRI in LI-RADS. J Magn Reson Imaging 2025. [PMID: 39981949 DOI: 10.1002/jmri.29748] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/21/2024] [Revised: 02/07/2025] [Accepted: 02/08/2025] [Indexed: 02/22/2025] Open
Abstract
The American College of Radiology Liver Imaging Reporting and Data System (LI-RADS) is the preeminent framework for the classification and risk stratification of liver observations on imaging in patients at high risk for hepatocellular carcinoma. This review discusses the pathogenesis of hepatocellular carcinoma and the use of MRI in LI-RADS, including the LI-RADS diagnostic algorithm, its components, and its reproducibility, with reference to the latest supporting evidence. The LI-RADS treatment response algorithms are reviewed, including the more recent radiation treatment response algorithm. The application of artificial intelligence, points of controversy, LI-RADS relative to other liver imaging systems, and possible future directions are explored. After reading this article, the reader will have an understanding of the foundation and application of LI-RADS as well as possible future directions.
Affiliation(s)
- Ameya Madhav Kulkarni
- Department of Medical Imaging, Hamilton Health Sciences, McMaster University, Hamilton, Ontario, Canada
- Department of Diagnostic Imaging, Juravinski Hospital and Cancer Centre, Hamilton Health Sciences, Hamilton, Ontario, Canada
- Danielle Kruse
- Departments of Radiology and Medicine, Duke University Medical Center, Durham, North Carolina, USA
- Kelly Harper
- Department of Radiology, The Ottawa Hospital, University of Ottawa, Ottawa, Ontario, Canada
- Eric Lam
- Ottawa Hospital Research Institute Clinical Epidemiology Program, Ottawa, Ontario, Canada
- Hoda Osman
- Ottawa Hospital Research Institute Clinical Epidemiology Program, Ottawa, Ontario, Canada
- Danyaal H Ansari
- Ottawa Hospital Research Institute Clinical Epidemiology Program, Ottawa, Ontario, Canada
- Umaseh Sivanesan
- Department of Diagnostic Radiology, Kingston Health Sciences Centre, Kingston General Hospital, Kingston, Ontario, Canada
- Mustafa R Bashir
- Departments of Radiology and Medicine, Duke University Medical Center, Durham, North Carolina, USA
- Center for Advanced Magnetic Resonance Development, Duke University Medical Center, Durham, North Carolina, USA
- Andreu F Costa
- Queen Elizabeth II Health Sciences Centre and Dalhousie University, Halifax, Nova Scotia, Canada
- Matthew McInnes
- Department of Radiology, The Ottawa Hospital, University of Ottawa, Ottawa, Ontario, Canada
- Ottawa Hospital Research Institute Clinical Epidemiology Program, Ottawa, Ontario, Canada
- Christian B van der Pol
- Department of Medical Imaging, Hamilton Health Sciences, McMaster University, Hamilton, Ontario, Canada
- Department of Diagnostic Imaging, Juravinski Hospital and Cancer Centre, Hamilton Health Sciences, Hamilton, Ontario, Canada
18
Wang H, Qiao X, Ding W, Chen G, Miao Y, Guo R, Zhu X, Cheng Z, Xu J, Li B, Huang Q. Robust and generalizable artificial intelligence for multi-organ segmentation in ultra-low-dose total-body PET imaging: a multi-center and cross-tracer study. Eur J Nucl Med Mol Imaging 2025:10.1007/s00259-025-07156-8. [PMID: 39969540 DOI: 10.1007/s00259-025-07156-8] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/14/2024] [Accepted: 02/12/2025] [Indexed: 02/20/2025]
Abstract
PURPOSE Positron Emission Tomography (PET) is a powerful molecular imaging tool that visualizes radiotracer distribution to reveal physiological processes. Recent advances in total-body PET have enabled low-dose, CT-free imaging; however, accurate organ segmentation using PET-only data remains challenging. This study develops and validates a deep learning model for multi-organ PET segmentation across varied imaging conditions and tracers, addressing critical needs for fully PET-based quantitative analysis. MATERIALS AND METHODS This retrospective study employed a 3D deep learning-based model for automated multi-organ segmentation on PET images acquired under diverse conditions, including low-dose and non-attenuation-corrected scans. Using a dataset of 798 patients from multiple centers with varied tracers, model robustness and generalizability were evaluated via multi-center and cross-tracer tests. Ground-truth labels for 23 organs were generated from CT images, and segmentation accuracy was assessed using the Dice similarity coefficient (DSC). RESULTS In the multi-center dataset from four different institutions, our model achieved average DSC values of 0.834, 0.825, 0.819, and 0.816 across varying dose reduction factors and correction conditions for FDG PET images. In the cross-tracer dataset, the model reached average DSC values of 0.737, 0.573, 0.830, 0.661, and 0.708 for DOTATATE, FAPI, FDG, Grazytracer, and PSMA, respectively. CONCLUSION The proposed model demonstrated effective, fully PET-based multi-organ segmentation across a range of imaging conditions, centers, and tracers, achieving high robustness and generalizability. These findings underscore the model's potential to enhance clinical diagnostic workflows by supporting ultra-low dose PET imaging. CLINICAL TRIAL NUMBER Not applicable. 
This is a retrospective study based on collected data, which has been approved by the Research Ethics Committee of Ruijin Hospital affiliated to Shanghai Jiao Tong University School of Medicine.
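The evaluation metric in this and several of the following PET segmentation studies is the Dice similarity coefficient (DSC). A minimal sketch of how it is computed on binary voxel masks (illustrative only, not the authors' code; masks are represented here as sets of voxel coordinates):

```python
def dice(pred, truth):
    """Dice similarity coefficient between two binary voxel masks,
    given as sets (or iterables) of voxel coordinates."""
    pred, truth = set(pred), set(truth)
    if not pred and not truth:
        return 1.0  # both empty: perfect agreement by convention
    inter = len(pred & truth)
    return 2.0 * inter / (len(pred) + len(truth))

# Toy example: two overlapping "masks" of four voxels each
print(dice({1, 2, 3, 4}, {3, 4, 5, 6}))  # 2*2/(4+4) = 0.5
```

The same formula underlies the per-organ DSC values reported in these abstracts; in practice it is evaluated on full 3D label volumes rather than coordinate sets.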
Affiliation(s)
- Hanzhong Wang
- Department of Nuclear Medicine, Ruijin Hospital, School of Medicine, Shanghai Jiao Tong University, Shanghai, China
- School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai, China
- Institute for Medical Imaging Technology, Ruijin Hospital, Shanghai Jiao Tong University, Shanghai, China
- Xiaoya Qiao
- Department of Nuclear Medicine, Ruijin Hospital, School of Medicine, Shanghai Jiao Tong University, Shanghai, China
- School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai, China
- Institute for Medical Imaging Technology, Ruijin Hospital, Shanghai Jiao Tong University, Shanghai, China
- Wenxiang Ding
- Department of Nuclear Medicine, Ruijin Hospital, School of Medicine, Shanghai Jiao Tong University, Shanghai, China
- School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai, China
- Gaoyu Chen
- School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai, China
- Ying Miao
- Department of Nuclear Medicine, Ruijin Hospital, School of Medicine, Shanghai Jiao Tong University, Shanghai, China
- Rui Guo
- Department of Nuclear Medicine, Ruijin Hospital, School of Medicine, Shanghai Jiao Tong University, Shanghai, China
- Institute for Medical Imaging Technology, Ruijin Hospital, Shanghai Jiao Tong University, Shanghai, China
- Xiaohua Zhu
- Department of Nuclear Medicine, Tongji Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, China
- Zhaoping Cheng
- Department of PET/CT, The First Affiliated Hospital of Shandong First Medical University, Jinan, China
- Jiehua Xu
- Department of Nuclear Medicine, Zhuhai Clinical Medical College of Jinan University (Zhuhai People's Hospital), Zhuhai, China
- Biao Li
- Department of Nuclear Medicine, Ruijin Hospital, School of Medicine, Shanghai Jiao Tong University, Shanghai, China
- Institute for Medical Imaging Technology, Ruijin Hospital, Shanghai Jiao Tong University, Shanghai, China
- Qiu Huang
- Department of Nuclear Medicine, Ruijin Hospital, School of Medicine, Shanghai Jiao Tong University, Shanghai, China
- School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai, China
19
Haubold J, Pollok OB, Holtkamp M, Salhöfer L, Schmidt CS, Bojahr C, Straus J, Schaarschmidt BM, Borys K, Kohnke J, Wen Y, Opitz M, Umutlu L, Forsting M, Friedrich CM, Nensa F, Hosch R. Moving Beyond CT Body Composition Analysis: Using Style Transfer for Bringing CT-Based Fully-Automated Body Composition Analysis to T2-Weighted MRI Sequences. Invest Radiol 2025:00004424-990000000-00294. [PMID: 39961134 DOI: 10.1097/rli.0000000000001162] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 02/20/2025]
Abstract
OBJECTIVES Deep learning for body composition analysis (BCA) is gaining traction in clinical research, offering rapid and automated ways to measure body features like muscle or fat volume. However, most current methods prioritize computed tomography (CT) over magnetic resonance imaging (MRI). This study presents a deep learning approach for automatic BCA using MR T2-weighted sequences. METHODS Initial BCA segmentations (10 body regions and 4 body parts) were generated by mapping CT segmentations from body and organ analysis (BOA) model to synthetic MR images created using an in-house trained CycleGAN. In total, 30 synthetic data pairs were used to train an initial nnU-Net V2 in 3D, and this preliminary model was then applied to segment 120 real T2-weighted MRI sequences from 120 patients (46% female) with a median age of 56 (interquartile range, 17.75), generating early segmentation proposals. These proposals were refined by human annotators, and nnU-Net V2 2D and 3D models were trained using 5-fold cross-validation on this optimized dataset of real MR images. Performance was evaluated using Sørensen-Dice, Surface Dice, and Hausdorff Distance metrics including 95% confidence intervals for cross-validation and ensemble models. RESULTS The 3D ensemble segmentation model achieved the highest Dice scores for the body region classes: bone 0.926 (95% confidence interval [CI], 0.914-0.937), muscle 0.968 (95% CI, 0.961-0.975), subcutaneous fat 0.98 (95% CI, 0.971-0.986), nervous system 0.973 (95% CI, 0.965-0.98), thoracic cavity 0.978 (95% CI, 0.969-0.984), abdominal cavity 0.989 (95% CI, 0.986-0.991), mediastinum 0.92 (95% CI, 0.901-0.936), pericardium 0.945 (95% CI, 0.924-0.96), brain 0.966 (95% CI, 0.927-0.989), and glands 0.905 (95% CI, 0.886-0.921). 
Furthermore, the body-part 2D ensemble model reached the highest Dice scores for all labels: arms 0.952 (95% CI, 0.937-0.965), head + neck 0.965 (95% CI, 0.953-0.976), legs 0.978 (95% CI, 0.968-0.988), and torso 0.99 (95% CI, 0.988-0.991). The overall average Dice of the ensemble models across body parts (2D = 0.971, 3D = 0.969, P = ns) and body regions (2D = 0.935, 3D = 0.955, P < 0.001) indicates stable performance across all classes. CONCLUSIONS The presented approach facilitates efficient and automated extraction of BCA parameters from T2-weighted MRI sequences, providing precise and detailed body composition information across various regions and body parts.
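The per-class 95% confidence intervals reported above can be obtained in several ways; a percentile bootstrap over per-case Dice scores is one common choice. A sketch under that assumption (the paper's exact CI procedure is not described here, and the scores below are made up):

```python
import random
import statistics

def bootstrap_ci(values, n_boot=2000, alpha=0.05, seed=0):
    """Percentile-bootstrap confidence interval for the mean of
    per-case scores: resample with replacement, take the empirical
    alpha/2 and 1-alpha/2 quantiles of the resampled means."""
    rng = random.Random(seed)
    means = sorted(
        statistics.mean(rng.choices(values, k=len(values)))
        for _ in range(n_boot)
    )
    lo = means[int((alpha / 2) * n_boot)]
    hi = means[int((1 - alpha / 2) * n_boot) - 1]
    return lo, hi

# Hypothetical per-case Dice scores for one class
dice_scores = [0.91, 0.95, 0.93, 0.97, 0.92, 0.96, 0.94, 0.90]
lo, hi = bootstrap_ci(dice_scores)
print(f"mean={statistics.mean(dice_scores):.3f}, 95% CI=({lo:.3f}, {hi:.3f})")
```

With cross-validation folds, the same idea can be applied per fold or per case; the bootstrap makes no normality assumption about the score distribution.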
Affiliation(s)
- Johannes Haubold
- From the Institute of Diagnostic and Interventional Radiology and Neuroradiology, University Hospital Essen, Essen, Germany (J.H., O.B.P., M.H., L.S., C.B., J.S., B.M.S., K.B., J.K., M.O., L.U., M.F., F.N., R.H.); Institute for Artificial Intelligence in Medicine, University Hospital Essen, Essen, Germany (J.H., O.B.P., M.H., L.S., C.S.S., C.B., J.S., K.B., J.K., Y.W., M.O., L.U., M.F., F.N., R.H.); Institute for Transfusion Medicine, University Hospital Essen, Essen, Germany (C.S.S.); Center of Sleep and Telemedicine, University Hospital Essen-Ruhrlandklinik, Essen, Germany (C.S.S.); Data Integration Center, Central IT Department, University Hospital Essen, Essen, Germany (Y.W.); Department of Computer Science, University of Applied Sciences and Arts Dortmund (FHDO), Dortmund, Germany (C.M.F.); and Institute for Medical Informatics, Biometry, and Epidemiology (IMIBE), University Hospital Essen, Essen, Germany (C.M.F.)
20
Bui MHH, Robert A, Etxebeste A, Rit S. Detection and correction of translational motion in SPECT with exponential data consistency conditions. Phys Med Biol 2025; 70:055003. [PMID: 39883955 DOI: 10.1088/1361-6560/adb09a] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/29/2024] [Accepted: 01/30/2025] [Indexed: 02/01/2025]
Abstract
Objective. Rigid patient motion can cause artifacts in single photon emission computed tomography (SPECT) images, compromising diagnosis and treatment planning. Exponential data consistency conditions (eDCCs) are mathematical equations describing the redundancy of exponential SPECT measurements. It has recently been shown that eDCCs can be used to detect patient motion in SPECT projections. This study aimed at developing a fully data-driven method based on eDCCs to estimate and correct for translational motion in SPECT. Approach. If all activity is encompassed inside a convex region K of constant attenuation, eDCCs can be derived from SPECT projections and used to verify the pairwise consistency of these projections. Our method assumes a single patient translation between two detector gantry positions. The proposed method estimates both the three-dimensional shift and the motion index, i.e., the index of the first gantry position after motion occurred. The estimation minimizes the eDCCs between the subset of projections before the motion index and the subset of motion-corrected projections after the motion index. Results. We evaluated the proposed method using Monte Carlo simulated and experimental data of a NEMA IEC phantom and simulated projections of a liver patient. The method's robustness was assessed by applying various motion vectors and motion indices. Motion detection and correction with eDCCs were sensitive to movements above 3 mm. The accuracy of the estimation was below the 2.39 mm pixel spacing, with good precision in all studied cases. The proposed method led to a significant improvement in the quality of reconstructed SPECT images. The activity recovery coefficient relative to the SPECT image without motion was above 90% on average over the six spheres of the NEMA IEC phantom and 97% for the liver patient. For example, for a (2, 2, 2) cm translation in the middle of the liver acquisition, the activity recovery coefficient improved from 35% (non-corrected projections) to 99% (motion-corrected projections). Significance. The study proposed and demonstrated the good performance of translational motion detection and correction with eDCCs on SPECT acquisition data.
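The joint estimation of shift and motion index described in the Approach can be pictured as a grid search minimizing a consistency cost between the pre-motion projections and the motion-corrected post-motion projections. A toy 1D sketch, using a simple sum-of-squares surrogate in place of the actual exponential DCCs (everything below is illustrative, not the authors' algorithm):

```python
def estimate_motion(projections, candidate_shifts):
    """Toy joint search: find (motion_index, shift) such that shifting
    projections[motion_index:] back by `shift` bins minimizes their
    disagreement with the pre-motion projections."""
    def shift_back(p, s):
        # integer circular shift stands in for true geometric correction
        s %= len(p)
        return p[s:] + p[:s]

    def cost(ref, moved, s):
        corrected = [shift_back(p, s) for p in moved]
        # surrogate consistency: in this toy model all projections of a
        # static object are identical, so compare against the first one
        return sum((a - b) ** 2
                   for p in corrected
                   for a, b in zip(p, ref[0]))

    best = None
    for idx in range(1, len(projections)):
        ref, moved = projections[:idx], projections[idx:]
        for s in candidate_shifts:
            c = cost(ref, moved, s)
            if best is None or c < best[0]:
                best = (c, idx, s)
    return best[1], best[2]

base = [0.0, 1.0, 3.0, 1.0, 0.0]
shifted = base[-2:] + base[:-2]          # motion: rotate right by 2 bins
projs = [base] * 3 + [shifted] * 2       # motion occurs at gantry index 3
print(estimate_motion(projs, range(5)))  # → (3, 2)
```

The real method replaces the surrogate with eDCC residuals and searches over continuous 3D shifts, but the minimize-over-(index, shift) structure is the same.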
Affiliation(s)
- My Hoang Hoa Bui
- INSA-Lyon, Université Claude Bernard Lyon 1, CNRS, Inserm, CREATIS UMR 5220, U1294, F-69373 Lyon, France
- Antoine Robert
- INSA-Lyon, Université Claude Bernard Lyon 1, CNRS, Inserm, CREATIS UMR 5220, U1294, F-69373 Lyon, France
- Ane Etxebeste
- INSA-Lyon, Université Claude Bernard Lyon 1, CNRS, Inserm, CREATIS UMR 5220, U1294, F-69373 Lyon, France
- Simon Rit
- INSA-Lyon, Université Claude Bernard Lyon 1, CNRS, Inserm, CREATIS UMR 5220, U1294, F-69373 Lyon, France
21
Torkaman M, Jemaa S, Fredrickson J, Fernandez Coimbra A, De Crespigny A, Carano RAD. Comparative analysis of intestinal tumor segmentation in PET CT scans using organ based and whole body deep learning. BMC Med Imaging 2025; 25:52. [PMID: 39962481 PMCID: PMC11834234 DOI: 10.1186/s12880-025-01587-3] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/15/2024] [Accepted: 02/10/2025] [Indexed: 02/20/2025] Open
Abstract
BACKGROUND 18-Fluoro-deoxyglucose positron emission tomography/computed tomography (FDG-PET/CT) is a valuable imaging tool widely used in the management of cancer patients. Deep learning models excel at segmenting highly metabolic tumors but face challenges in regions with complex anatomy and normal cell uptake, such as the gastro-intestinal tract. Despite these challenges, it remains important to achieve accurate segmentation of gastro-intestinal tumors. METHODS Here, we present an international multicenter comparative study between a novel organ-focused approach and a whole-body training method to evaluate the effectiveness of training data homogeneity in accurately identifying gastro-intestinal tumors. In the organ-focused method, the training data are limited to cases with intestinal tumors, so the network is trained on more homogeneous data with a stronger presence of intestinal tumor signal. The whole-body approach extracts the intestinal tumors from the results of a model trained on whole-body scans. Both approaches were trained using diffuse large B-cell lymphoma (DLBCL) patients from a large multi-center clinical trial (NCT01287741). RESULTS We report an improved mean (±SD) Dice score of 0.78 (±0.21) for the organ-based approach on the hold-out set, compared to 0.63 (±0.30) for the whole-body approach, with a p-value of less than 0.0001. At the lesion level, the proposed organ-based approach also shows increased precision, recall, and F1-score. An independent trial was used to evaluate the generalizability of the proposed method to non-Hodgkin's lymphoma (NHL) patients with follicular lymphoma (FL). CONCLUSION Given the variability in structure and metabolism across tissues in the body, our quantitative findings suggest that organ-focused training enhances intestinal tumor segmentation by leveraging tissue homogeneity in the training data, in contrast to the whole-body training approach, which by its very nature uses a more heterogeneous data set.
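The lesion-level precision, recall, and F1-score cited in the Results follow the standard definitions over matched lesions. A minimal sketch (the counts used in the example are hypothetical):

```python
def lesion_metrics(tp, fp, fn):
    """Lesion-level detection metrics from matched-lesion counts:
    tp = correctly detected lesions, fp = spurious detections,
    fn = missed lesions."""
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1

# Hypothetical counts for illustration
p, r, f = lesion_metrics(tp=30, fp=10, fn=6)
print(f"precision={p:.2f} recall={r:.2f} F1={f:.2f}")
```

In lesion-level evaluation, a predicted lesion is usually "matched" to a ground-truth lesion by an overlap criterion before these counts are formed; the matching rule is a separate design choice.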
Affiliation(s)
- Richard A D Carano
- Genentech, Inc, South San Francisco, CA, USA
- F. Hoffman-La Roche Ltd, Basel, Switzerland
22
Okada N, Inoue S, Liu C, Mitarai S, Nakagawa S, Matsuzawa Y, Fujimi S, Yamamoto G, Kuroda T. Unified total body CT image with multiple organ specific windowings: validating improved diagnostic accuracy and speed in trauma cases. Sci Rep 2025; 15:5654. [PMID: 39955327 PMCID: PMC11830084 DOI: 10.1038/s41598-024-83346-y] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/08/2024] [Accepted: 12/13/2024] [Indexed: 02/17/2025] Open
Abstract
Total-body CT scans are useful in saving trauma patients; however, interpreting numerous images with varied window settings slows injury detection. We developed an algorithm for a "unified total-body CT image with multiple organ-specific windowings (Uni-CT)" and assessed its impact on physician accuracy and speed in trauma CT interpretation. From November 7, 2008, to June 19, 2020, 40 cases of total-body CT images for blunt trauma with multiple injuries were collected from the emergency department of Osaka General Medical Center and randomly divided into two groups. In half of the cases, the Uni-CT algorithm using semantic segmentation assigned visibility-friendly window settings to each organ. Four physicians with varying levels of experience interpreted 20 cases using the algorithm and 20 cases in conventional settings. Performance was analyzed based on the accuracy, sensitivity, and specificity of the target findings, and on diagnostic speed. In the proposed and conventional groups, patients had an average of 2.6 and 2.5 target findings, mean ages of 51.8 and 57.7 years, and male proportions of 60% and 45%, respectively. The agreement rate for physicians' diagnoses was κ = 0.70. Average accuracy, sensitivity, and specificity of target findings were 84.8%, 74.3%, and 96.9% versus 85.5%, 81.2%, and 91.5%, respectively, with no significant differences. Diagnostic speed per case averaged 71.9 and 110.4 s in each group (p < 0.05). The Uni-CT algorithm improved the diagnostic speed of total-body CT for trauma while maintaining accuracy comparable to that of conventional methods.
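Organ-specific windowing as used by Uni-CT rests on the standard CT window center/width mapping from Hounsfield units to display values. A sketch of that mapping (the presets in the comment are conventional values, not taken from the paper):

```python
def apply_window(hu_values, center, width):
    """Map raw CT values in Hounsfield units (HU) to 8-bit display
    values for a given window center/width: values below the window
    render black, values above render white, values inside scale
    linearly."""
    lo = center - width / 2
    out = []
    for hu in hu_values:
        frac = (hu - lo) / width         # 0..1 inside the window
        frac = min(max(frac, 0.0), 1.0)  # clamp outside the window
        out.append(round(frac * 255))
    return out

# Conventional presets: lung (center -600, width 1500),
# abdomen (40, 400), bone (300, 1500)
print(apply_window([-1000, -600, 40, 300, 1500], center=40, width=400))
```

Uni-CT's contribution is to pick the window per segmented organ and fuse the results into one image, rather than asking the reader to cycle through presets.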
Affiliation(s)
- Naoki Okada
- Graduate School of Informatics, Kyoto University, Yoshida-honmachi, Sakyo-ku, Kyoto-shi, Kyoto, 606-8501, Japan
- Osaka General Medical Center, Osaka-shi, Osaka, Japan
- Chang Liu
- Graduate School of Informatics, Kyoto University, Yoshida-honmachi, Sakyo-ku, Kyoto-shi, Kyoto, 606-8501, Japan
- Sho Mitarai
- Graduate School of Informatics, Kyoto University, Yoshida-honmachi, Sakyo-ku, Kyoto-shi, Kyoto, 606-8501, Japan
- Goshiro Yamamoto
- Graduate School of Informatics, Kyoto University, Yoshida-honmachi, Sakyo-ku, Kyoto-shi, Kyoto, 606-8501, Japan
- Tomohiro Kuroda
- Graduate School of Informatics, Kyoto University, Yoshida-honmachi, Sakyo-ku, Kyoto-shi, Kyoto, 606-8501, Japan
23
Hinck D, Segeroth M, Miazza J, Berdajs D, Bremerich J, Wasserthal J, Pradella M. Automatic Segmentation of Cardiovascular Structures on Chest CT Data Sets: An Update of the TotalSegmentator. Eur J Radiol 2025; 185:112006. [PMID: 39983596 DOI: 10.1016/j.ejrad.2025.112006] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/07/2024] [Revised: 01/30/2025] [Accepted: 02/13/2025] [Indexed: 02/23/2025]
Abstract
INTRODUCTION Quantitative analysis is an important factor in routine radiology. Recently, the TotalSegmentator was released, a free-to-use segmentation tool with 104 structures included. Our aim was to add missing cardiovascular (CV) structures and enhance previously included ones, to potentially help find new insights into diseases such as aortic aneurysms in future studies. METHODS The TotalSegmentator data set with 1613 CT scans (mean age 63.6 ± 15.9 (SD); 675 female) was used. CT scans were selected from clinical routine, including various protocols and pathologies. The data set was split into training (1472), validation (57), and testing (84) sets. Segmentations were performed in dedicated imaging software using an iterative training approach to reduce segmentation workload. Eleven structures were added, and segmentations of six structures were enhanced. The Dice similarity coefficient (DICE) and the normalized surface distance (NSD) were calculated on an internal and an external data set. External validation was performed on the Dongyang data set. The Mann-Whitney U test was performed to evaluate the performance increase on the previously included structures. RESULTS Median DICE [IQR] and NSD [IQR] were 0.967 [0.020] and 1.000 [0.000], respectively. DICE (p < 0.001) and NSD (p < 0.001) significantly increased for 5/6 structures. On evaluation using the external data set, DICE and NSD were 0.970 [0.020] and 1.000 [0.000]. CONCLUSION Accurate segmentations of new CV structures and enhanced segmentations of previously included CV structures were successfully implemented. This suggests further usage in research studies while still running on conventional computers with or without a dedicated graphics processing unit.
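The Mann-Whitney U test used here to compare score distributions between model versions reduces to a pairwise count. A minimal sketch of the U statistic (computing a p-value would additionally require the null distribution of U; the scores below are made up):

```python
def mann_whitney_u(xs, ys):
    """Mann-Whitney U statistic in its pairwise-count form: the number
    of (x, y) pairs with x > y, counting ties as one half."""
    u = 0.0
    for x in xs:
        for y in ys:
            if x > y:
                u += 1.0
            elif x == y:
                u += 0.5
    return u

# Toy per-structure DICE scores for two model versions (made up)
old = [0.90, 0.92, 0.91]
new = [0.95, 0.96, 0.94]
print(mann_whitney_u(new, old))  # 9.0: every new score beats every old one
```

U = len(xs) * len(ys) means complete separation in favor of xs; U near half that value indicates no systematic difference.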
Affiliation(s)
- Daniel Hinck
- Department of Radiology, University Hospital Basel, University of Basel, Petersgraben 4, 4031 Basel, Switzerland
- Martin Segeroth
- Department of Radiology, University Hospital Basel, University of Basel, Petersgraben 4, 4031 Basel, Switzerland
- Jules Miazza
- Department of Cardiac Surgery, University Hospital Basel, University of Basel, Petersgraben 4, 4031 Basel, Switzerland
- Denis Berdajs
- Department of Cardiac Surgery, University Hospital Basel, University of Basel, Petersgraben 4, 4031 Basel, Switzerland
- Jens Bremerich
- Department of Radiology, University Hospital Basel, University of Basel, Petersgraben 4, 4031 Basel, Switzerland
- Jakob Wasserthal
- Department of Radiology, University Hospital Basel, University of Basel, Petersgraben 4, 4031 Basel, Switzerland
- Maurice Pradella
- Department of Radiology, University Hospital Basel, University of Basel, Petersgraben 4, 4031 Basel, Switzerland
24
Katseena Yawson A, Sallem H, Seidensaal K, Welzel T, Klüter S, Maria Paul K, Dorsch S, Beyer C, Debus J, Jäkel O, Bauer J, Giske K. Enhancing U-Net-based Pseudo-CT generation from MRI using CT-guided bone segmentation for radiation treatment planning in head & neck cancer patients. Phys Med Biol 2025; 70:045018. [PMID: 39898433 DOI: 10.1088/1361-6560/adb124] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/18/2024] [Accepted: 01/31/2025] [Indexed: 02/04/2025]
Abstract
Objective. This study investigates the effects of various training protocols on enhancing the precision of MRI-only Pseudo-CT generation for radiation treatment planning and adaptation in head & neck cancer patients. It specifically tackles the challenge of differentiating bone from air, a limitation that frequently results in substantial deviations in the representation of bony structures on Pseudo-CT images. Approach. The study included 25 patients, utilizing pre-treatment MRI-CT image pairs. Five cases were randomly selected for testing, with the remaining 20 used for model training and validation. A 3D U-Net deep learning model was employed, trained on patches of size 64³ with an overlap of 32³. MRI scans were acquired using the Dixon gradient echo (GRE) technique, and various contrasts were explored to improve Pseudo-CT accuracy, including in-phase, water-only, and combined water-only and fat-only images. Additionally, bone extraction from the fat-only image was integrated as an additional channel to better capture bone structures on Pseudo-CTs. The evaluation involved both image quality and dosimetric metrics. Main results. The generated Pseudo-CTs were compared with their corresponding registered target CTs. The mean absolute error (MAE) and peak signal-to-noise ratio (PSNR) for the base model using combined water-only and fat-only images were 19.20 ± 5.30 HU and 57.24 ± 1.44 dB, respectively. Following the integration of an additional channel using CT-guided bone segmentation, the model's performance improved, achieving an MAE and PSNR of 18.32 ± 5.51 HU and 57.82 ± 1.31 dB, respectively. The measured results are statistically significant, with a p-value < 0.05. The dosimetric assessment confirmed that radiation treatment planning on Pseudo-CT achieved accuracy comparable to conventional CT. Significance. This study demonstrates improved accuracy in bone representation on Pseudo-CTs, achieved through a combination of water-only, fat-only, and extracted-bone images, thus enhancing the feasibility of MRI-based simulation for radiation treatment planning.
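The MAE and PSNR image-quality metrics reported above are straightforward to compute from paired voxel values. A sketch (toy values; the HU data range passed to PSNR here is an assumption, as the paper does not state which range it used):

```python
import math

def mae(pred, target):
    """Mean absolute error in HU between Pseudo-CT and reference CT voxels."""
    return sum(abs(p - t) for p, t in zip(pred, target)) / len(pred)

def psnr(pred, target, data_range):
    """Peak signal-to-noise ratio in dB for a given dynamic range."""
    mse = sum((p - t) ** 2 for p, t in zip(pred, target)) / len(pred)
    return float("inf") if mse == 0 else 10 * math.log10(data_range ** 2 / mse)

# Toy paired voxel values (HU)
pred = [10.0, -5.0, 30.0, 0.0]
ref = [12.0, -5.0, 25.0, 1.0]
print(mae(pred, ref))  # (2 + 0 + 5 + 1) / 4 = 2.0
print(psnr(pred, ref, data_range=4096))
```

In practice both metrics are evaluated over full 3D volumes, often restricted to a body or bone mask so that background air does not dominate the averages.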
Affiliation(s)
- Ama Katseena Yawson
- German Cancer Research Center (DKFZ), Division of Medical Physics in Radiation Oncology, Heidelberg, Germany
- Heidelberg Institute for Radiation Oncology (HIRO), National Center for Radiation Research in Oncology (NCRO), Heidelberg, Germany
- Heidelberg University, Medical Faculty, Heidelberg, Germany
- Habiba Sallem
- Heidelberg Institute for Radiation Oncology (HIRO), National Center for Radiation Research in Oncology (NCRO), Heidelberg, Germany
- Heidelberg University, Medical Faculty, Heidelberg, Germany
- Heidelberg University Hospital, Department of Radiation Oncology, Heidelberg, Germany
- Katharina Seidensaal
- Heidelberg Institute for Radiation Oncology (HIRO), National Center for Radiation Research in Oncology (NCRO), Heidelberg, Germany
- Heidelberg University Hospital, Department of Radiation Oncology, Heidelberg, Germany
- Thomas Welzel
- Heidelberg Institute for Radiation Oncology (HIRO), National Center for Radiation Research in Oncology (NCRO), Heidelberg, Germany
- Heidelberg University Hospital, Department of Radiation Oncology, Heidelberg, Germany
- Sebastian Klüter
- Heidelberg Institute for Radiation Oncology (HIRO), National Center for Radiation Research in Oncology (NCRO), Heidelberg, Germany
- Heidelberg University Hospital, Department of Radiation Oncology, Heidelberg, Germany
- Katharina Maria Paul
- Heidelberg Institute for Radiation Oncology (HIRO), National Center for Radiation Research in Oncology (NCRO), Heidelberg, Germany
- Heidelberg University Hospital, Department of Radiation Oncology, Heidelberg, Germany
- Stefan Dorsch
- Heidelberg Institute for Radiation Oncology (HIRO), National Center for Radiation Research in Oncology (NCRO), Heidelberg, Germany
- Heidelberg University Hospital, Department of Radiation Oncology, Heidelberg, Germany
- Cedric Beyer
- Heidelberg Institute for Radiation Oncology (HIRO), National Center for Radiation Research in Oncology (NCRO), Heidelberg, Germany
- Heidelberg University Hospital, Department of Radiation Oncology, Heidelberg, Germany
- Jürgen Debus
- Heidelberg Institute for Radiation Oncology (HIRO), National Center for Radiation Research in Oncology (NCRO), Heidelberg, Germany
- Heidelberg University Hospital, Department of Radiation Oncology, Heidelberg, Germany
- Heidelberg Ion Therapy Center (HIT), Heidelberg, Germany
- Oliver Jäkel
- German Cancer Research Center (DKFZ), Division of Medical Physics in Radiation Oncology, Heidelberg, Germany
- Heidelberg Institute for Radiation Oncology (HIRO), National Center for Radiation Research in Oncology (NCRO), Heidelberg, Germany
- Heidelberg Ion Therapy Center (HIT), Heidelberg, Germany
- Julia Bauer
- Heidelberg Institute for Radiation Oncology (HIRO), National Center for Radiation Research in Oncology (NCRO), Heidelberg, Germany
- Heidelberg University Hospital, Department of Radiation Oncology, Heidelberg, Germany
- Kristina Giske
- German Cancer Research Center (DKFZ), Division of Medical Physics in Radiation Oncology, Heidelberg, Germany
- Heidelberg Institute for Radiation Oncology (HIRO), National Center for Radiation Research in Oncology (NCRO), Heidelberg, Germany
25
Kim SS, Seo H, Choi K, Kim S, Han K, Kim YY, Seo N, Chung JJ, Lim JS. Artificial Intelligence Model for Detection of Colorectal Cancer on Routine Abdominopelvic CT Examinations: A Training and External-Testing Study. AJR Am J Roentgenol 2025. [PMID: 39936855 DOI: 10.2214/ajr.24.32396] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 02/13/2025]
Abstract
Background: Radiologists are prone to missing some colorectal cancers (CRCs) on routine abdominopelvic CT examinations that are in fact detectable on the images. Objective: To develop an artificial intelligence (AI) model to detect CRC on routine abdominopelvic CT examinations performed without bowel preparation. Methods: This retrospective study included 3945 patients (2275 men, 1670 women; mean age, 62 years): a training set of 2662 patients from Severance Hospital with CRC who underwent routine contrast-enhanced abdominopelvic CT before treatment between January 2010 and December 2014; and internal (841 patients from Severance Hospital) and external (442 patients from Gangnam Severance Hospital) test sets of patients who underwent routine contrast-enhanced abdominopelvic CT for any indication and colonoscopy within a 2-month interval between January 2018 and June 2018. A radiologist with access to colonoscopy reports determined which CRCs were visible on CT and placed bounding boxes around lesions on all slices showing CRC, serving as the reference standard. A contemporary transformer-based object detection network was adapted and trained to create an AI model (https://github.com/boktae7/colorectaltumor) to automatically detect CT-visible CRC on unprocessed DICOM slices. AI performance was evaluated using alternative free-response ROC analysis, per-lesion sensitivity, and per-patient specificity; performance in the external test set was compared to that of two radiologist readers. Clinical radiology reports were also reviewed. Results: In the internal (93 CT-visible CRCs in 92 patients) and external (26 CT-visible CRCs in 26 patients) test sets, the AI had AUCs of 0.867 and 0.808, sensitivities of 79.6% and 80.8%, and specificities of 91.2% and 90.9%, respectively.
In the external test set, the two radiologists had sensitivities of 73.1% and 80.8% (p=.74 and p>.99 vs AI) and specificities of 98.3% and 98.6% (both p<.001 vs AI); AI correctly detected five of nine CRCs missed by at least one reader. The clinical radiology reports raised suspicion for 75.9% of CRCs in the external test set. Conclusion: The findings demonstrate the AI model's utility for automated detection of CRC on routine abdominopelvic CT examinations. Clinical Impact: The AI model could help reduce the frequency of missed CRCs on routine examinations performed for reasons unrelated to CRC detection.
Affiliation(s)
- Seung-Seob Kim
- Department of Radiology, Research Institute of Radiological Science, Severance Hospital, Yonsei University College of Medicine, Korea
| | - Hyunseok Seo
- Bionics Research Center, Biomedical Research Division, Korea Institute of Science and Technology (KIST), Korea
| | - Kihwan Choi
- Department of Applied Artificial Intelligence, Seoul National University of Science and Technology, Korea
| | - Sungwon Kim
- Department of Radiology, Research Institute of Radiological Science, Severance Hospital, Yonsei University College of Medicine, Korea
| | - Kyunghwa Han
- Department of Radiology, Research Institute of Radiological Science, Center for Clinical Imaging Data Science (CCIDS), Yonsei University College of Medicine, Korea
| | - Yeun-Yoon Kim
- Department of Radiology and Center for Imaging Science, Samsung Medical Center, Sungkyunkwan University School of Medicine, Korea
| | - Nieun Seo
- Department of Radiology, Research Institute of Radiological Science, Severance Hospital, Yonsei University College of Medicine, Korea
| | - Jae-Joon Chung
- Department of Radiology, Gangnam Severance Hospital, Yonsei University College of Medicine, Korea
| | - Joon Seok Lim
- Department of Radiology, Research Institute of Radiological Science, Severance Hospital, Yonsei University College of Medicine, Korea
| |
Collapse
|
26
Lu Z, Hu J, Chen G, Jiang H, Shih CT, Afshar-Oromieh A, Rominger A, Shi K, Mok GSP. Automatic bone marrow segmentation for precise [177Lu]Lu-PSMA-617 dosimetry. Med Phys 2025. [PMID: 39935268 DOI: 10.1002/mp.17684] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/04/2024] [Revised: 01/22/2025] [Accepted: 01/31/2025] [Indexed: 02/13/2025] Open
Abstract
BACKGROUND Bone marrow (BM) is the dominant dose-limiting organ in [177Lu]Lu-PSMA-617 therapy for patients with metastatic castration-resistant prostate cancer, where BM dosimetry is challenging because of the difficulty of BM segmentation. PURPOSE We aim to develop an automatic image-based segmentation method for peri-therapeutic sequential [177Lu]Lu-PSMA-617 images to enable personalized BM dosimetry. METHODS Quantitative SPECT/CT imaging at 2, 20, 40, and 60 h (n = 14) or 200 h (n = 16) post [177Lu]Lu-PSMA-617 injection was analyzed for 10 patients over 30 treatment cycles. X-means clustering was applied to deep learning-based lumbar spine segmentations on CT images to classify the BM region. A single-threshold method, two empirical segmentation methods (one sphere and five spheres), and gold-standard manual segmentation were also implemented. Dice similarity coefficients of the BM masks from the X-means clustering and single-threshold methods were calculated against the gold standard. BM mean absorbed dose (Dmean) was obtained for each segmentation method. Absolute errors and Bland-Altman analysis were also evaluated for BM Dmean derived from the evaluated segmentation methods compared with the gold standard. The Wilcoxon signed-rank test was performed for statistical evaluation. BM Dmean was correlated with the change in platelets and white blood cells (WBC) pre- and post-treatment using Pearson correlation analysis. RESULTS Across 30 cycles in 10 patients, the average Dice was 0.76 ± 0.18 for the X-means clustering method, compared with 0.61 ± 0.19 for the single-threshold method. The gold standard yielded a mean BM Dmean of 0.46 ± 0.69 Gy. The X-means clustering method exhibited significantly (p < 0.01) lower mean absolute BM Dmean errors (25.34 ± 64.48%), followed by the single-threshold (32.46 ± 69.49%), one-sphere (50.53 ± 35.40%), and five-sphere (72.73 ± 115.97%) methods.
Bland-Altman analysis revealed that the X-means clustering method had a smaller Dmean difference (0.0330 Gy) than the single-threshold (0.0512 Gy), one-sphere (-0.1903 Gy), and five-sphere (0.2108 Gy) methods. Stronger correlations (r ≤ -0.65) were found between platelet/WBC changes and BM Dmean for the gold standard and X-means clustering methods than for the other methods. CONCLUSIONS X-means clustering is a feasible approach for segmenting the BM on the CT component of peri-therapy SPECT/CT and shows advantages over the single-threshold and empirical sphere segmentation methods.
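The Dice similarity coefficient used to compare candidate BM masks against the manual gold standard is straightforward once each mask is represented as a set of voxel coordinates. A minimal sketch with toy coordinates, not real SPECT/CT data:

```python
def dice(mask_a, mask_b):
    """Dice similarity coefficient between two binary masks,
    each given as a set of voxel coordinates."""
    inter = len(mask_a & mask_b)
    denom = len(mask_a) + len(mask_b)
    return 2 * inter / denom if denom else 1.0

# Toy example: two overlapping marrow masks on a tiny grid.
a = {(0, 0), (0, 1), (1, 0), (1, 1)}
b = {(0, 1), (1, 1), (2, 1)}
print(round(dice(a, b), 3))  # 2*2 / (4+3) ≈ 0.571
```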
Affiliation(s)
- Zhonglin Lu
- Biomedical Imaging Laboratory (BIG), Department of Electrical and Computer Engineering, Faculty of Science and Technology, University of Macau, Taipa, Macau SAR, China
- Center for Cognitive and Brain Sciences, Institute of Collaborative Innovation, University of Macau, Taipa, Macau SAR, China
- Jiaxi Hu
- Department of Nuclear Medicine, Inselspital, Bern University Hospital, University of Bern, Bern, Switzerland
- Gefei Chen
- Biomedical Imaging Laboratory (BIG), Department of Electrical and Computer Engineering, Faculty of Science and Technology, University of Macau, Taipa, Macau SAR, China
- Han Jiang
- Biomedical Imaging Laboratory (BIG), Department of Electrical and Computer Engineering, Faculty of Science and Technology, University of Macau, Taipa, Macau SAR, China
- PET-CT Center, Fujian Medical University Union Hospital, Fuzhou, China
- Cheng-Ting Shih
- Department of Biomedical Imaging and Radiological Science, China Medical University, Taichung, Taiwan
- Ali Afshar-Oromieh
- Department of Nuclear Medicine, Inselspital, Bern University Hospital, University of Bern, Bern, Switzerland
- Axel Rominger
- Department of Nuclear Medicine, Inselspital, Bern University Hospital, University of Bern, Bern, Switzerland
- Kuangyu Shi
- Department of Nuclear Medicine, Inselspital, Bern University Hospital, University of Bern, Bern, Switzerland
- Greta S P Mok
- Biomedical Imaging Laboratory (BIG), Department of Electrical and Computer Engineering, Faculty of Science and Technology, University of Macau, Taipa, Macau SAR, China
- Center for Cognitive and Brain Sciences, Institute of Collaborative Innovation, University of Macau, Taipa, Macau SAR, China
- Ministry of Education Frontiers Science Center for Precision Oncology, Faculty of Health Science, University of Macau, Taipa, Macau SAR, China
27
Chatterjee D, Singh S, Enriquez E, Arbab-Zadeh A, Lima JAC, Ambale Venkatesh B. Automated detection and quantification of aortic calcification in coronary CT angiography using deep learning: A comparative study of manual and automated scoring methods. J Cardiovasc Comput Tomogr 2025:S1934-5925(25)00043-7. [PMID: 39955204 DOI: 10.1016/j.jcct.2025.02.003] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 01/16/2025] [Accepted: 02/07/2025] [Indexed: 02/17/2025]
Abstract
BACKGROUND Aortic calcification, often incidentally detected during coronary artery calcium (CAC) scans, is underutilized in cardiovascular risk assessments due to manual quantification challenges. This study evaluates a deep learning model for automating aortic calcification detection and quantification in coronary CT angiography (CTA) images. We validate against manual assessments and compare the association of manual and automated assessments with incident major adverse cardiovascular events (MACE). METHODS A deep learning algorithm was applied to CAC scans from 670 participants in the CORE320 and CORE64 studies. Aortic calcification in the aortic root, ascending, and descending aorta was quantified manually and automatically. Concordance correlation coefficients (CCC) assessed agreement, and Cox regression and ROC analyses evaluated association with incident MACE. RESULTS Automated scoring demonstrated high concordance with manual methods (CCC: 0.926-0.992), supporting its reliability in assessing aortic calcifications. ROC analysis revealed that the automated method was as effective as the manual technique in predicting MACE (p > 0.05). CONCLUSION Automated aortic calcification scoring is a reliable alternative to manual methods, offering consistency and efficiency in the analysis of incidental findings on CAC scans.
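The concordance correlation coefficient (CCC) used above to assess agreement between manual and automated calcium scores follows Lin's formula, combining Pearson correlation with a penalty for location and scale shifts. A hedged sketch in plain Python; the score values are invented for illustration:

```python
from statistics import fmean

def ccc(x, y):
    """Lin's concordance correlation coefficient between two raters."""
    mx, my = fmean(x), fmean(y)
    n = len(x)
    sx2 = sum((v - mx) ** 2 for v in x) / n          # variance of rater x
    sy2 = sum((v - my) ** 2 for v in y) / n          # variance of rater y
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y)) / n  # covariance
    return 2 * sxy / (sx2 + sy2 + (mx - my) ** 2)

# Invented calcium scores: manual vs automated readings for five scans.
manual    = [120.0, 45.5, 300.2, 0.0, 88.1]
automated = [118.0, 47.0, 295.0, 0.0, 90.5]
print(round(ccc(manual, automated), 3))
```

Unlike plain Pearson correlation, CCC reaches 1 only when the two raters agree exactly, which is why it suits method-comparison studies like this one.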
Affiliation(s)
- Armin Arbab-Zadeh
- Department of Cardiovascular Medicine, Johns Hopkins University, Baltimore, MD, USA
- Joao A C Lima
- Department of Cardiovascular Medicine, Johns Hopkins University, Baltimore, MD, USA
28
Pomohaci MD, Grasu MC, Băicoianu-Nițescu AŞ, Enache RM, Lupescu IG. Systematic Review: AI Applications in Liver Imaging with a Focus on Segmentation and Detection. Life (Basel) 2025; 15:258. [PMID: 40003667 PMCID: PMC11856300 DOI: 10.3390/life15020258] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/29/2024] [Revised: 02/02/2025] [Accepted: 02/05/2025] [Indexed: 02/27/2025] Open
Abstract
The liver is a frequent focus in radiology due to its diverse pathology, and artificial intelligence (AI) could improve diagnosis and management. This systematic review aimed to assess and categorize research studies on AI applications in liver radiology from 2018 to 2024, classifying them according to areas of interest (AOIs), AI task and imaging modality used. We excluded reviews and non-liver and non-radiology studies. Using the PRISMA guidelines, we identified 6680 articles from the PubMed/Medline, Scopus and Web of Science databases; 1232 were found to be eligible. A further analysis of a subgroup of 329 studies focused on detection and/or segmentation tasks was performed. Liver lesions were the main AOI and CT was the most popular modality, while classification was the predominant AI task. Most detection and/or segmentation studies (48.02%) used only public datasets, and 27.65% used only one public dataset. Code sharing was practiced by 10.94% of these articles. This review highlights the predominance of classification tasks, especially applied to liver lesion imaging, most often using CT imaging. Detection and/or segmentation tasks relied mostly on public datasets, while external testing and code sharing were lacking. Future research should explore multi-task models and improve dataset availability to enhance AI's clinical impact in liver imaging.
Affiliation(s)
- Mihai Dan Pomohaci
- Department 8: Radiology, Discipline of Radiology, Medical Imaging and Interventional Radiology I, University of Medicine and Pharmacy “Carol Davila”, 050474 Bucharest, Romania
- Department of Radiology and Medical Imaging, Fundeni Clinical Institute, 022328 Bucharest, Romania
- Mugur Cristian Grasu
- Department 8: Radiology, Discipline of Radiology, Medical Imaging and Interventional Radiology I, University of Medicine and Pharmacy “Carol Davila”, 050474 Bucharest, Romania
- Department of Radiology and Medical Imaging, Fundeni Clinical Institute, 022328 Bucharest, Romania
- Alexandru-Ştefan Băicoianu-Nițescu
- Department 8: Radiology, Discipline of Radiology, Medical Imaging and Interventional Radiology I, University of Medicine and Pharmacy “Carol Davila”, 050474 Bucharest, Romania
- Department of Radiology and Medical Imaging, Fundeni Clinical Institute, 022328 Bucharest, Romania
- Robert Mihai Enache
- Department of Radiology and Medical Imaging, Fundeni Clinical Institute, 022328 Bucharest, Romania
- Ioana Gabriela Lupescu
- Department 8: Radiology, Discipline of Radiology, Medical Imaging and Interventional Radiology I, University of Medicine and Pharmacy “Carol Davila”, 050474 Bucharest, Romania
- Department of Radiology and Medical Imaging, Fundeni Clinical Institute, 022328 Bucharest, Romania
29
Tölle M, Garthe P, Scherer C, Seliger JM, Leha A, Krüger N, Simm S, Martin S, Eble S, Kelm H, Bednorz M, André F, Bannas P, Diller G, Frey N, Groß S, Hennemuth A, Kaderali L, Meyer A, Nagel E, Orwat S, Seiffert M, Friede T, Seidler T, Engelhardt S. Real world federated learning with a knowledge distilled transformer for cardiac CT imaging. NPJ Digit Med 2025; 8:88. [PMID: 39915633 PMCID: PMC11802793 DOI: 10.1038/s41746-025-01434-3] [Citation(s) in RCA: 1] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/07/2024] [Accepted: 01/02/2025] [Indexed: 02/09/2025] Open
Abstract
Federated learning is a well-established technique for utilizing decentralized data while preserving privacy. However, real-world applications often face challenges such as partially labeled datasets, where only a few locations hold certain expert annotations, leaving large portions of unlabeled data unused. Leveraging this unlabeled data could enhance transformer architectures' performance in regimes with small and diversely annotated training sets. We conduct the largest federated cardiac CT analysis to date (n = 8,104) in a real-world setting across eight hospitals. Our two-step semi-supervised strategy distills knowledge from task-specific CNNs into a transformer. First, CNNs predict on the unlabeled data for each label type; the transformer then learns from these predictions with label-specific heads. This improves predictive accuracy, enables simultaneous learning of all partial labels across the federation, and outperforms UNet-based models in generalizability on downstream tasks. Code and model weights are made openly available to support future cardiac CT analysis.
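The two-step distillation idea, teacher networks producing soft pseudo-labels that a student then fits, can be illustrated in miniature. The sketch below is a deliberately simplified stand-in: a fixed one-parameter logistic "teacher" and a logistic "student" replace the paper's CNNs and transformer, showing only the mechanics of training on soft labels:

```python
import math, random

def teacher_predict(x):
    # Stand-in for a task-specific CNN teacher: a fixed logistic model
    # emitting soft pseudo-labels on unlabeled inputs.
    return 1 / (1 + math.exp(-(2.0 * x - 1.0)))

def distill(inputs, steps=3000, lr=0.5):
    """Fit a logistic student to the teacher's soft labels by
    full-batch gradient descent on binary cross-entropy."""
    w = b = 0.0
    for _ in range(steps):
        gw = gb = 0.0
        for x in inputs:
            p = 1 / (1 + math.exp(-(w * x + b)))
            err = p - teacher_predict(x)  # dBCE/dlogit for soft targets
            gw += err * x
            gb += err
        w -= lr * gw / len(inputs)
        b -= lr * gb / len(inputs)
    return w, b

random.seed(0)
xs = [random.uniform(-3, 3) for _ in range(200)]
w, b = distill(xs)
# The student's parameters approach the teacher's (2.0, -1.0).
print(round(w, 2), round(b, 2))
```

In the paper's setting the same principle applies per label type, with one label-specific head of the transformer trained on each teacher's predictions.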
Affiliation(s)
- Malte Tölle
- DZHK (German Centre for Cardiovascular Research), partner site Heidelberg/Mannheim, Heidelberg, Germany
- Department of Cardiology, Angiology and Pneumology, Heidelberg University Hospital, Heidelberg, Germany
- Heidelberg University, Heidelberg, Germany
- Informatics for Life Institute, Heidelberg, Germany
- Philipp Garthe
- Clinic for Cardiology III, University Hospital Münster, Münster, Germany
- Clemens Scherer
- DZHK (German Centre for Cardiovascular Research), partner site Munich, Munich, Germany
- Department of Medicine I, LMU University Hospital, LMU Munich, Munich, Germany
- Jan Moritz Seliger
- DZHK (German Centre for Cardiovascular Research), partner site Hamburg/Kiel/Lübeck, Hamburg, Germany
- Department of Diagnostic and Interventional Radiology and Nuclear Medicine, University Medical Center Hamburg-Eppendorf, Hamburg, Germany
- Andreas Leha
- DZHK (German Centre for Cardiovascular Research), partner site Lower Saxony, Göttingen, Germany
- Department of Medical Statistics, University Medical Center Göttingen, Göttingen, Germany
- Nina Krüger
- DZHK (German Centre for Cardiovascular Research), partner site Berlin, Berlin, Germany
- Deutsches Herzzentrum der Charité (DHZC), Institute of Computer-assisted Cardiovascular Medicine, Berlin, Germany
- Charité - Universitätsmedizin Berlin, corporate member of Freie Universität Berlin and Humboldt-Universität zu Berlin, Berlin, Germany
- Fraunhofer Institute for Digital Medicine MEVIS, Bremen, Germany
- Stefan Simm
- DZHK (German Centre for Cardiovascular Research), partner site Greifswald, Greifswald, Germany
- Institute of Bioinformatics, University Medicine Greifswald, Greifswald, Germany
- Simon Martin
- DZHK (German Centre for Cardiovascular Research), partner site RhineMain, Frankfurt, Germany
- Institute for Experimental and Translational Cardiovascular Imaging, Goethe University, Frankfurt am Main, Germany
- Sebastian Eble
- Department of Cardiology, Angiology and Pneumology, Heidelberg University Hospital, Heidelberg, Germany
- Halvar Kelm
- Department of Cardiology, Angiology and Pneumology, Heidelberg University Hospital, Heidelberg, Germany
- Moritz Bednorz
- Department of Cardiology, Angiology and Pneumology, Heidelberg University Hospital, Heidelberg, Germany
- Florian André
- DZHK (German Centre for Cardiovascular Research), partner site Heidelberg/Mannheim, Heidelberg, Germany
- Department of Cardiology, Angiology and Pneumology, Heidelberg University Hospital, Heidelberg, Germany
- Heidelberg University, Heidelberg, Germany
- Informatics for Life Institute, Heidelberg, Germany
- Peter Bannas
- DZHK (German Centre for Cardiovascular Research), partner site Hamburg/Kiel/Lübeck, Hamburg, Germany
- Department of Diagnostic and Interventional Radiology and Nuclear Medicine, University Medical Center Hamburg-Eppendorf, Hamburg, Germany
- Gerhard Diller
- Clinic for Cardiology III, University Hospital Münster, Münster, Germany
- Norbert Frey
- DZHK (German Centre for Cardiovascular Research), partner site Heidelberg/Mannheim, Heidelberg, Germany
- Department of Cardiology, Angiology and Pneumology, Heidelberg University Hospital, Heidelberg, Germany
- Heidelberg University, Heidelberg, Germany
- Informatics for Life Institute, Heidelberg, Germany
- Stefan Groß
- DZHK (German Centre for Cardiovascular Research), partner site Greifswald, Greifswald, Germany
- Institute of Bioinformatics, University Medicine Greifswald, Greifswald, Germany
- Anja Hennemuth
- DZHK (German Centre for Cardiovascular Research), partner site Berlin, Berlin, Germany
- Deutsches Herzzentrum der Charité (DHZC), Institute of Computer-assisted Cardiovascular Medicine, Berlin, Germany
- Charité - Universitätsmedizin Berlin, corporate member of Freie Universität Berlin and Humboldt-Universität zu Berlin, Berlin, Germany
- Fraunhofer Institute for Digital Medicine MEVIS, Bremen, Germany
- Lars Kaderali
- DZHK (German Centre for Cardiovascular Research), partner site Greifswald, Greifswald, Germany
- Institute of Bioinformatics, University Medicine Greifswald, Greifswald, Germany
- Alexander Meyer
- DZHK (German Centre for Cardiovascular Research), partner site Berlin, Berlin, Germany
- Deutsches Herzzentrum der Charité (DHZC), Institute of Computer-assisted Cardiovascular Medicine, Berlin, Germany
- Eike Nagel
- DZHK (German Centre for Cardiovascular Research), partner site RhineMain, Frankfurt, Germany
- Institute for Experimental and Translational Cardiovascular Imaging, Goethe University, Frankfurt am Main, Germany
- Stefan Orwat
- Clinic for Cardiology III, University Hospital Münster, Münster, Germany
- Moritz Seiffert
- DZHK (German Centre for Cardiovascular Research), partner site Hamburg/Kiel/Lübeck, Hamburg, Germany
- Department of Cardiology, University Heart and Vascular Center Hamburg, University Medical Center Hamburg-Eppendorf, Hamburg, Germany
- Tim Friede
- DZHK (German Centre for Cardiovascular Research), partner site Lower Saxony, Göttingen, Germany
- Department of Medical Statistics, University Medical Center Göttingen, Göttingen, Germany
- Tim Seidler
- DZHK (German Centre for Cardiovascular Research), partner site Lower Saxony, Göttingen, Germany
- Department of Cardiology, University Medicine Göttingen, Göttingen, Germany
- Department of Cardiology, Campus Kerckhoff of the Justus-Liebig-University at Gießen, Kerckhoff-Clinic, Gießen, Germany
- Sandy Engelhardt
- DZHK (German Centre for Cardiovascular Research), partner site Heidelberg/Mannheim, Heidelberg, Germany
- Department of Cardiology, Angiology and Pneumology, Heidelberg University Hospital, Heidelberg, Germany
- Heidelberg University, Heidelberg, Germany
- Informatics for Life Institute, Heidelberg, Germany
30
Wang H, Wang Y, Xue Q, Zhang Y, Qiao X, Lin Z, Zheng J, Zhang Z, Yang Y, Zhang M, Huang Q, Huang Y, Cao T, Wang J, Li B. Optimizing MR-based attenuation correction in hybrid PET/MR using deep learning: validation with a flatbed insert and consistent patient positioning. Eur J Nucl Med Mol Imaging 2025:10.1007/s00259-025-07086-5. [PMID: 39912939 DOI: 10.1007/s00259-025-07086-5] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/29/2024] [Accepted: 01/11/2025] [Indexed: 02/07/2025]
Abstract
PURPOSE To address the challenges of verifying MR-based attenuation correction (MRAC) in PET/MR arising from CT positional mismatches and alignment issues, this study utilized a flatbed insert and arms-down positioning during PET/CT scans to achieve precise MR-CT matching for accurate MRAC evaluation. METHODS A validation dataset of 21 patients underwent whole-body [18F]FDG PET/CT followed by [18F]FDG PET/MR. A flatbed insert ensured consistent positioning, allowing direct comparison of four MRAC methods (four-tissue and five-tissue models with discrete and continuous μ-maps) against CT-based attenuation correction (CTAC). A deep learning-based framework, trained on a dataset of 300 patients, was used to generate synthesized CTs from MR images, forming the basis for all MRAC methods. Quantitative analyses were conducted at the whole-body, region-of-interest, and lesion levels, with lesion-distance analysis evaluating the impact of bone proximity on standardized uptake value (SUV) quantification. RESULTS Distinct differences were observed among MRAC methods in the spine and femur regions. Joint histogram analysis showed that MRAC-4 (continuous μ-map) aligned most closely with CTAC. Lesion-distance analysis revealed that MRAC-4 minimized bone-induced SUV interference (r = 0.01, p = 0.8643). However, tissues prone to bone segmentation interference, such as the spine and liver, exhibited greater SUV variability and lower reproducibility with MRAC-4 than with MRAC-2 (2D bone segmentation, discrete μ-map) and MRAC-3 (3D bone segmentation, discrete μ-map). CONCLUSION Using a flatbed insert, this study validated MRAC with high precision. The continuous μ-map method (MRAC-4) demonstrated superior accuracy and minimized bone-related SUV errors but faced reproducibility challenges, particularly in bone-rich regions.
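SUV, the quantity whose accuracy the MRAC methods are judged on, is the tissue activity concentration normalized by injected dose per unit body weight. A minimal sketch of the standard body-weight SUV formula (decay correction and tissue-density refinements omitted; the numbers are illustrative):

```python
def suv_bw(activity_bq_per_ml, injected_dose_bq, weight_kg):
    """Body-weight SUV: tissue concentration divided by injected dose
    per gram of body weight (1 kg = 1000 g, assuming ~1 g/mL tissue)."""
    return activity_bq_per_ml / (injected_dose_bq / (weight_kg * 1000.0))

# 5 kBq/mL measured in tissue, 350 MBq injected, 70 kg patient:
print(suv_bw(5_000, 350e6, 70))  # -> 1.0
```

Because SUV scales linearly with the reconstructed activity concentration, any attenuation-map error introduced by an MRAC method propagates directly into the SUV values compared above.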
Affiliation(s)
- Hanzhong Wang
- Department of Nuclear Medicine, Ruijin Hospital, Shanghai Jiao Tong University School of Medicine, Shanghai, China
- Institute for Medical Imaging Technology, Ruijin Hospital, Shanghai Jiao Tong University, Shanghai, China
- Yue Wang
- Department of Nuclear Medicine, Ruijin Hospital, Shanghai Jiao Tong University School of Medicine, Shanghai, China
- Qiaoyi Xue
- Central Research Institute, United Imaging Healthcare Group Co., Ltd, Shanghai, China
- Yu Zhang
- Department of Nuclear Medicine, Ruijin Hospital, Shanghai Jiao Tong University School of Medicine, Shanghai, China
- Xiaoya Qiao
- Department of Nuclear Medicine, Ruijin Hospital, Shanghai Jiao Tong University School of Medicine, Shanghai, China
- Institute for Medical Imaging Technology, Ruijin Hospital, Shanghai Jiao Tong University, Shanghai, China
- Zengping Lin
- Central Research Institute, United Imaging Healthcare Group Co., Ltd, Shanghai, China
- Jiaxu Zheng
- Central Research Institute, United Imaging Healthcare Group Co., Ltd, Shanghai, China
- Zheng Zhang
- Central Research Institute, United Imaging Healthcare Group Co., Ltd, Shanghai, China
- Yang Yang
- Beijing United Imaging Research Institute of Intelligent Imaging, Beijing, China
- Min Zhang
- Department of Nuclear Medicine, Ruijin Hospital, Shanghai Jiao Tong University School of Medicine, Shanghai, China
- Qiu Huang
- Department of Nuclear Medicine, Ruijin Hospital, Shanghai Jiao Tong University School of Medicine, Shanghai, China
- Yanqi Huang
- Central Research Institute, United Imaging Healthcare Group Co., Ltd, Shanghai, China
- Tuoyu Cao
- Central Research Institute, United Imaging Healthcare Group Co., Ltd, Shanghai, China
- Jin Wang
- Department of Nuclear Medicine, Ruijin Hospital, Shanghai Jiao Tong University School of Medicine, Shanghai, China
- Biao Li
- Department of Nuclear Medicine, Ruijin Hospital, Shanghai Jiao Tong University School of Medicine, Shanghai, China
- Institute for Medical Imaging Technology, Ruijin Hospital, Shanghai Jiao Tong University, Shanghai, China
31
Yang W, Dong Z, Xu M, Xu L, Geng D, Li Y, Wang P. Optimizing transformer-based network via advanced decoder design for medical image segmentation. Biomed Phys Eng Express 2025; 11:025024. [PMID: 39869936 DOI: 10.1088/2057-1976/adaec7] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/23/2024] [Accepted: 01/27/2025] [Indexed: 01/29/2025]
Abstract
U-Net is widely used in medical image segmentation due to its simple and flexible architecture. To address the challenges of scale and complexity in medical tasks, several variants of U-Net have been proposed. In particular, methods based on the Vision Transformer (ViT), represented by Swin UNETR, have gained widespread attention in recent years. However, these improvements often focus on the encoder, overlooking the crucial role of the decoder in refining segmentation details. This design imbalance limits the potential for further enhancing segmentation performance. To address this issue, we analyze the roles of various decoder components, including the upsampling method, skip connection, and feature extraction module, as well as the shortcomings of existing methods. Consequently, we propose Swin DER (i.e., Swin UNETR Decoder Enhanced and Refined) by specifically optimizing the design of these three components. Swin DER performs upsampling using a learnable interpolation algorithm called offset coordinate neighborhood weighted upsampling (Onsampling) and replaces the traditional skip connection with a spatial-channel parallel attention gate (SCP AG). Additionally, Swin DER introduces deformable convolution along with an attention mechanism in the feature extraction module of the decoder. Our design achieves excellent results, surpassing other state-of-the-art methods on both the Synapse dataset and the MSD brain tumor segmentation task. Code is available at: https://github.com/WillBeanYang/Swin-DER.
Affiliation(s)
- Weibin Yang
- School of Information Science and Engineering, Shandong University, Tsingtao, 266237, People's Republic of China
- Zhiqi Dong
- School of Information Science and Engineering, Shandong University, Tsingtao, 266237, People's Republic of China
- Mingyuan Xu
- School of Information Science and Engineering, Shandong University, Tsingtao, 266237, People's Republic of China
- Longwei Xu
- School of Information Science and Engineering, Shandong University, Tsingtao, 266237, People's Republic of China
- Dehua Geng
- School of Information Science and Engineering, Shandong University, Tsingtao, 266237, People's Republic of China
- Yusong Li
- School of Information Science and Engineering, Shandong University, Tsingtao, 266237, People's Republic of China
- Pengwei Wang
- School of Information Science and Engineering, Shandong University, Tsingtao, 266237, People's Republic of China
32
Liao W, Luo X, Li L, Xu J, He Y, Huang H, Zhang S. Automatic cervical lymph nodes detection and segmentation in heterogeneous computed tomography images using deep transfer learning. Sci Rep 2025; 15:4250. [PMID: 39905029 PMCID: PMC11794882 DOI: 10.1038/s41598-024-84804-3] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/02/2024] [Accepted: 12/27/2024] [Indexed: 02/06/2025] Open
Abstract
This study aimed to develop a deep learning model using transfer learning for automatic detection and segmentation of neck lymph nodes (LNs) in computed tomography (CT) images. The study included 11,013 annotated LNs with a short-axis diameter ≥ 3 mm from 626 head and neck cancer patients across four hospitals. The nnUNet model was used as a baseline, pre-trained on a large-scale head and neck dataset, and then fine-tuned with 4,729 LNs from hospital A for detection and segmentation. Validation was conducted on an internal testing cohort (ITC A) and three external testing cohorts (ETCs B, C, and D), comprising 1684 and 4600 LNs, respectively. Detection was evaluated via sensitivity, positive predictive value (PPV), and false positive rate per case (FP/vol), while segmentation was assessed using the Dice similarity coefficient (DSC) and 95th-percentile Hausdorff distance (HD95). For detection, the sensitivity, PPV, and FP/vol in ITC A were 54.6%, 69.0%, and 3.4, respectively. In the ETCs, sensitivity ranged from 45.7% at 3.9 FP/vol to 63.5% at 5.8 FP/vol. Segmentation achieved a mean DSC of 0.72 in ITC A and 0.72 to 0.74 in the ETCs, as well as a mean HD95 of 3.78 mm in ITC A and 2.73 mm to 2.85 mm in the ETCs. No significant sensitivity difference was found between contrast-enhanced and unenhanced CT images (p = 0.502) or between repeated CT images during adaptive radiotherapy (p = 0.815). The model's segmentation accuracy was comparable to that of experienced oncologists. The model shows promise in automatically detecting and segmenting neck LNs in CT images, potentially reducing oncologists' segmentation workload.
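The detection metrics reported above, sensitivity, PPV, and false positives per case, follow directly from TP/FP/FN counts. A sketch with hypothetical tallies chosen to land near the reported ITC A values (the abstract does not give the raw counts or case number):

```python
def detection_metrics(tp, fp, fn, n_cases):
    sensitivity = tp / (tp + fn)   # fraction of reference LNs detected
    ppv = tp / (tp + fp)           # fraction of detections that are true LNs
    fp_per_case = fp / n_cases     # mean false positives per scan
    return sensitivity, ppv, fp_per_case

# Hypothetical tallies: 1684 reference LNs as in ITC A, with TP/FP counts
# and a case number chosen to roughly reproduce 54.6% / 69.0% / 3.4.
s, p, f = detection_metrics(tp=920, fp=413, fn=764, n_cases=120)
print(f"sensitivity={s:.1%}, PPV={p:.1%}, FP/vol={f:.1f}")
# -> sensitivity=54.6%, PPV=69.0%, FP/vol=3.4
```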
Affiliation(s)
- Wenjun Liao
- Department of Radiation Oncology, Sichuan Cancer Hospital and Institute, Sichuan Cancer Center, Cancer Hospital Affiliate to School of Medicine, University of Electronic Science and Technology of China, Chengdu, 610041, China
- Xiangde Luo
- School of Mechanical and Electrical Engineering, University of Electronic Science and Technology of China, Chengdu, 611731, China
- Lu Li
- Department of Radiation Oncology, Sichuan Cancer Hospital and Institute, Sichuan Cancer Center, Cancer Hospital Affiliate to School of Medicine, University of Electronic Science and Technology of China, Chengdu, 610041, China
- Jinfeng Xu
- Department of Radiation Oncology, Nanfang Hospital, Southern Medical University, Guangzhou, 510515, China
- Yuan He
- Department of Radiation Oncology, The First Affiliated Hospital of USTC, Division of Life Sciences and Medicine, University of Science and Technology of China, Hefei, 23000, Anhui, China
- Hui Huang
- Cancer Center, Sichuan Provincial People's Hospital, University of Electronic Science and Technology of China, Chengdu, 610072, China
- Shichuan Zhang
- Department of Radiation Oncology, Sichuan Cancer Hospital and Institute, Sichuan Cancer Center, Cancer Hospital Affiliate to School of Medicine, University of Electronic Science and Technology of China, Chengdu, 610041, China
33
Vries HS, van Praagh GD, Nienhuis PH, Alic L, Slart RHJA. A Machine Learning Model Based on Radiomic Features as a Tool to Identify Active Giant Cell Arteritis on [18F]FDG-PET Images During Follow-Up. Diagnostics (Basel) 2025; 15:367. [PMID: 39941297 PMCID: PMC11817507 DOI: 10.3390/diagnostics15030367] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/30/2024] [Revised: 01/24/2025] [Accepted: 01/29/2025] [Indexed: 02/16/2025] Open
Abstract
Objective: To investigate the feasibility of a machine learning (ML) model based on radiomic features to identify active giant cell arteritis (GCA) in the aorta and differentiate it from atherosclerosis in follow-up [18F]FDG-PET/CT images for therapy monitoring. Methods: To train the ML model, 64 [18F]FDG-PET scans of 34 patients with proven GCA and 34 control subjects with type 2 diabetes mellitus were retrospectively included. The aorta was delineated into four segments: the ascending aorta, aortic arch, descending aorta, and abdominal aorta. From each segment, 95 features were extracted. All segments were randomly split into a training/validation set (n = 192; 80%) and a test set (n = 46; 20%). In total, 441 ML models were trained, using combinations of seven feature selection methods, seven classifiers, and nine different numbers of features. Performance was assessed by the area under the curve (AUC). The best-performing ML model was compared to the clinical reports of nuclear medicine physicians in 19 follow-up scans (7 active GCA, 12 inactive GCA). For explainability, an occlusion map was created to illustrate the regions of the aorta most important to the ML model's decision. Results: The ten-feature model with ANOVA as the feature selector and a random forest classifier demonstrated the highest performance (AUC = 0.92 ± 0.01). Compared with the clinical reports, this model showed a higher PPV (0.83 vs. 0.80), NPV (0.85 vs. 0.79), and accuracy (0.84 vs. 0.79) in detecting active GCA in follow-up scans. Conclusions: The radiomics-based ML model was able to identify active GCA and differentiate it from atherosclerosis in follow-up [18F]FDG-PET/CT scans. This demonstrates the model's potential as a monitoring tool in challenging [18F]FDG-PET scans of GCA patients.
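The winning pipeline's feature selector, one-way ANOVA F-scoring followed by keeping the top-k features, can be sketched in a few lines. The feature matrix below is a toy example; the real model used 95 radiomic features per aortic segment and a random forest on the selected ten:

```python
from statistics import fmean

def anova_f(values, labels):
    """One-way ANOVA F-statistic for one feature across class groups,
    the essence of an ANOVA-based feature selector."""
    groups = {}
    for v, y in zip(values, labels):
        groups.setdefault(y, []).append(v)
    grand = fmean(values)
    k, n = len(groups), len(values)
    ss_between = sum(len(g) * (fmean(g) - grand) ** 2 for g in groups.values())
    ss_within = sum((v - fmean(g)) ** 2 for g in groups.values() for v in g)
    return (ss_between / (k - 1)) / (ss_within / (n - k))

def top_k_features(X, y, k=10):
    """Rank features (columns of X) by F-score and keep the k best."""
    scores = [anova_f([row[j] for row in X], y) for j in range(len(X[0]))]
    return sorted(range(len(scores)), key=lambda j: -scores[j])[:k]

# Toy matrix: feature 0 separates the classes, feature 1 is noise.
X = [[1.0, 5.0], [1.2, 4.8], [3.1, 5.1], [2.9, 4.9]]
y = [0, 0, 1, 1]
print(top_k_features(X, y, k=1))  # -> [0]
```

Libraries such as scikit-learn expose the same idea as `f_classif` with `SelectKBest`; the plain-Python version above just makes the computation explicit.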
Affiliation(s)
- Hanne S. Vries
- Department of Nuclear Medicine and Molecular Imaging, University Medical Centre Groningen, University of Groningen, 9700 RB Groningen, The Netherlands
- Department of Magnetic Detection & Imaging, Technical Medical Centre, Faculty of Science and Technology, University of Twente, 7522 NH Enschede, The Netherlands
- Gijs D. van Praagh
- Department of Nuclear Medicine and Molecular Imaging, University Medical Centre Groningen, University of Groningen, 9700 RB Groningen, The Netherlands
- Pieter H. Nienhuis
- Department of Nuclear Medicine and Molecular Imaging, University Medical Centre Groningen, University of Groningen, 9700 RB Groningen, The Netherlands
- Lejla Alic
- Department of Magnetic Detection & Imaging, Technical Medical Centre, Faculty of Science and Technology, University of Twente, 7522 NH Enschede, The Netherlands
- Riemer H. J. A. Slart
- Department of Nuclear Medicine and Molecular Imaging, University Medical Centre Groningen, University of Groningen, 9700 RB Groningen, The Netherlands
34
Tölle M, Burger L, Kelm H, André F, Bannas P, Diller G, Frey N, Garthe P, Groß S, Hennemuth A, Kaderali L, Krüger N, Leha A, Martin S, Meyer A, Nagel E, Orwat S, Scherer C, Seiffert M, Seliger JM, Simm S, Friede T, Seidler T, Engelhardt S. Multi-modal dataset creation for federated learning with DICOM-structured reports. Int J Comput Assist Radiol Surg 2025:10.1007/s11548-025-03327-y. [PMID: 39899185 DOI: 10.1007/s11548-025-03327-y] [Received: 04/18/2024] [Accepted: 01/14/2025] [Indexed: 02/04/2025]
Abstract
Purpose Federated training is often challenging on heterogeneous datasets due to divergent data storage options, inconsistent naming schemes, varied annotation procedures, and disparities in label quality. This is particularly evident in emerging multi-modal learning paradigms, where dataset harmonization, including a uniform data representation and filtering options, is of paramount importance. Methods DICOM-structured reports enable the standardized linkage of arbitrary information beyond the imaging domain and can be used within Python deep learning pipelines with highdicom. Building on this, we developed an open platform for data integration with interactive filtering capabilities, thereby simplifying the creation of patient cohorts over several sites with consistent multi-modal data. Results In this study, we extend our prior work by showing its applicability to more and divergent data types, as well as by streamlining datasets for federated training within an established consortium of eight university hospitals in Germany. We prove its concurrent filtering ability by creating harmonized multi-modal datasets across all locations for predicting the outcome after minimally invasive heart valve replacement. The data include imaging and waveform data (i.e., computed tomography images and electrocardiography scans), annotations (i.e., calcification segmentations and pointsets), and metadata (i.e., prostheses and pacemaker dependency). Conclusion Structured reports bridge the traditional gap between imaging systems and information systems. Utilizing the inherent DICOM reference system, arbitrary data types can be queried concurrently to create meaningful cohorts for multi-centric data analysis. The graphical interface as well as example structured report templates are available at https://github.com/Cardio-AI/fl-multi-modal-dataset-creation .
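The concurrent-filtering idea above (records of different modalities linked per patient and queried with several criteria at once to form a cohort) can be mimicked in a few lines. This is purely illustrative: the record fields and filter criteria below are hypothetical and do not reflect the platform's actual DICOM-SR data model or any highdicom API.

```python
# Toy illustration of concurrent multi-modal cohort filtering. Each record
# stands in for the DICOM-SR-linked data of one patient; fields are invented.
records = [
    {"patient": "P1", "ct": True, "ecg": True,  "calc_seg": True,  "pacemaker": False},
    {"patient": "P2", "ct": True, "ecg": False, "calc_seg": True,  "pacemaker": True},
    {"patient": "P3", "ct": True, "ecg": True,  "calc_seg": False, "pacemaker": False},
]

def filter_cohort(records, **criteria):
    """Keep only patients whose linked records satisfy every criterion at once."""
    return [r for r in records if all(r.get(k) == v for k, v in criteria.items())]

# Cohort for outcome prediction: CT, ECG, and calcification segmentation present.
cohort = filter_cohort(records, ct=True, ecg=True, calc_seg=True)
print([r["patient"] for r in cohort])  # → ['P1']
```

The platform performs the same kind of conjunction, but over DICOM references spanning sites rather than in-memory dictionaries.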
Affiliation(s)
- Malte Tölle
- DZHK (German Centre for Cardiovascular Research, All Partner Sites), Munich, Germany.
- Department of Cardiology, Angiology and Pneumology, Heidelberg University Hospital, Heidelberg, Germany.
- Informatics for Life Institute, Heidelberg, Germany.
- Lukas Burger
- DZHK (German Centre for Cardiovascular Research, All Partner Sites), Munich, Germany
- Department of Cardiology, Angiology and Pneumology, Heidelberg University Hospital, Heidelberg, Germany
- Informatics for Life Institute, Heidelberg, Germany
- Halvar Kelm
- DZHK (German Centre for Cardiovascular Research, All Partner Sites), Munich, Germany
- Department of Cardiology, Angiology and Pneumology, Heidelberg University Hospital, Heidelberg, Germany
- Informatics for Life Institute, Heidelberg, Germany
- Florian André
- DZHK (German Centre for Cardiovascular Research, All Partner Sites), Munich, Germany
- Department of Cardiology, Angiology and Pneumology, Heidelberg University Hospital, Heidelberg, Germany
- Informatics for Life Institute, Heidelberg, Germany
- Peter Bannas
- DZHK (German Centre for Cardiovascular Research, All Partner Sites), Munich, Germany
- Department of Diagnostic and Interventional Radiology and Nuclear Medicine, University Medical Center Hamburg-Eppendorf, Hamburg, Germany
- Gerhard Diller
- Clinic for Cardiology III, University Hospital Münster, Münster, Germany
- Norbert Frey
- DZHK (German Centre for Cardiovascular Research, All Partner Sites), Munich, Germany
- Department of Cardiology, Angiology and Pneumology, Heidelberg University Hospital, Heidelberg, Germany
- Informatics for Life Institute, Heidelberg, Germany
- Philipp Garthe
- Clinic for Cardiology III, University Hospital Münster, Münster, Germany
- Stefan Groß
- DZHK (German Centre for Cardiovascular Research, All Partner Sites), Munich, Germany
- Institute of Bioinformatics, University Medicine Greifswald, Greifswald, Germany
- Anja Hennemuth
- DZHK (German Centre for Cardiovascular Research, All Partner Sites), Munich, Germany
- Department of Diagnostic and Interventional Radiology and Nuclear Medicine, University Medical Center Hamburg-Eppendorf, Hamburg, Germany
- Deutsches Herzzentrum der Charité (DHZC), Institute of Computer-assisted Cardiovascular Medicine, Berlin, Germany
- Charité - Universitätsmedizin Berlin, Corporate Member of Freie Universität Berlin, Humboldt-Universität zu Berlin, Berlin, Germany
- Fraunhofer Institute for Digital Medicine MEVIS, Bremen, Germany
- Lars Kaderali
- DZHK (German Centre for Cardiovascular Research, All Partner Sites), Munich, Germany
- Institute of Bioinformatics, University Medicine Greifswald, Greifswald, Germany
- Nina Krüger
- DZHK (German Centre for Cardiovascular Research, All Partner Sites), Munich, Germany
- Deutsches Herzzentrum der Charité (DHZC), Institute of Computer-assisted Cardiovascular Medicine, Berlin, Germany
- Charité - Universitätsmedizin Berlin, Corporate Member of Freie Universität Berlin, Humboldt-Universität zu Berlin, Berlin, Germany
- Fraunhofer Institute for Digital Medicine MEVIS, Bremen, Germany
- Andreas Leha
- DZHK (German Centre for Cardiovascular Research, All Partner Sites), Munich, Germany
- Department of Medical Statistics, University Medical Center Göttingen, Göttingen, Germany
- Simon Martin
- DZHK (German Centre for Cardiovascular Research, All Partner Sites), Munich, Germany
- Institute for Experimental and Translational Cardiovascular Imaging, Goethe University, Frankfurt am Main, Germany
- Alexander Meyer
- DZHK (German Centre for Cardiovascular Research, All Partner Sites), Munich, Germany
- Deutsches Herzzentrum der Charité (DHZC), Institute of Computer-assisted Cardiovascular Medicine, Berlin, Germany
- Eike Nagel
- DZHK (German Centre for Cardiovascular Research, All Partner Sites), Munich, Germany
- Institute for Experimental and Translational Cardiovascular Imaging, Goethe University, Frankfurt am Main, Germany
- Stefan Orwat
- Clinic for Cardiology III, University Hospital Münster, Münster, Germany
- Clemens Scherer
- DZHK (German Centre for Cardiovascular Research, All Partner Sites), Munich, Germany
- Department of Medicine I, LMU University Hospital, LMU Munich, Munich, Germany
- Moritz Seiffert
- DZHK (German Centre for Cardiovascular Research, All Partner Sites), Munich, Germany
- Department of Cardiology, University Heart and Vascular Center Hamburg, University Medical Center Hamburg-Eppendorf, Hamburg, Germany
- Jan Moritz Seliger
- DZHK (German Centre for Cardiovascular Research, All Partner Sites), Munich, Germany
- Department of Diagnostic and Interventional Radiology and Nuclear Medicine, University Medical Center Hamburg-Eppendorf, Hamburg, Germany
- Stefan Simm
- DZHK (German Centre for Cardiovascular Research, All Partner Sites), Munich, Germany
- Institute of Bioinformatics, University Medicine Greifswald, Greifswald, Germany
- Tim Friede
- DZHK (German Centre for Cardiovascular Research, All Partner Sites), Munich, Germany
- Charité - Universitätsmedizin Berlin, Corporate Member of Freie Universität Berlin, Humboldt-Universität zu Berlin, Berlin, Germany
- Tim Seidler
- DZHK (German Centre for Cardiovascular Research, All Partner Sites), Munich, Germany
- Department of Cardiology, University Medicine Göttingen, Göttingen, Germany
- Department of Cardiology, Kerckhoff-Clinic, Campus Kerckhoff of the Justus-Liebig-Universität Gießen, Gießen, Germany
- Sandy Engelhardt
- DZHK (German Centre for Cardiovascular Research, All Partner Sites), Munich, Germany
- Department of Cardiology, Angiology and Pneumology, Heidelberg University Hospital, Heidelberg, Germany
- Informatics for Life Institute, Heidelberg, Germany
35
Yuan J, Li B, Zhang C, Wang J, Huang B, Ma L. Machine Learning-Based CT Radiomics Model to Predict the Risk of Hip Fragility Fracture. Acad Radiol 2025:S1076-6332(25)00065-0. [PMID: 39904664 DOI: 10.1016/j.acra.2025.01.023] [Received: 09/22/2024] [Revised: 01/13/2025] [Accepted: 01/19/2025] [Indexed: 02/06/2025]
Abstract
RATIONALE AND OBJECTIVES This research aimed to develop a combined model based on proximal femur attenuation values and radiomics features at routine CT to predict hip fragility fracture using machine learning methods. METHOD A total of 254 patients (training cohort, n=132; test cohort 1, n=56; test cohort 2, n=66) who underwent hip or pelvic CT scans were included. Three machine learning methods were used to build a Support Vector Machine (SVM) model, a Logistic Regression (LR) model, and a Random Forest (RF) model, respectively. The method that exhibited the best performance in the training cohort and test cohort 1 was selected to represent the radiomics model for subsequent studies. The mean CT Hounsfield unit of three-dimensional CT images at the proximal femur was extracted to construct the mean CTHU model. Multivariate logistic regression was performed using the mean CT Hounsfield unit together with radiomics features, and the combined model was subsequently developed with a visualized nomogram. RESULTS Among the radiomics models based on the three machine learning methods, the LR model showed the best performance in the training cohort (AUC=0.875, 95% CI=0.806-0.926) and in test cohort 1 (AUC=0.851, 95% CI=0.730-0.932). Compared to the mean CTHU model and the LR model, the combined model showed superior discriminatory power in the training cohort (AUC=0.934, 95% CI=0.895-0.972), test cohort 1 (AUC=0.893, 95% CI=0.812-0.974), and test cohort 2 (AUC=0.851, 95% CI=0.742-0.927). CONCLUSION The combined model, based on the mean CT Hounsfield unit of the proximal femur and radiomics features, can provide an accurate quantitative imaging basis for individualized risk prediction of hip fragility fracture.
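The combined model described above is, at its core, a multivariate logistic regression over two inputs: the mean proximal-femur CT attenuation and the radiomics model output. A minimal sketch follows; all numbers are simulated stand-ins, and the study's actual features, coefficients, and nomogram are not reproduced.

```python
# Sketch of a combined model: multivariate logistic regression over mean CT
# Hounsfield units and a radiomics score. All data below are simulated.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(42)
n = 132                                   # size of the study's training cohort
mean_hu = rng.normal(120, 40, n)          # mean proximal-femur attenuation (HU)
rad_score = rng.normal(0, 1, n)           # radiomics model output (logit scale)
# Simulated ground truth: risk rises as attenuation falls and the score rises
p = 1 / (1 + np.exp(0.03 * (mean_hu - 120) - rad_score))
y = (rng.random(n) < p).astype(int)       # 1 = hip fragility fracture

X = np.column_stack([mean_hu, rad_score])
model = LogisticRegression().fit(X, y)
auc = roc_auc_score(y, model.predict_proba(X)[:, 1])
print(f"apparent AUC of combined model: {auc:.2f}")
```

A nomogram, as used in the paper, is then just a graphical rendering of the fitted coefficients so that each predictor contributes a readable point score.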
Affiliation(s)
- Jinglei Yuan
- Department of Medical Imaging, the First Affiliated Hospital of Guangdong Pharmaceutical University, Guangzhou 510080, China (J.Y., B.L., J.W., L.M.)
- Bing Li
- Department of Medical Imaging, the First Affiliated Hospital of Guangdong Pharmaceutical University, Guangzhou 510080, China (J.Y., B.L., J.W., L.M.)
- Chu Zhang
- Medical AI Lab, School of Biomedical Engineering, Shenzhen University Medical School, Shenzhen University, Shenzhen 518060, China (C.Z., B.H.); Guangdong Key Laboratory of Biomedical Measurements and Ultrasound Imaging, School of Biomedical Engineering, Shenzhen University Medical School, Shenzhen University, Shenzhen 518060, China (C.Z., B.H.)
- Jing Wang
- Department of Medical Imaging, the First Affiliated Hospital of Guangdong Pharmaceutical University, Guangzhou 510080, China (J.Y., B.L., J.W., L.M.)
- Bingsheng Huang
- Medical AI Lab, School of Biomedical Engineering, Shenzhen University Medical School, Shenzhen University, Shenzhen 518060, China (C.Z., B.H.); Guangdong Key Laboratory of Biomedical Measurements and Ultrasound Imaging, School of Biomedical Engineering, Shenzhen University Medical School, Shenzhen University, Shenzhen 518060, China (C.Z., B.H.)
- Liheng Ma
- Department of Medical Imaging, the First Affiliated Hospital of Guangdong Pharmaceutical University, Guangzhou 510080, China (J.Y., B.L., J.W., L.M.).
36
Yamazaki M, Watanabe S, Tominaga M, Yagi T, Goto Y, Yanagimura N, Arita M, Ohtsubo A, Tanaka T, Nozaki K, Saida Y, Kondo R, Kikuchi T, Ishikawa H. 18F-FDG-PET/CT Uptake by Noncancerous Lung as a Predictor of Interstitial Lung Disease Induced by Immune Checkpoint Inhibitors. Acad Radiol 2025; 32:1026-1035. [PMID: 39227217 DOI: 10.1016/j.acra.2024.08.043] [Received: 06/06/2024] [Revised: 08/04/2024] [Accepted: 08/20/2024] [Indexed: 09/05/2024]
Abstract
RATIONALE AND OBJECTIVES Immune checkpoint inhibitors (ICIs) have improved lung cancer prognosis; however, ICI-related interstitial lung disease (ILD) is fatal and difficult to predict. Herein, we hypothesized that pre-existing lung inflammation on radiological imaging can be a potential risk factor for ILD onset. Therefore, we investigated the association between high uptake in the noncancerous lung (NCL) on 18F-FDG-PET/CT and ICI-ILD in lung cancer. METHODS Patients with primary lung cancer who underwent FDG-PET/CT within three months prior to ICI therapy were retrospectively included. Artificial intelligence was used to extract the NCL regions (background lung) from the lung contralateral to the primary tumor. FDG uptake by the NCL was assessed via the SUVmax (NCL-SUVmax), SUVmean (NCL-SUVmean), and total glycolytic activity (NCL-TGA), defined as NCL-SUVmean × NCL volume [mL]. NCL-SUVmean and NCL-TGA were calculated using the following four SUV thresholds: 0.5, 1.0, 1.5, and 2.0. RESULTS Of the 165 patients, 28 (17.0%) developed ILD. Univariate analysis showed that high values of NCL-SUVmax, NCL-SUVmean2.0 (SUV threshold=2.0), and NCL-TGA1.0 (SUV threshold=1.0) were significantly associated with ILD onset (all p = 0.003). Multivariate analysis adjusted for age, tumor FDG uptake, and pre-existing interstitial lung abnormalities revealed that a high NCL-TGA1.0 (≥149.45) was independently associated with ILD onset (odds ratio, 6.588; p = 0.002). The two-year cumulative incidence of ILD was significantly higher in the high NCL-TGA1.0 group than in the low group (58.4% vs. 14.4%; p < 0.001). CONCLUSION High uptake of the NCL on FDG-PET/CT is correlated with ICI-ILD development and could serve as a risk stratification tool before ICI therapy in primary lung cancer.
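The uptake metrics defined in the abstract follow directly from the voxel values inside the NCL mask: at each SUV threshold, the voxels at or above the threshold give SUVmax and SUVmean, and NCL-TGA is SUVmean times the thresholded volume in mL. A sketch under those definitions, with a made-up toy SUV array and voxel size:

```python
# Sketch of NCL-SUVmax / NCL-SUVmean / NCL-TGA at an SUV threshold, following
# the definitions in the abstract. The SUV values and voxel volume are toy data.
import numpy as np

def ncl_metrics(suv, voxel_ml, threshold):
    """Return (SUVmax, SUVmean, TGA) over voxels with SUV >= threshold."""
    region = suv[suv >= threshold]
    if region.size == 0:
        return 0.0, 0.0, 0.0
    suv_mean = float(region.mean())
    volume_ml = region.size * voxel_ml          # thresholded NCL volume in mL
    return float(region.max()), suv_mean, suv_mean * volume_ml

suv = np.array([0.2, 0.6, 1.1, 1.4, 2.3, 3.0])  # toy NCL voxel SUVs
voxel_ml = 0.5                                   # toy voxel volume in mL
for thr in (0.5, 1.0, 1.5, 2.0):                 # the study's four thresholds
    smax, smean, tga = ncl_metrics(suv, voxel_ml, thr)
    print(f"thr={thr}: SUVmax={smax:.1f} SUVmean={smean:.2f} TGA={tga:.2f}")
```

For threshold 1.0 on the toy data, the region is [1.1, 1.4, 2.3, 3.0], so SUVmean = 1.95 over 2.0 mL and TGA = 3.9.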
Affiliation(s)
- Motohiko Yamazaki
- Department of Radiology and Radiation Oncology, Niigata University Graduate School of Medical and Dental Sciences, 1-757 Asahimachi-dori, Chuouku, Niigata 951-8510, Japan
- Satoshi Watanabe
- Department of Respiratory Medicine and Infectious Diseases, Niigata University Graduate School of Medical and Dental Sciences, 1-757 Asahimachi-dori, Chuouku, Niigata 951-8510, Japan.
- Masaki Tominaga
- Department of Radiology and Radiation Oncology, Niigata University Graduate School of Medical and Dental Sciences, 1-757 Asahimachi-dori, Chuouku, Niigata 951-8510, Japan
- Takuya Yagi
- Department of Radiology and Radiation Oncology, Niigata University Graduate School of Medical and Dental Sciences, 1-757 Asahimachi-dori, Chuouku, Niigata 951-8510, Japan
- Yukari Goto
- Department of Radiology and Radiation Oncology, Niigata University Graduate School of Medical and Dental Sciences, 1-757 Asahimachi-dori, Chuouku, Niigata 951-8510, Japan
- Naohiro Yanagimura
- Department of Respiratory Medicine and Infectious Diseases, Niigata University Graduate School of Medical and Dental Sciences, 1-757 Asahimachi-dori, Chuouku, Niigata 951-8510, Japan
- Masashi Arita
- Department of Respiratory Medicine and Infectious Diseases, Niigata University Graduate School of Medical and Dental Sciences, 1-757 Asahimachi-dori, Chuouku, Niigata 951-8510, Japan
- Aya Ohtsubo
- Department of Respiratory Medicine and Infectious Diseases, Niigata University Graduate School of Medical and Dental Sciences, 1-757 Asahimachi-dori, Chuouku, Niigata 951-8510, Japan
- Tomohiro Tanaka
- Department of Respiratory Medicine and Infectious Diseases, Niigata University Graduate School of Medical and Dental Sciences, 1-757 Asahimachi-dori, Chuouku, Niigata 951-8510, Japan
- Koichiro Nozaki
- Department of Respiratory Medicine and Infectious Diseases, Niigata University Graduate School of Medical and Dental Sciences, 1-757 Asahimachi-dori, Chuouku, Niigata 951-8510, Japan
- Yu Saida
- Department of Respiratory Medicine and Infectious Diseases, Niigata University Graduate School of Medical and Dental Sciences, 1-757 Asahimachi-dori, Chuouku, Niigata 951-8510, Japan
- Rie Kondo
- Department of Respiratory Medicine and Infectious Diseases, Niigata University Graduate School of Medical and Dental Sciences, 1-757 Asahimachi-dori, Chuouku, Niigata 951-8510, Japan
- Toshiaki Kikuchi
- Department of Respiratory Medicine and Infectious Diseases, Niigata University Graduate School of Medical and Dental Sciences, 1-757 Asahimachi-dori, Chuouku, Niigata 951-8510, Japan
- Hiroyuki Ishikawa
- Department of Radiology and Radiation Oncology, Niigata University Graduate School of Medical and Dental Sciences, 1-757 Asahimachi-dori, Chuouku, Niigata 951-8510, Japan
37
Harb SF, Ali A, Yousuf M, Elshazly S, Farag A. G-SET-DCL: a guided sequential episodic training with dual contrastive learning approach for colon segmentation. Int J Comput Assist Radiol Surg 2025; 20:279-287. [PMID: 39789205 DOI: 10.1007/s11548-024-03319-4] [Received: 06/17/2024] [Accepted: 12/30/2024] [Indexed: 01/12/2025]
Abstract
PURPOSE This article introduces a novel deep learning approach to substantially improve the accuracy of colon segmentation even with limited data annotation, which enhances the overall effectiveness of the CT colonography pipeline in clinical settings. METHODS The proposed approach integrates 3D contextual information via guided sequential episodic training in which a query CT slice is segmented by exploiting its previous labeled CT slice (i.e., support). Segmentation starts by detecting the rectum using a Markov Random Field-based algorithm. Then, supervised sequential episodic training is applied to the remaining slices, while contrastive learning is employed to enhance feature discriminability, thereby improving segmentation accuracy. RESULTS The proposed method, evaluated on 98 abdominal scans of prepped patients, achieved a Dice coefficient of 97.3% and a polyp information preservation accuracy of 98.28%. Statistical analysis, including 95% confidence intervals, underscores the method's robustness and reliability. Clinically, this high level of accuracy is vital for ensuring the preservation of critical polyp details, which are essential for accurate automatic diagnostic evaluation. The proposed method performs reliably in scenarios with limited annotated data. This is demonstrated by achieving a Dice coefficient of 97.15% when the model was trained on a smaller number of annotated CT scans (e.g., 10 scans) than the testing dataset (e.g., 88 scans). CONCLUSIONS The proposed sequential segmentation approach achieves promising results in colon segmentation. A key strength of the method is its ability to generalize effectively, even with limited annotated datasets-a common challenge in medical imaging.
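The Dice coefficient reported above (97.3%) is the standard overlap measure between a predicted segmentation and its ground truth. A minimal binary-mask implementation, shown on a toy example rather than the paper's data:

```python
# Dice coefficient for binary segmentation masks: 2|A∩B| / (|A| + |B|).
import numpy as np

def dice(pred, gt):
    """Overlap between two boolean masks; 1.0 for two empty masks by convention."""
    pred, gt = np.asarray(pred, bool), np.asarray(gt, bool)
    denom = pred.sum() + gt.sum()
    return 1.0 if denom == 0 else 2.0 * np.logical_and(pred, gt).sum() / denom

pred = np.array([[1, 1, 0], [0, 1, 0]])
gt   = np.array([[1, 1, 0], [0, 0, 1]])
print(dice(pred, gt))  # intersection 2, sizes 3 + 3, so 4/6 ≈ 0.667
```

On full CT volumes the same formula is applied slice-stack-wide to 3D masks; values near 0.97, as in the paper, indicate near-complete voxel overlap.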
Affiliation(s)
- Samir Farag Harb
- Computer Vision and Image Processing Lab., UofL, Louisville, KY, 40292, USA.
- Higher Technological Institute, 10th of Ramadan City, Egypt.
- Asem Ali
- Computer Vision and Image Processing Lab., UofL, Louisville, KY, 40292, USA
- Mohamed Yousuf
- Computer Vision and Image Processing Lab., UofL, Louisville, KY, 40292, USA
- Faculty of Engineering, Ain Shams University, Cairo, Egypt
- Salwa Elshazly
- Kentucky Imaging Technologies, LLC, Louisville, KY, 40245, USA
- Aly Farag
- Computer Vision and Image Processing Lab., UofL, Louisville, KY, 40292, USA
38
Krieger K, Egger J, Kleesiek J, Gunzer M, Chen J. Multisensory Extended Reality Applications Offer Benefits for Volumetric Biomedical Image Analysis in Research and Medicine. JOURNAL OF IMAGING INFORMATICS IN MEDICINE 2025; 38:646-655. [PMID: 38862851 PMCID: PMC11811323 DOI: 10.1007/s10278-024-01094-x] [Received: 11/26/2023] [Revised: 02/13/2024] [Accepted: 02/14/2024] [Indexed: 06/13/2024]
Abstract
3D data from high-resolution volumetric imaging is a central resource for diagnosis and treatment in modern medicine. While the fast development of AI enhances imaging and analysis, commonly used visualization methods lag far behind. Recent research used extended reality (XR) for perceiving 3D images with visual depth perception and touch but used restrictive haptic devices. While unrestricted touch benefits volumetric data examination, implementing natural haptic interaction with XR is challenging. The research question is whether a multisensory XR application with intuitive haptic interaction adds value and should be pursued. In a study, 24 experts for biomedical images in research and medicine explored 3D medical shapes with 3 applications: a multisensory virtual reality (VR) prototype using haptic gloves, a simple VR prototype using controllers, and a standard PC application. Results of standardized questionnaires showed no significant differences between all application types regarding usability and no significant difference between both VR applications regarding presence. Participants agreed to statements that VR visualizations provide better depth information, using the hands instead of controllers simplifies data exploration, the multisensory VR prototype allows intuitive data exploration, and it is beneficial over traditional data examination methods. While most participants mentioned manual interaction as the best aspect, they also found it the most improvable. We conclude that a multisensory XR application with improved manual interaction adds value for volumetric biomedical data examination. We will proceed with our open-source research project ISH3DE (Intuitive Stereoptic Haptic 3D Data Exploration) to serve medical education, therapeutic decisions, surgery preparations, or research data analysis.
Affiliation(s)
- Kathrin Krieger
- Biospectroscopy, Leibniz-Institut for Analytical Science-ISAS-e.V., Bunsen-Kirchhoff-Str. 11, Dortmund, 44139, NRW, Germany.
- Neuroinformatics Group, Faculty of Technology, Bielefeld University, Inspiration 1, Bielefeld, 33619, NRW, Germany.
- Jan Egger
- Institute for Artificial Intelligence in Medicine (IKIM), University Hospital Essen, University of Duisburg-Essen, Girardetstr. 2, Essen, 45131, NRW, Germany
- Center for Virtual and Extended Reality in Medicine (ZvRM), University Hospital Essen, University of Duisburg-Essen, Hufelandstr. 55, Essen, 45147, NRW, Germany
- Jens Kleesiek
- Institute for Artificial Intelligence in Medicine (IKIM), University Hospital Essen, University of Duisburg-Essen, Girardetstr. 2, Essen, 45131, NRW, Germany
- Matthias Gunzer
- Biospectroscopy, Leibniz-Institut for Analytical Science-ISAS-e.V., Bunsen-Kirchhoff-Str. 11, Dortmund, 44139, NRW, Germany
- Institute for Experimental Immunology and Imaging, University Hospital Essen, University of Duisburg-Essen, Hufelandstr. 55, Essen, 45147, NRW, Germany
- Jianxu Chen
- Biospectroscopy, Leibniz-Institut for Analytical Science-ISAS-e.V., Bunsen-Kirchhoff-Str. 11, Dortmund, 44139, NRW, Germany
39
Salimi Y, Shiri I, Mansouri Z, Zaidi H. Development and validation of fully automated robust deep learning models for multi-organ segmentation from whole-body CT images. Phys Med 2025; 130:104911. [PMID: 39899952 DOI: 10.1016/j.ejmp.2025.104911] [Received: 08/13/2024] [Revised: 12/02/2024] [Accepted: 01/24/2025] [Indexed: 02/05/2025]
Abstract
PURPOSE This study aimed to develop a deep-learning framework to generate multi-organ masks from CT images in adult and pediatric patients. METHODS A dataset consisting of 4082 CT images and ground-truth manual segmentations from various databases, including 300 pediatric cases, was collected. In strategy #1, the manual segmentation masks provided by public databases were split into a training (90%) and a testing (10% of each database, named subset #1) cohort. The training set was used to train multiple nnU-Net networks in five-fold cross-validation (CV) for 26 separate organs. In the next step, the trained models from strategy #1 were used to generate missing organs for the entire dataset. This generated data was then used to train a multi-organ nnU-Net segmentation model in a five-fold CV (strategy #2). Models' performance was evaluated in terms of the Dice coefficient (DSC) and other well-established image segmentation metrics. RESULTS The lowest CV DSC for strategy #1 was 0.804 ± 0.094 for the adrenal glands, while average DSC > 0.90 was achieved for 17/26 organs. The lowest DSC for strategy #2 (0.833 ± 0.177) was obtained for the pancreas, whereas DSC > 0.90 was achieved for 13/19 of the organs. For all mutual organs included in subset #1 and subset #2, our model outperformed the TotalSegmentator models in both strategies. In addition, our models outperformed the TotalSegmentator models on subset #3. CONCLUSIONS Our model was trained on images with significant variability from different databases, producing acceptable results on both pediatric and adult cases, making it well-suited for implementation in a clinical setting.
Affiliation(s)
- Yazdan Salimi
- Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, CH-1211 Geneva, Switzerland
- Isaac Shiri
- Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, CH-1211 Geneva, Switzerland; Department of Cardiology, Inselspital, Bern University Hospital, University of Bern, Switzerland
- Zahra Mansouri
- Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, CH-1211 Geneva, Switzerland
- Habib Zaidi
- Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, CH-1211 Geneva, Switzerland; Department of Nuclear Medicine and Molecular Imaging, University of Groningen, University Medical Center Groningen, Groningen, the Netherlands; Department of Nuclear Medicine, University of Southern Denmark, Odense, Denmark; University Research and Innovation Center, Óbuda University, Budapest, Hungary.
40
Steinhelfer L, Jungmann F, Nickel M, Kaissis G, Hofer ML, Tauber R, Schmaderer C, Rauscher I, Haller B, Makowski MR, Eiber M, Braren RF. Automated CT Measurement of Total Kidney Volume for Predicting Renal Function Decline after 177Lu Prostate-specific Membrane Antigen-I&T Radioligand Therapy. Radiology 2025; 314:e240427. [PMID: 39998377 DOI: 10.1148/radiol.240427] [Indexed: 02/26/2025]
Abstract
Background Lutetium 177 (177Lu) prostate-specific membrane antigen (PSMA) radioligand therapy is a novel treatment option for metastatic castration-resistant prostate cancer. Evidence suggests nephrotoxicity is a delayed adverse effect in a considerable proportion of patients. Purpose To identify predictive markers for clinically significant deterioration of renal function in patients undergoing 177Lu-PSMA-I&T radioligand therapy. Materials and Methods This retrospective study analyzed patients who underwent at least four cycles of 177Lu-PSMA-I&T therapy between December 2015 and May 2022. Total kidney volume (TKV) at 3 and 6 months after treatment was extracted from CT images using TotalSegmentator, a deep learning segmentation model based on the nnU-Net framework. A decline in estimated glomerular filtration rate (eGFR) of 30% or greater was defined as clinically significant, indicating a higher risk of end-stage renal disease. Two-sided t tests and Mann-Whitney U tests were used to compare baseline nephrotoxic risk factors, changes in eGFR and TKV, prior treatments, and the number of 177Lu-PSMA-I&T cycles between patients with and without clinically significant eGFR decline at 12 months. Threshold values to differentiate between these two patient groups were identified using receiver operating characteristic curve analysis and the Youden index. Results A total of 121 patients (mean age, 76 years ± 7 [SD]) who underwent four or more cycles of 177Lu-PSMA-I&T therapy with 12 months of follow-up were included. A 10% or greater decrease in TKV at 6 months predicted 30% or greater eGFR decline at 12 months (area under the receiver operating characteristic curve, 0.90 [95% CI: 0.85, 0.96]; P < .001), surpassing other parameters. Baseline risk factors (ρ = 0.01; P = .88), prior treatments (ρ = -0.06; P = .50), and number of 177Lu-PSMA-I&T cycles (ρ = 0.08; P = .36) did not correlate with relative eGFR percentage decrease at 12 months. 
Conclusion Automated TKV assessment on standard-of-care CT images predicted deterioration of renal function 12 months after 177Lu-PSMA-I&T therapy initiation in metastatic castration-resistant prostate cancer. Its better performance than early relative eGFR change highlights its potential as a noninvasive marker when treatment decisions are pending. © RSNA, 2025 Supplemental material is available for this article.
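The threshold search described in the methods (receiver operating characteristic analysis with the Youden index) can be sketched directly: each candidate cutoff on the 6-month TKV change is scored by Youden's J = sensitivity + specificity − 1, and the cutoff maximizing J is kept. The toy cohort below is fabricated for illustration and does not reproduce the study's data or its −10% cutoff.

```python
# Sketch of Youden-index cutoff selection on a continuous marker, as used in
# the study for the 6-month TKV change. The cohort below is invented.
import numpy as np

def youden_threshold(marker, label):
    """Return (best_cutoff, best_J); positive call = marker <= cutoff."""
    best = (None, -1.0)
    for cut in np.unique(marker):
        pred = marker <= cut                      # e.g. TKV shrank at least this much
        tp = np.sum(pred & (label == 1))
        tn = np.sum(~pred & (label == 0))
        sens = tp / max(np.sum(label == 1), 1)
        spec = tn / max(np.sum(label == 0), 1)
        j = sens + spec - 1                       # Youden's J statistic
        if j > best[1]:
            best = (float(cut), float(j))
    return best

tkv_change = np.array([-22, -15, -12, -9, -4, -2, 0, 3])  # % TKV change at 6 months
egfr_drop  = np.array([1,    1,   1,  0,  0,  0, 0, 0])   # 1 = >=30% eGFR decline
cut, j = youden_threshold(tkv_change, egfr_drop)
print(f"best cutoff: {cut}% (Youden J = {j:.2f})")
```

In practice the cutoff would be read off the ROC curve built from the full cohort, exactly as the study did before reporting the −10% TKV threshold.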
Affiliation(s)
- Lisa Steinhelfer
- From the Institute for Diagnostic and Interventional Radiology, (L.S., F.J., G.K., M.L.H., M.R.M., R.F.B.), Institute of AI and Informatics in Medicine (M.N., G.K., B.H.), Department of Urology (R.T.), Department of Nephrology (C.S.), and Department of Nuclear Medicine (I.R., M.E.), School of Medicine, Technical University of Munich, Klinikum Rechts der Isar, Ismaninger Str 22, 81675 Munich, Germany; and German Cancer Consortium (DKTK) Partner Site Munich, Technical University of Munich, Munich, Germany (M.E., R.F.B.)
- Friederike Jungmann
- From the Institute for Diagnostic and Interventional Radiology, (L.S., F.J., G.K., M.L.H., M.R.M., R.F.B.), Institute of AI and Informatics in Medicine (M.N., G.K., B.H.), Department of Urology (R.T.), Department of Nephrology (C.S.), and Department of Nuclear Medicine (I.R., M.E.), School of Medicine, Technical University of Munich, Klinikum Rechts der Isar, Ismaninger Str 22, 81675 Munich, Germany; and German Cancer Consortium (DKTK) Partner Site Munich, Technical University of Munich, Munich, Germany (M.E., R.F.B.)
- Manuel Nickel
- From the Institute for Diagnostic and Interventional Radiology, (L.S., F.J., G.K., M.L.H., M.R.M., R.F.B.), Institute of AI and Informatics in Medicine (M.N., G.K., B.H.), Department of Urology (R.T.), Department of Nephrology (C.S.), and Department of Nuclear Medicine (I.R., M.E.), School of Medicine, Technical University of Munich, Klinikum Rechts der Isar, Ismaninger Str 22, 81675 Munich, Germany; and German Cancer Consortium (DKTK) Partner Site Munich, Technical University of Munich, Munich, Germany (M.E., R.F.B.)
- Georgios Kaissis
- From the Institute for Diagnostic and Interventional Radiology, (L.S., F.J., G.K., M.L.H., M.R.M., R.F.B.), Institute of AI and Informatics in Medicine (M.N., G.K., B.H.), Department of Urology (R.T.), Department of Nephrology (C.S.), and Department of Nuclear Medicine (I.R., M.E.), School of Medicine, Technical University of Munich, Klinikum Rechts der Isar, Ismaninger Str 22, 81675 Munich, Germany; and German Cancer Consortium (DKTK) Partner Site Munich, Technical University of Munich, Munich, Germany (M.E., R.F.B.)
- Marie-Luise Hofer
- From the Institute for Diagnostic and Interventional Radiology, (L.S., F.J., G.K., M.L.H., M.R.M., R.F.B.), Institute of AI and Informatics in Medicine (M.N., G.K., B.H.), Department of Urology (R.T.), Department of Nephrology (C.S.), and Department of Nuclear Medicine (I.R., M.E.), School of Medicine, Technical University of Munich, Klinikum Rechts der Isar, Ismaninger Str 22, 81675 Munich, Germany; and German Cancer Consortium (DKTK) Partner Site Munich, Technical University of Munich, Munich, Germany (M.E., R.F.B.)
- Robert Tauber
- From the Institute for Diagnostic and Interventional Radiology, (L.S., F.J., G.K., M.L.H., M.R.M., R.F.B.), Institute of AI and Informatics in Medicine (M.N., G.K., B.H.), Department of Urology (R.T.), Department of Nephrology (C.S.), and Department of Nuclear Medicine (I.R., M.E.), School of Medicine, Technical University of Munich, Klinikum Rechts der Isar, Ismaninger Str 22, 81675 Munich, Germany; and German Cancer Consortium (DKTK) Partner Site Munich, Technical University of Munich, Munich, Germany (M.E., R.F.B.)
| | - Christoph Schmaderer
- From the Institute for Diagnostic and Interventional Radiology, (L.S., F.J., G.K., M.L.H., M.R.M., R.F.B.), Institute of AI and Informatics in Medicine (M.N., G.K., B.H.), Department of Urology (R.T.), Department of Nephrology (C.S.), and Department of Nuclear Medicine (I.R., M.E.), School of Medicine, Technical University of Munich, Klinikum Rechts der Isar, Ismaninger Str 22, 81675 Munich, Germany; and German Cancer Consortium (DKTK) Partner Site Munich, Technical University of Munich, Munich, Germany (M.E., R.F.B.)
| | - Isabel Rauscher
- From the Institute for Diagnostic and Interventional Radiology, (L.S., F.J., G.K., M.L.H., M.R.M., R.F.B.), Institute of AI and Informatics in Medicine (M.N., G.K., B.H.), Department of Urology (R.T.), Department of Nephrology (C.S.), and Department of Nuclear Medicine (I.R., M.E.), School of Medicine, Technical University of Munich, Klinikum Rechts der Isar, Ismaninger Str 22, 81675 Munich, Germany; and German Cancer Consortium (DKTK) Partner Site Munich, Technical University of Munich, Munich, Germany (M.E., R.F.B.)
| | - Bernhard Haller
- From the Institute for Diagnostic and Interventional Radiology, (L.S., F.J., G.K., M.L.H., M.R.M., R.F.B.), Institute of AI and Informatics in Medicine (M.N., G.K., B.H.), Department of Urology (R.T.), Department of Nephrology (C.S.), and Department of Nuclear Medicine (I.R., M.E.), School of Medicine, Technical University of Munich, Klinikum Rechts der Isar, Ismaninger Str 22, 81675 Munich, Germany; and German Cancer Consortium (DKTK) Partner Site Munich, Technical University of Munich, Munich, Germany (M.E., R.F.B.)
| | - Marcus R Makowski
- From the Institute for Diagnostic and Interventional Radiology, (L.S., F.J., G.K., M.L.H., M.R.M., R.F.B.), Institute of AI and Informatics in Medicine (M.N., G.K., B.H.), Department of Urology (R.T.), Department of Nephrology (C.S.), and Department of Nuclear Medicine (I.R., M.E.), School of Medicine, Technical University of Munich, Klinikum Rechts der Isar, Ismaninger Str 22, 81675 Munich, Germany; and German Cancer Consortium (DKTK) Partner Site Munich, Technical University of Munich, Munich, Germany (M.E., R.F.B.)
| | - Matthias Eiber
- From the Institute for Diagnostic and Interventional Radiology, (L.S., F.J., G.K., M.L.H., M.R.M., R.F.B.), Institute of AI and Informatics in Medicine (M.N., G.K., B.H.), Department of Urology (R.T.), Department of Nephrology (C.S.), and Department of Nuclear Medicine (I.R., M.E.), School of Medicine, Technical University of Munich, Klinikum Rechts der Isar, Ismaninger Str 22, 81675 Munich, Germany; and German Cancer Consortium (DKTK) Partner Site Munich, Technical University of Munich, Munich, Germany (M.E., R.F.B.)
| | - Rickmer F Braren
- From the Institute for Diagnostic and Interventional Radiology, (L.S., F.J., G.K., M.L.H., M.R.M., R.F.B.), Institute of AI and Informatics in Medicine (M.N., G.K., B.H.), Department of Urology (R.T.), Department of Nephrology (C.S.), and Department of Nuclear Medicine (I.R., M.E.), School of Medicine, Technical University of Munich, Klinikum Rechts der Isar, Ismaninger Str 22, 81675 Munich, Germany; and German Cancer Consortium (DKTK) Partner Site Munich, Technical University of Munich, Munich, Germany (M.E., R.F.B.)
| |
Collapse
|
41
Tzanis E, Damilakis J. A machine learning-based pipeline for multi-organ/tissue patient-specific radiation dosimetry in CT. Eur Radiol 2025; 35:919-928. [PMID: 39136706 DOI: 10.1007/s00330-024-11002-0] [Citation(s) in RCA: 2] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/15/2024] [Revised: 06/29/2024] [Accepted: 07/18/2024] [Indexed: 02/01/2025]
Abstract
OBJECTIVES To develop a machine learning-based pipeline for multi-organ/tissue personalized radiation dosimetry in CT. MATERIALS AND METHODS For the study, 95 chest CT scans and 85 abdominal CT scans were collected retrospectively. For each CT scan, a personalized Monte Carlo (MC) simulation was carried out. The produced 3D dose distributions and the respective CT examinations were utilized for the development of organ/tissue-specific dose prediction deep neural networks (DNNs). A pipeline that integrates a robust open-source organ segmentation tool with the dose prediction DNNs was developed for the automatic estimation of radiation doses for 30 organs/tissues including sub-volumes of the heart and lungs. The accuracy and time efficiency of the presented methodology were assessed. Statistical analysis (t-tests) was conducted to determine if the differences between the ground truth organ/tissue radiation dose estimates and the respective dose predictions were significant. RESULTS The lowest median percentage differences between MC-derived organ/tissue doses and DNN dose predictions were observed for the lung vessels (4.3%), small bowel (4.7%), pulmonary artery (4.7%), and colon (5.2%), while the highest differences were observed for the right lung's upper lobe (13.3%), spleen (13.1%), pancreas (12.1%), and stomach (11.6%). Statistical analysis showed that the differences were not significant (p-value > 0.18). Furthermore, the mean inference time of the developed methodology on the validation cohort was 77.0 ± 11.0 s. CONCLUSION The proposed workflow enables fast and accurate organ/tissue radiation dose estimations. The developed algorithms and dose prediction DNNs are publicly available ( https://github.com/eltzanis/multi-structure-CT-dosimetry ). CLINICAL RELEVANCE STATEMENT The accuracy and time efficiency of the developed pipeline make it a useful tool for personalized dosimetry in CT. 
By adopting the proposed workflow, institutions can utilize an automated pipeline for patient-specific dosimetry in CT. KEY POINTS Personalized dosimetry is ideal but time-consuming. The proposed pipeline provides a tool for facilitating patient-specific CT dosimetry in routine clinical practice. The developed workflow integrates a robust open-source segmentation tool with organ/tissue-specific dose prediction neural networks.
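The agreement metric this entry reports, the per-organ percentage difference between Monte Carlo reference doses and DNN predictions, is straightforward to reproduce. A minimal sketch with hypothetical organ doses (not data from the paper):

```python
import statistics

def percentage_differences(reference_doses, predicted_doses):
    """Per-organ absolute percentage difference between reference doses
    (e.g., Monte Carlo) and model-predicted doses."""
    return [100.0 * abs(p - r) / r
            for r, p in zip(reference_doses, predicted_doses)]

# hypothetical organ doses in mGy: MC reference vs. DNN prediction
mc = [10.0, 8.0, 5.0]
dnn = [10.5, 7.6, 5.2]
diffs = percentage_differences(mc, dnn)
median_diff = statistics.median(diffs)  # summary statistic as in the abstract
```

The abstract summarizes these per-organ differences by their median, which is what `statistics.median` computes here.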
Affiliation(s)
- Eleftherios Tzanis, John Damilakis: Department of Medical Physics, School of Medicine, University of Crete, Heraklion, Greece
42
Chen J, Liu Y, Wei S, Bian Z, Subramanian S, Carass A, Prince JL, Du Y. A survey on deep learning in medical image registration: New technologies, uncertainty, evaluation metrics, and beyond. Med Image Anal 2025; 100:103385. [PMID: 39612808 PMCID: PMC11730935 DOI: 10.1016/j.media.2024.103385] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/12/2023] [Revised: 10/27/2024] [Accepted: 11/01/2024] [Indexed: 12/01/2024]
Abstract
Deep learning technologies have dramatically reshaped the field of medical image registration over the past decade. The initial developments, such as regression-based and U-Net-based networks, established the foundation for deep learning in image registration. Subsequent progress has been made in various aspects of deep learning-based registration, including similarity measures, deformation regularizations, network architectures, and uncertainty estimation. These advancements have not only enriched the field of image registration but have also facilitated its application in a wide range of tasks, including atlas construction, multi-atlas segmentation, motion estimation, and 2D-3D registration. In this paper, we present a comprehensive overview of the most recent advancements in deep learning-based image registration. We begin with a concise introduction to the core concepts of deep learning-based image registration. Then, we delve into innovative network architectures, loss functions specific to registration, and methods for estimating registration uncertainty. Additionally, this paper explores appropriate evaluation metrics for assessing the performance of deep learning models in registration tasks. Finally, we highlight the practical applications of these novel techniques in medical imaging and discuss the future prospects of deep learning-based image registration.
Affiliation(s)
- Junyu Chen, Shalini Subramanian, Yong Du: Department of Radiology and Radiological Science, Johns Hopkins School of Medicine, MD, USA
- Yihao Liu, Shuwen Wei, Zhangxing Bian, Aaron Carass, Jerry L Prince: Department of Electrical and Computer Engineering, Johns Hopkins University, MD, USA
43
Siegel MJ, Thomas MA, Haq A, Seymore N, Sodhi KS, Abadia A. Comparison of Radiation Dose and Image Quality in Pediatric Abdominopelvic Photon-Counting Versus Energy-Integrating Detector CT. J Comput Assist Tomogr 2025:00004728-990000000-00425. [PMID: 39905977 DOI: 10.1097/rct.0000000000001730] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/22/2024] [Accepted: 12/24/2024] [Indexed: 02/06/2025]
Abstract
OBJECTIVE Adoption of abdominal photon counting detector CT (PCD-CT) into clinical pediatric CT practice requires evidence that it provides diagnostic images at acceptable radiation doses. Thus, this study aimed to compare radiation dose and image quality of PCD-CT and conventional energy-integrating detector CT (EID-CT) in pediatric abdominopelvic CT. MATERIALS AND METHODS This institutional review board-approved retrospective study included 147 children (median age 8.5 y; 80 boys, 67 girls) who underwent clinically indicated contrast-enhanced abdominopelvic PCD-CT between October 1, 2022 and April 30, 2023 and 147 children (median age 8.5 y; 74 boys, 73 girls) who underwent EID-CT between July 1, 2021 and January 1, 2022. Patients in the 2 groups were matched by age and effective diameter. Radiation dose parameters (CT dose index volume, CTDIvol; dose length product, DLP; size-specific dose estimate, SSDE) were recorded. In a subset of 25 matched pairs, subjective image quality was assessed on a scale of 1 to 4 (1=highest quality), and liver attenuation, dose-normalized noise, and contrast-to-noise ratio (CNR) were measured. Groups were compared using parametric and/or nonparametric testing. RESULTS Among the 147 matched pairs, there were no significant differences in sex (P=0.576), age (P=0.084), or diameter (P=0.668). PCD-CT showed significantly lower median CTDIvol, DLP, and SSDE (1.6 mGy, 63.8 mGy-cm, 3.1 mGy) compared with EID-CT (3.7 mGy, 155.3 mGy-cm, 6.0 mGy) (P<0.001). In the subset of 25 patients, PCD-CT and EID-CT showed no significant difference in overall image quality for reader 1 (1.0 vs. 1.0, P=0.781) or reader 2 (1.0 vs. 1.0, P=0.817), or artifacts for reader 1 (1.0 vs. 1.0, P=0.688) or reader 2 (1.0 vs. 1.0, P=0.219). After normalizing for radiation dose, image noise was significantly lower with PCD-CT (P<0.001), while CNR in the liver (P=0.244) and portal vein (P=0.079) were comparable to EID-CT. 
CONCLUSION Abdominopelvic PCD-CT in children significantly reduces radiation dose while maintaining subjective image quality and, after accounting for dose levels, has the potential to lower image noise while achieving CNR comparable to EID-CT. These data expand understanding of the capabilities of PCD-CT and support its routine use in children.
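The contrast-to-noise ratio (CNR) compared between the two scanner types above is a simple ROI statistic. A minimal illustrative sketch with hypothetical Hounsfield-unit samples (not the study's measurements, and the noise estimate here is simply the background standard deviation):

```python
import statistics

def contrast_to_noise_ratio(roi_hu, background_hu):
    """CNR between a structure ROI (e.g., liver or portal vein) and a
    background ROI, using the background standard deviation as noise."""
    noise = statistics.stdev(background_hu)
    contrast = abs(statistics.mean(roi_hu) - statistics.mean(background_hu))
    return contrast / noise

# hypothetical HU samples from contrast-enhanced liver and subcutaneous fat
liver = [118, 122, 120, 120]
fat = [-98, -102, -100, -100]
cnr = contrast_to_noise_ratio(liver, fat)
```

Dose-normalized comparisons, as in the study, would additionally divide the noise term by a function of the dose metric (e.g., the square root of CTDIvol), which is omitted here.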
Affiliation(s)
- Marilyn J Siegel, Matthew Allan Thomas, Adeel Haq, Noah Seymore, Kushaljit Singh Sodhi: Mallinckrodt Institute of Radiology, Washington University School of Medicine, St Louis, MO
44
Cabini RF, Cozzi A, Leu S, Thelen B, Krause R, Del Grande F, Pizzagalli DU, Rizzo SMR. CompositIA: an open-source automated quantification tool for body composition scores from thoraco-abdominal CT scans. Eur Radiol Exp 2025; 9:12. [PMID: 39881078 PMCID: PMC11780042 DOI: 10.1186/s41747-025-00552-7] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/08/2024] [Accepted: 01/10/2025] [Indexed: 01/31/2025] Open
Abstract
BACKGROUND Body composition scores allow for quantifying the volume and physical properties of specific tissues. However, their manual calculation is time-consuming and prone to human error. This study aims to develop and validate CompositIA, an automated, open-source pipeline for quantifying body composition scores from thoraco-abdominal computed tomography (CT) scans. METHODS A retrospective dataset of 205 contrast-enhanced thoraco-abdominal CT examinations was used for training, while 54 scans from a publicly available dataset were used for independent testing. Two radiology residents performed manual segmentation, identifying the centers of the L1 and L3 vertebrae and segmenting the corresponding axial slices. MultiResUNet was used to identify CT slices intersecting the L1 and L3 vertebrae, and its performance was evaluated using the mean absolute error (MAE). Two U-nets were used to segment the axial slices, with performance evaluated through the volumetric Dice similarity coefficient (vDSC). CompositIA's performance in quantifying body composition indices was assessed using mean percentage relative error (PRE), regression, and Bland-Altman analyses. RESULTS On the independent dataset, CompositIA achieved a MAE of about 5 mm in detecting slices intersecting the L1 and L3 vertebrae, with a MAE < 10 mm in at least 85% of cases and a vDSC greater than 0.85 in segmenting axial slices. Regression and Bland-Altman analyses demonstrated a strong linear relationship and good agreement between automated and manual scores (p values < 0.001 for all indices), with mean PREs ranging from 5.13% to 15.18%. CONCLUSION CompositIA facilitated the automated quantification of body composition scores, achieving high precision in independent testing. RELEVANCE STATEMENT CompositIA is an automated, open-source pipeline for quantifying body composition indices from CT scans, simplifying clinical assessments, and expanding their applicability. 
KEY POINTS Manual body composition assessment from CTs is time-consuming and prone to errors. CompositIA was trained on 205 CT scans and tested on 54 scans. CompositIA demonstrated mean percentage relative errors under 15% compared to manual indices. CompositIA simplifies body composition assessment through an artificial intelligence-driven and open-source pipeline.
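The volumetric Dice similarity coefficient (vDSC) used above to evaluate segmentation overlap has a standard closed form: twice the intersection of the two masks divided by the sum of their sizes. An illustrative reimplementation on flattened binary masks (not the CompositIA code):

```python
def dice_coefficient(mask_a, mask_b):
    """Dice similarity coefficient between two binary masks (flattened
    to 1-D sequences of 0/1); returns 1.0 when both masks are empty."""
    inter = sum(1 for a, b in zip(mask_a, mask_b) if a and b)
    total = sum(mask_a) + sum(mask_b)
    return 2.0 * inter / total if total else 1.0
```

For volumetric masks, the same formula applies after flattening the 3-D arrays; a vDSC greater than 0.85, as reported here, means the automated and manual masks share most of their voxels.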
Affiliation(s)
- Raffaella Fiamma Cabini, Rolf Krause: Euler Institute, Università della Svizzera italiana, Lugano, Switzerland; International Center of Advanced Computing in Medicine (ICAM), Pavia, Italy
- Andrea Cozzi, Svenja Leu: Imaging Institute of Southern Switzerland, Ente Ospedaliero Cantonale, Lugano, Switzerland
- Benedikt Thelen: Euler Institute, Università della Svizzera italiana, Lugano, Switzerland
- Filippo Del Grande, Stefania Maria Rita Rizzo: Imaging Institute of Southern Switzerland, Ente Ospedaliero Cantonale, Lugano, Switzerland; Faculty of Biomedical Sciences, Università della Svizzera italiana, Lugano, Switzerland
- Diego Ulisse Pizzagalli: Euler Institute, Università della Svizzera italiana, Lugano, Switzerland; International Center of Advanced Computing in Medicine (ICAM), Pavia, Italy; Faculty of Biomedical Sciences, Università della Svizzera italiana, Lugano, Switzerland
45
Haberl D, Ning J, Kluge K, Kumpf K, Yu J, Jiang Z, Constantino C, Monaci A, Starace M, Haug AR, Calabretta R, Camoni L, Bertagna F, Mascherbauer K, Hofer F, Albano D, Sciagra R, Oliveira F, Costa D, Nitsche C, Hacker M, Spielvogel CP. Generative artificial intelligence enables the generation of bone scintigraphy images and improves generalization of deep learning models in data-constrained environments. Eur J Nucl Med Mol Imaging 2025:10.1007/s00259-025-07091-8. [PMID: 39878897 DOI: 10.1007/s00259-025-07091-8] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/23/2024] [Accepted: 01/11/2025] [Indexed: 01/31/2025]
Abstract
PURPOSE Advancements of deep learning in medical imaging are often constrained by the limited availability of large, annotated datasets, resulting in underperforming models when deployed under real-world conditions. This study investigated a generative artificial intelligence (AI) approach to create synthetic medical images taking the example of bone scintigraphy scans, to increase the data diversity of small-scale datasets for more effective model training and improved generalization. METHODS We trained a generative model on 99mTc-bone scintigraphy scans from 9,170 patients in one center to generate high-quality and fully anonymized annotated scans of patients representing two distinct disease patterns: abnormal uptake indicative of (i) bone metastases and (ii) cardiac uptake indicative of cardiac amyloidosis. A blinded reader study was performed to assess the clinical validity and quality of the generated data. We investigated the added value of the generated data by augmenting an independent small single-center dataset with synthetic data and by training a deep learning model to detect abnormal uptake in a downstream classification task. We tested this model on 7,472 scans from 6,448 patients across four external sites in a cross-tracer and cross-scanner setting and associated the resulting model predictions with clinical outcomes. RESULTS The clinical value and high quality of the synthetic imaging data were confirmed by four readers, who were unable to distinguish synthetic scans from real scans (average accuracy, 0.48 [95% CI: 0.46, 0.51]), disagreeing in 239 (60%) of 400 cases (Fleiss' kappa: 0.18). Adding synthetic data to the training set improved model performance by a mean (± SD) of 33 (± 10)% AUC (p < 0.0001) for detecting abnormal uptake indicative of bone metastases and by 5 (± 4)% AUC (p < 0.0001) for detecting uptake indicative of cardiac amyloidosis across both internal and external testing cohorts, compared to models without synthetic training data. 
Patients with predicted abnormal uptake had adverse clinical outcomes (log-rank: p < 0.0001). CONCLUSIONS Generative AI enables the targeted generation of bone scintigraphy images representing different clinical conditions. Our findings point to the potential of synthetic data to overcome challenges in data sharing and in developing reliable and prognostic deep learning models in data-limited environments.
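Inter-reader agreement in this entry is summarized with Fleiss' kappa, which generalizes chance-corrected agreement to a fixed number of raters per case. A minimal sketch of the standard computation on a cases-by-categories count table (illustrative only, not the study's analysis code):

```python
def fleiss_kappa(counts):
    """Fleiss' kappa for a cases x categories table, where counts[i][j]
    is the number of raters assigning case i to category j; every case
    must be rated by the same number of raters."""
    n_cases = len(counts)
    n_raters = sum(counts[0])
    n_cats = len(counts[0])
    # overall proportion of ratings falling in each category
    p_j = [sum(row[j] for row in counts) / (n_cases * n_raters)
           for j in range(n_cats)]
    # per-case observed agreement among rater pairs
    p_i = [(sum(c * c for c in row) - n_raters) / (n_raters * (n_raters - 1))
           for row in counts]
    p_bar = sum(p_i) / n_cases          # mean observed agreement
    p_e = sum(p * p for p in p_j)       # expected agreement by chance
    return (p_bar - p_e) / (1 - p_e)
```

A kappa near 0.18, as reported for the four readers over 400 cases, indicates only slight agreement beyond chance, consistent with the readers being unable to tell synthetic from real scans.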
Affiliation(s)
- David Haberl, Jing Ning, Kilian Kluge, Zewen Jiang, Alexander R Haug: Department of Biomedical Imaging and Image-guided Therapy, Division of Nuclear Medicine, Medical University of Vienna, Spitalgasse 23, Vienna, 1090, Austria; Christian Doppler Laboratory for Applied Metabolomics, Medical University of Vienna, Vienna, Austria
- Josef Yu, Raffaella Calabretta, Marcus Hacker, Clemens P Spielvogel: Department of Biomedical Imaging and Image-guided Therapy, Division of Nuclear Medicine, Medical University of Vienna, Spitalgasse 23, Vienna, 1090, Austria
- Katarina Kumpf: IT4Science, IT Services & Strategic Information Management, Medical University of Vienna, Vienna, Austria
- Claudia Constantino, Francisco Oliveira, Durval Costa: Nuclear Medicine-Radiopharmacology, Champalimaud Clinical Centre, Champalimaud Foundation, Lisbon, Portugal
- Alice Monaci, Maria Starace, Roberto Sciagra: Department of Experimental and Clinical Biomedical Sciences, Nuclear Medicine Unit, University of Florence, Florence, Italy
- Luca Camoni, Francesco Bertagna, Domenico Albano: ASST Spedali Civili of Brescia, Università degli Studi di Brescia, Brescia, Italy
- Katharina Mascherbauer, Felix Hofer, Christian Nitsche: Department of Internal Medicine II, Division of Cardiology, Medical University of Vienna, Vienna, Austria
46
Oo DW, Sturniolo A, Jung M, Langenbach M, Foldyna B, Kiel DP, Aerts HJ, Natarajan P, Lu MT, Raghu VK. Opportunistic Assessment of Cardiovascular Risk Using AI-Derived Structural Aortic and Cardiac Phenotypes From Non-Contrast Chest Computed Tomography. medRxiv: The Preprint Server for Health Sciences 2025:2025.01.28.25321302. [PMID: 39974056 PMCID: PMC11839003 DOI: 10.1101/2025.01.28.25321302] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Grants] [Track Full Text] [Download PDF] [Subscribe] [Scholar Register] [Indexed: 02/21/2025]
Abstract
Background Primary prevention of cardiovascular disease relies on accurate risk assessment using scores such as the Pooled Cohort Equations (PCE) and PREVENT. However, necessary input variables for these scores are often unavailable in the electronic health record (EHR), and information from routinely collected data (e.g., non-contrast chest CT) may further improve performance. Here, we test whether a risk prediction model based on structural features of the heart and aorta from chest CT has added value to existing clinical algorithms for predicting major adverse cardiovascular events (MACE). Methods We developed a LASSO model to predict fatal MACE over 12 years of follow-up using structural radiomics features describing cardiac chamber and aorta segmentations from 13,437 lung cancer screening chest CTs from the National Lung Screening Trial. We compared this radiomics model to the PCE and PREVENT scores in an external testing set of 4,303 individuals who had a chest CT at a Mass General Brigham site and had no history of diabetes, prior MACE, or statin treatment. Discrimination for incident MACE was assessed using the concordance index. We used a binary threshold to determine MACE rates in patients who were statin-eligible or ineligible by the PCE/PREVENT scores (≥7.5% risk) or the radiomics score (≥5.0% risk). Results were stratified by whether all variables were available to calculate the PCE or PREVENT scores. Results In the external testing set (n = 4,303; mean age 61.5 ± 9.3 years; 47.1% male), 8.0% had incident MACE over a median 5.1 years of follow-up. The radiomics risk score significantly improved discrimination beyond the PCE (c-index 0.653 vs. 0.567, p < 0.001) and performed similarly in individuals who were missing inputs. Those statin-eligible by both the radiomics and PCE scores had a 2.6-fold higher incidence of MACE than those eligible by the PCE score alone (29.5 [20.5, 39.1] vs. 
11.2 [8.0, 14.4] events per 1,000 person-years among PCE-eligible individuals). In patients missing inputs, incident MACE rates were 1.8-fold higher in those statin-eligible by the radiomics score than those statin-ineligible (29.5 [21.9, 37.6] vs. 16.7 [14.3, 19.0] events per 1000 person-years). Similar results were found when comparing to the PREVENT score. Left ventricular volume and short axis length were most predictive of myocardial infarction, while left atrial sphericity and surface-to-volume ratio were most predictive of stroke. Conclusions Based on a single chest CT, a cardiac shape-based risk prediction model predicted cardiovascular events beyond clinical algorithms and demonstrated similar performance in patients who were missing inputs to standard cardiovascular risk calculators. Patients at high-risk by the radiomics score may benefit from intensified primary prevention (e.g., statin prescription).
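Discrimination in this entry is assessed with the concordance index. As a small illustrative implementation of Harrell's c-index for right-censored survival data (an assumption-laden sketch, not the study's code; ties in risk score count as half-concordant):

```python
def concordance_index(risk, time, event):
    """Harrell's c-index: among comparable pairs, the fraction where the
    subject who failed earlier also has the higher predicted risk.

    risk:  predicted risk scores (higher = worse)
    time:  follow-up time for each subject
    event: 1 if the subject had an observed event, 0 if censored
    """
    concordant = 0.0
    comparable = 0
    n = len(risk)
    for i in range(n):
        for j in range(n):
            # pair is comparable only if subject i had an observed event
            # strictly before subject j's time
            if event[i] and time[i] < time[j]:
                comparable += 1
                if risk[i] > risk[j]:
                    concordant += 1.0
                elif risk[i] == risk[j]:
                    concordant += 0.5
    return concordant / comparable
```

A c-index of 0.5 corresponds to random ranking and 1.0 to perfect ranking, which frames the reported improvement from 0.567 (PCE) to 0.653 (radiomics score).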
Collapse
Affiliation(s)
- Daniel W Oo
- Cardiovascular Imaging Research Center (CIRC), Department of Radiology, Massachusetts General Hospital & Harvard Medical School, Boston, MA, United States of America
| | - Audra Sturniolo
- Cardiovascular Imaging Research Center (CIRC), Department of Radiology, Massachusetts General Hospital & Harvard Medical School, Boston, MA, United States of America
- Matthias Jung
- Cardiovascular Imaging Research Center (CIRC), Department of Radiology, Massachusetts General Hospital & Harvard Medical School, Boston, MA, United States of America
- Marcel Langenbach
- Cardiovascular Imaging Research Center (CIRC), Department of Radiology, Massachusetts General Hospital & Harvard Medical School, Boston, MA, United States of America
- Borek Foldyna
- Cardiovascular Imaging Research Center (CIRC), Department of Radiology, Massachusetts General Hospital & Harvard Medical School, Boston, MA, United States of America
- Artificial Intelligence in Medicine (AIM) Program, Mass General Brigham, Harvard Medical School, Boston, MA, United States of America
- Douglas P Kiel
- Hinda and Arthur Marcus Institute on Aging Research, Hebrew SeniorLife, Department of Medicine, Beth Israel Deaconess Medical Center & Harvard Medical School, Boston, MA, United States of America
- Hugo Jwl Aerts
- Cardiovascular Imaging Research Center (CIRC), Department of Radiology, Massachusetts General Hospital & Harvard Medical School, Boston, MA, United States of America
- Artificial Intelligence in Medicine (AIM) Program, Mass General Brigham, Harvard Medical School, Boston, MA, United States of America
- Radiology and Nuclear Medicine, GROW & CARIM, Maastricht University, Maastricht, Netherlands
- Pradeep Natarajan
- Cardiovascular Research Center and Center for Genomic Medicine, Massachusetts General Hospital & Harvard Medical School, Boston, MA, United States of America
- Program in Medical and Population Genetics and Cardiovascular Disease Initiative, Broad Institute of Harvard and MIT, Cambridge, MA, United States of America
- Michael T Lu
- Cardiovascular Imaging Research Center (CIRC), Department of Radiology, Massachusetts General Hospital & Harvard Medical School, Boston, MA, United States of America
- Artificial Intelligence in Medicine (AIM) Program, Mass General Brigham, Harvard Medical School, Boston, MA, United States of America
- Vineet K Raghu
- Cardiovascular Imaging Research Center (CIRC), Department of Radiology, Massachusetts General Hospital & Harvard Medical School, Boston, MA, United States of America
- Artificial Intelligence in Medicine (AIM) Program, Mass General Brigham, Harvard Medical School, Boston, MA, United States of America
47
Tian Y, Liang Y, Chen Y, Zhang J, Bian H. Multilevel support-assisted prototype optimization network for few-shot medical segmentation of lung lesions. Sci Rep 2025; 15:3290. [PMID: 39865124 PMCID: PMC11770124 DOI: 10.1038/s41598-025-87829-4] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/12/2024] [Accepted: 01/22/2025] [Indexed: 01/28/2025] Open
Abstract
Medical image annotations are scarce and costly to obtain. Few-shot segmentation has therefore been widely applied to medical images, learning to segment from only a few annotated examples. However, research on lesion segmentation for lung diseases remains limited, especially for pulmonary aspergillosis. Lesion areas usually have complex shapes and blurred edges, so lesion segmentation requires particular attention to the diversity and uncertainty of lesions. To address this challenge, we propose MSPO-Net, a multilevel support-assisted prototype optimization network designed for few-shot lesion segmentation in computerized tomography (CT) images of lung diseases. MSPO-Net learns lesion prototypes from low-level to high-level features. A self-attention threshold-learning strategy focuses on global information and obtains an optimal threshold for CT images. Our model refines prototypes through a support-assisted prototype optimization module, adaptively enhancing their representativeness for the diversity of lesions and adapting better to unseen lesions. In clinical examinations, CT is more practical than X-rays. To ensure the quality of our work, we established a small-scale CT image dataset covering three lung diseases, annotated by experienced doctors. Experiments demonstrate that MSPO-Net improves the performance and robustness of lung disease lesion segmentation. MSPO-Net achieves state-of-the-art performance in both single and unseen lung disease segmentation, indicating its potential to reduce doctors' workload and improve diagnostic accuracy. Code is available at https://github.com/Tian-Yuan-ty/MSPO-Net.
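Prototype-based few-shot segmentation methods of the kind described in this abstract typically build a class prototype by masked average pooling of support-image features, then compare query features against that prototype; MSPO-Net refines such prototypes further. As a hedged illustration only (the function, shapes, and values below are made up for the example and are not taken from the authors' repository), the masked-average-pooling step might look like:

```python
def masked_average_prototype(features, mask):
    """Average the feature vectors at positions where mask == 1.

    features: H x W grid of C-dimensional vectors (nested lists)
    mask:     H x W grid of 0/1 labels from the support annotation
    """
    c = len(features[0][0])
    total = [0.0] * c
    count = 0
    for row_f, row_m in zip(features, mask):
        for vec, m in zip(row_f, row_m):
            if m:  # only pool features inside the annotated lesion
                count += 1
                for k in range(c):
                    total[k] += vec[k]
    return [t / count for t in total] if count else total

# Toy 2x2 feature map with 2-dim features; lesion mask covers the left column.
feats = [[[1.0, 2.0], [3.0, 4.0]],
         [[5.0, 6.0], [7.0, 8.0]]]
mask = [[1, 0], [1, 0]]
print(masked_average_prototype(feats, mask))  # [3.0, 4.0]
```

The resulting prototype vector is what the query features are matched against (e.g., by cosine similarity) to produce the segmentation.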
Affiliation(s)
- Yuan Tian
- College of Computer Science and Engineering, Shandong University of Science and Technology, Qingdao, 266590, Shandong, China
- Yongquan Liang
- College of Computer Science and Engineering, Shandong University of Science and Technology, Qingdao, 266590, Shandong, China
- Yufeng Chen
- Shandong Provincial Public Health Clinical Center, Shandong University, Jinan, 250013, Shandong, China
- Jingjing Zhang
- Shandong Provincial Public Health Clinical Center, Shandong University, Jinan, 250013, Shandong, China
- Hongyang Bian
- Shandong Provincial Public Health Clinical Center, Shandong University, Jinan, 250013, Shandong, China
48
Meneghetti AR, Hernández ML, Kuehn JP, Löck S, Carrero ZI, Perez-Lopez R, Bressem K, Brinker TK, Pearson AT, Truhn D, Nebelung S, Kather JN. End-to-end prediction of clinical outcomes in head and neck squamous cell carcinoma with foundation model-based multiple instance learning. medRxiv 2025:2025.01.22.25320517. [PMID: 39974018 PMCID: PMC11839013 DOI: 10.1101/2025.01.22.25320517] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 02/21/2025]
Abstract
Background Foundation models (FMs) show promise in medical AI by learning flexible features from large datasets, potentially surpassing handcrafted radiomics. Outcome prediction of head and neck squamous cell carcinomas (HNSCC) with FMs using routine imaging remains unexplored. Purpose To evaluate end-to-end FM-based multiple instance learning (MIL) for 2-year overall survival (OS), locoregional control (LRC), and freedom from distant metastasis (FFDM) prediction and risk group stratification using pretreatment CT scans in HNSCC. Materials and Methods We analyzed data from 2485 patients from three retrospective HNSCC cohorts (RADCURE, HN1, HN-PET-CT), treated between 2004 and 2017, with available pretreatment CTs and primary gross tumor volume (GTVp) segmentations. The RADCURE cohort was split into training (n=1464) and test (n=606) sets, with HN1 (n=131) and HN-PET-CT (n=284) as additional test cohorts. FM-based MIL models (2D, multiview, and 3D) for 2-year endpoint prediction and risk stratification were evaluated based on the area under the receiver operating characteristic curve (AUROC) and Kaplan-Meier (KM) analysis with hazard ratios (HR), compared with radiomics, and assessed for multimodal enhancement over clinical baselines. Results 2D MIL models achieved 2-year test AUROCs of 0.75-0.84 (OS), 0.66-0.75 (LRC), and 0.71-0.78 (FFDM), outperforming multiview and 3D MIL (AUROCs: 0.50-0.77, p≥0.15) and performing comparably or superior to radiomics (AUROCs: 0.64-0.74, p≥0.012). Significant risk stratification was observed (HRs: 2.14-4.77, p≤0.039). Multimodal enhancement of 2-year OS/FFDM prediction (AUROCs: 0.82-0.87, p≤0.018) was observed for patients without human papillomavirus-positive (HPV+) tumors. Conclusion FM-based MIL demonstrates promise in HNSCC risk prediction, showing similar or superior performance to radiomics and enhancing clinical baselines in non-HPV+ patients.
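The AUROC values reported in this abstract can be read as the probability that a randomly chosen positive case receives a higher model score than a randomly chosen negative case. A minimal sketch of that rank-based (Mann-Whitney U) formulation, offered as an illustration only and not as the study's evaluation code, is:

```python
def auroc(scores, labels):
    """AUROC via the Mann-Whitney U formulation: the fraction of
    (positive, negative) pairs ranked correctly, with ties given half credit."""
    pos = [s for s, l in zip(scores, labels) if l == 1]
    neg = [s for s, l in zip(scores, labels) if l == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# One positive/negative pair is mis-ranked (0.4 < 0.6), so 3 of 4 pairs win.
print(auroc([0.9, 0.6, 0.4, 0.1], [1, 0, 1, 0]))  # 0.75
```

In practice a library routine (e.g., scikit-learn's `roc_auc_score`) would be used; the pairwise version above is just the definition made explicit.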
Affiliation(s)
- Asier Rabasco Meneghetti
- Else Kroener Fresenius Center for Digital Health, Faculty of Medicine and University Hospital Carl Gustav Carus, TUD Dresden University of Technology, 01307 Dresden, Germany
- German Cancer Consortium (DKTK), Partner site Dresden, German Cancer Research Center (DKFZ), Heidelberg, Germany
- Marta Ligero Hernández
- Else Kroener Fresenius Center for Digital Health, Faculty of Medicine and University Hospital Carl Gustav Carus, TUD Dresden University of Technology, 01307 Dresden, Germany
- Jens-Peter Kuehn
- Institute and Policlinic for Diagnostic and Interventional Radiology, Faculty of Medicine and University Hospital Carl Gustav Carus, Technische Universität Dresden, Dresden, Germany
- Steffen Löck
- German Cancer Consortium (DKTK), Partner site Dresden, German Cancer Research Center (DKFZ), Heidelberg, Germany
- OncoRay - National Center for Radiation Research in Oncology, Faculty of Medicine and University Hospital Carl Gustav Carus, Technische Universität Dresden, Helmholtz-Zentrum Dresden-Rossendorf, Dresden, Germany
- Department of Radiotherapy and Radiation Oncology, Faculty of Medicine and University Hospital Carl Gustav Carus, Technische Universität Dresden, Dresden, Germany
- National Center for Tumor Diseases (NCT), Partner Site Dresden, Germany: German Cancer Research Center (DKFZ), Heidelberg, Germany; Faculty of Medicine and University Hospital Carl Gustav Carus, Technische Universität Dresden; Helmholtz-Zentrum Dresden-Rossendorf, Dresden, Germany
- Zunamys Itzel Carrero
- Else Kroener Fresenius Center for Digital Health, Faculty of Medicine and University Hospital Carl Gustav Carus, TUD Dresden University of Technology, 01307 Dresden, Germany
- Raquel Perez-Lopez
- Radiomics Group, Vall d'Hebron Institute of Oncology, Vall d'Hebron Barcelona Hospital Campus, Barcelona, Spain
- Keno Bressem
- Department of Diagnostic and Interventional Radiology, Technical University of Munich, School of Medicine and Health, Klinikum rechts der Isar, TUM University Hospital, Ismaninger Str. 22, 81675 Munich
- Department of Cardiovascular Radiology and Nuclear Medicine, Technical University of Munich, School of Medicine and Health, German Heart Center, TUM University Hospital, Lazarethstr. 36, 80636, Munich
- Titus K Brinker
- Digital Biomarkers for Oncology Group, German Cancer Research Center (DKFZ), INF 223, 69120 Heidelberg, Germany
- Alexander T Pearson
- Section of Hematology/Oncology, Department of Medicine, University of Chicago, Chicago, IL, USA
- Daniel Truhn
- Department of Diagnostic and Interventional Radiology, Medical Faculty, RWTH Aachen University, 52074 Aachen, Germany
- Sven Nebelung
- Else Kroener Fresenius Center for Digital Health, Faculty of Medicine and University Hospital Carl Gustav Carus, TUD Dresden University of Technology, 01307 Dresden, Germany
- Department of Diagnostic and Interventional Radiology, Medical Faculty, RWTH Aachen University, 52074 Aachen, Germany
- Jakob Nikolas Kather
- Else Kroener Fresenius Center for Digital Health, Faculty of Medicine and University Hospital Carl Gustav Carus, TUD Dresden University of Technology, 01307 Dresden, Germany
- Department of Medicine I, University Hospital Dresden, Dresden, Germany
- Medical Oncology, National Center for Tumor Diseases (NCT), University Hospital Heidelberg, Heidelberg, Germany
49
Białek P, Dobek A, Falenta K, Kurnatowska I, Stefańczyk L. Usefulness of Radiomics and Kidney Volume Based on Non-Enhanced Computed Tomography in Chronic Kidney Disease: Initial Report. Kidney Blood Press Res 2025; 50:161-170. [PMID: 39837303 PMCID: PMC11844675 DOI: 10.1159/000543305] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/31/2024] [Accepted: 12/19/2024] [Indexed: 01/23/2025] Open
Abstract
INTRODUCTION Chronic kidney disease (CKD) is classified according to the estimated glomerular filtration rate (eGFR), but kidney volume (KV) can also provide meaningful information. Very few radiomics (RDX) studies on CKD have utilized computed tomography (CT). This study aimed to determine whether non-enhanced computed tomography (NECT)-based RDX can be useful in the evaluation of patients with CKD and to compare it with KV. METHODS The NECT scans of 64 subjects with impaired kidney function (defined as eGFR <60 mL/min/1.73 m2) and 60 controls with normal kidney function were retrospectively analyzed. Kidney segmentation, volume measurement, and RDX feature extraction were performed. Machine-learning models using RDX were constructed to classify the kidneys as having structural markers of impaired or normal function. RESULTS The median KV was 114.83 mL in the impaired kidney function group vs. 159.43 mL in the control group (p < 0.001). There was a statistically significant strong positive correlation between KV and eGFR (rs = 0.579, p < 0.001) and a strong negative correlation between KV and serum creatinine level (rs = -0.514, p < 0.001). The KV-based models achieved a best area under the curve (AUC) of 0.746, whereas the RDX-based models achieved a best AUC of 0.878. CONCLUSIONS RDX can be useful in identifying patients with impaired kidney function on NECT. RDX-based models outperformed KV-based models. RDX has the potential to identify patients at higher risk of CKD based on imaging, which we believe can indirectly support clinical decision-making.
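The rs values in this abstract are Spearman rank correlations, i.e., the Pearson correlation of the ranked data. For readers unfamiliar with the statistic, a minimal implementation is sketched below; it assumes no tied values (ties would require average ranks) and is an illustration, not the study's analysis code:

```python
def spearman_rho(x, y):
    """Spearman correlation: Pearson correlation computed on the ranks.
    Assumes no ties; tied values would need average-rank handling."""
    def ranks(v):
        order = sorted(range(len(v)), key=lambda i: v[i])
        r = [0.0] * len(v)
        for rank, i in enumerate(order, start=1):
            r[i] = float(rank)
        return r

    rx, ry = ranks(x), ranks(y)
    n = len(x)
    mean = (n + 1) / 2.0  # mean of the ranks 1..n
    num = sum((a - mean) * (b - mean) for a, b in zip(rx, ry))
    den = (sum((a - mean) ** 2 for a in rx)
           * sum((b - mean) ** 2 for b in ry)) ** 0.5
    return num / den

# A perfectly monotonic relationship yields rho = 1.0.
print(spearman_rho([10, 20, 30, 40], [1.2, 2.8, 3.1, 4.9]))  # 1.0
```

In practice `scipy.stats.spearmanr` would be used, which also handles ties and returns a p-value.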
Affiliation(s)
- Piotr Białek
- 1st Department of Radiology and Diagnostic Imaging, Medical University of Lodz, Lodz, Poland
- Adam Dobek
- 1st Department of Radiology and Diagnostic Imaging, Medical University of Lodz, Lodz, Poland
- Krzysztof Falenta
- 1st Department of Radiology and Diagnostic Imaging, Medical University of Lodz, Lodz, Poland
- Ilona Kurnatowska
- Department of Internal Diseases and Transplant Nephrology, Medical University of Lodz, Lodz, Poland
- Ludomir Stefańczyk
- 1st Department of Radiology and Diagnostic Imaging, Medical University of Lodz, Lodz, Poland
50
Hong V, Pieper S, James J, Anderson DE, Pinter C, Chang YS, Aslan B, Kozono D, Doyle PF, Caplan S, Kang H, Balboni T, Spektor A, Huynh MA, Keko M, Kikinis R, Hackney DB, Alkalay RN. Automated Segmentation of Trunk Musculature with a Deep CNN Trained from Sparse Annotations in Radiation Therapy Patients with Metastatic Spine Disease. medRxiv 2025:2025.01.13.25319967. [PMID: 39974027 PMCID: PMC11838942 DOI: 10.1101/2025.01.13.25319967] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Grants] [Track Full Text] [Download PDF] [Subscribe] [Scholar Register] [Indexed: 02/21/2025]
Abstract
Purpose Given the high prevalence of vertebral fractures after radiotherapy in patients with metastatic spine disease, accurate and rapid muscle segmentation could support efforts to quantify muscular changes due to disease or treatment and enable biomechanical modeling of vertebral loading to improve personalized evaluation of vertebral fracture risk. This study presents a deep-learning approach for segmenting the complete volume of the trunk muscles from clinical CT images, trained using sparsely annotated data. Materials and Methods We extracted 2,009 axial CT images at the midpoint of each vertebral level (T4 to L4) from clinical CT scans of 148 cancer patients. The key extensor and flexor muscles (up to 8 muscles per side) were manually contoured and labeled per image in the thoracic and lumbar regions. We first trained a 2D nnU-Net deep-learning model on these labels to segment the key extensor and flexor muscles. Using these sparse annotations per spine, we then trained the model to segment each muscle's entire 3D volume. Results The proposed method achieved performance comparable to manual segmentation, as assessed by expert radiologists, with a mean Dice score above 0.769. Notably, the model drastically reduced segmentation time, from 4.3-6.5 hours for manual segmentation of 14 single axial CT images to approximately 1 minute for segmenting the complete thoracic-abdominal 3D volume. Conclusion The approach demonstrates high potential for automating 3D muscle segmentation, significantly reducing the manual intervention required for generating musculoskeletal models, and could be instrumental in enhancing clinical decision-making and patient care in radiation oncology.
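The Dice score used to evaluate the segmentations in this abstract measures the overlap between predicted and reference masks: Dice = 2|A ∩ B| / (|A| + |B|). As a hedged sketch (the binary masks below are toy data, not the study's evaluation pipeline):

```python
def dice_coefficient(pred, truth):
    """Dice = 2|A ∩ B| / (|A| + |B|) for binary masks given as flat 0/1 lists.
    Two empty masks are treated as a perfect match (Dice = 1)."""
    intersection = sum(p * t for p, t in zip(pred, truth))
    total = sum(pred) + sum(truth)
    return 2.0 * intersection / total if total else 1.0

# Two voxels overlap out of 3 foreground voxels in each mask: 2*2/(3+3).
pred  = [1, 1, 0, 0, 1, 0]
truth = [1, 0, 0, 0, 1, 1]
print(dice_coefficient(pred, truth))  # ≈ 0.667
```

A Dice of 1.0 means perfect overlap and 0.0 no overlap, so the reported mean score above 0.769 indicates substantial agreement with the manual contours.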
Affiliation(s)
- Vy Hong
- School of Computation, Information and Technology, Technical University Munich, Munich, Germany
- Center for Advanced Orthopedic Studies, Orthopedic Department, Beth Israel Deaconess Medical Center, Boston, MA, USA
- Steve Pieper
- Isomics, Inc, Cambridge, MA, USA
- Harvard Medical School, Boston, MA, USA
- Joanna James
- Center for Advanced Orthopedic Studies, Orthopedic Department, Beth Israel Deaconess Medical Center, Boston, MA, USA
- Dennis E Anderson
- Center for Advanced Orthopedic Studies, Orthopedic Department, Beth Israel Deaconess Medical Center, Boston, MA, USA
- Harvard Medical School, Boston, MA, USA
- Csaba Pinter
- EBATINCA, S.L, Las Palmas de Gran Canaria, 35002 Las Palmas, Spain
- Yi Shuen Chang
- Department of Radiology, Beth Israel Deaconess Medical Center, Boston, MA, USA
- Harvard Medical School, Boston, MA, USA
- Bulent Aslan
- Department of Radiology, Beth Israel Deaconess Medical Center, Boston, MA, USA
- Harvard Medical School, Boston, MA, USA
- David Kozono
- Department of Radiation Oncology, Brigham and Women's Hospital, Boston, MA, USA
- Harvard Medical School, Boston, MA, USA
- Patrick F Doyle
- Department of Radiation Oncology, Brigham and Women's Hospital, Boston, MA, USA
- Sarah Caplan
- Department of Radiation Oncology, Brigham and Women's Hospital, Boston, MA, USA
- Heejoo Kang
- Department of Radiation Oncology, Brigham and Women's Hospital, Boston, MA, USA
- Tracy Balboni
- Department of Radiation Oncology, Brigham and Women's Hospital, Boston, MA, USA
- Harvard Medical School, Boston, MA, USA
- Alexander Spektor
- Department of Radiation Oncology, Brigham and Women's Hospital, Boston, MA, USA
- Harvard Medical School, Boston, MA, USA
- Mai Anh Huynh
- Department of Radiation Oncology, Brigham and Women's Hospital, Boston, MA, USA
- Harvard Medical School, Boston, MA, USA
- Mario Keko
- Department of Orthopedics, Beth Israel Deaconess Medical Center, Boston, MA, USA
- Ron Kikinis
- Department of Radiology, Brigham and Women's Hospital, MA, USA
- Harvard Medical School, Boston, MA, USA
- David B Hackney
- Department of Radiology, Beth Israel Deaconess Medical Center, Boston, MA, USA
- Harvard Medical School, Boston, MA, USA
- Ron N Alkalay
- Center for Advanced Orthopedic Studies, Orthopedic Department, Beth Israel Deaconess Medical Center, Boston, MA, USA
- Harvard Medical School, Boston, MA, USA