1. Dong X, Chen G, Zhu Y, Ma B, Ban X, Wu N, Ming Y. Artificial intelligence in skeletal metastasis imaging. Comput Struct Biotechnol J 2024; 23:157-164. [PMID: 38144945] [PMCID: PMC10749216] [DOI: 10.1016/j.csbj.2023.11.007] [Received: 05/15/2023] [Revised: 11/02/2023] [Accepted: 11/02/2023]
Abstract
In the field of metastatic skeletal oncology imaging, the role of artificial intelligence (AI) is becoming more prominent. Bone metastasis typically indicates the terminal stage of various malignant neoplasms. Once identified, it necessitates a comprehensive revision of the initial treatment regimen, and palliative care is often the only resort. Given the gravity of the condition, the diagnosis of bone metastasis should be approached with utmost caution. AI techniques are being evaluated for their efficacy in a range of medical imaging tasks, including object detection, disease classification, region segmentation, and prognosis prediction. These methods offer a standardized solution to the frequently subjective challenge of image interpretation, and such standardization is particularly desirable in bone metastasis imaging. This review describes the basic imaging modalities used in bone metastasis imaging, along with recent developments and current applications of AI in the respective imaging studies. These concrete examples emphasize the importance of using computer-aided systems in the clinical setting. The review culminates with an examination of the current limitations and prospects of AI in the realm of bone metastasis imaging. To establish the credibility of AI in this domain, further research efforts are required to enhance reproducibility and attain a robust level of empirical support.
Affiliation(s)
- Xiying Dong
- Department of Orthopedic Surgery, State Key Laboratory of Complex Severe and Rare Diseases, Peking Union Medical College Hospital, Peking Union Medical College and Chinese Academy of Medical Sciences, Beijing 100730, China
- Key Laboratory of Big Data for Spinal Deformities, Chinese Academy of Medical Sciences, Beijing 100730, China
- Peking Union Medical College Hospital, Peking Union Medical College and Chinese Academy of Medical Sciences, Beijing, China
- Department of Urology, National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, 100021 Beijing, China
- Guilin Chen
- Department of Orthopedic Surgery, State Key Laboratory of Complex Severe and Rare Diseases, Peking Union Medical College Hospital, Peking Union Medical College and Chinese Academy of Medical Sciences, Beijing 100730, China
- Key Laboratory of Big Data for Spinal Deformities, Chinese Academy of Medical Sciences, Beijing 100730, China
- Graduate School of Peking Union Medical College, Beijing 100730, China
- Yuanpeng Zhu
- Department of Orthopedic Surgery, State Key Laboratory of Complex Severe and Rare Diseases, Peking Union Medical College Hospital, Peking Union Medical College and Chinese Academy of Medical Sciences, Beijing 100730, China
- Key Laboratory of Big Data for Spinal Deformities, Chinese Academy of Medical Sciences, Beijing 100730, China
- Graduate School of Peking Union Medical College, Beijing 100730, China
- Boyuan Ma
- School of Intelligence Science and Technology, University of Science and Technology Beijing, Beijing, China
- Xiaojuan Ban
- School of Intelligence Science and Technology, University of Science and Technology Beijing, Beijing, China
- Nan Wu
- Department of Orthopedic Surgery, State Key Laboratory of Complex Severe and Rare Diseases, Peking Union Medical College Hospital, Peking Union Medical College and Chinese Academy of Medical Sciences, Beijing 100730, China
- Key Laboratory of Big Data for Spinal Deformities, Chinese Academy of Medical Sciences, Beijing 100730, China
- Beijing Key Laboratory for Genetic Research of Skeletal Deformity, Beijing 100730, China
- Yue Ming
- Department of Nuclear Medicine (PET-CT Center), National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing 100021, China
2. Belue MJ, Harmon SA, Yang D, An JY, Gaur S, Law YM, Turkbey E, Xu Z, Tetreault J, Lay NS, Yilmaz EC, Phelps TE, Simon B, Lindenberg L, Mena E, Pinto PA, Bagci U, Wood BJ, Citrin DE, Dahut WL, Madan RA, Gulley JL, Xu D, Choyke PL, Turkbey B. Deep Learning-Based Detection and Classification of Bone Lesions on Staging Computed Tomography in Prostate Cancer: A Development Study. Acad Radiol 2024; 31:2424-2433. [PMID: 38262813] [PMCID: PMC11214604] [DOI: 10.1016/j.acra.2024.01.009] [Received: 11/06/2023] [Revised: 01/02/2024] [Accepted: 01/04/2024]
Abstract
RATIONALE AND OBJECTIVES Efficiently detecting and characterizing metastatic bone lesions on staging CT is crucial for prostate cancer (PCa) care, but it demands significant expert time and additional imaging such as PET/CT. We aimed to develop an ensemble of two automated deep learning AI models for (1) bone lesion detection and segmentation and (2) benign vs. metastatic lesion classification on staging CTs, and to compare its performance with radiologists. MATERIALS AND METHODS This retrospective study developed two AI models using 297 staging CT scans (81 metastatic) with 4601 benign and 1911 metastatic lesions in PCa patients. Metastases were validated by follow-up scans, bone biopsy, or PET/CT. The segmentation AI (3DAISeg) was developed using lesion contours delineated by a radiologist; its performance was evaluated with the Dice similarity coefficient. The classification AI (3DAIClass) was assessed on AI and radiologist contours with the F1-score and accuracy. Training/validation/testing data partitions of 70:15:15 were used. A multi-reader study was performed with two junior and two senior radiologists within a subset of the testing dataset (n = 36). RESULTS In 45 unseen staging CT scans (12 metastatic PCa) with 669 benign and 364 metastatic lesions, 3DAISeg detected 73.1% of metastatic (266/364) and 72.4% of benign lesions (484/669). Each scan averaged 12 extra segmentations (range: 1-31). All metastatic scans had at least one detected metastatic lesion, achieving 100% patient-level detection. The mean Dice score for 3DAISeg was 0.53 (median: 0.59, range: 0-0.87). The F1-score for 3DAIClass was 94.8% (radiologist contours) and 92.4% (3DAISeg contours), with a median false-positive count of 0 (range: 0-3). Using radiologist contours, 3DAIClass had PPV and NPV rates comparable to junior and senior radiologists: PPV (semi-automated AI 40.0% vs. juniors 32.0% vs. seniors 50.0%) and NPV (AI 96.2% vs. juniors 95.7% vs. seniors 91.9%). When using 3DAISeg contours, 3DAIClass mimicked junior radiologists in PPV (pure AI 20.0% vs. juniors 32.0% vs. seniors 50.0%) but surpassed seniors in NPV (pure AI 93.8% vs. juniors 95.7% vs. seniors 91.9%). CONCLUSION Our lesion detection and classification AI model performs on par with junior and senior radiologists in discerning benign and metastatic lesions on staging CTs obtained for PCa.
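The evaluation metrics quoted in this abstract (Dice similarity coefficient, PPV, NPV) are standard; as a generic illustration (not the authors' code), they can be computed from binary masks and confusion counts as follows:

```python
import numpy as np

def dice_coefficient(pred_mask, true_mask):
    """Dice similarity coefficient between two binary lesion masks."""
    pred = np.asarray(pred_mask, dtype=bool)
    true = np.asarray(true_mask, dtype=bool)
    total = pred.sum() + true.sum()
    if total == 0:                      # both masks empty: perfect agreement
        return 1.0
    return 2.0 * np.logical_and(pred, true).sum() / total

def ppv_npv(tp, fp, tn, fn):
    """Positive and negative predictive values from confusion counts."""
    ppv = tp / (tp + fp) if (tp + fp) else float("nan")
    npv = tn / (tn + fn) if (tn + fn) else float("nan")
    return ppv, npv
```

For example, 40 true positives against 60 false positives yields the ~40% PPV range reported for the semi-automated approach.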
Affiliation(s)
- Mason J Belue
- Molecular Imaging Branch, National Cancer Institute, National Institutes of Health, 10 Center Dr., MSC 1182, Building 10, Room B3B85, Bethesda, Maryland, USA (M.J.B., S.A.H., N.S.L., E.C.Y., T.E.P., B.S., L.L., E.M., P.L.C., B.T.)
- Stephanie A Harmon
- Molecular Imaging Branch, National Cancer Institute, National Institutes of Health, 10 Center Dr., MSC 1182, Building 10, Room B3B85, Bethesda, Maryland, USA (M.J.B., S.A.H., N.S.L., E.C.Y., T.E.P., B.S., L.L., E.M., P.L.C., B.T.)
- Dong Yang
- NVIDIA Corporation, Santa Clara, California, USA (D.Y., Z.X., J.T., D.X.)
- Julie Y An
- Department of Radiology, University of California, San Diego, California, USA (J.Y.A.)
- Sonia Gaur
- Department of Radiology, Massachusetts General Hospital, Boston, Massachusetts, USA (S.G.)
- Yan Mee Law
- Department of Radiology, Singapore General Hospital, Singapore (Y.M.L.)
- Evrim Turkbey
- Department of Radiology, Clinical Center, National Institutes of Health, Bethesda, Maryland, USA (E.T., B.J.W.)
- Ziyue Xu
- NVIDIA Corporation, Santa Clara, California, USA (D.Y., Z.X., J.T., D.X.)
- Jesse Tetreault
- NVIDIA Corporation, Santa Clara, California, USA (D.Y., Z.X., J.T., D.X.)
- Nathan S Lay
- Molecular Imaging Branch, National Cancer Institute, National Institutes of Health, 10 Center Dr., MSC 1182, Building 10, Room B3B85, Bethesda, Maryland, USA (M.J.B., S.A.H., N.S.L., E.C.Y., T.E.P., B.S., L.L., E.M., P.L.C., B.T.)
- Enis C Yilmaz
- Molecular Imaging Branch, National Cancer Institute, National Institutes of Health, 10 Center Dr., MSC 1182, Building 10, Room B3B85, Bethesda, Maryland, USA (M.J.B., S.A.H., N.S.L., E.C.Y., T.E.P., B.S., L.L., E.M., P.L.C., B.T.)
- Tim E Phelps
- Molecular Imaging Branch, National Cancer Institute, National Institutes of Health, 10 Center Dr., MSC 1182, Building 10, Room B3B85, Bethesda, Maryland, USA (M.J.B., S.A.H., N.S.L., E.C.Y., T.E.P., B.S., L.L., E.M., P.L.C., B.T.)
- Benjamin Simon
- Molecular Imaging Branch, National Cancer Institute, National Institutes of Health, 10 Center Dr., MSC 1182, Building 10, Room B3B85, Bethesda, Maryland, USA (M.J.B., S.A.H., N.S.L., E.C.Y., T.E.P., B.S., L.L., E.M., P.L.C., B.T.)
- Liza Lindenberg
- Molecular Imaging Branch, National Cancer Institute, National Institutes of Health, 10 Center Dr., MSC 1182, Building 10, Room B3B85, Bethesda, Maryland, USA (M.J.B., S.A.H., N.S.L., E.C.Y., T.E.P., B.S., L.L., E.M., P.L.C., B.T.)
- Esther Mena
- Molecular Imaging Branch, National Cancer Institute, National Institutes of Health, 10 Center Dr., MSC 1182, Building 10, Room B3B85, Bethesda, Maryland, USA (M.J.B., S.A.H., N.S.L., E.C.Y., T.E.P., B.S., L.L., E.M., P.L.C., B.T.)
- Peter A Pinto
- Urologic Oncology Branch, National Cancer Institute, National Institutes of Health, Bethesda, Maryland, USA (P.A.P.)
- Ulas Bagci
- Radiology and Biomedical Engineering Department, Northwestern University Feinberg School of Medicine, Chicago, Illinois, USA (U.B.)
- Bradford J Wood
- Department of Radiology, Clinical Center, National Institutes of Health, Bethesda, Maryland, USA (E.T., B.J.W.); Center for Interventional Oncology, National Cancer Institute, National Institutes of Health, Bethesda, Maryland, USA (B.J.W.)
- Deborah E Citrin
- Radiation Oncology Branch, National Cancer Institute, National Institutes of Health, Bethesda, Maryland, USA (D.E.C.)
- William L Dahut
- Genitourinary Malignancies Branch, National Cancer Institute, National Institutes of Health, Bethesda, Maryland, USA (W.L.D., R.A.M.)
- Ravi A Madan
- Genitourinary Malignancies Branch, National Cancer Institute, National Institutes of Health, Bethesda, Maryland, USA (W.L.D., R.A.M.)
- James L Gulley
- Center for Immuno-Oncology, National Cancer Institute, National Institutes of Health, Bethesda, Maryland, USA (J.L.G.)
- Daguang Xu
- NVIDIA Corporation, Santa Clara, California, USA (D.Y., Z.X., J.T., D.X.)
- Peter L Choyke
- Molecular Imaging Branch, National Cancer Institute, National Institutes of Health, 10 Center Dr., MSC 1182, Building 10, Room B3B85, Bethesda, Maryland, USA (M.J.B., S.A.H., N.S.L., E.C.Y., T.E.P., B.S., L.L., E.M., P.L.C., B.T.)
- Baris Turkbey
- Molecular Imaging Branch, National Cancer Institute, National Institutes of Health, 10 Center Dr., MSC 1182, Building 10, Room B3B85, Bethesda, Maryland, USA (M.J.B., S.A.H., N.S.L., E.C.Y., T.E.P., B.S., L.L., E.M., P.L.C., B.T.)
3. Xu K, Kang H. A Review of Machine Learning Approaches for Brain Positron Emission Tomography Data Analysis. Nucl Med Mol Imaging 2024; 58:203-212. [PMID: 38932757] [PMCID: PMC11196571] [DOI: 10.1007/s13139-024-00845-6] [Received: 10/16/2023] [Revised: 01/19/2024] [Accepted: 01/25/2024]
Abstract
Positron emission tomography (PET) imaging has advanced medical diagnostics and research across various domains, including cardiology, neurology, infection detection, and oncology. The integration of machine learning (ML) algorithms into PET data analysis has further enhanced its capabilities in disease diagnosis and classification, image segmentation, and quantitative analysis. ML algorithms empower researchers and clinicians to extract valuable insights from complex, large PET datasets, enabling automated pattern recognition, predictive health-outcome modeling, and more efficient data analysis. This review explains the basics of PET imaging, statistical methods for PET image analysis, and the challenges of PET data analysis. We also discuss how combining PET data with machine learning algorithms improves analysis capabilities, and the application of this combination in various aspects of PET image research. The review also highlights current trends and future directions in PET imaging, emphasizing the driving and critical role of machine learning and big PET image data analytics in improving diagnostic accuracy and personalizing medical approaches. The integration of PET imaging and machine learning will shape the future of medical diagnosis and research.
Affiliation(s)
- Ke Xu
- Department of Biostatistics, Vanderbilt University Medical Center, 2525 West End Avenue, Suite 1100, Nashville, TN 37203 USA
- Hakmook Kang
- Department of Biostatistics, Vanderbilt University Medical Center, 2525 West End Avenue, Suite 1100, Nashville, TN 37203 USA
4. Yeom YS, Braunstein L, Morton LM, Bolton KL, Choi JW, Choi HY, Greenstein N, Lee C. A novel method for rapid estimation of active bone marrow dose for radiotherapy patients in epidemiological studies. Med Phys 2024; 51:4472-4481. [PMID: 38734989] [DOI: 10.1002/mp.17118] [Received: 10/07/2023] [Revised: 03/21/2024] [Accepted: 04/18/2024]
Abstract
BACKGROUND In a dedicated effort to improve the assessment of clonal hematopoiesis (CH) and study leukemia risk following radiotherapy, we are developing a large-scale cohort study among cancer patients who received radiation. To that end, it will be critical to analyze dosimetric parameters of active bone marrow (ABM) exposure in relation to CH and its progression to myeloid neoplasms, which requires a method to reconstruct ABM doses for large numbers of patients rapidly and accurately. PURPOSE To support a large-scale cohort study on the assessment of clonal hematopoiesis and leukemia risk following radiotherapy, we present a new method for the rapid reconstruction of ABM doses among cancer patients who received radiotherapy. METHODS The key idea of the presented method is to segment patient bones rapidly and automatically by matching a whole-body computational human phantom, in which the skeletal system is divided into 34 bone sites, to patient CT images via 3D skeletal registration. This automatic approach was used to segment site-specific bones for 40 radiotherapy patients; for comparison, we also segmented the bones manually. The bones segmented both manually and automatically were then combined with the patient dose matrix calculated by the treatment planning system (TPS) to derive the patient ABM dose. We evaluated the geometric and dosimetric accuracy of the automatic method by comparison with the manual approach. RESULTS The pelvis showed the best geometric performance [volume overlap fraction (VOF): 52% (mean) with 23% (σ); average distance (AD): 0.8 cm (mean) with 0.5 cm (σ)] and also the best dosimetric performance [absorbed dose difference (ADD): 0.7 Gy (mean) with 1.0 Gy (σ)]. Some bones, such as the cervical vertebrae, showed unsatisfactory performance [ADD: 5.2 Gy (mean) with 10.8 Gy (σ)]; the impact on the total ABM dose, however, was not significant, and an excellent agreement for the total ABM dose was observed [ADD: 0.4 Gy (mean) with 0.4 Gy (σ)]. The computation time required for dose calculation using our method was short (about one minute per patient). CONCLUSIONS We confirmed that our method estimates ABM doses across treatment sites accurately while providing high computational efficiency. The method will be used to reconstruct patient-specific ABM doses for dose-response assessment in a large cohort study. It can also be applied to prospective dose calculation within a clinical TPS to support clinical decision making at the point of care.
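The core dose-reconstruction step described above, combining a bone segmentation mask with the TPS dose matrix, can be sketched as follows. This is a simplified illustration with assumed inputs, not the authors' implementation; in particular, the VOF is assumed here to be the intersection volume relative to the manual reference, and `marrow_fraction` is a hypothetical weighting for active marrow content:

```python
import numpy as np

def mean_abm_dose(dose_grid, bone_mask, marrow_fraction=1.0):
    """Mean absorbed dose (Gy) over voxels flagged as active bone marrow.

    dose_grid       : 3D array of voxel doses from the TPS
    bone_mask       : 3D boolean array from (manual or automatic) bone segmentation
    marrow_fraction : scalar or per-voxel weight for active marrow content
    """
    weights = np.asarray(bone_mask, float) * marrow_fraction
    total = weights.sum()
    return float((dose_grid * weights).sum() / total) if total else 0.0

def volume_overlap_fraction(auto_mask, manual_mask):
    """Overlap of the automatic segmentation with the manual reference."""
    inter = np.logical_and(auto_mask, manual_mask).sum()
    ref = np.asarray(manual_mask, bool).sum()
    return inter / ref if ref else 0.0
```

The absorbed dose difference reported per bone site would then be the difference between `mean_abm_dose` evaluated on the automatic and on the manual mask.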
Affiliation(s)
- Yeon Soo Yeom
- Department of Radiation Convergence Engineering, Yonsei University, Wonju, Gangwon, Republic of Korea
- Division of Cancer Epidemiology and Genetics, National Cancer Institute, National Institutes of Health, Rockville, Maryland, USA
- Lior Braunstein
- Department of Radiation Oncology, Memorial Sloan Kettering Cancer Center, New York, New York, USA
- Lindsay M Morton
- Division of Cancer Epidemiology and Genetics, National Cancer Institute, National Institutes of Health, Rockville, Maryland, USA
- Kelly L Bolton
- Department of Medicine, Washington University School of Medicine, St. Louis, Missouri, USA
- Ji Won Choi
- Department of Radiation Convergence Engineering, Yonsei University, Wonju, Gangwon, Republic of Korea
- Hyeong Yun Choi
- Division of Cancer Epidemiology and Genetics, National Cancer Institute, National Institutes of Health, Rockville, Maryland, USA
- Choonsik Lee
- Division of Cancer Epidemiology and Genetics, National Cancer Institute, National Institutes of Health, Rockville, Maryland, USA
5. Li S, Wang H, Meng Y, Zhang C, Song Z. Multi-organ segmentation: a progressive exploration of learning paradigms under scarce annotation. Phys Med Biol 2024; 69:11TR01. [PMID: 38479023] [DOI: 10.1088/1361-6560/ad33b5] [Received: 06/29/2023] [Accepted: 03/13/2024]
Abstract
Precise delineation of multiple organs or abnormal regions in the human body from medical images plays an essential role in computer-aided diagnosis, surgical simulation, image-guided interventions, and especially in radiotherapy treatment planning. Thus, it is of great significance to explore automatic segmentation approaches, among which deep learning-based approaches have evolved rapidly and witnessed remarkable progress in multi-organ segmentation. However, obtaining an appropriately sized and fine-grained annotated dataset of multiple organs is extremely hard and expensive. Such scarce annotation limits the development of high-performance multi-organ segmentation models but has promoted many annotation-efficient learning paradigms. Among these, transfer learning leveraging external datasets, semi-supervised learning incorporating unannotated datasets, and partially-supervised learning integrating partially-labeled datasets have become the dominant approaches to overcoming this dilemma in multi-organ segmentation. We first review the fully supervised method, then present a comprehensive and systematic elaboration of the three aforementioned learning paradigms in the context of multi-organ segmentation from both technical and methodological perspectives, and finally summarize their challenges and future trends.
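As a generic illustration of the partially-supervised paradigm mentioned above (a sketch under assumed inputs, not code from the reviewed works): when a dataset annotates only a subset of organs, the loss is typically restricted to voxels whose ground-truth class belongs to the annotated subset, so unlabeled organs do not contribute a misleading "background" penalty:

```python
import numpy as np

def partial_cross_entropy(probs, labels, annotated_classes):
    """Cross-entropy computed only on voxels of annotated classes.

    probs             : (N, C) per-voxel softmax outputs
    labels            : (N,) integer ground-truth class per voxel
    annotated_classes : classes actually labeled in this partial dataset
    """
    keep = np.isin(labels, annotated_classes)
    if not keep.any():
        return 0.0
    p = probs[keep, labels[keep]]        # probability assigned to the true class
    return float(-np.log(np.clip(p, 1e-12, None)).mean())
```

Training on several partially-labeled datasets then amounts to calling this loss with each dataset's own `annotated_classes`.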
Affiliation(s)
- Shiman Li
- Digital Medical Research Center, School of Basic Medical Science, Fudan University, Shanghai Key Lab of Medical Image Computing and Computer Assisted Intervention, Shanghai 200032, People's Republic of China
- Haoran Wang
- Digital Medical Research Center, School of Basic Medical Science, Fudan University, Shanghai Key Lab of Medical Image Computing and Computer Assisted Intervention, Shanghai 200032, People's Republic of China
- Yucong Meng
- Digital Medical Research Center, School of Basic Medical Science, Fudan University, Shanghai Key Lab of Medical Image Computing and Computer Assisted Intervention, Shanghai 200032, People's Republic of China
- Chenxi Zhang
- Digital Medical Research Center, School of Basic Medical Science, Fudan University, Shanghai Key Lab of Medical Image Computing and Computer Assisted Intervention, Shanghai 200032, People's Republic of China
- Zhijian Song
- Digital Medical Research Center, School of Basic Medical Science, Fudan University, Shanghai Key Lab of Medical Image Computing and Computer Assisted Intervention, Shanghai 200032, People's Republic of China
6. Yawson AK, Walter A, Wolf N, Klüter S, Hoegen P, Adeberg S, Debus J, Frank M, Jäkel O, Giske K. Essential parameters needed for a U-Net-based segmentation of individual bones on planning CT images in the head and neck region using limited datasets for radiotherapy application. Phys Med Biol 2024; 69:035008. [PMID: 38164988] [DOI: 10.1088/1361-6560/ad1996] [Received: 06/26/2023] [Accepted: 12/29/2023]
Abstract
Objective. The field of radiotherapy is strongly marked by a lack of datasets, even with the availability of public datasets. Our study uses a very limited dataset to provide insights into the essential parameters needed to automatically and accurately segment individual bones on planning CT images of head and neck cancer patients. Approach. The study was conducted using 30 planning CT images of real patients acquired from 5 different cohorts. 15 cases from 4 cohorts were randomly selected as training and validation datasets, while the remaining cases were used as test datasets. Four experimental sets were formulated to explore parameters such as background patch reduction, class-dependent augmentation and the incorporation of a weight map into the loss function. Main results. Our best experimental scenario resulted in a mean Dice score of 0.93 ± 0.06 for other bones (skull, mandible, scapulae, clavicles, humeri and hyoid), 0.93 ± 0.02 for ribs and 0.88 ± 0.03 for vertebrae on 7 test cases from the same cohorts as the training datasets. We compared our proposed solution to a retrained nnU-Net and obtained comparable results for vertebral bones, while outperforming it in the correct identification of the left and right instances of ribs, scapulae, humeri and clavicles. Furthermore, we evaluated the generalization capability of our proposed model on a new cohort: the mean Dice score was 0.96 ± 0.10 for other bones, 0.95 ± 0.07 for ribs and 0.81 ± 0.19 for vertebrae on 8 test cases. Significance. With these insights, we advocate the integration of an automatic and accurate bone segmentation tool into the clinical routine of radiotherapy despite the limited training datasets.
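The "weight map on the loss function" explored in this study can be illustrated with a per-voxel weighted soft Dice loss. This is a minimal sketch with assumed inputs, not the authors' exact formulation; the weight map could, for instance, be boosted near bone boundaries:

```python
import numpy as np

def weighted_soft_dice_loss(pred, target, weight_map):
    """Soft Dice loss with per-voxel weights emphasising hard regions.

    pred       : predicted foreground probabilities, flattened
    target     : binary ground truth, flattened
    weight_map : per-voxel weights (e.g. larger near class boundaries)
    """
    w = np.asarray(weight_map, float)
    inter = (w * pred * target).sum()
    denom = (w * pred).sum() + (w * target).sum()
    if denom == 0:
        return 0.0                      # nothing to segment, no penalty
    return 1.0 - 2.0 * inter / denom
```

With uniform weights this reduces to the ordinary soft Dice loss; non-uniform weights shift the optimum toward the highly weighted voxels.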
Affiliation(s)
- Ama Katseena Yawson
- German Cancer Research Center (DKFZ), Division of Medical Physics in Radiation Oncology, Heidelberg, Germany
- Heidelberg Institute for Radiation Oncology (HIRO), National Center for Radiation Research in Oncology (NCRO), Heidelberg, Germany
- Heidelberg University, Medical Faculty, Heidelberg, Germany
- Alexandra Walter
- German Cancer Research Center (DKFZ), Division of Medical Physics in Radiation Oncology, Heidelberg, Germany
- Karlsruhe Institute of Technology (KIT), Department of Mathematics, Karlsruhe, Germany
- Nora Wolf
- German Cancer Research Center (DKFZ), Division of Medical Physics in Radiation Oncology, Heidelberg, Germany
- Heidelberg University, Faculty of Physics and Astronomy, Heidelberg, Germany
- Sebastian Klüter
- Heidelberg Institute for Radiation Oncology (HIRO), National Center for Radiation Research in Oncology (NCRO), Heidelberg, Germany
- University Hospital Heidelberg, Department of Radiation Oncology, Heidelberg, Germany
- Philip Hoegen
- Heidelberg Institute for Radiation Oncology (HIRO), National Center for Radiation Research in Oncology (NCRO), Heidelberg, Germany
- University Hospital Heidelberg, Department of Radiation Oncology, Heidelberg, Germany
- Jürgen Debus
- Heidelberg Institute for Radiation Oncology (HIRO), National Center for Radiation Research in Oncology (NCRO), Heidelberg, Germany
- University Hospital Heidelberg, Department of Radiation Oncology, Heidelberg, Germany
- Heidelberg Ion Therapy Center (HIT), Heidelberg, Germany
- Martin Frank
- Karlsruhe Institute of Technology (KIT), Department of Mathematics, Karlsruhe, Germany
- Oliver Jäkel
- German Cancer Research Center (DKFZ), Division of Medical Physics in Radiation Oncology, Heidelberg, Germany
- Heidelberg Institute for Radiation Oncology (HIRO), National Center for Radiation Research in Oncology (NCRO), Heidelberg, Germany
- Heidelberg Ion Therapy Center (HIT), Heidelberg, Germany
- Kristina Giske
- German Cancer Research Center (DKFZ), Division of Medical Physics in Radiation Oncology, Heidelberg, Germany
- Heidelberg Institute for Radiation Oncology (HIRO), National Center for Radiation Research in Oncology (NCRO), Heidelberg, Germany
7. Walter A, Hoegen-Saßmannshausen P, Stanic G, Rodrigues JP, Adeberg S, Jäkel O, Frank M, Giske K. Segmentation of 71 Anatomical Structures Necessary for the Evaluation of Guideline-Conforming Clinical Target Volumes in Head and Neck Cancers. Cancers (Basel) 2024; 16:415. [PMID: 38254904] [PMCID: PMC11154560] [DOI: 10.3390/cancers16020415] [Received: 11/28/2023] [Revised: 12/28/2023] [Accepted: 01/08/2024]
Abstract
The delineation of clinical target volumes (CTVs) for radiation therapy is time-consuming, requires intensive training and shows high inter-observer variability. Supervised deep-learning methods depend heavily on consistent training data; thus, state-of-the-art research focuses on making CTV labels more homogeneous and strictly bounding them to current standards. International consensus expert guidelines standardize CTV delineation by conditioning the extension of the clinical target volume on the surrounding anatomical structures. Training strategies that directly follow the construction rules given in the expert guidelines, and the possibility of quantifying the conformance of manually drawn contours to the guidelines, are still missing. Seventy-one anatomical structures that are relevant to CTV delineation in head and neck cancer patients, according to the expert guidelines, were segmented on 104 computed tomography scans to assess the possibility of automating their segmentation with state-of-the-art deep learning methods. All 71 anatomical structures were subdivided into three subsets of non-overlapping structures, and a 3D nnU-Net model with five-fold cross-validation was trained for each subset to automatically segment the structures on planning computed tomography scans. We report the Dice score, Hausdorff distance and surface Dice (sDICE) for 71 + 5 anatomical structures, for most of which no previous segmentation accuracies have been reported. For those structures for which prediction values have been reported, our segmentation accuracy matched or exceeded the reported values. The predictions from our models were always better than those of the TotalSegmentator. The sDICE with a 2 mm margin was larger than 80% for almost all the structures. Individual structures with decreased segmentation accuracy are analyzed and discussed with respect to their impact on CTV delineation following the expert guidelines. No deviation is expected to affect the rule-based automation of CTV delineation.
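The surface Dice at a 2 mm tolerance reported above can be approximated on voxel grids as follows. This is a brute-force sketch suitable only for small arrays (production code would use a distance-transform library), and the voxel-based border definition is an assumption of this illustration:

```python
import numpy as np

def surface_dice(mask_a, mask_b, spacing, tol_mm=2.0):
    """Voxel-based surface Dice at a distance tolerance (in mm).

    A border voxel is foreground with at least one background face-neighbour.
    The score is the fraction of border voxels of each mask lying within
    tol_mm of the other mask's border.
    """
    def border(m):
        p = np.pad(m, 1)                      # background frame around the mask
        inner = tuple(slice(1, -1) for _ in range(m.ndim))
        has_bg = np.zeros(m.shape, bool)
        for ax in range(m.ndim):
            for step in (-1, 1):              # face neighbours along each axis
                has_bg |= ~np.roll(p, step, axis=ax)[inner]
        return m & has_bg

    ca = np.argwhere(border(mask_a)) * np.asarray(spacing, float)
    cb = np.argwhere(border(mask_b)) * np.asarray(spacing, float)
    if len(ca) == 0 or len(cb) == 0:
        return 1.0 if len(ca) == len(cb) else 0.0
    d = np.linalg.norm(ca[:, None, :] - cb[None, :, :], axis=-1)
    close_a = (d.min(axis=1) <= tol_mm).sum()
    close_b = (d.min(axis=0) <= tol_mm).sum()
    return (close_a + close_b) / (len(ca) + len(cb))
```

Unlike the volumetric Dice score, this metric tolerates small contour deviations within the stated margin, which is why it is favored for guideline-conformance checks.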
Affiliation(s)
- Alexandra Walter
- Department of Medical Physics in Radiation Oncology, German Cancer Research Center (DKFZ), Im Neuenheimer Feld 280, 69120 Heidelberg, Germany; (G.S.); (J.P.R.); (O.J.); (K.G.)
- Heidelberg Institute of Radiation Oncology (HIRO), National Center for Radiation Research in Oncology (NCRO), 69120 Heidelberg, Germany;
- Karlsruhe Institute of Technology (KIT), Scientific Computing Center, Zirkel 2, 76131 Karlsruhe, Germany;
- Philipp Hoegen-Saßmannshausen
- Heidelberg Institute of Radiation Oncology (HIRO), National Center for Radiation Research in Oncology (NCRO), 69120 Heidelberg, Germany
- Department of Radiation Oncology, Heidelberg University Hospital, 69120 Heidelberg, Germany
- Clinical Cooperation Unit Radiation Oncology, German Cancer Research Center (DKFZ), 69120 Heidelberg, Germany
- National Center for Tumor Diseases (NCT), NCT Heidelberg, 69120 Heidelberg, Germany
- Goran Stanic
- Department of Medical Physics in Radiation Oncology, German Cancer Research Center (DKFZ), Im Neuenheimer Feld 280, 69120 Heidelberg, Germany; (G.S.); (J.P.R.); (O.J.); (K.G.)
- Heidelberg Institute of Radiation Oncology (HIRO), National Center for Radiation Research in Oncology (NCRO), 69120 Heidelberg, Germany
- Faculty of Physics and Astronomy, University of Heidelberg, 69120 Heidelberg, Germany
- Joao Pedro Rodrigues
- Department of Medical Physics in Radiation Oncology, German Cancer Research Center (DKFZ), Im Neuenheimer Feld 280, 69120 Heidelberg, Germany; (G.S.); (J.P.R.); (O.J.); (K.G.)
- Heidelberg Institute of Radiation Oncology (HIRO), National Center for Radiation Research in Oncology (NCRO), 69120 Heidelberg, Germany
- Sebastian Adeberg
- Department of Radiotherapy and Radiation Oncology, Marburg University Hospital, 35043 Marburg, Germany
- Marburg Ion-Beam Therapy Center (MIT), 35043 Marburg, Germany
- Universitäres Centrum für Tumorerkrankungen (UCT), 35033 Marburg, Germany
- Oliver Jäkel
- Department of Medical Physics in Radiation Oncology, German Cancer Research Center (DKFZ), Im Neuenheimer Feld 280, 69120 Heidelberg, Germany; (G.S.); (J.P.R.); (O.J.); (K.G.)
- Heidelberg Institute of Radiation Oncology (HIRO), National Center for Radiation Research in Oncology (NCRO), 69120 Heidelberg, Germany
- Heidelberg Ion-Beam Therapy Center (HIT), 69120 Heidelberg, Germany
- Martin Frank
- Karlsruhe Institute of Technology (KIT), Scientific Computing Center, Zirkel 2, 76131 Karlsruhe, Germany
- Kristina Giske
- Department of Medical Physics in Radiation Oncology, German Cancer Research Center (DKFZ), Im Neuenheimer Feld 280, 69120 Heidelberg, Germany; (G.S.); (J.P.R.); (O.J.); (K.G.)
- Heidelberg Institute of Radiation Oncology (HIRO), National Center for Radiation Research in Oncology (NCRO), 69120 Heidelberg, Germany
8. Roest C, Kloet RW, Lamers MJ, Yakar D, Kwee TC. Focused view CT angiography for selective visualization of stroke related arteries: technical feasibility. Eur Radiol 2023; 33:9099-9108. [PMID: 37438639] [PMCID: PMC10667412] [DOI: 10.1007/s00330-023-09904-6] [Received: 10/07/2022] [Revised: 04/18/2023] [Accepted: 05/02/2023]
Abstract
OBJECTIVES This study investigated the technical feasibility of focused view CTA for the selective visualization of stroke-related arteries. METHODS A total of 141 CTA examinations for acute ischemic stroke evaluation were divided into a set of 100 cases to train a deep learning algorithm (dubbed "focused view CTA") that selectively extracts the brain (including intracranial arteries) and extracranial arteries, and a test set of 41 cases. The visibility of anatomic structures at focused view and unmodified CTA was assessed using the following scoring system: 5 = completely visible, diagnostically sufficient; 4 = nearly completely visible, diagnostically sufficient; 3 = incompletely visible, barely diagnostically sufficient; 2 = hardly visible, diagnostically insufficient; 1 = not visible, diagnostically insufficient. RESULTS At focused view CTA, median scores for the aortic arch, subclavian arteries, common carotid arteries, C1, C6, and C7 segments of the internal carotid arteries, V4 segment of the vertebral arteries, basilar artery, cerebellum including cerebellar arteries, cerebrum including cerebral arteries, and dural venous sinuses were all 4. Median scores for the C2 to C5 segments of the internal carotid arteries and the V1 to V3 segments of the vertebral arteries ranged between 2 and 3. At unmodified CTA, the median score for all above-mentioned anatomic structures was 5, which was significantly higher (p < 0.0001) than at focused view CTA. CONCLUSION Focused view CTA shows promise for the selective visualization of stroke-related arteries. Further improvements should focus on more accurately visualizing the smaller and tortuous internal carotid and vertebral artery segments close to bone. CLINICAL RELEVANCE Focused view CTA may speed up image interpretation time for LVO detection and may potentially be used as a tool to study the clinical relevance of incidental findings in future prospective long-term follow-up studies.
KEY POINTS • A deep learning-based algorithm ("focused view CTA") was developed to selectively visualize relevant structures for acute ischemic stroke evaluation at CTA. • The elimination of unrequested anatomic background information was complete in all cases. • Focused view CTA may be used to study the clinical relevance of incidental findings.
Affiliation(s)
- Christian Roest
- Medical Imaging Center, Department of Radiology, University of Groningen, University Medical Center Groningen, Groningen, the Netherlands
- Reina W Kloet
- Medical Imaging Center, Department of Radiology, University of Groningen, University Medical Center Groningen, Groningen, the Netherlands
- Maria J Lamers
- Medical Imaging Center, Department of Radiology, University of Groningen, University Medical Center Groningen, Groningen, the Netherlands
- Derya Yakar
- Medical Imaging Center, Department of Radiology, University of Groningen, University Medical Center Groningen, Groningen, the Netherlands
- Department of Radiology, Netherlands Cancer Institute, Amsterdam, the Netherlands
- Thomas C Kwee
- Medical Imaging Center, Department of Radiology, University of Groningen, University Medical Center Groningen, Groningen, the Netherlands
9
Schnider E, Wolleb J, Huck A, Toranelli M, Rauter G, Müller-Gerbl M, Cattin PC. Improved distinct bone segmentation in upper-body CT through multi-resolution networks. Int J Comput Assist Radiol Surg 2023; 18:2091-2099. [PMID: 37338664 PMCID: PMC10589171 DOI: 10.1007/s11548-023-02957-4] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/10/2023] [Accepted: 05/09/2023] [Indexed: 06/21/2023]
Abstract
PURPOSE Automated distinct bone segmentation from CT scans is widely used in planning and navigation workflows. U-Net variants are known to provide excellent results in supervised semantic segmentation. However, in distinct bone segmentation from upper-body CTs a large field of view and a computationally taxing 3D architecture are required. This leads to low-resolution results lacking detail or localisation errors due to missing spatial context when using high-resolution inputs. METHODS We propose to solve this problem by using end-to-end trainable segmentation networks that combine several 3D U-Nets working at different resolutions. Our approach, which extends and generalizes HookNet and MRN, captures spatial information at a lower resolution and skips the encoded information to the target network, which operates on smaller high-resolution inputs. We evaluated our proposed architecture against single-resolution networks and performed an ablation study on information concatenation and the number of context networks. RESULTS Our proposed best network achieves a median DSC of 0.86 taken over all 125 segmented bone classes and reduces the confusion among similar-looking bones in different locations. These results outperform our previously published 3D U-Net baseline results on the task and distinct bone segmentation results reported by other groups. CONCLUSION The presented multi-resolution 3D U-Nets address current shortcomings in bone segmentation from upper-body CT scans by allowing for capturing a larger field of view while avoiding the cubic growth of the input pixels and intermediate computations that quickly outgrow the computational capacities in 3D. The approach thus improves the accuracy and efficiency of distinct bone segmentation from upper-body CT.
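The multi-resolution scheme described above pairs a high-resolution target patch with a downsampled context patch covering a larger field of view around the same location. A minimal sketch of how such training patches might be sampled (the function name, patch sizes, and factor-2 downsampling are illustrative assumptions, not the authors' implementation):

```python
import numpy as np

def extract_patch_pair(volume, center, target_size=32, context_size=64, factor=2):
    """Return (high-res target patch, low-res context patch) centered on `center`.

    The context patch covers a field of view `factor` times larger than the
    target patch and is then downsampled by `factor`, so both patches end up
    with the same voxel count per axis -- the idea behind feeding a context
    network alongside a high-resolution target network.
    """
    def crop(size):
        half = size // 2
        return volume[tuple(slice(c - half, c + half) for c in center)]

    target = crop(target_size)                              # small FOV, full resolution
    context_full = crop(context_size)                       # large FOV, full resolution
    context = context_full[::factor, ::factor, ::factor]    # downsampled context
    return target, context
```

In the architectures extended by this work, the encoded features of the context branch are then skipped into the target branch rather than merely concatenated at the input.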
Affiliation(s)
- Eva Schnider
- Department of Biomedical Engineering, University of Basel, Hegenheimermattweg 167B, 4123, Allschwil, Switzerland
- Julia Wolleb
- Department of Biomedical Engineering, University of Basel, Hegenheimermattweg 167B, 4123, Allschwil, Switzerland
- Antal Huck
- Department of Biomedical Engineering, University of Basel, Hegenheimermattweg 167B, 4123, Allschwil, Switzerland
- Mireille Toranelli
- Department of Biomedicine, Musculoskeletal Research, University of Basel, Basel, Switzerland
- Georg Rauter
- Department of Biomedical Engineering, University of Basel, Hegenheimermattweg 167B, 4123, Allschwil, Switzerland
- Magdalena Müller-Gerbl
- Department of Biomedicine, Musculoskeletal Research, University of Basel, Basel, Switzerland
- Philippe C Cattin
- Department of Biomedical Engineering, University of Basel, Hegenheimermattweg 167B, 4123, Allschwil, Switzerland
10
Kwolek K, Grzelecki D, Kwolek K, Marczak D, Kowalczewski J, Tyrakowski M. Automated patellar height assessment on high-resolution radiographs with a novel deep learning-based approach. World J Orthop 2023; 14:387-398. [PMID: 37377994 PMCID: PMC10292056 DOI: 10.5312/wjo.v14.i6.387] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 02/15/2023] [Revised: 04/06/2023] [Accepted: 05/06/2023] [Indexed: 06/19/2023] Open
Abstract
BACKGROUND Artificial intelligence and deep learning have shown promising results in medical imaging and in interpreting radiographs. Moreover, the medical community shows growing interest in automating routine diagnostic tasks and orthopedic measurements.
AIM To verify the accuracy of automated patellar height assessment using a deep learning-based bone segmentation and detection approach on high-resolution radiographs.
METHODS A total of 218 lateral knee radiographs were included in the analysis. Of these, 82 radiographs were used for training and 10 for validation of a U-Net neural network to achieve the required Dice score. Another 92 radiographs were used for automatic (U-Net) and manual measurement of patellar height, quantified by the Caton-Deschamps (CD) and Blackburne-Peel (BP) indexes. The required bone regions on the high-resolution images were detected using a You Only Look Once (YOLO) neural network. Agreement between manual and automatic measurements was calculated using the intraclass correlation coefficient (ICC) and the standard error of a single measurement (SEM). To check the U-Net's generalization, segmentation accuracy on the test set was also calculated.
RESULTS The proximal tibia and patella were segmented with 95.9% accuracy (Dice score) by the U-Net on lateral knee subimages automatically detected by the YOLO network (mean average precision, mAP > 0.96). The mean CD and BP indexes calculated by the two orthopedic surgeons (R#1 and R#2) were 0.93 (± 0.19) and 0.89 (± 0.19) for CD and 0.80 (± 0.17) and 0.78 (± 0.17) for BP. The automatic measurements produced by our algorithm were 0.92 (± 0.21) for CD and 0.75 (± 0.19) for BP. Excellent agreement was achieved between the orthopedic surgeons' measurements and the algorithm's results (ICC > 0.75, SEM < 0.014).
CONCLUSION Automatic patellar height assessment can be achieved on high-resolution radiographs with the required accuracy. Determining the patellar end-points and fitting the joint line to the proximal tibial joint surface allows accurate CD and BP index calculation. The results indicate that this approach can be a valuable tool in medical practice.
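The Dice score used above to quantify segmentation accuracy follows the standard overlap definition; a minimal sketch (standard formula, not the authors' code):

```python
import numpy as np

def dice_score(pred, ref):
    """Dice = 2|A ∩ B| / (|A| + |B|) for binary masks; 1.0 means perfect overlap."""
    pred, ref = np.asarray(pred).astype(bool), np.asarray(ref).astype(bool)
    denom = pred.sum() + ref.sum()
    if denom == 0:
        return 1.0  # both masks empty: conventionally treated as perfect agreement
    return 2.0 * np.logical_and(pred, ref).sum() / denom
```

The same statistic is reported throughout this listing (e.g. the median DSC of 0.86 over 125 bone classes in the previous entry).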
Affiliation(s)
- Kamil Kwolek
- Department of Spine Disorders and Orthopaedics, Centre of Postgraduate Medical Education, Gruca Orthopaedic and Trauma Teaching Hospital, Otwock 05-400, Poland
- Dariusz Grzelecki
- Department of Orthopaedics and Rheumoorthopedics, Centre of Postgraduate Medical Education, Gruca Orthopaedic and Trauma Teaching Hospital, Otwock 05-400, Poland
- Konrad Kwolek
- Department of Orthopaedics and Traumatology, University Hospital, Krakow 30-663, Poland
- Dariusz Marczak
- Department of Orthopaedics and Rheumoorthopedics, Centre of Postgraduate Medical Education, Gruca Orthopaedic and Trauma Teaching Hospital, Otwock 05-400, Poland
- Jacek Kowalczewski
- Department of Orthopaedics and Rheumoorthopedics, Centre of Postgraduate Medical Education, Gruca Orthopaedic and Trauma Teaching Hospital, Otwock 05-400, Poland
- Marcin Tyrakowski
- Department of Spine Disorders and Orthopaedics, Centre of Postgraduate Medical Education, Gruca Orthopaedic and Trauma Teaching Hospital, Otwock 05-400, Poland
11
Nan L, Tang M, Liang B, Mo S, Kang N, Song S, Zhang X, Zeng X. Automated Sagittal Skeletal Classification of Children Based on Deep Learning. Diagnostics (Basel) 2023; 13:diagnostics13101719. [PMID: 37238203 DOI: 10.3390/diagnostics13101719] [Citation(s) in RCA: 2] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/07/2023] [Revised: 05/02/2023] [Accepted: 05/05/2023] [Indexed: 05/28/2023] Open
Abstract
Malocclusions are a type of cranio-maxillofacial growth and developmental deformity that occurs with high incidence in children. A simple and rapid diagnosis of malocclusions would therefore be of great benefit to future generations. However, the application of deep learning algorithms to the automatic detection of malocclusions in children has not been reported. The aim of this study was therefore to develop a deep learning-based method for automatic classification of the sagittal skeletal pattern in children and to validate its performance, as a first step toward a decision support system for early orthodontic treatment. In this study, four state-of-the-art (SOTA) models were trained and compared using 1613 lateral cephalograms, and the best-performing model, Densenet-121, was selected for subsequent validation. Lateral cephalograms and profile photographs were each used as input to the Densenet-121 model. The models were optimized using transfer learning and data augmentation techniques, and label distribution learning was introduced during model training to address the inevitable label ambiguity between adjacent classes. Five-fold cross-validation was conducted for a comprehensive evaluation of our method. The sensitivity, specificity, and accuracy of the CNN model based on lateral cephalometric radiographs were 83.99%, 92.44%, and 90.33%, respectively. The accuracy of the model with profile photographs was 83.39%. After the addition of label distribution learning, the accuracy of the two CNN models improved to 91.28% and 83.98%, respectively, while overfitting decreased. Previous studies have been based on adult lateral cephalograms; our study is therefore novel in using deep learning with lateral cephalograms and profile photographs obtained from children to achieve high-precision automatic classification of the sagittal skeletal pattern in children.
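Label distribution learning, as used above, replaces one-hot targets with soft distributions over ordinal classes so that adjacent sagittal skeletal classes share probability mass. A minimal sketch (the discretized-Gaussian form and the `sigma` value are assumptions for illustration, not the authors' exact scheme):

```python
import numpy as np

def label_distribution(true_class, n_classes=3, sigma=0.5):
    """Soft label for ordinal classes (e.g. skeletal Class I/II/III).

    Instead of a one-hot target, place a discretized Gaussian over the class
    axis so that classes adjacent to the true one receive some probability
    mass, reflecting label ambiguity between neighbouring patterns.
    """
    classes = np.arange(n_classes)
    weights = np.exp(-0.5 * ((classes - true_class) / sigma) ** 2)
    return weights / weights.sum()  # normalize to a probability distribution
```

The network is then trained against this distribution (e.g. with a KL-divergence or cross-entropy loss) rather than against the hard label.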
Affiliation(s)
- Lan Nan
- College of Stomatology, Guangxi Medical University, Nanning 530021, China
- Min Tang
- College of Stomatology, Guangxi Medical University, Nanning 530021, China
- Bohui Liang
- School of Computer, Electronics and Information, Guangxi University, Nanning 530004, China
- Shuixue Mo
- College of Stomatology, Guangxi Medical University, Nanning 530021, China
- Na Kang
- College of Stomatology, Guangxi Medical University, Nanning 530021, China
- Shaohua Song
- College of Stomatology, Guangxi Medical University, Nanning 530021, China
- Xuejun Zhang
- School of Computer, Electronics and Information, Guangxi University, Nanning 530004, China
- Xiaojuan Zeng
- College of Stomatology, Guangxi Medical University, Nanning 530021, China
- Guangxi Health Commission Key Laboratory of Prevention and Treatment for Oral Infectious Diseases, Nanning 530021, China
- Guangxi Key Laboratory of Oral and Maxillofacial Rehabilitation and Reconstruction, Nanning 530021, China
12
Lindgren Belal S, Larsson M, Holm J, Buch-Olsen KM, Sörensen J, Bjartell A, Edenbrandt L, Trägårdh E. Automated quantification of PET/CT skeletal tumor burden in prostate cancer using artificial intelligence: The PET index. Eur J Nucl Med Mol Imaging 2023; 50:1510-1520. [PMID: 36650356 PMCID: PMC10027829 DOI: 10.1007/s00259-023-06108-4] [Citation(s) in RCA: 3] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/14/2022] [Accepted: 01/05/2023] [Indexed: 01/19/2023]
Abstract
PURPOSE Consistent assessment of bone metastases is crucial for patient management and clinical trials in prostate cancer (PCa). We aimed to develop a fully automated convolutional neural network (CNN)-based model for calculating PET/CT skeletal tumor burden in patients with PCa. METHODS A total of 168 patients from three centers were divided into training, validation, and test groups. Manual annotations of skeletal lesions in [18F]fluoride PET/CT scans were used to train a CNN. The AI model was evaluated in 26 patients and compared to segmentations by physicians and to an SUV 15 threshold. A PET index representing the percentage of skeletal volume taken up by lesions was estimated. RESULTS There was no case in which the AI model failed to detect a lesion whose presence all readers agreed on. The PET index from the AI model correlated moderately strongly with the physician PET index (mean r = 0.69). The threshold PET index correlated fairly with the physician PET index (mean r = 0.49). Sensitivity for lesion detection was 65-76% for the AI model, 68-91% for physicians, and 44-51% for the threshold, depending on which physician was considered the reference. CONCLUSION It was possible to develop an AI-based model for automated assessment of PET/CT skeletal tumor burden. The model's performance was superior to using a threshold, and it provides fully automated calculation of whole-body skeletal tumor burden. It could be further developed to apply to different radiotracers. Objective scan evaluation is a first step toward developing a PET/CT imaging biomarker for PCa skeletal metastases.
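The PET index above is the percentage of skeletal volume occupied by segmented lesions. A minimal sketch of that calculation from binary voxel masks (an assumed formulation consistent with the abstract, not the authors' pipeline):

```python
import numpy as np

def pet_index(lesion_mask, skeleton_mask):
    """Percentage of skeletal volume taken up by lesions.

    Both inputs are binary voxel masks on the same grid; lesion voxels outside
    the skeleton mask are ignored so the index stays within [0, 100].
    """
    skeleton = np.asarray(skeleton_mask).astype(bool)
    lesions = np.logical_and(np.asarray(lesion_mask).astype(bool), skeleton)
    return 100.0 * lesions.sum() / skeleton.sum()
```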
Affiliation(s)
- Sarah Lindgren Belal
- Division of Nuclear Medicine, Department of Translational Medicine, Lund University, Malmö, Sweden
- Department of Surgery, Skåne University Hospital, Malmö, Sweden
- Wallenberg Center for Molecular Medicine, Lund University, Malmö, Sweden
- Jorun Holm
- Department of Nuclear Medicine, Odense University Hospital, Odense, Denmark
- Jens Sörensen
- Division of Radiology, Department of Surgical Sciences, Uppsala University, Uppsala, Sweden
- Anders Bjartell
- Division of Urological Cancer, Department of Translational Medicine, Lund University, Malmö, Sweden
- Lars Edenbrandt
- Department of Molecular and Clinical Medicine, Institute of Medicine, Sahlgrenska Academy at University of Gothenburg, Gothenburg, Sweden
- Elin Trägårdh
- Division of Nuclear Medicine, Department of Translational Medicine, Lund University, Malmö, Sweden
- Wallenberg Center for Molecular Medicine, Lund University, Malmö, Sweden
13
Systematic Review of Tumor Segmentation Strategies for Bone Metastases. Cancers (Basel) 2023; 15:cancers15061750. [PMID: 36980636 PMCID: PMC10046265 DOI: 10.3390/cancers15061750] [Citation(s) in RCA: 3] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/15/2023] [Revised: 03/09/2023] [Accepted: 03/10/2023] [Indexed: 03/18/2023] Open
Abstract
Purpose: To investigate segmentation approaches for bone metastases, both for differentiating benign from malignant bone lesions and for characterizing malignant bone lesions. Method: The literature search was conducted in the Scopus, PubMed, IEEE, MedLine, and Web of Science electronic databases following the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines. A total of 77 original articles, 24 review articles, and 1 comparison paper published between January 2010 and March 2022 were included in the review. Results: Most of the 77 original articles used neural network-based approaches (58.44%) and CT-based imaging (50.65%). However, the review highlights the lack of a gold standard for tumor boundaries and the need for manual correction of the segmentation output, which largely explains the absence of clinical translation studies. Moreover, only 19 studies (24.67%) specifically mentioned the feasibility of their proposed methods for use in clinical practice. Conclusion: The development of tumor segmentation techniques that combine anatomical information and metabolic activity is encouraging, even though no single segmentation method is optimal for all applications or able to compensate for all the difficulties inherent in data limitations.
14
Wang X, Wang Y. Composite Attention Residual U-Net for Rib Fracture Detection. ENTROPY (BASEL, SWITZERLAND) 2023; 25:466. [PMID: 36981354 PMCID: PMC10047421 DOI: 10.3390/e25030466] [Citation(s) in RCA: 2] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 12/22/2022] [Revised: 02/25/2023] [Accepted: 02/27/2023] [Indexed: 06/18/2023]
Abstract
Computed tomography (CT) images play a vital role in diagnosing rib fractures and determining the severity of chest trauma. However, quickly and accurately identifying rib fractures in a large number of CT images is an arduous task for radiologists. We propose a U-Net-based detection method designed to extract rib fracture features at the pixel level in order to find rib fractures rapidly and precisely. Two modules are added to the segmentation network: a combined attention module (CAM) and a hybrid dense dilated convolution module (HDDC). The features of corresponding encoder and decoder layers are fused through the CAM, strengthening the local features of subtle fracture areas and enhancing edge features. The HDDC is used between the encoder and decoder to obtain sufficient semantic information. Experiments on a public dataset show that the model achieves Recall of 81.71%, F1 of 81.86%, and Dice of 53.28%. Experienced radiologists produce fewer false positives per scan, but fall short of the neural network models in detection sensitivity and need considerably longer to reach a diagnosis. With the aid of our model, radiologists can achieve higher detection sensitivities than computer-only or human-only diagnosis.
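The Recall and F1 figures above are standard pixel-level detection metrics; a minimal sketch of how they are computed from binary masks (standard definitions, not the authors' evaluation code):

```python
import numpy as np

def recall_f1(pred, ref):
    """Pixel-level recall and F1 for binary fracture masks (standard definitions)."""
    pred, ref = np.asarray(pred).astype(bool), np.asarray(ref).astype(bool)
    tp = np.logical_and(pred, ref).sum()    # correctly flagged fracture pixels
    fp = np.logical_and(pred, ~ref).sum()   # false alarms
    fn = np.logical_and(~pred, ref).sum()   # missed fracture pixels
    recall = tp / (tp + fn)
    precision = tp / (tp + fp)
    f1 = 2 * precision * recall / (precision + recall)
    return recall, f1
```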
15
Piri R, Hamakan Y, Vang A, Edenbrandt L, Larsson M, Enqvist O, Gerke O, Høilund-Carlsen PF. Common carotid segmentation in 18 F-sodium fluoride PET/CT scans: Head-to-head comparison of artificial intelligence-based and manual method. Clin Physiol Funct Imaging 2023; 43:71-77. [PMID: 36331059 PMCID: PMC10100011 DOI: 10.1111/cpf.12793] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/20/2022] [Revised: 10/06/2022] [Accepted: 10/14/2022] [Indexed: 11/06/2022]
Abstract
BACKGROUND Carotid atherosclerosis is a major cause of stroke and is traditionally diagnosed late. Positron emission tomography/computed tomography (PET/CT) with 18F-sodium fluoride (NaF) detects arterial wall micro-calcification long before macro-calcification becomes detectable by ultrasound, CT, or magnetic resonance imaging. However, manual PET/CT processing is time-consuming and requires experience. We compared a convolutional neural network (CNN) approach with manual segmentation of the common carotids. METHODS Segmentations in NaF-PET/CT scans of 29 healthy volunteers and 20 angina pectoris patients were compared for segmented volume (Vol) and mean, maximal, and total standardized uptake values (SUVmean, SUVmax, and SUVtotal). SUVmean was the average of SUVs within the VOI, SUVmax the highest SUV across all voxels in the VOI, and SUVtotal the SUVmean multiplied by the Vol of the VOI. Intra- and interobserver variability of manual segmentation was examined in 25 randomly selected scans. RESULTS Biases for Vol, SUVmean, SUVmax, and SUVtotal were 1.33 ± 2.06, -0.01 ± 0.05, 0.09 ± 0.48, and 1.18 ± 1.99 in the left and 1.89 ± 1.5, -0.07 ± 0.12, 0.05 ± 0.47, and 1.61 ± 1.47 in the right common carotid artery, respectively. Manual segmentation typically took 20 min versus 1 min with the CNN-based approach. Mean Vol deviation on repeat manual segmentation was 14% and 27% in the left and right common carotids, respectively. CONCLUSIONS CNN-based segmentation was much faster and provided SUVmean values virtually identical to manually obtained ones, suggesting CNN-based analysis as a promising substitute for slow and cumbersome manual processing.
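The abstract defines its per-VOI statistics explicitly (SUVmean as the VOI average, SUVmax as the VOI maximum, and SUVtotal = SUVmean × Vol), which can be sketched directly. The function name and the per-voxel volume parameter are illustrative assumptions:

```python
import numpy as np

def voi_suv_metrics(suv_volume, voi_mask, voxel_volume_ml=1.0):
    """SUV statistics for one VOI, following the definitions in the abstract:
    SUVmean = mean over VOI voxels, SUVmax = max over VOI voxels,
    SUVtotal = SUVmean * segmented volume (Vol)."""
    values = suv_volume[np.asarray(voi_mask).astype(bool)]  # SUVs inside the VOI
    vol = values.size * voxel_volume_ml                     # segmented volume (Vol)
    suv_mean = values.mean()
    suv_max = values.max()
    return {"Vol": vol, "SUVmean": suv_mean, "SUVmax": suv_max,
            "SUVtotal": suv_mean * vol}
```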
Affiliation(s)
- Reza Piri
- Department of Nuclear Medicine, Odense University Hospital, Odense, Denmark
- Department of Clinical Research, University of Southern Denmark, Odense, Denmark
- Yaran Hamakan
- Department of Nuclear Medicine, Odense University Hospital, Odense, Denmark
- Ask Vang
- Department of Nuclear Medicine, Odense University Hospital, Odense, Denmark
- Lars Edenbrandt
- Department of Molecular and Clinical Medicine, Institute of Medicine, Sahlgrenska Academy, University of Gothenburg, Gothenburg, Sweden
- Department of Clinical Physiology, Region Västra Götaland, Sahlgrenska University Hospital, Gothenburg, Sweden
- Olof Enqvist
- Eigenvision AB, Malmö, Sweden
- Department of Electrical Engineering, Chalmers University of Technology, Gothenburg, Sweden
- Oke Gerke
- Department of Nuclear Medicine, Odense University Hospital, Odense, Denmark
- Department of Clinical Research, University of Southern Denmark, Odense, Denmark
- Poul Flemming Høilund-Carlsen
- Department of Nuclear Medicine, Odense University Hospital, Odense, Denmark
- Department of Clinical Research, University of Southern Denmark, Odense, Denmark
16
Jonsson T. Micro-CT and deep learning: Modern techniques and applications in insect morphology and neuroscience. FRONTIERS IN INSECT SCIENCE 2023; 3:1016277. [PMID: 38469492 PMCID: PMC10926430 DOI: 10.3389/finsc.2023.1016277] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 08/10/2022] [Accepted: 01/06/2023] [Indexed: 03/13/2024]
Abstract
Advances in modern imaging and computer technologies have led to a steady rise in the use of micro-computed tomography (µCT) in many biological areas. In zoological research, this fast and non-destructive method for producing high-resolution, two- and three-dimensional images is increasingly being used for the functional analysis of the external and internal anatomy of animals. µCT is no longer limited to the analysis of specific biological tissues in a medical or preclinical context but can be combined with a variety of contrast agents to study form and function of all kinds of tissues and species, from mammals and reptiles to fish and microscopic invertebrates. Concurrently, advances in the field of artificial intelligence, especially in deep learning, have revolutionised computer vision and facilitated the automatic, fast and ever more accurate analysis of two- and three-dimensional image datasets. Here, I give a brief overview of both micro-computed tomography and deep learning and present their recent applications, especially within the field of insect science. Furthermore, the combination of both approaches to investigate neural tissues and the resulting potential for the analysis of insect sensory systems, from receptor structures via neuronal pathways to the brain, are discussed.
Affiliation(s)
- Thorin Jonsson
- Institute of Biology, Karl-Franzens-University Graz, Graz, Austria
17
Hallinan JTPD, Zhu L, Zhang W, Ge S, Muhamat Nor FE, Ong HY, Eide SE, Cheng AJL, Kuah T, Lim DSW, Low XZ, Yeong KY, AlMuhaish MI, Alsooreti A, Kumarakulasinghe NB, Teo EC, Yap QV, Chan YH, Lin S, Tan JH, Kumar N, Vellayappan BA, Ooi BC, Quek ST, Makmur A. Deep learning assessment compared to radiologist reporting for metastatic spinal cord compression on CT. Front Oncol 2023; 13:1151073. [PMID: 37213273 PMCID: PMC10193838 DOI: 10.3389/fonc.2023.1151073] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/25/2023] [Accepted: 03/16/2023] [Indexed: 05/23/2023] Open
Abstract
Introduction Metastatic spinal cord compression (MSCC) is a disastrous complication of advanced malignancy. A deep learning (DL) algorithm for MSCC classification on CT could expedite timely diagnosis. In this study, we externally test a DL algorithm for MSCC classification on CT and compare it with radiologist assessment. Methods Retrospective collection of CT and corresponding MRI scans from patients with suspected MSCC was conducted from September 2007 to September 2020. Exclusion criteria were scans with instrumentation, no intravenous contrast, motion artefacts, and non-thoracic coverage. The internal CT dataset was split 84% for training/validation and 16% for testing. An external test set was also utilised. Internal training/validation sets were labelled by radiologists with spine imaging specialization (6 and 11 years post-board certification) and were used to further develop a DL algorithm for MSCC classification. The spine imaging specialist (11 years of expertise) labelled the test sets (reference standard). To evaluate DL algorithm performance, internal and external test data were independently reviewed by four radiologists: two spine specialists (Rad1 and Rad2, 7 and 5 years post-board certification, respectively) and two oncological imaging specialists (Rad3 and Rad4, 3 and 5 years post-board certification, respectively). DL model performance was also compared against the CT report issued by the radiologist in a real clinical setting. Inter-rater agreement (Gwet's kappa) and sensitivity/specificity/AUCs were calculated. Results Overall, 420 CT scans were evaluated (225 patients, mean age = 60 ± 11.9 [SD] years); 354 (84%) CTs were used for training/validation and 66 (16%) for internal testing. The DL algorithm showed high inter-rater agreement for three-class MSCC grading, with kappas of 0.872 (p<0.001) on internal and 0.844 (p<0.001) on external testing.
On internal testing, the DL algorithm's inter-rater agreement (κ=0.872) was superior to that of Rad 2 (κ=0.795) and Rad 3 (κ=0.724) (both p<0.001). The DL algorithm's kappa of 0.844 on external testing was superior to that of Rad 3 (κ=0.721) (p<0.001). CT report classification of high-grade MSCC disease was poor, with only slight inter-rater agreement (κ=0.027) and low sensitivity (44.0%), relative to the DL algorithm's almost-perfect inter-rater agreement (κ=0.813) and high sensitivity (94.0%) (p<0.001). Conclusion The deep learning algorithm for metastatic spinal cord compression on CT showed superior performance to the CT reports issued by experienced radiologists and could aid earlier diagnosis.
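The agreement statistic used above, Gwet's kappa, corrects observed agreement for chance while avoiding the prevalence paradox of Cohen's kappa. A minimal sketch of the unweighted two-rater AC1 form (standard formula; the study's exact weighting for its three ordinal grades is not reproduced here, so treat this as illustrative):

```python
import numpy as np

def gwet_ac1(rater_a, rater_b, categories):
    """Gwet's AC1 chance-corrected agreement for two raters (unweighted).

    pa is the observed agreement; pe = sum_q pi_q*(1 - pi_q) / (Q - 1),
    where pi_q is the mean of the two raters' marginal proportions for
    category q (Gwet, 2008). AC1 = (pa - pe) / (1 - pe).
    """
    a, b = np.asarray(rater_a), np.asarray(rater_b)
    q = len(categories)
    pa = np.mean(a == b)
    pi = np.array([((a == c).mean() + (b == c).mean()) / 2 for c in categories])
    pe = (pi * (1 - pi)).sum() / (q - 1)
    return (pa - pe) / (1 - pe)
```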
Affiliation(s)
- James Thomas Patrick Decourcy Hallinan
- Department of Diagnostic Imaging, National University Hospital, Singapore, Singapore
- Department of Diagnostic Radiology, Yong Loo Lin School of Medicine, National University of Singapore, Singapore, Singapore
- *Correspondence: James Thomas Patrick Decourcy Hallinan
- Lei Zhu
- Department of Computer Science, School of Computing, National University of Singapore, Singapore, Singapore
- Wenqiao Zhang
- Department of Computer Science, School of Computing, National University of Singapore, Singapore, Singapore
- Shuliang Ge
- Department of Diagnostic Imaging, National University Hospital, Singapore, Singapore
- Faimee Erwan Muhamat Nor
- Department of Diagnostic Imaging, National University Hospital, Singapore, Singapore
- Department of Diagnostic Radiology, Yong Loo Lin School of Medicine, National University of Singapore, Singapore, Singapore
- Han Yang Ong
- Department of Diagnostic Imaging, National University Hospital, Singapore, Singapore
- Department of Diagnostic Radiology, Yong Loo Lin School of Medicine, National University of Singapore, Singapore, Singapore
- Sterling Ellis Eide
- Department of Diagnostic Imaging, National University Hospital, Singapore, Singapore
- Department of Diagnostic Radiology, Yong Loo Lin School of Medicine, National University of Singapore, Singapore, Singapore
- Amanda J. L. Cheng
- Department of Diagnostic Imaging, National University Hospital, Singapore, Singapore
- Department of Diagnostic Radiology, Yong Loo Lin School of Medicine, National University of Singapore, Singapore, Singapore
- Tricia Kuah
- Department of Diagnostic Imaging, National University Hospital, Singapore, Singapore
- Desmond Shi Wei Lim
- Department of Diagnostic Imaging, National University Hospital, Singapore, Singapore
- Xi Zhen Low
- Department of Diagnostic Imaging, National University Hospital, Singapore, Singapore
- Kuan Yuen Yeong
- Department of Radiology, Ng Teng Fong General Hospital, Singapore, Singapore
- Mona I. AlMuhaish
- Department of Diagnostic Imaging, National University Hospital, Singapore, Singapore
- Department of Radiology, Imam Abdulrahman Bin Faisal University, Dammam, Saudi Arabia
- Ahmed Mohamed Alsooreti
- Department of Diagnostic Imaging, National University Hospital, Singapore, Singapore
- Department of Diagnostic Imaging, Salmaniya Medical Complex, Manama, Bahrain
- Ee Chin Teo
- Department of Diagnostic Imaging, National University Hospital, Singapore, Singapore
- Qai Ven Yap
- Biostatistics Unit, Yong Loo Lin School of Medicine, Singapore, Singapore
- Yiong Huak Chan
- Biostatistics Unit, Yong Loo Lin School of Medicine, Singapore, Singapore
- Shuxun Lin
- Division of Spine Surgery, Department of Orthopaedic Surgery, Ng Teng Fong General Hospital, Singapore, Singapore
- Jiong Hao Tan
- University Spine Centre, Department of Orthopaedic Surgery, National University Health System, Singapore, Singapore
- Naresh Kumar
- University Spine Centre, Department of Orthopaedic Surgery, National University Health System, Singapore, Singapore
- Balamurugan A. Vellayappan
- Department of Radiation Oncology, National University Cancer Institute Singapore, National University Hospital, Singapore, Singapore
- Beng Chin Ooi
- Department of Computer Science, School of Computing, National University of Singapore, Singapore, Singapore
- Swee Tian Quek
- Department of Diagnostic Imaging, National University Hospital, Singapore, Singapore
- Department of Diagnostic Radiology, Yong Loo Lin School of Medicine, National University of Singapore, Singapore, Singapore
- Andrew Makmur
- Department of Diagnostic Imaging, National University Hospital, Singapore, Singapore
- Department of Diagnostic Radiology, Yong Loo Lin School of Medicine, National University of Singapore, Singapore, Singapore
| |
18
Roper J, Lin M, Rong Y. Extensive upfront validation and testing are needed prior to the clinical implementation of AI-based auto-segmentation tools. J Appl Clin Med Phys 2022; 24:e13873. [PMID: 36545883 PMCID: PMC9859989 DOI: 10.1002/acm2.13873] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/30/2022] [Revised: 11/30/2022] [Accepted: 12/01/2022] [Indexed: 12/24/2022] Open
Affiliation(s)
- Justin Roper
  - Department of Radiation Oncology, Winship Cancer Institute of Emory University, Atlanta, Georgia, USA
- Mu‐Han Lin
  - Department of Radiation Oncology, University of Texas Southwestern Medical Center, Dallas, Texas, USA
- Yi Rong
  - Department of Radiation Oncology, Mayo Clinic Hospitals, Phoenix, Arizona, USA
19
Farhadi F, Barnes MR, Sugito HR, Sin JM, Henderson ER, Levy JJ. Applications of artificial intelligence in orthopaedic surgery. FRONTIERS IN MEDICAL TECHNOLOGY 2022; 4:995526. [PMID: 36590152 PMCID: PMC9797865 DOI: 10.3389/fmedt.2022.995526] [Citation(s) in RCA: 15] [Impact Index Per Article: 7.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/16/2022] [Accepted: 11/28/2022] [Indexed: 12/23/2022] Open
Abstract
The practice of medicine is rapidly transforming as a result of technological breakthroughs. Artificial intelligence (AI) systems are becoming increasingly relevant in medicine and orthopaedic surgery as a result of the nearly exponential growth in computer processing power, cloud-based computing, and the development and refinement of medical-task-specific software algorithms. Because of the extensive role of technologies such as medical imaging, which bring high sensitivity, specificity, and positive/negative prognostic value to the management of orthopaedic disorders, the field is particularly ripe for the application of machine-based integration of imaging studies, among other applications. Through this review, we seek to promote awareness in the orthopaedics community of the current accomplishments and projected uses of AI and machine learning (ML) as described in the literature. We summarize the current state of the art in the use of ML and AI in five key orthopaedic disciplines: joint reconstruction, spine, orthopaedic oncology, trauma, and sports medicine.
Affiliation(s)
- Faraz Farhadi
  - Geisel School of Medicine, Dartmouth College, Hanover, NH, United States
  - Radiology and Imaging Sciences, National Institutes of Health (NIH), Bethesda, United States
  - Correspondence: Faraz Farhadi; Joshua J. Levy
- Matthew R. Barnes
  - Geisel School of Medicine, Dartmouth College, Hanover, NH, United States
- Harun R. Sugito
  - Geisel School of Medicine, Dartmouth College, Hanover, NH, United States
- Jessica M. Sin
  - Department of Radiology, Dartmouth Health, Lebanon, United States
- Eric R. Henderson
  - Department of Orthopaedics, Dartmouth Health, Lebanon, United States
- Joshua J. Levy
  - Department of Pathology and Laboratory Medicine, Dartmouth Health, Lebanon, NH, United States
20
Kuiper RJA, Sakkers RJB, van Stralen M, Arbabi V, Viergever MA, Weinans H, Seevinck PR. Efficient cascaded V-net optimization for lower extremity CT segmentation validated using bone morphology assessment. J Orthop Res 2022; 40:2894-2907. [PMID: 35239226 PMCID: PMC9790725 DOI: 10.1002/jor.25314] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 09/22/2021] [Revised: 01/13/2022] [Accepted: 02/02/2022] [Indexed: 02/04/2023]
Abstract
Semantic segmentation of bone from lower extremity computed tomography (CT) scans can improve and accelerate visualization, diagnosis, and surgical planning in orthopaedics. However, the large field of view of these scans makes automatic segmentation using deep-learning-based methods challenging, slow, and graphics processing unit (GPU) memory intensive. We investigated methods to represent anatomical context more efficiently for accurate and fast segmentation and compared these with state-of-the-art methodology. Six lower extremity bones from patients of two different datasets were manually segmented from CT scans and used to train and optimize a cascaded deep learning approach. We varied the number of resolution levels, receptive fields, patch sizes, and number of V-net blocks. The best performing network used a multi-stage, cascaded V-net approach with 128³, 64³, and 32³ voxel patches as input. The average Dice coefficient over all bones was 0.98 ± 0.01, the mean surface distance was 0.26 ± 0.12 mm, and the 95th percentile Hausdorff distance was 0.65 ± 0.28 mm. This was a significant improvement over the results of the state-of-the-art nnU-Net, requiring only approximately 1/12th of the training time, 1/3rd of the inference time, and 1/4th of the GPU memory. Comparison of the morphometric measurements performed on automatic and manual segmentations showed good correlation (Intraclass Correlation Coefficient [ICC] >0.8) for the alpha angle and excellent correlation (ICC >0.95) for the hip-knee-ankle angle, femoral inclination, femoral version, acetabular version, Lateral Centre-Edge angle, and acetabular coverage. The segmentations were generally of sufficient quality for the tested clinical applications and were produced accurately and quickly compared to state-of-the-art methodology from the literature.
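The Dice coefficient used above to score segmentation overlap is straightforward to compute from two binary masks. A minimal NumPy sketch (the toy masks are illustrative, not data from the study):

```python
import numpy as np

def dice_coefficient(pred: np.ndarray, truth: np.ndarray) -> float:
    """Dice similarity coefficient: 2*|A intersect B| / (|A| + |B|)."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    denom = pred.sum() + truth.sum()
    return 2.0 * intersection / denom if denom else 1.0

# Toy 4x4 "segmentations": 4 predicted voxels, 2 reference voxels, 2 shared.
pred = np.zeros((4, 4), dtype=bool); pred[1:3, 1:3] = True
truth = np.zeros((4, 4), dtype=bool); truth[1:3, 1:2] = True
score = dice_coefficient(pred, truth)  # 2*2 / (4+2) = 0.667
```

The same function applies unchanged to 3D voxel volumes, since NumPy boolean operations are dimension-agnostic.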
Affiliation(s)
- Ruurd J. A. Kuiper
  - Department of Orthopaedics, University Medical Center Utrecht, Utrecht, The Netherlands
  - Image Sciences Institute, University Medical Center Utrecht, Utrecht, The Netherlands
- Ralph J. B. Sakkers
  - Department of Orthopaedics, University Medical Center Utrecht, Utrecht, The Netherlands
- Marijn van Stralen
  - Image Sciences Institute, University Medical Center Utrecht, Utrecht, The Netherlands
  - MRIguidance B.V., Utrecht, The Netherlands
- Vahid Arbabi
  - Department of Orthopaedics, University Medical Center Utrecht, Utrecht, The Netherlands
  - Department of Mechanical Engineering, University of Birjand, Birjand, Iran
- Max A. Viergever
  - Image Sciences Institute, University Medical Center Utrecht, Utrecht, The Netherlands
- Harrie Weinans
  - Department of Orthopaedics, University Medical Center Utrecht, Utrecht, The Netherlands
- Peter R. Seevinck
  - Image Sciences Institute, University Medical Center Utrecht, Utrecht, The Netherlands
  - MRIguidance B.V., Utrecht, The Netherlands
21
Piri R, Edenbrandt L, Larsson M, Enqvist O, Skovrup S, Iversen KK, Saboury B, Alavi A, Gerke O, Høilund-Carlsen PF. "Global" cardiac atherosclerotic burden assessed by artificial intelligence-based versus manual segmentation in 18F-sodium fluoride PET/CT scans: Head-to-head comparison. J Nucl Cardiol 2022; 29:2531-2539. [PMID: 34386861 DOI: 10.1007/s12350-021-02758-9] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/12/2021] [Accepted: 07/13/2021] [Indexed: 01/06/2023]
Abstract
BACKGROUND Artificial intelligence (AI) is known to provide effective means to accelerate and facilitate clinical and research processes. In this study, we aimed to compare an AI-based method for cardiac segmentation in positron emission tomography/computed tomography (PET/CT) scans with manual segmentation for assessing global cardiac atherosclerosis burden. METHODS A trained convolutional neural network (CNN) was used for cardiac segmentation in 18F-sodium fluoride PET/CT scans of 29 healthy volunteers and 20 angina pectoris patients and compared with manual segmentation. Parameters for segmented volume (Vol) and mean, maximal, and total standardized uptake values (SUVmean, SUVmax, SUVtotal) were analyzed by Bland-Altman Limits of Agreement. Repeatability of AI-based assessment of the same scans is by definition 100%. Repeatability (same conditions, same operator) and reproducibility (same conditions, two different operators) of manual segmentation were examined by re-segmentation in 25 randomly selected scans. RESULTS Mean (± SD) values with manual vs CNN-based segmentation were Vol 617.65 ± 154.99 mL vs 625.26 ± 153.55 mL (P = .21), SUVmean 0.69 ± 0.15 vs 0.69 ± 0.15 (P = .26), SUVmax 2.68 ± 0.86 vs 2.77 ± 1.05 (P = .34), and SUVtotal 425.51 ± 138.93 vs 427.91 ± 132.68 (P = .62). Limits of agreement were −89.42 to 74.2, −0.02 to 0.02, −1.52 to 1.32, and −68.02 to 63.21, respectively. Manual segmentation typically lasted 30 minutes vs about one minute with the CNN-based approach. For the four parameters, the maximal deviation at manual re-segmentation was 0% to 0.5% with the same operator and 0% to 1% with different operators. CONCLUSION The CNN-based method was faster and provided values for Vol, SUVmean, SUVmax, and SUVtotal comparable to the manually obtained ones. This AI-based segmentation approach appears to offer a more reproducible and much faster substitute for slow and cumbersome manual segmentation of the heart.
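The Bland-Altman analysis used in this comparison reduces to the mean difference between paired measurements plus or minus 1.96 standard deviations of those differences. A small illustrative sketch with made-up paired SUVmean readings (not the study's data):

```python
import numpy as np

def bland_altman_limits(x, y):
    """95% limits of agreement: bias +/- 1.96 * SD of paired differences."""
    d = np.asarray(x, dtype=float) - np.asarray(y, dtype=float)
    bias = d.mean()
    sd = d.std(ddof=1)  # sample standard deviation of the differences
    return bias - 1.96 * sd, bias + 1.96 * sd

# Hypothetical paired SUVmean values: manual vs. automated segmentation.
manual = np.array([0.70, 0.65, 0.72, 0.68, 0.71])
auto = np.array([0.69, 0.66, 0.71, 0.69, 0.70])
lo, hi = bland_altman_limits(manual, auto)
```

Narrow limits centred near zero, as reported in the abstract, indicate the two methods can be used interchangeably.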
Affiliation(s)
- Reza Piri
  - Department of Nuclear Medicine, Odense University Hospital, 5000, Odense C, Denmark
  - Department of Clinical Research, University of Southern Denmark, Odense, Denmark
- Lars Edenbrandt
  - Department of Molecular and Clinical Medicine, Institute of Medicine, Sahlgrenska Academy, University of Gothenburg, Gothenburg, Sweden
  - Department of Clinical Physiology, Sahlgrenska University Hospital, Region Västra Götaland, Gothenburg, Sweden
- Olof Enqvist
  - Eigenvision AB, Malmö, Sweden
  - Department of Electrical Engineering, Chalmers University of Technology, Gothenburg, Sweden
- Sofie Skovrup
  - Department of Nuclear Medicine, Odense University Hospital, 5000, Odense C, Denmark
- Kasper Karmark Iversen
  - Department of Cardiology, Herlev and Gentofte Hospital, Copenhagen, Denmark
  - Department of Emergency Medicine, Herlev and Gentofte Hospital, Copenhagen, Denmark
- Babak Saboury
  - Department of Radiology, Hospital of the University of Pennsylvania, Philadelphia, PA, USA
  - Department of Computer Science and Electrical Engineering, University of Maryland, Baltimore County, Baltimore, MD, USA
  - Department of Radiology and Imaging Sciences, Clinical Center, National Institutes of Health (NIH), Bethesda, MD, USA
- Abass Alavi
  - Department of Radiology, Hospital of the University of Pennsylvania, Philadelphia, PA, USA
- Oke Gerke
  - Department of Nuclear Medicine, Odense University Hospital, 5000, Odense C, Denmark
  - Department of Clinical Research, University of Southern Denmark, Odense, Denmark
- Poul Flemming Høilund-Carlsen
  - Department of Nuclear Medicine, Odense University Hospital, 5000, Odense C, Denmark
  - Department of Clinical Research, University of Southern Denmark, Odense, Denmark
22
Milara E, Gómez-Grande A, Tomás-Soler S, Seiffert AP, Alonso R, Gómez EJ, Martínez-López J, Sánchez-González P. Bone marrow segmentation and radiomics analysis of [ 18F]FDG PET/CT images for measurable residual disease assessment in multiple myeloma. COMPUTER METHODS AND PROGRAMS IN BIOMEDICINE 2022; 225:107083. [PMID: 36044803 DOI: 10.1016/j.cmpb.2022.107083] [Citation(s) in RCA: 7] [Impact Index Per Article: 3.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 05/05/2022] [Revised: 07/07/2022] [Accepted: 08/22/2022] [Indexed: 06/15/2023]
Abstract
BACKGROUND AND OBJECTIVES The last few years have been crucial in defining the most appropriate way to quantitatively assess [18F]FDG PET images in Multiple Myeloma (MM) patients to detect persistent tumor burden. The visual evaluation of images complements the assessment of Measurable Residual Disease (MRD) in bone marrow samples by multiparameter flow cytometry (MFC) or next-generation sequencing (NGS). The aim of this study was to quantify MRD by analyzing quantitative and texture [18F]FDG PET features. METHODS Whole body [18F]FDG PET scans of 39 patients with newly diagnosed MM were included in the database and visually evaluated by experts in nuclear medicine. A methodology for segmenting the skeleton from CT images, together with an additional manual segmentation tool, was proposed and implemented in a software solution with a graphical user interface. Both the compact bone and the spinal canal were removed from the segmentation to obtain only the bone marrow mask. SUV metrics and GLCM, GLRLM, and NGTDM parameters were extracted from the PET images and evaluated by Mann-Whitney U-tests and Spearman ρ rank correlation as features differentiating the PET+/PET- and MFC+/MFC- groups. Seven machine learning algorithms were applied to evaluate the classification performance of the extracted features. RESULTS Quantitative analysis for PET+/PET- differentiation proved significant for most of the variables assessed with the Mann-Whitney U-test, such as Variance, Energy, and Entropy (p-value = 0.001). Moreover, the quantitative analysis with a balanced database evaluated by Mann-Whitney U-test yielded even better results, with 19 features at p-values < 0.001. On the other hand, radiomics analysis for MFC+/MFC- differentiation demonstrated the necessity of combining MFC evaluation with [18F]FDG PET assessment in MRD diagnosis. Machine learning algorithms using the image features demonstrated high performance metrics for the PET+/PET- classification, which decreased for the MFC+/MFC- classification. CONCLUSIONS A proof-of-concept for the extraction and evaluation of bone marrow radiomics features from [18F]FDG PET images was proposed and implemented. The validation showed the possible use of these features for image-based assessment of MRD.
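The GLCM texture parameters named above (e.g. Energy and Entropy) derive from a normalised co-occurrence matrix of neighbouring gray levels. A minimal sketch for a single horizontal offset, using a toy integer image rather than real PET data:

```python
import numpy as np

def glcm_energy_entropy(img: np.ndarray, levels: int):
    """Build a GLCM for the offset (0, +1) and return energy and entropy."""
    glcm = np.zeros((levels, levels), dtype=float)
    for i, j in zip(img[:, :-1].ravel(), img[:, 1:].ravel()):
        glcm[i, j] += 1          # count horizontal neighbour pairs
    glcm /= glcm.sum()           # normalise to joint probabilities
    energy = float((glcm ** 2).sum())
    nz = glcm[glcm > 0]
    entropy = float(-(nz * np.log2(nz)).sum())
    return energy, entropy

img = np.array([[0, 0, 1],
                [1, 2, 2],
                [0, 1, 1]])
energy, entropy = glcm_energy_entropy(img, levels=3)
```

Production radiomics pipelines typically average GLCMs over several offsets and directions and require intensity discretisation first; this sketch shows only the core computation.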
Affiliation(s)
- Eva Milara
  - Biomedical Engineering and Telemedicine Centre, ETSI Telecomunicación, Center for Biomedical Technology, Universidad Politécnica de Madrid, Avenida Complutense 30, Madrid 28040, Spain
- Adolfo Gómez-Grande
  - Department of Nuclear Medicine, Hospital Universitario 12 de Octubre, Madrid, Spain
  - Facultad de Medicina, Universidad Complutense de Madrid, Madrid, Spain
- Sebastián Tomás-Soler
  - Biomedical Engineering and Telemedicine Centre, ETSI Telecomunicación, Center for Biomedical Technology, Universidad Politécnica de Madrid, Avenida Complutense 30, Madrid 28040, Spain
- Alexander P Seiffert
  - Biomedical Engineering and Telemedicine Centre, ETSI Telecomunicación, Center for Biomedical Technology, Universidad Politécnica de Madrid, Avenida Complutense 30, Madrid 28040, Spain
- Rafael Alonso
  - Facultad de Medicina, Universidad Complutense de Madrid, Madrid, Spain
  - Department of Hematology and Instituto de Investigación Sanitaria (imas12), Hospital Universitario 12 de Octubre, Madrid, Spain
  - Clinical Research Hematology Unit, Centro Nacional de Investigaciones Oncológicas (CNIO), Madrid, Spain
  - Centro de Investigación Biomédica en Red Cáncer (CIBERONC), Madrid, Spain
- Enrique J Gómez
  - Biomedical Engineering and Telemedicine Centre, ETSI Telecomunicación, Center for Biomedical Technology, Universidad Politécnica de Madrid, Avenida Complutense 30, Madrid 28040, Spain
  - Centro de Investigación Biomédica en Red de Bioingeniería, Biomateriales y Nanomedicina (CIBER-BBN), Madrid, Spain
- Joaquín Martínez-López
  - Facultad de Medicina, Universidad Complutense de Madrid, Madrid, Spain
  - Department of Hematology and Instituto de Investigación Sanitaria (imas12), Hospital Universitario 12 de Octubre, Madrid, Spain
  - Clinical Research Hematology Unit, Centro Nacional de Investigaciones Oncológicas (CNIO), Madrid, Spain
  - Centro de Investigación Biomédica en Red Cáncer (CIBERONC), Madrid, Spain
- Patricia Sánchez-González
  - Biomedical Engineering and Telemedicine Centre, ETSI Telecomunicación, Center for Biomedical Technology, Universidad Politécnica de Madrid, Avenida Complutense 30, Madrid 28040, Spain
  - Centro de Investigación Biomédica en Red de Bioingeniería, Biomateriales y Nanomedicina (CIBER-BBN), Madrid, Spain
23
Freely Available, Fully Automated AI-Based Analysis of Primary Tumour and Metastases of Prostate Cancer in Whole-Body [18F]-PSMA-1007 PET-CT. Diagnostics (Basel) 2022; 12:diagnostics12092101. [PMID: 36140502 PMCID: PMC9497460 DOI: 10.3390/diagnostics12092101] [Citation(s) in RCA: 12] [Impact Index Per Article: 6.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/01/2022] [Revised: 08/25/2022] [Accepted: 08/28/2022] [Indexed: 11/16/2022] Open
Abstract
Here, we aimed to develop and validate a fully automated artificial intelligence (AI)-based method for the detection and quantification of suspected prostate tumour/local recurrence, lymph node metastases, and bone metastases from [18F]PSMA-1007 positron emission tomography-computed tomography (PET-CT) images. Images from 660 patients were included. Segmentations by one expert reader served as the ground truth. A convolutional neural network (CNN) was developed and trained on a training set, and its performance was tested on a separate test set of 120 patients. The AI method was compared with manual segmentations performed by several nuclear medicine physicians. Assessment of tumour burden (total lesion volume (TLV) and total lesion uptake (TLU)) was performed. The sensitivity of the AI method was, on average, 79% for detecting prostate tumour/recurrence, 79% for lymph node metastases, and 62% for bone metastases. The nuclear medicine physicians' corresponding sensitivities were, on average, 78%, 78%, and 59%, respectively. The correlations of TLV and TLU between AI and nuclear medicine physicians were all statistically significant and ranged from R = 0.53 to R = 0.83. In conclusion, it was possible to develop an AI-based method for prostate cancer detection with sensitivity on par with that of nuclear medicine physicians. The developed AI tool is freely available for researchers.
24
Artificial Intelligence Increases the Agreement among Physicians Classifying Focal Skeleton/Bone Marrow Uptake in Hodgkin’s Lymphoma Patients Staged with [18F]FDG PET/CT—a Retrospective Study. Nucl Med Mol Imaging 2022; 57:110-116. [PMID: 36998589 PMCID: PMC10043120 DOI: 10.1007/s13139-022-00765-3] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/10/2022] [Revised: 06/23/2022] [Accepted: 07/25/2022] [Indexed: 10/15/2022] Open
Abstract
Purpose
Classification of focal skeleton/bone marrow uptake (BMU) can be challenging. The aim is to investigate whether an artificial intelligence–based method (AI), which highlights suspicious focal BMU, increases interobserver agreement among a group of physicians from different hospitals classifying Hodgkin’s lymphoma (HL) patients staged with [18F]FDG PET/CT.
Methods
Forty-eight patients staged with [18F]FDG PET/CT at Sahlgrenska University Hospital between 2017 and 2018 were reviewed twice, 6 months apart, regarding focal BMU. During the second review, the 10 physicians also had access to AI-based advice regarding focal BMU.
Results
Each physician’s classifications were pairwise compared with the classifications made by all the other physicians, resulting in 45 unique pairs of comparisons both without and with AI advice. The agreement between the physicians increased significantly when AI advice was available, which was measured as an increase in mean Kappa values from 0.51 (range 0.25–0.80) without AI advice to 0.61 (range 0.19–0.94) with AI advice (p = 0.005). The majority of the physicians agreed with the AI-based method in 40 (83%) of the 48 cases.
Conclusion
An AI-based method significantly increases interobserver agreement among physicians working at different hospitals by highlighting suspicious focal BMU in HL patients staged with [18F]FDG PET/CT.
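The interobserver agreement reported here uses Cohen's kappa, which corrects raw agreement between two raters for agreement expected by chance. A self-contained sketch with hypothetical reads from two physicians (the labels are invented for illustration):

```python
from collections import Counter

def cohens_kappa(r1, r2) -> float:
    """Cohen's kappa = (p_o - p_e) / (1 - p_e) for two raters."""
    n = len(r1)
    p_o = sum(a == b for a, b in zip(r1, r2)) / n        # observed agreement
    c1, c2 = Counter(r1), Counter(r2)
    p_e = sum(c1[k] * c2[k] for k in set(c1) | set(c2)) / (n * n)  # chance agreement
    return (p_o - p_e) / (1 - p_e)

# Hypothetical positive/negative focal BMU calls on 10 scans.
phys_a = ["pos", "pos", "neg", "neg", "neg", "pos", "neg", "neg", "pos", "neg"]
phys_b = ["pos", "neg", "neg", "neg", "neg", "pos", "neg", "pos", "pos", "neg"]
kappa = cohens_kappa(phys_a, phys_b)  # 0.28 / 0.48 ≈ 0.58
```

Pairwise kappas over all rater pairs, averaged, give the study's summary figure; values of 0.41-0.60 are conventionally read as moderate agreement.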
25
Pontiki AA, Lampridis S, De Angelis S, Lamata P, Housden R, Benedetti G, Bille A, Rhode K. Creation of personalised rib prostheses using a statistical shape model and 3D printing: Case report. Front Surg 2022; 9:936638. [PMID: 36090337 PMCID: PMC9450702 DOI: 10.3389/fsurg.2022.936638] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/05/2022] [Accepted: 07/26/2022] [Indexed: 12/02/2022] Open
Abstract
Management of chest wall defects after oncologic resection can be challenging, depending on the size and location of the defect, as well as the method of reconstruction. This report presents the first clinical case in which patient-specific rib prostheses were created using a computer program and a statistical shape model of human ribs. A 64-year-old male was diagnosed with non-small-cell lung cancer originating in the right upper lobe and invading the lateral aspect of the 3rd, 4th, and 5th ribs. Prior to surgical resection, a statistical shape model of human ribs was created and used to synthesise rib models in the software MATLAB (MathWorks, Natick, MA, USA). The patient's age, weight, height, and sex, as well as the number and side of the ribs of interest, were the inputs to the program. Based on these data, the program generated digital models of the right 3rd, 4th, and 5th ribs. These models were 3D printed, and a silicone mould was created from them. The patient subsequently underwent right upper lobectomy with en bloc resection of the involved chest wall. During the operation, the silicone mould was used to produce rigid prostheses consisting of methyl methacrylate and two layers of polypropylene mesh in a “sandwich” fashion. The prosthetic patch was then implanted to cover the chest wall defect. Thirty days after the surgery, the patient had returned to his pre-disease performance and physical activities. Statistical shape modelling combined with 3D printing is an optimised 3D modelling method that can provide clinicians with a time-efficient technique for creating personalised rib prostheses, without requiring expertise or prior software knowledge.
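A statistical shape model of the kind described can be sketched as principal component analysis over aligned landmark coordinates: new shapes are synthesised as the mean shape plus weighted modes of variation. A toy NumPy illustration (the landmark data are invented; a real model would use many rib landmarks, proper alignment, and patient covariates):

```python
import numpy as np

# Training shapes: each row is one aligned shape, flattened (x1, y1, x2, y2, x3, y3).
shapes = np.array([
    [0.0, 0.0, 1.0, 0.0, 1.0, 1.0],
    [0.1, 0.0, 1.1, 0.1, 1.0, 1.2],
    [0.0, 0.1, 0.9, 0.0, 1.1, 1.0],
    [0.2, 0.1, 1.0, 0.2, 0.9, 1.1],
])
mean_shape = shapes.mean(axis=0)

# Modes of variation from the SVD of the centred data matrix.
_, s, vt = np.linalg.svd(shapes - mean_shape, full_matrices=False)
std_modes = s / np.sqrt(len(shapes) - 1)   # per-mode standard deviations

# Synthesise a new shape: mean plus half a standard deviation along mode 1.
new_shape = mean_shape + 0.5 * std_modes[0] * vt[0]
```

Regressing the mode weights on demographics (age, height, sex) is one way such a program could map patient data to a personalised rib shape.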
Affiliation(s)
- Antonia A. Pontiki
  - Department of Surgical & Interventional Engineering, School of Biomedical Engineering and Imaging Sciences, King’s College London, London, United Kingdom
  - Correspondence: Antonia A. Pontiki
- Savvas Lampridis
  - Department of Thoracic Surgery, Guy’s and St Thomas’ NHS Foundation Trust, London, United Kingdom
- Sara De Angelis
  - Department of Surgical & Interventional Engineering, School of Biomedical Engineering and Imaging Sciences, King’s College London, London, United Kingdom
- Pablo Lamata
  - Department of Surgical & Interventional Engineering, School of Biomedical Engineering and Imaging Sciences, King’s College London, London, United Kingdom
- Richard Housden
  - Department of Surgical & Interventional Engineering, School of Biomedical Engineering and Imaging Sciences, King’s College London, London, United Kingdom
- Giulia Benedetti
  - Department of Radiology, Guy’s and St Thomas’ NHS Foundation Trust, London, United Kingdom
- Andrea Bille
  - Department of Thoracic Surgery, Guy’s and St Thomas’ NHS Foundation Trust, London, United Kingdom
- Kawal Rhode
  - Department of Surgical & Interventional Engineering, School of Biomedical Engineering and Imaging Sciences, King’s College London, London, United Kingdom
26
Efficient lower-limb segmentation for large-scale volumetric CT by using projection view and voxel group attention. Med Biol Eng Comput 2022; 60:2201-2216. [DOI: 10.1007/s11517-022-02598-w] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/18/2021] [Accepted: 04/12/2022] [Indexed: 10/18/2022]
27
Piri R, Edenbrandt L, Larsson M, Enqvist O, Nøddeskou-Fink AH, Gerke O, Høilund-Carlsen PF. Aortic wall segmentation in 18F-sodium fluoride PET/CT scans: Head-to-head comparison of artificial intelligence-based versus manual segmentation. J Nucl Cardiol 2022; 29:2001-2010. [PMID: 33982202 DOI: 10.1007/s12350-021-02649-z] [Citation(s) in RCA: 4] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/08/2021] [Accepted: 04/12/2021] [Indexed: 01/03/2023]
Abstract
BACKGROUND We aimed to establish and test an automated AI-based method for rapid segmentation of the aortic wall in positron emission tomography/computed tomography (PET/CT) scans. METHODS For segmentation of the wall in three sections, the arch, thoracic, and abdominal aorta, we developed a tool based on a convolutional neural network (CNN), available on the Research Consortium for Medical Image Analysis (RECOMIA) platform and capable of segmenting 100 different labels in CT images. It was tested on 18F-sodium fluoride PET/CT scans of 49 subjects (29 healthy controls and 20 angina pectoris patients) and compared with data obtained by manual segmentation. The following derived parameters were compared using Bland-Altman Limits of Agreement: segmented volume, and maximal, mean, and total standardized uptake values (SUVmax, SUVmean, SUVtotal). The repeatability of the manual method was examined in 25 randomly selected scans. RESULTS CNN-derived values for volume, SUVmax, and SUVtotal were all slightly, i.e., 13-17%, lower than the corresponding manually obtained ones, whereas SUVmean values for the three aortic sections were virtually identical between the two methods. Manual segmentation typically lasted 1-2 hours per scan compared to about one minute with the CNN-based approach. The maximal deviation at repeat manual segmentation was 6%. CONCLUSIONS The automated CNN-based approach was much faster and provided parameters that were about 15% lower than the manually obtained values, except for SUVmean values, which were comparable. AI-based segmentation of the aorta already appears to be a trustworthy and fast alternative to slow and cumbersome manual segmentation.
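The derived parameters compared above (segmented volume, SUVmean, SUVmax, SUVtotal) follow directly from a PET volume and a binary segmentation mask. A minimal sketch under the common convention SUVtotal = SUVmean × segmented volume (toy arrays, not study data):

```python
import numpy as np

def suv_metrics(pet: np.ndarray, mask: np.ndarray, voxel_ml: float) -> dict:
    """Volume and SUV statistics for the voxels inside a binary mask."""
    vals = pet[mask.astype(bool)]
    vol_ml = vals.size * voxel_ml
    suv_mean = float(vals.mean())
    return {
        "Vol_mL": vol_ml,
        "SUVmean": suv_mean,
        "SUVmax": float(vals.max()),
        "SUVtotal": suv_mean * vol_ml,   # SUVmean times segmented volume
    }

# Toy 2x2 PET "volume" in SUV units and a mask selecting 3 of 4 voxels.
pet = np.array([[1.0, 2.0],
                [3.0, 4.0]])
mask = np.array([[1, 0],
                 [1, 1]])
m = suv_metrics(pet, mask, voxel_ml=0.5)  # vals = [1, 3, 4], volume 1.5 mL
```

This illustrates why a smaller automated mask lowers volume, SUVmax, and SUVtotal while leaving SUVmean nearly unchanged, as the abstract reports.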
Affiliation(s)
- Reza Piri
  - Department of Nuclear Medicine, Odense University Hospital, 5000, Odense, Denmark
  - Department of Clinical Research, University of Southern Denmark, Odense, Denmark
- Lars Edenbrandt
  - Department of Molecular and Clinical Medicine, Institute of Medicine, Sahlgrenska Academy, University of Gothenburg, Gothenburg, Sweden
  - Department of Clinical Physiology, Sahlgrenska University Hospital, Region Västra Götaland, Gothenburg, Sweden
- Olof Enqvist
  - Eigenvision AB, Malmö, Sweden
  - Department of Electrical Engineering, Chalmers University of Technology, Gothenburg, Sweden
- Oke Gerke
  - Department of Nuclear Medicine, Odense University Hospital, 5000, Odense, Denmark
  - Department of Clinical Research, University of Southern Denmark, Odense, Denmark
- Poul Flemming Høilund-Carlsen
  - Department of Nuclear Medicine, Odense University Hospital, 5000, Odense, Denmark
  - Department of Clinical Research, University of Southern Denmark, Odense, Denmark
28
Zhou X, Wang H, Feng C, Xu R, He Y, Li L, Tu C. Emerging Applications of Deep Learning in Bone Tumors: Current Advances and Challenges. Front Oncol 2022; 12:908873. [PMID: 35928860 PMCID: PMC9345628 DOI: 10.3389/fonc.2022.908873] [Citation(s) in RCA: 10] [Impact Index Per Article: 5.0] [Reference Citation Analysis] [Abstract] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/31/2022] [Accepted: 06/15/2022] [Indexed: 12/12/2022] Open
Abstract
Deep learning is a subfield of state-of-the-art artificial intelligence (AI) technology, and multiple deep-learning-based AI models have been applied to musculoskeletal diseases. Deep learning has shown the capability to assist clinical diagnosis and prognosis prediction in a spectrum of musculoskeletal disorders, including fracture detection, identification of cartilage and spinal lesions, and osteoarthritis severity assessment. Meanwhile, deep learning has also been extensively explored in diverse tumors such as prostate, breast, and lung cancers. Recently, applications of deep learning have emerged in bone tumors. A growing number of deep learning models have demonstrated good performance in detection, segmentation, classification, volume calculation, grading, and assessment of tumor necrosis rate in primary and metastatic bone tumors based on both radiological (such as X-ray, CT, MRI, SPECT) and pathological images, indicating the potential of deep learning for diagnostic assistance and prognosis prediction in bone tumors. In this review, we first summarize the workflows of deep learning methods in medical images and the current applications of deep-learning-based AI for diagnosis and prognosis prediction in bone tumors. Moreover, we extensively discuss the current challenges in implementing deep learning methods and future perspectives in this field.
Affiliation(s)
- Xiaowen Zhou
  - Department of Orthopaedics, The Second Xiangya Hospital, Central South University, Changsha, China
  - Xiangya School of Medicine, Central South University, Changsha, China
- Hua Wang
  - Xiangya School of Medicine, Central South University, Changsha, China
- Chengyao Feng
  - Department of Orthopaedics, The Second Xiangya Hospital, Central South University, Changsha, China
  - Hunan Key Laboratory of Tumor Models and Individualized Medicine, The Second Xiangya Hospital, Central South University, Changsha, China
- Ruilin Xu
  - Department of Orthopaedics, The Second Xiangya Hospital, Central South University, Changsha, China
  - Hunan Key Laboratory of Tumor Models and Individualized Medicine, The Second Xiangya Hospital, Central South University, Changsha, China
- Yu He
  - Department of Radiology, The Second Xiangya Hospital, Central South University, Changsha, China
- Lan Li
  - Department of Pathology, The Second Xiangya Hospital, Central South University, Changsha, China
- Chao Tu
  - Department of Orthopaedics, The Second Xiangya Hospital, Central South University, Changsha, China
  - Hunan Key Laboratory of Tumor Models and Individualized Medicine, The Second Xiangya Hospital, Central South University, Changsha, China
  - Correspondence: Chao Tu
|
29
|
Kuah T, Vellayappan BA, Makmur A, Nair S, Song J, Tan JH, Kumar N, Quek ST, Hallinan JTPD. State-of-the-Art Imaging Techniques in Metastatic Spinal Cord Compression. Cancers (Basel) 2022; 14:cancers14133289. [PMID: 35805059 PMCID: PMC9265325 DOI: 10.3390/cancers14133289] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/01/2022] [Revised: 06/24/2022] [Accepted: 06/28/2022] [Indexed: 12/23/2022] Open
Abstract
Metastatic spinal cord compression (MSCC) is a debilitating complication in oncology patients. This narrative review discusses the strengths and limitations of various imaging modalities in diagnosing MSCC, the role of imaging in stereotactic body radiotherapy (SBRT) for MSCC treatment, and recent advances in deep learning (DL) tools for MSCC diagnosis. PubMed and Google Scholar databases were searched using targeted keywords. Studies were reviewed in consensus among the co-authors for their suitability before inclusion. MRI is the gold standard for diagnosing MSCC, with reported sensitivity and specificity of 93% and 97%, respectively. CT myelography appears to have comparable sensitivity and specificity to contrast-enhanced MRI. Conventional CT has lower diagnostic accuracy than MRI in MSCC diagnosis but is helpful in emergent situations with limited access to MRI. Metal artifact reduction techniques for MRI and CT are continually being researched for patients with spinal implants. Imaging is crucial for SBRT treatment planning and three-dimensional positional verification of the treatment isocentre prior to SBRT delivery. Structural and functional MRI may be helpful in post-treatment surveillance. DL tools may improve detection of vertebral metastases and reduce time to MSCC diagnosis, enabling earlier institution of definitive therapy for better outcomes.
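The quoted diagnostic figures follow directly from a confusion matrix. A quick sketch, with hypothetical counts chosen only to reproduce the 93%/97% values cited for MRI:

```python
def sens_spec(tp, fn, tn, fp):
    """Sensitivity = TP/(TP+FN); specificity = TN/(TN+FP)."""
    return tp / (tp + fn), tn / (tn + fp)

# hypothetical confusion-matrix counts (not from the cited study)
sensitivity, specificity = sens_spec(tp=93, fn=7, tn=97, fp=3)
# → sensitivity 0.93, specificity 0.97
```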
Affiliation(s)
- Tricia Kuah
- Department of Diagnostic Imaging, National University Hospital, 5 Lower Kent Ridge Rd, Singapore 119074, Singapore; (A.M.); (S.N.); (J.S.); (S.T.Q.); (J.T.P.D.H.)
- Correspondence: ; Tel.: +65-6779-5555
| | - Balamurugan A. Vellayappan
- Department of Radiation Oncology, National University Cancer Institute Singapore, National University Hospital, Singapore 119074, Singapore;
| | - Andrew Makmur
- Department of Diagnostic Imaging, National University Hospital, 5 Lower Kent Ridge Rd, Singapore 119074, Singapore; (A.M.); (S.N.); (J.S.); (S.T.Q.); (J.T.P.D.H.)
- Department of Diagnostic Radiology, Yong Loo Lin School of Medicine, National University of Singapore, 10 Medical Drive, Singapore 117597, Singapore
| | - Shalini Nair
- Department of Diagnostic Imaging, National University Hospital, 5 Lower Kent Ridge Rd, Singapore 119074, Singapore; (A.M.); (S.N.); (J.S.); (S.T.Q.); (J.T.P.D.H.)
| | - Junda Song
- Department of Diagnostic Imaging, National University Hospital, 5 Lower Kent Ridge Rd, Singapore 119074, Singapore; (A.M.); (S.N.); (J.S.); (S.T.Q.); (J.T.P.D.H.)
| | - Jiong Hao Tan
- University Spine Centre, Department of Orthopaedic Surgery, National University Health System, 1E Lower Kent Ridge Road, Singapore 119228, Singapore; (J.H.T.); (N.K.)
| | - Naresh Kumar
- University Spine Centre, Department of Orthopaedic Surgery, National University Health System, 1E Lower Kent Ridge Road, Singapore 119228, Singapore; (J.H.T.); (N.K.)
| | - Swee Tian Quek
- Department of Diagnostic Imaging, National University Hospital, 5 Lower Kent Ridge Rd, Singapore 119074, Singapore; (A.M.); (S.N.); (J.S.); (S.T.Q.); (J.T.P.D.H.)
- Department of Diagnostic Radiology, Yong Loo Lin School of Medicine, National University of Singapore, 10 Medical Drive, Singapore 117597, Singapore
| | - James Thomas Patrick Decourcy Hallinan
- Department of Diagnostic Imaging, National University Hospital, 5 Lower Kent Ridge Rd, Singapore 119074, Singapore; (A.M.); (S.N.); (J.S.); (S.T.Q.); (J.T.P.D.H.)
- Department of Diagnostic Radiology, Yong Loo Lin School of Medicine, National University of Singapore, 10 Medical Drive, Singapore 117597, Singapore
| |
|
30
|
Pontiki AA, De Angelis S, Dibblin C, Trujillo-Cortes I, Lamata P, Housden R, Benedetti G, Bille A, Rhode K. Development and Evaluation of a Rib Statistical Shape Model for Thoracic Surgery. ANNUAL INTERNATIONAL CONFERENCE OF THE IEEE ENGINEERING IN MEDICINE AND BIOLOGY SOCIETY. IEEE ENGINEERING IN MEDICINE AND BIOLOGY SOCIETY. ANNUAL INTERNATIONAL CONFERENCE 2022; 2022:3758-3763. [PMID: 36085707 DOI: 10.1109/embc48229.2022.9870985] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/15/2023]
Abstract
Patients with advanced cancer undergoing chest wall resection may require reconstruction. Currently, rib prostheses are created by segmenting computed tomography images, which is time-consuming and labour-intensive. The aim was to optimise the production of digital rib models based on a patient's age, weight, height and gender. A statistical shape model of human ribs was created and used to synthesize rib models, which were compared to those produced by segmentation and mirroring. Segmentation took 11.56±1.60 min, compared to 0.027±0.009 min using the new technique. The average mesh error between the mirroring technique and segmentation was 0.58±0.25 mm (right ribs) and 0.87±0.18 mm (left ribs), compared to 1.37±0.66 mm and 1.68±0.77 mm, respectively, for the new technique. The new technique is promising for its efficiency and ease of use in the clinical environment. Clinical relevance: this optimised 3D modelling method provides clinicians with a time-efficient technique to create patient-specific rib prostheses, without requiring segmentation expertise or software knowledge.
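A statistical shape model synthesizes a new shape instance as the mean shape plus a weighted sum of variation modes (the modes are obtained in practice by PCA over corresponded meshes). A schematic sketch with made-up landmark coordinates, purely to show the synthesis step:

```python
def synthesize(mean_shape, modes, coeffs):
    """Shape instance = mean + sum_i b_i * mode_i, applied point-wise."""
    shape = list(mean_shape)
    for b, mode in zip(coeffs, modes):
        shape = [x + b * m for x, m in zip(shape, mode)]
    return shape

# toy "rib contour" of 4 landmark coordinates (illustrative numbers only)
mean_shape = [0.0, 1.0, 2.0, 3.0]
modes = [[0.1, 0.0, -0.1, 0.0],   # first variation mode
         [0.0, 0.2, 0.0, -0.2]]   # second variation mode
inst = synthesize(mean_shape, modes, coeffs=[1.0, -1.0])
# → [0.1, 0.8, 1.9, 3.2]
```

In a real model, the coefficients would be predicted from patient covariates (age, weight, height, gender), which is what makes the sub-second synthesis time possible.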
|
31
|
Deep Learning Model for Grading Metastatic Epidural Spinal Cord Compression on Staging CT. Cancers (Basel) 2022; 14:cancers14133219. [PMID: 35804990 PMCID: PMC9264856 DOI: 10.3390/cancers14133219] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/10/2022] [Revised: 06/21/2022] [Accepted: 06/24/2022] [Indexed: 02/02/2023] Open
Abstract
Background: Metastatic epidural spinal cord compression (MESCC) is a disastrous complication of advanced malignancy. Deep learning (DL) models for automatic MESCC classification on staging CT were developed to aid earlier diagnosis. Methods: This retrospective study included 444 CT staging studies from 185 patients with suspected MESCC who underwent MRI spine studies within 60 days of the CT studies. The DL model training/validation dataset consisted of 316/358 (88%) and the test set of 42/358 (12%) CT studies. Training/validation and test datasets were labeled in consensus by two subspecialized radiologists (6 and 11 years of experience) using the MRI studies as the reference standard. Test sets were labeled by the developed DL models and four radiologists (2−7 years of experience) for comparison. Results: DL models showed almost perfect interobserver agreement for classification of CT spine images into normal, low, and high-grade MESCC, with kappas ranging from 0.873 to 0.911 (p < 0.001). The DL models (lowest κ = 0.873, 95% CI 0.858−0.887) also showed superior interobserver agreement compared to two of the four radiologists for three-class classification, including a specialist (κ = 0.820, 95% CI 0.803−0.837) and a general radiologist (κ = 0.726, 95% CI 0.706−0.747), both p < 0.001. Conclusion: DL models for MESCC classification on CT showed comparable or superior interobserver agreement to radiologists and could be used to aid earlier diagnosis.
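Interobserver agreement of the kind reported here is quantified with a chance-corrected kappa. As one common variant, a minimal Cohen's-kappa sketch for two raters over invented three-class MESCC labels (the study's exact agreement statistic may differ):

```python
from collections import Counter

def cohens_kappa(r1, r2):
    """Cohen's kappa for two raters over the same items:
    (observed agreement - chance agreement) / (1 - chance agreement)."""
    n = len(r1)
    po = sum(a == b for a, b in zip(r1, r2)) / n
    c1, c2 = Counter(r1), Counter(r2)
    pe = sum(c1[k] * c2[k] for k in set(c1) | set(c2)) / (n * n)
    return (po - pe) / (1 - pe)

# toy three-class ratings: normal / low-grade / high-grade MESCC
rater_a = ["normal", "low", "high", "normal", "low", "high", "normal", "normal"]
rater_b = ["normal", "low", "high", "normal", "high", "high", "normal", "low"]
kappa = cohens_kappa(rater_a, rater_b)
# ≈ 0.619 for these toy ratings
```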
|
32
|
Liberini V, Laudicella R, Balma M, Nicolotti DG, Buschiazzo A, Grimaldi S, Lorenzon L, Bianchi A, Peano S, Bartolotta TV, Farsad M, Baldari S, Burger IA, Huellner MW, Papaleo A, Deandreis D. Radiomics and artificial intelligence in prostate cancer: new tools for molecular hybrid imaging and theragnostics. Eur Radiol Exp 2022; 6:27. [PMID: 35701671 PMCID: PMC9198151 DOI: 10.1186/s41747-022-00282-0] [Citation(s) in RCA: 22] [Impact Index Per Article: 11.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/18/2022] [Accepted: 04/20/2022] [Indexed: 11/21/2022] Open
Abstract
In prostate cancer (PCa), the use of new radiopharmaceuticals has improved the accuracy of diagnosis and staging, refined surveillance strategies, and introduced specific and personalized radioreceptor therapies. Nuclear medicine therefore holds great promise for improving the quality of life of PCa patients by managing and processing a vast amount of molecular imaging data and beyond, using a multi-omics approach and improving patients’ risk stratification for tailored medicine. Artificial intelligence (AI) and radiomics may allow clinicians to improve the overall efficiency and accuracy of using these “big data” in both the diagnostic and theragnostic fields: from technical aspects (such as semi-automation of tumor segmentation, image reconstruction, and interpretation) to clinical outcomes, fostering a deeper understanding of the molecular environment of PCa, refining personalized treatment strategies, and increasing the ability to predict outcomes. This systematic review describes the current literature on AI and radiomics applied to molecular imaging of prostate cancer.
Affiliation(s)
- Virginia Liberini
- Medical Physiopathology - A.O.U. Città della Salute e della Scienza di Torino, Division of Nuclear Medicine, Department of Medical Science, University of Torino, 10126, Torino, Italy. .,Nuclear Medicine Department, S. Croce e Carle Hospital, 12100, Cuneo, Italy.
| | - Riccardo Laudicella
- Department of Nuclear Medicine, University Hospital Zurich, University of Zurich, 8006, Zurich, Switzerland.,Nuclear Medicine Unit, Department of Biomedical and Dental Sciences and of Morpho-Functional Imaging, University of Messina, 98125, Messina, Italy.,Nuclear Medicine Unit, Fondazione Istituto G. Giglio, Ct.da Pietrapollastra Pisciotto, Cefalù, Palermo, Italy
| | - Michele Balma
- Nuclear Medicine Department, S. Croce e Carle Hospital, 12100, Cuneo, Italy
| | | | - Ambra Buschiazzo
- Nuclear Medicine Department, S. Croce e Carle Hospital, 12100, Cuneo, Italy
| | - Serena Grimaldi
- Medical Physiopathology - A.O.U. Città della Salute e della Scienza di Torino, Division of Nuclear Medicine, Department of Medical Science, University of Torino, 10126, Torino, Italy
| | - Leda Lorenzon
- Medical Physics Department, Central Bolzano Hospital, 39100, Bolzano, Italy
| | - Andrea Bianchi
- Nuclear Medicine Department, S. Croce e Carle Hospital, 12100, Cuneo, Italy
| | - Simona Peano
- Nuclear Medicine Department, S. Croce e Carle Hospital, 12100, Cuneo, Italy
| | | | - Mohsen Farsad
- Nuclear Medicine, Central Hospital Bolzano, 39100, Bolzano, Italy
| | - Sergio Baldari
- Nuclear Medicine Unit, Department of Biomedical and Dental Sciences and of Morpho-Functional Imaging, University of Messina, 98125, Messina, Italy
| | - Irene A Burger
- Department of Nuclear Medicine, University Hospital Zurich, University of Zurich, 8006, Zurich, Switzerland.,Department of Nuclear Medicine, Kantonsspital Baden, 5004, Baden, Switzerland
| | - Martin W Huellner
- Department of Nuclear Medicine, University Hospital Zurich, University of Zurich, 8006, Zurich, Switzerland
| | - Alberto Papaleo
- Nuclear Medicine Department, S. Croce e Carle Hospital, 12100, Cuneo, Italy
| | - Désirée Deandreis
- Medical Physiopathology - A.O.U. Città della Salute e della Scienza di Torino, Division of Nuclear Medicine, Department of Medical Science, University of Torino, 10126, Torino, Italy
| |
|
33
|
Hallinan JTPD, Zhu L, Zhang W, Lim DSW, Baskar S, Low XZ, Yeong KY, Teo EC, Kumarakulasinghe NB, Yap QV, Chan YH, Lin S, Tan JH, Kumar N, Vellayappan BA, Ooi BC, Quek ST, Makmur A. Deep Learning Model for Classifying Metastatic Epidural Spinal Cord Compression on MRI. Front Oncol 2022; 12:849447. [PMID: 35600347 PMCID: PMC9114468 DOI: 10.3389/fonc.2022.849447] [Citation(s) in RCA: 9] [Impact Index Per Article: 4.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/06/2022] [Accepted: 03/18/2022] [Indexed: 11/13/2022] Open
Abstract
Background: Metastatic epidural spinal cord compression (MESCC) is a devastating complication of advanced cancer. A deep learning (DL) model for automated MESCC classification on MRI could aid earlier diagnosis and referral. Purpose: To develop a DL model for automated classification of MESCC on MRI. Materials and Methods: Patients with known MESCC diagnosed on MRI between September 2007 and September 2017 were eligible. MRI studies with instrumentation, suboptimal image quality, and non-thoracic regions were excluded. Axial T2-weighted images were utilized. The internal dataset split was 82% and 18% for the training/validation and test sets, respectively. External testing was also performed. Internal training/validation data were labeled using the Bilsky MESCC classification by a musculoskeletal radiologist (10 years of experience) and a neuroradiologist (5 years of experience). These labels were used to train a DL model utilizing a prototypical convolutional neural network. Internal and external test sets were labeled by the musculoskeletal radiologist as the reference standard. For assessment of DL model performance and interobserver variability, test sets were labeled independently by the neuroradiologist, a spine surgeon (5 years of experience), and a radiation oncologist (11 years of experience). Inter-rater agreement (Gwet’s kappa) and sensitivity/specificity were calculated. Results: Overall, 215 MRI spine studies were analyzed [164 patients, mean age = 62 ± 12 (SD)], with 177 (82%) used for training/validation and 38 (18%) for internal testing. For internal testing, the DL model and specialists all showed almost perfect agreement (kappas = 0.92−0.98, p < 0.001) for dichotomous Bilsky classification (low versus high grade) compared to the reference standard. Similar performance was seen for external testing on a set of 32 MRI spine studies, with the DL model and specialists all showing almost perfect agreement (kappas = 0.94−0.95, p < 0.001) compared to the reference standard. Conclusion: A DL model showed comparable agreement to a subspecialist radiologist and clinical specialists for the classification of MESCC and could optimize earlier diagnosis and surgical referral.
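This study reports agreement using Gwet's kappa (AC1), which corrects observed agreement by a prevalence-based chance term that behaves better than Cohen's kappa under skewed class distributions. A minimal two-rater sketch following the standard AC1 definition, on invented dichotomized Bilsky grades:

```python
def gwet_ac1(r1, r2):
    """Gwet's AC1 for two raters (requires at least two categories):
    AC1 = (pa - pe) / (1 - pe), with pe based on average category prevalence."""
    n = len(r1)
    cats = sorted(set(r1) | set(r2))
    pa = sum(a == b for a, b in zip(r1, r2)) / n
    # category prevalence, averaged over the two raters
    pi = {c: (r1.count(c) + r2.count(c)) / (2 * n) for c in cats}
    pe = sum(p * (1 - p) for p in pi.values()) / (len(cats) - 1)
    return (pa - pe) / (1 - pe)

# toy dichotomized Bilsky grades (illustrative only)
radiologist = ["low", "low", "high", "low", "high", "low", "low", "high"]
dl_model    = ["low", "low", "high", "low", "high", "low", "low", "low"]
ac1 = gwet_ac1(radiologist, dl_model)
# ≈ 0.781 for these toy ratings
```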
Affiliation(s)
- James Thomas Patrick Decourcy Hallinan
- Department of Diagnostic Imaging, National University Hospital, Singapore, Singapore.,Department of Diagnostic Radiology, Yong Loo Lin School of Medicine, National University of Singapore, Singapore, Singapore
| | - Lei Zhu
- NUS Graduate School, Integrative Sciences and Engineering Programme, National University of Singapore, Singapore, Singapore
| | - Wenqiao Zhang
- Department of Computer Science, School of Computing, National University of Singapore, Singapore, Singapore
| | - Desmond Shi Wei Lim
- Department of Diagnostic Imaging, National University Hospital, Singapore, Singapore.,Department of Diagnostic Radiology, Yong Loo Lin School of Medicine, National University of Singapore, Singapore, Singapore
| | - Sangeetha Baskar
- Department of Diagnostic Imaging, National University Hospital, Singapore, Singapore
| | - Xi Zhen Low
- Department of Diagnostic Imaging, National University Hospital, Singapore, Singapore.,Department of Diagnostic Radiology, Yong Loo Lin School of Medicine, National University of Singapore, Singapore, Singapore
| | - Kuan Yuen Yeong
- Department of Radiology, Ng Teng Fong General Hospital, Singapore, Singapore
| | - Ee Chin Teo
- Department of Diagnostic Imaging, National University Hospital, Singapore, Singapore
| | | | - Qai Ven Yap
- Biostatistics Unit, Yong Loo Lin School of Medicine, Singapore, Singapore
| | - Yiong Huak Chan
- Biostatistics Unit, Yong Loo Lin School of Medicine, Singapore, Singapore
| | - Shuxun Lin
- Division of Spine Surgery, Department of Orthopaedic Surgery, Ng Teng Fong General Hospital, Singapore, Singapore
| | - Jiong Hao Tan
- University Spine Centre, Department of Orthopaedic Surgery, National University Health System, Singapore, Singapore
| | - Naresh Kumar
- University Spine Centre, Department of Orthopaedic Surgery, National University Health System, Singapore, Singapore
| | - Balamurugan A Vellayappan
- Department of Radiation Oncology, National University Cancer Institute Singapore, National University Hospital, Singapore, Singapore
| | - Beng Chin Ooi
- Department of Computer Science, School of Computing, National University of Singapore, Singapore, Singapore
| | - Swee Tian Quek
- Department of Diagnostic Imaging, National University Hospital, Singapore, Singapore.,Department of Diagnostic Radiology, Yong Loo Lin School of Medicine, National University of Singapore, Singapore, Singapore
| | - Andrew Makmur
- Department of Diagnostic Imaging, National University Hospital, Singapore, Singapore.,Department of Diagnostic Radiology, Yong Loo Lin School of Medicine, National University of Singapore, Singapore, Singapore
| |
|
34
|
Improved distinct bone segmentation from upper-body CT using binary-prediction-enhanced multi-class inference. Int J Comput Assist Radiol Surg 2022; 17:2113-2120. [PMID: 35595948 PMCID: PMC9515055 DOI: 10.1007/s11548-022-02650-y] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/11/2022] [Accepted: 04/20/2022] [Indexed: 11/28/2022]
Abstract
Purpose: Automated distinct bone segmentation has many applications in planning and navigation tasks. 3D U-Nets have previously been used to segment distinct bones in the upper body, but their performance is not yet optimal; their most substantial source of error lies not in confusing one bone for another, but in confusing background with bone tissue. Methods: In this work, we propose binary-prediction-enhanced multi-class (BEM) inference, which takes into account an additional binary background/bone-tissue prediction to improve multi-class distinct bone segmentation. We evaluate the method using different ways of obtaining the binary prediction, contrasting a two-stage approach with four networks that have two segmentation heads. We perform our experiments on two datasets: an in-house dataset comprising 16 upper-body CT scans with voxelwise labelling into 126 distinct classes, and a public dataset containing 50 synthetic CT scans with 41 different classes. Results: The most successful network with two segmentation heads achieves a class-median Dice coefficient of 0.85 on cross-validation with the upper-body CT dataset. These results outperform both our previously published 3D U-Net baseline with standard inference and previously reported results from other groups. On the synthetic dataset, we also obtain improved results when using BEM inference. Conclusion: Using a binary bone-tissue/background prediction as guidance during inference improves distinct bone segmentation from upper-body CT scans and from the synthetic dataset. The results are robust to multiple ways of obtaining the bone-tissue segmentation and hold for the two-stage approach as well as for networks with two segmentation heads.
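The core idea of BEM inference — letting a binary bone-tissue/background prediction gate the multi-class output — can be sketched per voxel with toy probabilities. This is a schematic reading of the abstract, not the paper's exact fusion rule:

```python
def bem_label(binary_bone_prob, class_probs, bone_threshold=0.5):
    """Schematic BEM inference for one voxel.
    class_probs[0] is the background class; the rest are distinct bones.
    If the binary head says background, output background; otherwise pick
    the most likely *bone* class, ignoring the background score."""
    if binary_bone_prob < bone_threshold:
        return 0  # binary head says background
    bone_probs = class_probs[1:]
    return 1 + bone_probs.index(max(bone_probs))

# voxel where plain multi-class argmax would pick background (0.4),
# but the binary head is confident (0.9) the voxel is bone tissue
label = bem_label(0.9, [0.4, 0.35, 0.25])
# → 1 (a bone class), whereas plain argmax would have returned 0
```

This illustrates why the method targets exactly the error mode named above: background/bone confusion is resolved by the binary head, while the multi-class head only has to decide *which* bone.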
|
35
|
Hildebrandt MG, Naghavi-Behzad M, Vogsen M. A role of FDG-PET/CT for response evaluation in metastatic breast cancer? Semin Nucl Med 2022; 52:520-530. [PMID: 35525631 DOI: 10.1053/j.semnuclmed.2022.03.004] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/14/2022] [Accepted: 03/27/2022] [Indexed: 01/19/2023]
Abstract
Breast cancer prognosis is steadily improving due to early detection of primary cancer in screening programs and revolutionary treatment developments. In the metastatic setting, therapy improvements are rendering breast cancer a chronic disease. Although FDG-PET/CT has emerged as a highly accurate method for staging metastatic breast cancer, there has been no change in response evaluation methods for decades. FDG-PET/CT has shown high prognostic value in patients with metastatic breast cancer when quantitative PET methods are used. It has also shown a higher predictive value than conventional CT when applying the respective response evaluation criteria, RECIST and PERCIST. Response categorization using FDG-PET/CT is more sensitive in detecting progressive and regressive disease, whereas conventional imaging such as CT and bone scintigraphy more often deems disease stable. These findings reflect the higher accuracy of FDG-PET/CT for response evaluation in this patient group. But does the higher accuracy of FDG-PET/CT translate into a patient benefit when it is implemented for monitoring response to palliative treatment? Evidence of a survival benefit comes from a retrospective study indicating the superiority of FDG-PET/CT over conventional imaging for response evaluation in metastatic breast cancer patients. The survival benefit seems to result from earlier detection of progression with FDG-PET/CT than with conventional imaging, leading to an earlier change in treatment with potentially better efficacy of the subsequent treatment line. FDG-PET/CT can be used semiquantitatively as suggested in PERCIST. However, we still need to improve clinically applicable methods based on neural network modeling to better integrate the quantitative information in a smart and standardized way, enabling relevant comparability between scans, patients, and institutions. Such innovation is warranted to support imaging specialists in diagnostic response assessment. Prospective multicenter studies analyzing patients’ survival, quality of life, and societal and patient costs of replacing conventional imaging with FDG-PET/CT are needed before firm conclusions can be drawn on which type of scan to recommend in future clinical guidelines.
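The semiquantitative PERCIST categorization mentioned above hinges on the relative change in lesion SULpeak between baseline and follow-up, with ±30% as the key thresholds. A greatly simplified sketch of that decision rule — the full criteria also include absolute-change, measurability, and new-lesion rules omitted here:

```python
def percist_category(sul_baseline, sul_followup):
    """Simplified PERCIST-style call from the +/-30% SULpeak change only
    (complete metabolic response and new-lesion rules are not modeled)."""
    change = (sul_followup - sul_baseline) / sul_baseline
    if change <= -0.30:
        return "partial metabolic response"
    if change >= 0.30:
        return "progressive metabolic disease"
    return "stable metabolic disease"

cat = percist_category(8.0, 4.8)  # a 40% drop in SULpeak
# → "partial metabolic response"
```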
Affiliation(s)
- Malene Grubbe Hildebrandt
- Department of Nuclear Medicine, Odense University Hospital, Odense, Denmark; Department of Clinical Research, University of Southern Denmark, Odense, Denmark; Center for Personalized Response Monitoring in Oncology, PREMIO, Odense University Hospital, Odense, Denmark; Center for Innovative Medical Technology, CIMT, Odense University Hospital, Odense, Denmark.
| | - Mohammad Naghavi-Behzad
- Department of Nuclear Medicine, Odense University Hospital, Odense, Denmark; Department of Clinical Research, University of Southern Denmark, Odense, Denmark; Center for Personalized Response Monitoring in Oncology, PREMIO, Odense University Hospital, Odense, Denmark
| | - Marianne Vogsen
- Department of Nuclear Medicine, Odense University Hospital, Odense, Denmark; Department of Clinical Research, University of Southern Denmark, Odense, Denmark; Center for Personalized Response Monitoring in Oncology, PREMIO, Odense University Hospital, Odense, Denmark; Department of Oncology, Odense University Hospital, Odense, Denmark
| |
|
36
|
Piri R, Nøddeskou-Fink AH, Gerke O, Larsson M, Edenbrandt L, Enqvist O, Høilund-Carlsen PF, Stochkendahl MJ. PET/CT imaging of spinal inflammation and microcalcification in patients with low back pain: A pilot study on the quantification by artificial intelligence-based segmentation. Clin Physiol Funct Imaging 2022; 42:225-232. [PMID: 35319166 PMCID: PMC9322590 DOI: 10.1111/cpf.12751] [Citation(s) in RCA: 5] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/11/2021] [Accepted: 03/11/2022] [Indexed: 12/29/2022]
Abstract
Background: Current imaging modalities are often incapable of identifying nociceptive sources of low back pain (LBP). We aimed to characterize these by means of positron emission tomography/computed tomography (PET/CT) of the lumbar spine region, applying the tracers 18F-fluorodeoxyglucose (FDG) and 18F-sodium fluoride (NaF) to target inflammation and active microcalcification, respectively. Methods: Using artificial intelligence (AI)-based quantification, we compared PET findings in two sex- and age-matched groups: a case group of seven males and five females, mean age 45 ± 14 years, with ongoing LBP, and a similar control group of 12 pain-free individuals. PET/CT scans were segmented into three distinct volumes of interest (VOIs): lumbar vertebral bodies, facet joints, and intervertebral discs. Maximum, mean, and total standardized uptake values (SUVmax, SUVmean, and SUVtotal) for FDG and NaF uptake in the three VOIs were measured and compared between groups. Holm–Bonferroni correction was applied to adjust for multiple testing. Results: FDG uptake was slightly higher in most locations of the LBP group, including higher SUVmean in the intervertebral discs (0.96 ± 0.34 vs. 0.69 ± 0.15). All NaF uptake values were higher in cases, including higher SUVmax in the intervertebral discs (11.63 ± 3.29 vs. 9.45 ± 1.32) and facet joints (14.98 ± 6.55 vs. 10.60 ± 2.97). Conclusion: The observed intergroup differences suggest acute inflammation and microcalcification as possible nociceptive causes of LBP. AI-based quantification of relevant lumbar VOIs in PET/CT scans of LBP patients and controls appears to be feasible. These promising early findings warrant further investigation and confirmation.
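The Holm–Bonferroni correction used here is a step-down procedure: p-values are tested in ascending order against increasingly lenient thresholds α/(m−k), and testing stops at the first failure. A minimal sketch with illustrative p-values (not the study's data):

```python
def holm_bonferroni(pvals, alpha=0.05):
    """Holm-Bonferroni step-down procedure: returns a reject/keep flag
    per hypothesis, in the original order of `pvals`."""
    m = len(pvals)
    order = sorted(range(m), key=lambda i: pvals[i])  # ascending p-values
    reject = [False] * m
    for rank, i in enumerate(order):
        if pvals[i] <= alpha / (m - rank):
            reject[i] = True
        else:
            break  # step-down: once one test fails, all larger p-values fail
    return reject

# four hypothetical p-values for uptake comparisons
flags = holm_bonferroni([0.004, 0.030, 0.020, 0.200])
# → [True, False, False, False]
```

Here only the smallest p-value (0.004 ≤ 0.05/4) survives; the next one in order (0.020) misses its threshold of 0.05/3, so testing stops.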
Affiliation(s)
- Reza Piri
- Department of Nuclear Medicine, Odense University Hospital, Odense, Denmark.,Department of Clinical Research, University of Southern Denmark, Odense, Denmark
| | | | - Oke Gerke
- Department of Nuclear Medicine, Odense University Hospital, Odense, Denmark.,Department of Clinical Research, University of Southern Denmark, Odense, Denmark
| | | | - Lars Edenbrandt
- Department of Molecular and Clinical Medicine, Sahlgrenska Academy, University of Gothenburg, Gothenburg, Sweden.,Department of Clinical Physiology, Region Västra Götaland, Sahlgrenska University Hospital, Gothenburg, Sweden
| | - Olof Enqvist
- Eigenvision AB, Malmö, Sweden.,Department of Electrical Engineering, Chalmers University of Technology, Gothenburg, Sweden
| | - Poul-Flemming Høilund-Carlsen
- Department of Nuclear Medicine, Odense University Hospital, Odense, Denmark.,Department of Clinical Research, University of Southern Denmark, Odense, Denmark
| | - Mette J Stochkendahl
- Department of Sports Science and Clinical Biomechanics, University of Southern Denmark, Odense, Denmark.,Chiropractic Knowledge Hub, Odense, Denmark
| |
|
37
|
Xiong X, Smith BJ, Graves SA, Sunderland JJ, Graham MM, Gross BA, Buatti JM, Beichel RR. Quantification of uptake in pelvis F-18 FLT PET-CT images using a 3D localization and segmentation CNN. Med Phys 2022; 49:1585-1598. [PMID: 34982836 PMCID: PMC9447843 DOI: 10.1002/mp.15440] [Citation(s) in RCA: 4] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/15/2021] [Revised: 12/01/2021] [Accepted: 12/02/2021] [Indexed: 11/12/2022] Open
Abstract
PURPOSE The purpose of this work was to develop and validate a deep convolutional neural network (CNN) approach for the automated pelvis segmentation in computed tomography (CT) scans to enable the quantification of active pelvic bone marrow by means of Fluorothymidine F-18 (FLT) tracer uptake measurement in positron emission tomography (PET) scans. This quantification is a critical step in calculating bone marrow dose for radiopharmaceutical therapy clinical applications as well as external beam radiation doses. METHODS An approach for the combined localization and segmentation of the pelvis in CT volumes of varying sizes, ranging from full-body to pelvis CT scans, was developed that utilizes a novel CNN architecture in combination with a random sampling strategy. The method was validated on 34 planning CT scans and 106 full-body FLT PET-CT scans using a cross-validation strategy. Specifically, two different training and CNN application options were studied, quantitatively assessed, and statistically compared. RESULTS The proposed method was able to successfully locate and segment the pelvis in all test cases. On all data sets, an average Dice coefficient of 0.9396 ± 0.0182 or better was achieved. The relative tracer uptake measurement error ranged between 0.065% and 0.204%. The proposed approach is time-efficient and shows a reduction in runtime of up to 95% compared to a standard U-Net-based approach without a localization component. CONCLUSIONS The proposed method enables the efficient calculation of FLT uptake in the pelvis. Thus, it represents a valuable tool to facilitate bone marrow preserving adaptive radiation therapy and radiopharmaceutical dose calculation. Furthermore, the method can be adapted to process other bone structures as well as organs.
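The Dice coefficient used to evaluate segmentation quality here is twice the overlap between predicted and reference masks divided by their summed sizes. A minimal sketch on toy binary masks:

```python
def dice(mask_a, mask_b):
    """Dice coefficient between two binary masks (flattened voxel lists):
    2 * |A intersect B| / (|A| + |B|)."""
    inter = sum(a and b for a, b in zip(mask_a, mask_b))
    size = sum(mask_a) + sum(mask_b)
    return 2 * inter / size if size else 1.0

pred  = [1, 1, 1, 0, 0, 1]  # toy predicted segmentation
truth = [1, 1, 0, 0, 1, 1]  # toy reference segmentation
d = dice(pred, truth)
# → 2*3 / (4 + 4) = 0.75
```

A score of 0.94, as reported above, means predicted and reference pelvis masks overlap almost completely.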
Affiliation(s)
- Xiaofan Xiong
- Department of Biomedical Engineering, The University of Iowa, Iowa City, IA 52242
| | - Brian J. Smith
- Department of Biostatistics, The University of Iowa, Iowa City, IA 52242
| | - Stephen A. Graves
- Department of Radiology, The University of Iowa, Iowa City, IA 52242
| | | | - Michael M. Graham
- Department of Radiology, The University of Iowa, Iowa City, IA 52242
| | - Brandie A. Gross
- Department of Radiation Oncology, University of Iowa Hospitals and Clinics, Iowa City, IA 52242
| | - John M. Buatti
- Department of Radiation Oncology, University of Iowa Hospitals and Clinics, Iowa City, IA 52242
| | - Reinhard R. Beichel
- Department of Electrical and Computer Engineering, The University of Iowa, Iowa City, IA 52242
| |
|
38
|
Marwa F, Zahzah EH, Bouallegue K, Machhout M. Deep learning based neural network application for automatic ultrasonic computed tomographic bone image segmentation. MULTIMEDIA TOOLS AND APPLICATIONS 2022; 81:13537-13562. [PMID: 35194385 PMCID: PMC8853291 DOI: 10.1007/s11042-022-12322-3] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 07/21/2020] [Revised: 09/30/2021] [Accepted: 01/17/2022] [Indexed: 06/14/2023]
Abstract
Deep-learning techniques have led to technological progress in the area of medical imaging segmentation, especially in the ultrasound domain. The main goal of this study is to optimize a deep-learning-based neural network architecture for automatic segmentation of Ultrasonic Computed Tomography (USCT) bone images in a short processing time. The proposed method is based on an end-to-end neural network architecture. First, the novelty is shown by the improvement of the Variable Structure Model of Neuron (VSMN), which is trained for both USCT noise removal and dataset augmentation. Second, a VGG-SegNet neural network architecture is trained and tested for automatic bone segmentation on new USCT images not seen before. We also offer a free USCT dataset. The proposed model is implemented on both the CPU and the GPU, achieving accuracies of 97.38% and 96% for training and validation, respectively, and high segmentation accuracy for testing with a small error of 0.006, in a short processing time. The suggested method demonstrates its ability to augment USCT data and then automatically segment USCT bone structures, achieving excellent accuracy that outperforms the state of the art.
Affiliation(s)
- Fradi Marwa
- Physics Department, Faculty of Sciences of Monastir, Monastir University, Monastir, Tunisia
- Laboratory of Informatics, Image and Interaction (L3i, France), La Rochelle University, La Rochelle, France
- El-hadi Zahzah
- Laboratory of Informatics, Image and Interaction (L3i, France), La Rochelle University, La Rochelle, France
- Mohsen Machhout
- Physics Department, Faculty of Sciences of Monastir, Monastir University, Monastir, Tunisia
39
Li MD, Ahmed SR, Choy E, Lozano-Calderon SA, Kalpathy-Cramer J, Chang CY. Artificial intelligence applied to musculoskeletal oncology: a systematic review. Skeletal Radiol 2022; 51:245-256. [PMID: 34013447 DOI: 10.1007/s00256-021-03820-w]
Abstract
Developments in artificial intelligence have the potential to improve the care of patients with musculoskeletal tumors. We performed a systematic review of the published scientific literature to identify the current state of the art of artificial intelligence applied to musculoskeletal oncology, including both primary and metastatic tumors, and across the radiology, nuclear medicine, pathology, clinical research, and molecular biology literature. Through this search, we identified 252 primary research articles, of which 58 used deep learning and 194 used other machine learning techniques. Articles involving deep learning have mostly involved bone scintigraphy, histopathology, and radiologic imaging. Articles involving other machine learning techniques have mostly involved transcriptomic analyses, radiomics, and clinical outcome prediction models using medical records. These articles predominantly present proof-of-concept work, other than the automated bone scan index for bone metastasis quantification, which has translated to clinical workflows in some regions. We systematically review and discuss this literature, highlight opportunities for multidisciplinary collaboration, and identify potentially clinically useful topics with a relative paucity of research attention. Musculoskeletal oncology is an inherently multidisciplinary field, and future research will need to integrate and synthesize noisy siloed data from across clinical, imaging, and molecular datasets. Building the data infrastructure for collaboration will help to accelerate progress towards making artificial intelligence truly useful in musculoskeletal oncology.
Affiliation(s)
- Matthew D Li
- Division of Musculoskeletal Imaging and Intervention, Department of Radiology, Massachusetts General Hospital, Harvard Medical School, Boston, MA, USA; Athinoula A. Martinos Center for Biomedical Imaging, Department of Radiology, Massachusetts General Hospital, Harvard Medical School, Boston, MA, USA
- Syed Rakin Ahmed
- Athinoula A. Martinos Center for Biomedical Imaging, Department of Radiology, Massachusetts General Hospital, Harvard Medical School, Boston, MA, USA; Harvard Graduate Program in Biophysics, Harvard University, Cambridge, MA, USA; Geisel School of Medicine at Dartmouth, Dartmouth College, Hanover, NH, USA
- Edwin Choy
- Division of Hematology Oncology, Department of Medicine, Massachusetts General Hospital, Harvard Medical School, Boston, MA, USA
- Santiago A Lozano-Calderon
- Department of Orthopedic Surgery, Massachusetts General Hospital, Harvard Medical School, Boston, MA, USA
- Jayashree Kalpathy-Cramer
- Athinoula A. Martinos Center for Biomedical Imaging, Department of Radiology, Massachusetts General Hospital, Harvard Medical School, Boston, MA, USA
- Connie Y Chang
- Division of Musculoskeletal Imaging and Intervention, Department of Radiology, Massachusetts General Hospital, Harvard Medical School, Boston, MA, USA
40
Computational and image processing methods for analysis and automation of anatomical alignment and joint spacing in reconstructive surgery. Int J Comput Assist Radiol Surg 2022; 17:541-551. [DOI: 10.1007/s11548-021-02548-1]
41
Aiello M, Baldi D, Esposito G, Valentino M, Randon M, Salvatore M, Cavaliere C. Evaluation of AI-Based Segmentation Tools for COVID-19 Lung Lesions on Conventional and Ultra-low Dose CT Scans. Dose Response 2022; 20:15593258221082896. [PMID: 35422680 PMCID: PMC9002358 DOI: 10.1177/15593258221082896]
Abstract
A reliable diagnosis and accurate monitoring are pivotal steps for the treatment and prevention of COVID-19. Chest computed tomography (CT) has been considered a crucial diagnostic imaging technique for assessing injury from the viral pneumonia. Furthermore, automating the segmentation of lung alterations helps to speed up the diagnosis and lighten radiologists' workload. Given the need for frequent monitoring of the pathology, ultra-low dose (ULD) chest CT protocols have been implemented to drastically reduce the radiation burden. Unfortunately, the available AI technologies have not been trained and validated on ULD-CT data, and their applicability deserves careful evaluation. Therefore, this work aims to compare the results of available AI tools (BCUnet, CORADS AI, NVIDIA CLARA Train SDK and CT Pneumonia Analysis) on a dataset of 73 CT examinations acquired with both conventional dose (CD) and ULD protocols. The COVID-19 volume percentage resulting from each tool was statistically compared. This study demonstrated high comparability of the results on CD-CT and ULD-CT data among the four AI tools, with high correlation between the results obtained on both protocols (R > .68, P < .001, for all AI tools).
Affiliation(s)
- Marika Valentino
- Istituto di Scienze Applicate e Sistemi Intelligenti “Eduardo Caianiello” (ISASI-CNR), Pozzuoli, Italy
- Università Degli Studi di Napoli Federico II, Dip. di Ingegneria Elettrica e Delle Tecnologie Dell'Informazione, Italy
42
Bori E, Pancani S, Vigliotta S, Innocenti B. Validation and accuracy evaluation of automatic segmentation for knee joint pre-planning. Knee 2021; 33:275-281. [PMID: 34739958 DOI: 10.1016/j.knee.2021.10.016]
Abstract
BACKGROUND Proper use of three-dimensional (3D) models generated from medical imaging data in clinical preoperative planning, training and consultation depends on the previously demonstrated accuracy with which they replicate the patient's anatomy. Therefore, this study investigated the dimensional accuracy of 3D reconstructions of the knee joint generated from computed tomography scans via automatic segmentation by comparing them with 3D models generated through manual segmentation. METHODS Three unpaired, fresh-frozen right legs were investigated. Three-dimensional models of the femur and the tibia of each leg were manually segmented using commercial software and compared in terms of geometrical accuracy with the 3D models automatically segmented using proprietary software. Bony landmarks were identified and used to calculate clinically relevant distances: femoral epicondylar distance; posterior femoral epicondylar distance; femoral trochlear groove length; tibial knee center tubercle distance (TKCTD). Pearson's correlation coefficient and Bland and Altman plots were used to evaluate the level of agreement between measured distances. RESULTS Differences between parameters measured on manually and automatically segmented 3D models were below 1 mm (range: -0.06 to 0.72 mm), except for TKCTD (between 1.00 and 1.40 mm in two specimens). In addition, there was a strong, significant correlation between measurements. CONCLUSIONS The results obtained are comparable to those reported in previous studies investigating the accuracy of bone 3D reconstruction. Automatic segmentation techniques can be used to quickly reconstruct reliable 3D models of bone anatomy, and these results may help broaden the adoption of this technology in preoperative and operative settings, where it has shown considerable potential.
Affiliation(s)
- Edoardo Bori
- BEAMS Department, Université Libre de Bruxelles, Bruxelles, Belgium.
43
Paravastu SS, Hasani N, Farhadi F, Collins MT, Edenbrandt L, Summers RM, Saboury B. Applications of Artificial Intelligence in 18F-Sodium Fluoride Positron Emission Tomography/Computed Tomography: Current State and Future Directions. PET Clin 2021; 17:115-135. [PMID: 34809861 DOI: 10.1016/j.cpet.2021.09.012]
Abstract
This review discusses the current state of artificial intelligence (AI) in 18F-NaF PET/CT imaging and its potential applications in diagnosis, prognostication, and improvement of care for patients with bone diseases. Emphasis is placed on the role of AI algorithms in CT bone segmentation, given their prevalence in medical imaging and their utility in extracting spatial information in combined PET/CT studies.
Affiliation(s)
- Sriram S Paravastu
- Department of Radiology and Imaging Sciences, Clinical Center, National Institutes of Health (NIH), 9000 Rockville Pike, Building 10, Room 1C455, Bethesda, MD 20892, USA; Skeletal Disorders and Mineral Homeostasis Section, National Institute of Dental and Craniofacial Research, National Institutes of Health (NIH), 30 Convent Dr., Building 30, Room 228 MSC 4320, Bethesda, MD 20892, USA
- Navid Hasani
- Department of Radiology and Imaging Sciences, Clinical Center, National Institutes of Health (NIH), 9000 Rockville Pike, Building 10, Room 1C455, Bethesda, MD 20892, USA; University of Queensland Faculty of Medicine, Ochsner Clinical School, New Orleans, LA 70121, USA
- Faraz Farhadi
- Department of Radiology and Imaging Sciences, Clinical Center, National Institutes of Health (NIH), 9000 Rockville Pike, Building 10, Room 1C455, Bethesda, MD 20892, USA; Geisel School of Medicine at Dartmouth, Hanover, NH 03755, USA
- Michael T Collins
- Skeletal Disorders and Mineral Homeostasis Section, National Institute of Dental and Craniofacial Research, National Institutes of Health (NIH), 30 Convent Dr., Building 30, Room 228 MSC 4320, Bethesda, MD 20892, USA
- Lars Edenbrandt
- Department of Clinical Physiology, Sahlgrenska University Hospital, Göteborg, Sweden
- Ronald M Summers
- Department of Radiology and Imaging Sciences, Clinical Center, National Institutes of Health (NIH), 9000 Rockville Pike, Building 10, Room 1C455, Bethesda, MD 20892, USA
- Babak Saboury
- Department of Radiology and Imaging Sciences, Clinical Center, National Institutes of Health (NIH), 9000 Rockville Pike, Building 10, Room 1C455, Bethesda, MD 20892, USA; Department of Computer Science and Electrical Engineering, University of Maryland- Baltimore County, Baltimore, MD, USA; Department of Radiology, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, USA.
44
Ma K, Harmon SA, Klyuzhin IS, Rahmim A, Turkbey B. Clinical Application of Artificial Intelligence in Positron Emission Tomography: Imaging of Prostate Cancer. PET Clin 2021; 17:137-143. [PMID: 34809863 DOI: 10.1016/j.cpet.2021.09.002]
Abstract
PET imaging with novel targeted tracers is commonly used in the clinical management of prostate cancer. The use of artificial intelligence (AI) in PET imaging is a relatively new approach. In this article, we review current trends and categorize the available research into quantification of tumor burden within the organ, evaluation of metastatic disease, and translational/supplemental research that aims to improve other AI research efforts.
Affiliation(s)
- Kevin Ma
- Artificial Intelligence Resource, Molecular Imaging Branch, NCI, NIH, Bethesda, MD, USA
- Stephanie A Harmon
- Artificial Intelligence Resource, Molecular Imaging Branch, NCI, NIH, Bethesda, MD, USA
- Ivan S Klyuzhin
- Department of Integrative Oncology, BC Cancer Research Institute, Vancouver, British Columbia, Canada
- Arman Rahmim
- Department of Integrative Oncology, BC Cancer Research Institute, Vancouver, British Columbia, Canada; Department of Radiology, University of British Columbia, Vancouver, British Columbia, Canada; Department of Physics, University of British Columbia, Vancouver, British Columbia, Canada
- Baris Turkbey
- Artificial Intelligence Resource, Molecular Imaging Branch, NCI, NIH, Bethesda, MD, USA.
45
Leydon P, O'Connell M, Greene D, Curran KM. Bone segmentation in contrast enhanced whole-body computed tomography. Biomed Phys Eng Express 2021; 8. [PMID: 34749353 DOI: 10.1088/2057-1976/ac37ab]
Abstract
Segmentation of bone regions allows for enhanced diagnostics, disease characterisation and treatment monitoring in CT imaging. In contrast-enhanced whole-body scans, accurate automatic segmentation is particularly difficult, as low-dose whole-body protocols reduce image quality and make contrast-enhanced regions harder to separate when relying on differences in pixel intensities. This paper outlines a U-net architecture with novel preprocessing techniques, based on the windowing of training data and the modification of sigmoid activation threshold selection, to successfully segment bone and bone-marrow regions from low-dose contrast-enhanced whole-body CT scans. The proposed method achieved mean Dice coefficients of 0.979 ± 0.02 and 0.965 ± 0.03 on two internal datasets and 0.934 ± 0.06 on an external test dataset. We have demonstrated that appropriate preprocessing is important for differentiating between bone and contrast dye, and that excellent results can be achieved with limited data.
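The Dice coefficients reported in this and several of the following entries can be computed directly from a pair of binary masks. A minimal sketch (the function name and the epsilon smoothing term are illustrative, not taken from any of the cited papers):

```python
import numpy as np

def dice_coefficient(pred, target, eps=1e-7):
    """Dice similarity coefficient between two binary masks.

    Dice = 2*|A ∩ B| / (|A| + |B|), ranging from 0 (no overlap)
    to 1 (perfect overlap). `eps` guards against two empty masks.
    """
    pred = np.asarray(pred, dtype=bool)
    target = np.asarray(target, dtype=bool)
    intersection = np.logical_and(pred, target).sum()
    return 2.0 * intersection / (pred.sum() + target.sum() + eps)
```

The same formula applies unchanged to 2D slices or 3D volumes, since only voxel counts enter the ratio.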
Affiliation(s)
- Patrick Leydon
- Applied Science, Limerick Institute of Technology, Moylish, Limerick, Ireland
- Martin O'Connell
- School of Medicine, University College Dublin, Dublin 4, Ireland
- Derek Greene
- School of Computer Science, University College Dublin, Dublin 4, Ireland
- Kathleen M Curran
- School of Medicine, University College Dublin, Dublin 4, Ireland
46
Deep learning takes the pain out of back breaking work - Automatic vertebral segmentation and attenuation measurement for osteoporosis. Clin Imaging 2021; 81:54-59. [PMID: 34598006 DOI: 10.1016/j.clinimag.2021.08.009]
Abstract
BACKGROUND Osteoporosis is an underdiagnosed and undertreated disease worldwide. Recent studies have highlighted the use of simple vertebral trabecular attenuation values for opportunistic osteoporosis screening. Meanwhile, machine learning has been used to accurately segment large parts of the human skeleton. PURPOSE To evaluate a fully automated deep learning-based method for lumbar vertebral segmentation and measurement of vertebral volumetric trabecular attenuation values. MATERIAL AND METHODS A deep learning-based method for automated segmentation of bones was retrospectively applied to non-contrast CT scans of 1008 patients (mean age 57 years, 472 female, 536 male). Each vertebral segmentation was automatically reduced by 7 mm in all directions in order to avoid cortical bone. The mean and median volumetric attenuation values from Th12 to L4 were obtained and plotted against patient age and sex. L1 values were further analyzed to facilitate comparison with previous studies. RESULTS The mean L1 attenuation values decreased linearly with age by -2.2 HU per year (age > 30, 95% CI: -2.4, -2.0, R2 = 0.3544). The mean L1 attenuation value of the entire population cohort was 140 HU ± 54. CONCLUSIONS With results closely matching those of previous studies, we believe that our fully automated deep learning-based method can be used to obtain lumbar volumetric trabecular attenuation values which can be used for opportunistic screening of osteoporosis in patients undergoing CT scans for other reasons.
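The measurement step described in this abstract — shrinking each vertebral mask by 7 mm in all directions to exclude cortical bone before averaging the attenuation — amounts to a morphological erosion followed by a masked mean. A minimal sketch under that reading; the function and variable names are hypothetical, not from the cited study:

```python
import numpy as np
from scipy.ndimage import binary_erosion

def trabecular_attenuation(ct_hu, vertebra_mask, spacing_mm, margin_mm=7.0):
    """Mean trabecular attenuation (HU) inside a vertebral segmentation.

    The mask is eroded by ~`margin_mm` in every direction so that the
    high-attenuation cortical shell is excluded, then the mean HU of the
    remaining core voxels is returned.
    """
    # Ellipsoidal structuring element with per-axis radii in voxel units,
    # so anisotropic voxel spacing is handled.
    radii = [max(1, int(round(margin_mm / s))) for s in spacing_mm]
    grids = np.ogrid[tuple(slice(-r, r + 1) for r in radii)]
    ball = sum((g / r) ** 2 for g, r in zip(grids, radii)) <= 1.0
    core = binary_erosion(vertebra_mask, structure=ball)
    if not core.any():
        raise ValueError("mask vanished after erosion; margin too large")
    return float(ct_hu[core].mean())
```

On a real scan, `ct_hu` would be the CT volume in Hounsfield units, `vertebra_mask` one vertebra's boolean segmentation, and `spacing_mm` the voxel spacing from the image header.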
47
Yin P, Zhi X, Sun C, Wang S, Liu X, Chen L, Hong N. Radiomics Models for the Preoperative Prediction of Pelvic and Sacral Tumor Types: A Single-Center Retrospective Study of 795 Cases. Front Oncol 2021; 11:709659. [PMID: 34568036 PMCID: PMC8459744 DOI: 10.3389/fonc.2021.709659]
Abstract
Purpose To assess the performance of random forest (RF)-based radiomics approaches based on 3D computed tomography (CT) and clinical features to predict the types of pelvic and sacral tumors. Materials and Methods A total of 795 patients with pathologically confirmed pelvic and sacral tumors were analyzed, including metastatic tumors (n = 181), chordomas (n = 85), giant cell tumors (n = 120), chondrosarcomas (n = 127), osteosarcomas (n = 106), neurogenic tumors (n = 95), and Ewing's sarcomas (n = 81). After semi-automatic segmentation, 1316 hand-crafted radiomics features were extracted for each patient. Four radiomics models (RMs) and four clinical-RMs were built to identify these seven types of tumors. The area under the receiver operating characteristic curve (AUC) and accuracy (ACC) were used to evaluate the different models. Results The 795 patients (432 males, 363 females; mean age 42.1 ± 17.8 years) comprised 215 benign tumors and 580 malignant tumors. Sex, age, history of malignancy and tumor location differed significantly between benign and malignant tumors (P < 0.05). For the two-class models, clinical-RM2 (AUC = 0.928, ACC = 0.877) performed better than clinical-RM1 (AUC = 0.899, ACC = 0.854). For the three-class models, the proposed clinical-RM3 achieved AUCs between 0.923 (for chordoma) and 0.964 (for sarcoma), while the AUCs of clinical-RM4 ranged from 0.799 (for osteosarcoma) to 0.869 (for chondrosarcoma) in the validation set. Conclusions The RF-based clinical-radiomics models provided high discriminatory performance in predicting pelvic and sacral tumor types and could support clinical decision-making.
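The modelling pipeline in this abstract — hand-crafted features fed to a random forest and evaluated by AUC — can be sketched in a few lines with scikit-learn. The features and labels below are synthetic stand-ins for the study's 1316 radiomics features (which are not public), and every name in the snippet is illustrative:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
# Synthetic cohort: 200 "patients", 20 features, binary benign/malignant label
# driven by the first two features plus noise.
X = rng.normal(size=(200, 20))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=200) > 0).astype(int)

# Stratified split, fit the forest, score with AUC on the held-out set.
X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, test_size=0.3, random_state=0, stratify=y
)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
auc = roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1])
```

A real radiomics workflow would add feature extraction (e.g. from segmented CT volumes) and cross-validated model selection before this step, but the evaluation core is the same.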
Affiliation(s)
- Ping Yin
- Department of Radiology, Peking University People's Hospital, Beijing, China
- Xin Zhi
- Department of Radiology, Peking University People's Hospital, Beijing, China
- Chao Sun
- Department of Radiology, Peking University People's Hospital, Beijing, China
- Sicong Wang
- Department of Pharmaceuticals Diagnosis, GE Healthcare, Shanghai, China
- Xia Liu
- Department of Radiology, Peking University People's Hospital, Beijing, China
- Lei Chen
- Department of Radiology, Peking University People's Hospital, Beijing, China
- Nan Hong
- Department of Radiology, Peking University People's Hospital, Beijing, China
48
Analytical performance of aPROMISE: automated anatomic contextualization, detection, and quantification of [18F]DCFPyL (PSMA) imaging for standardized reporting. Eur J Nucl Med Mol Imaging 2021; 49:1041-1051. [PMID: 34463809 PMCID: PMC8803714 DOI: 10.1007/s00259-021-05497-8]
Abstract
Purpose The application of automated image analyses could improve and facilitate standardization and consistency of quantification in [18F]DCFPyL (PSMA) PET/CT scans. In the current study, we analytically validated aPROMISE, software as a medical device that segments organs in low-dose CT images with deep learning and subsequently detects and quantifies potential pathological lesions in PSMA PET/CT. Methods To evaluate the deep learning algorithm, the automated segmentations of the low-dose CT component of PSMA PET/CT scans from 20 patients were compared to manual segmentations. Dice scores were used to quantify the similarities between the automated and manual segmentations. Next, the automated quantification of tracer uptake in the reference organs and the detection and pre-segmentation of potential lesions were evaluated in 339 patients with prostate cancer, all enrolled in the phase II/III OSPREY study. Three nuclear medicine physicians performed retrospective independent reads of OSPREY images with aPROMISE. Quantitative consistency was assessed by pairwise Pearson correlations and the standard deviation between the readers and aPROMISE. The sensitivity of detection and pre-segmentation of potential lesions was evaluated by determining the percentage of manually selected abnormal lesions that were automatically detected by aPROMISE. Results The Dice scores for bone segmentations ranged from 0.88 to 0.95. The Dice scores of the PSMA PET/CT reference organs, thoracic aorta and liver, were 0.89 and 0.97, respectively. Dice scores of other visceral organs, including the prostate, were above 0.79. The Pearson correlation for the blood pool reference was higher between any manual reader and aPROMISE than between any pair of manual readers. The standard deviations of reference organ uptake across all patients as determined by aPROMISE (SD = 0.21 blood pool and SD = 1.16 liver) were lower than those of the manual readers. Finally, the sensitivity of aPROMISE detection and pre-segmentation was 91.5% for regional lymph nodes, 90.6% for all lymph nodes, and 86.7% for bone in metastatic patients. Conclusion In this analytical study, we demonstrated the segmentation accuracy of the deep learning algorithm, the consistency in quantitative assessment across multiple readers, and the high sensitivity in detecting potential lesions. The study provides a foundational framework for clinical evaluation of aPROMISE in standardized reporting of PSMA PET/CT.
49
Liu X, Han C, Wang H, Wu J, Cui Y, Zhang X, Wang X. Fully automated pelvic bone segmentation in multiparameteric MRI using a 3D convolutional neural network. Insights Imaging 2021; 12:93. [PMID: 34232404 PMCID: PMC8263843 DOI: 10.1186/s13244-021-01044-z]
Abstract
BACKGROUND Accurate segmentation of pelvic bones is an initial step to achieve accurate detection and localisation of pelvic bone metastases. This study presents a deep learning-based approach for automated segmentation of normal pelvic bony structures in multiparametric magnetic resonance imaging (mpMRI) using a 3D convolutional neural network (CNN). METHODS This retrospective study included 264 pelvic mpMRI data obtained between 2018 and 2019. The manual annotations of pelvic bony structures (which included lumbar vertebra, sacrococcyx, ilium, acetabulum, femoral head, femoral neck, ischium, and pubis) on diffusion-weighted imaging (DWI) and apparent diffusion coefficient (ADC) images were used to create reference standards. A 3D U-Net CNN was employed for automatic pelvic bone segmentation. Additionally, 60 mpMRI data from 2020 were included and used to evaluate the model externally. RESULTS The CNN achieved a high Dice similarity coefficient (DSC) average in both testing (0.80 [DWI images] and 0.85 [ADC images]) and external (0.79 [DWI images] and 0.84 [ADC images]) validation sets. Pelvic bone volumes measured with manual and CNN-predicted segmentations were highly correlated (R2 value of 0.84-0.97) and in close agreement (mean bias of 2.6-4.5 cm3). A SCORE system was designed to qualitatively evaluate the model for which both testing and external validation sets achieved high scores in terms of both qualitative evaluation and concordance between two readers (ICC = 0.904; 95% confidence interval: 0.871-0.929). CONCLUSIONS A deep learning-based method can achieve automated pelvic bone segmentation on DWI and ADC images with suitable quantitative and qualitative performance.
Collapse
Affiliation(s)
- Xiang Liu
- Department of Radiology, Peking University First Hospital, No.8 Xishiku Street, Xicheng District, Beijing, 100034, China
- Chao Han
- Department of Radiology, Peking University First Hospital, No.8 Xishiku Street, Xicheng District, Beijing, 100034, China
- He Wang
- Department of Radiology, Peking University First Hospital, No.8 Xishiku Street, Xicheng District, Beijing, 100034, China
- Jingyun Wu
- Department of Radiology, Peking University First Hospital, No.8 Xishiku Street, Xicheng District, Beijing, 100034, China
- Yingpu Cui
- Department of Radiology, Peking University First Hospital, No.8 Xishiku Street, Xicheng District, Beijing, 100034, China
- Xiaodong Zhang
- Department of Radiology, Peking University First Hospital, No.8 Xishiku Street, Xicheng District, Beijing, 100034, China
- Xiaoying Wang
- Department of Radiology, Peking University First Hospital, No.8 Xishiku Street, Xicheng District, Beijing, 100034, China.
50
Hamwood J, Schmutz B, Collins MJ, Allenby MC, Alonso-Caneiro D. A deep learning method for automatic segmentation of the bony orbit in MRI and CT images. Sci Rep 2021; 11:13693. [PMID: 34211081 PMCID: PMC8249400 DOI: 10.1038/s41598-021-93227-3]
Abstract
This paper proposes a fully automatic method to segment the inner boundary of the bony orbit in two different image modalities: magnetic resonance imaging (MRI) and computed tomography (CT). The method, based on a deep learning architecture, uses two fully convolutional neural networks in series followed by a graph-search method to generate a boundary for the orbit. When compared to human performance for segmentation of both CT and MRI data, the proposed method achieves high Dice coefficients on both orbit and background, with scores of 0.813 and 0.975 in CT images and 0.930 and 0.995 in MRI images, showing a high degree of agreement with a manual segmentation by a human expert. Given the volumetric characteristics of these imaging modalities and the complexity and time-consuming nature of the segmentation of the orbital region in the human skull, it is often impractical to manually segment these images. Thus, the proposed method provides a valid clinical and research tool that performs similarly to the human observer.
Collapse
Affiliation(s)
- Jared Hamwood
- Contact Lens and Visual Optics Laboratory, Centre for Vision and Eye Research, School of Optometry and Vision Science, Queensland University of Technology (QUT), Kelvin Grove, Qld, 4059, Australia
- Beat Schmutz
- Centre in Regenerative Medicine, Institute of Health and Biomedical Innovation, Queensland University of Technology, Kelvin Grove, QLD, 4059, Australia
- Metro North Hospital and Health Service, Jamieson Trauma Institute, Herston, QLD, 4029, Australia
- Michael J Collins
- Contact Lens and Visual Optics Laboratory, Centre for Vision and Eye Research, School of Optometry and Vision Science, Queensland University of Technology (QUT), Kelvin Grove, Qld, 4059, Australia
- Mark C Allenby
- Biofabrication and Tissue Morphology Laboratory, Centre for Biomedical Technologies, School of Mechanical Medical and Process Engineering, Queensland University of Technology (QUT), Herston, Qld, 4000, Australia
- David Alonso-Caneiro
- Contact Lens and Visual Optics Laboratory, Centre for Vision and Eye Research, School of Optometry and Vision Science, Queensland University of Technology (QUT), Kelvin Grove, Qld, 4059, Australia.