1
Anam C, Naufal A, Dwihapsari Y, Fujibuchi T, Dougherty G. A Practical Method for Slice Spacing Measurement Using the American Association of Physicists in Medicine Computed Tomography Performance Phantom. J Med Phys 2024; 49:103-109. [PMID: 38828077] [PMCID: PMC11141755] [DOI: 10.4103/jmp.jmp_155_23]
Abstract
Background: Slice spacing plays a crucial role in the accuracy of computed tomography (CT) images in the sagittal and coronal planes. However, no practical method exists for measuring the accuracy of the slice spacing.
Purpose: This study proposes a novel method to automatically measure slice spacing using the American Association of Physicists in Medicine (AAPM) CT performance phantom.
Methods: The AAPM CT performance phantom module 610-04 was used to measure slice spacing. The measurement uses a pair of axial images of the module containing aluminum ramp objects, acquired at adjacent slice positions. The middle aluminum plate in each image was automatically segmented, and the two segmented images were combined to produce one image with two stair objects. The centroid coordinates of the two stair objects were automatically determined, and the distance between these two centroids directly indicates the slice spacing. For comparison, the slice spacing was also calculated from the slice position attributes in the DICOM headers of both images. The proposed method was tested on phantom images with variations in slice spacing and field of view (FOV).
Results: The automatic measurement of slice spacing was quite accurate for all variations of slice spacing and FOV, with average differences of 9.0% and 9.3%, respectively.
Conclusion: A new automated method for measuring slice spacing using the AAPM CT phantom was successfully demonstrated and tested for variations of slice spacing and FOV. Slice spacing may be considered an additional parameter to be checked alongside other established parameters.
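The centroid-distance step described in the Methods can be sketched as follows. This is a minimal illustration, not the authors' code; in particular, the assumption that the ramps are inclined at 45 degrees (so an in-plane shift of the plate equals the through-plane shift in slice position) is mine, and `slice_spacing_mm` is a hypothetical helper name:

```python
import numpy as np

def centroid(mask):
    """(row, col) centroid of a boolean segmentation mask."""
    rows, cols = np.nonzero(mask)
    return rows.mean(), cols.mean()

def slice_spacing_mm(mask_a, mask_b, pixel_size_mm):
    """Distance between the centroids of the segmented plate masks from
    two adjacent axial slices, scaled to millimetres. Assumes 45-degree
    ramps, so the in-plane plate displacement equals the through-plane
    displacement, i.e. the slice spacing."""
    ra, ca = centroid(mask_a)
    rb, cb = centroid(mask_b)
    return float(np.hypot(rb - ra, cb - ca)) * pixel_size_mm
```

For the DICOM-header reference value, the absolute difference of the two slices' `SliceLocation` attributes (or the z-components of `ImagePositionPatient`), read with a library such as pydicom, gives the nominal spacing to compare against.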
Affiliation(s)
- Choirul Anam
- Department of Physics, Faculty of Sciences and Mathematics, Diponegoro University, Tembalang, Semarang, Central Java, Indonesia
- Ariij Naufal
- Department of Physics, Faculty of Sciences and Mathematics, Diponegoro University, Tembalang, Semarang, Central Java, Indonesia
- Yanurita Dwihapsari
- Department of Physics, Faculty of Science and Data Analytics, Sepuluh Nopember Institute of Technology (ITS), Kampus ITS Sukolilo, Surabaya, East Java, Indonesia
- Toshioh Fujibuchi
- Department of Health Sciences, Division of Medical Quantum Sciences, Faculty of Medical Sciences, Kyushu University, Fukuoka, Japan
- Geoff Dougherty
- Department of Applied Physics and Medical Imaging, California State University Channel Islands, Camarillo, CA, USA
2
Wu S, Nakao M, Imanishi K, Nakamura M, Mizowaki T, Matsuda T. Computed Tomography slice interpolation in the longitudinal direction based on deep learning techniques: To reduce slice thickness or slice increment without dose increase. PLoS One 2022; 17:e0279005. [PMID: 36520814] [PMCID: PMC9754169] [DOI: 10.1371/journal.pone.0279005]
Abstract
Large slice thickness or slice increment causes insufficient information in computed tomography (CT) data in the longitudinal direction, which degrades the quality of CT-based diagnosis. Traditional approaches such as high-resolution computed tomography (HRCT) and linear interpolation can address this problem; however, HRCT suffers from increased dose, and linear interpolation causes artifacts. In this study, we propose a deep-learning-based approach to reconstruct densely sliced CT from sparsely sliced CT data without any dose increase. The proposed method reconstructs CT images from neighboring slices using a U-net architecture. To prevent multiple reconstructed slices from influencing one another, we propose a parallel architecture in which multiple U-net architectures work independently. Moreover, for a specific organ (i.e., the liver), we propose a range-clip technique to improve reconstruction quality, which enhances the learning of CT values within the organ by enlarging the range of the training data. CT data from 130 patients were collected, with 80% used for training and the remaining 20% used for testing. Experiments showed that our parallel U-net architecture reduced the mean absolute error of CT values in the reconstructed slices by 22.05% and reduced the incidence of artifacts around the boundaries of target organs, compared with linear interpolation. Further improvements of 15.12%, 11.04%, 10.94%, and 10.63% were achieved for the liver, left kidney, right kidney, and stomach, respectively, using the proposed range-clip algorithm. We also compared the proposed architecture with the original U-net, and the experimental results demonstrated the superiority of our approach.
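The range-clip idea, clipping CT values to an organ-specific window and rescaling so that the organ's narrow HU range fills the training range, can be sketched as below. The HU window bounds are illustrative assumptions (a liver-like window), not values taken from the paper, and `range_clip` is a hypothetical function name:

```python
import numpy as np

def range_clip(volume_hu, lo=-100.0, hi=300.0):
    """Clip CT values (HU) to an organ window [lo, hi] and rescale to
    [0, 1], so the target organ's narrow HU range occupies the full
    training range instead of a small fraction of it."""
    clipped = np.clip(np.asarray(volume_hu, dtype=float), lo, hi)
    return (clipped - lo) / (hi - lo)
```

After this preprocessing, reconstruction errors inside the organ contribute more to the training loss, which is one plausible reading of how the technique "enhances the learning of CT values within this organ."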
Affiliation(s)
- Shuqiong Wu
- The Institute of Scientific and Industrial Research, Osaka University, Ibaraki, Osaka, Japan
- Megumi Nakao
- Graduate School of Informatics, Kyoto University, Kyoto, Japan
- Mitsuhiro Nakamura
- Division of Medical Physics, Department of Information Technology and Medical Engineering, Human Health Sciences, Graduate School of Medicine, Kyoto University, Kyoto, Japan
- Department of Radiation Oncology and Image-applied Therapy, Graduate School of Medicine, Kyoto University, Kyoto, Japan
- Takashi Mizowaki
- Department of Radiation Oncology and Image-applied Therapy, Graduate School of Medicine, Kyoto University, Kyoto, Japan
- Tetsuya Matsuda
- Graduate School of Informatics, Kyoto University, Kyoto, Japan
3
Xie H, Lei Y, Wang T, Tian Z, Roper J, Bradley JD, Curran WJ, Tang X, Liu T, Yang X. High through-plane resolution CT imaging with self-supervised deep learning. Phys Med Biol 2021; 66. [PMID: 34049297] [DOI: 10.1088/1361-6560/ac0684]
Abstract
CT images for radiotherapy planning are usually acquired in thick slices to reduce the imaging dose, especially for pediatric patients, and to lessen the need for contouring and treatment planning on more slices. However, low through-plane resolution may degrade the accuracy of dose calculations. In this paper, a self-supervised deep learning workflow is proposed to synthesize high through-plane resolution CT images by learning from their high in-plane resolution features. The proposed workflow trains neural networks to learn the mapping from low-resolution (LR) to high-resolution (HR) images in the axial plane. During inference, the HR sagittal and coronal images were generated by feeding the respective LR sagittal and coronal images to the two parallel-trained neural networks. The CT simulation images of a cohort of 75 patients with head and neck cancer (1 mm slice thickness) and 200 CT images of a cohort of 20 lung cancer patients (3 mm slice thickness) were retrospectively investigated in a cross-validation manner. The HR images generated with the proposed method were inspected qualitatively (visual quality, image intensity profiles, and a preliminary observer study) and quantitatively (mean absolute error, edge keeping index, structural similarity index measure, information fidelity criterion, and visual information fidelity in the pixel domain), taking the original CT images of the head and neck and lung cancer patients as the reference. The qualitative results showed the capability of the proposed method for generating high through-plane resolution CT images with data from both groups of cancer patients. All improvements in the metrics were confirmed to be statistically significant by paired two-sample t-test analysis.
The innovative point of this work is that the proposed deep learning workflow for generating high through-plane resolution CT images in radiotherapy is self-supervised, meaning that it does not rely on ground-truth CT images to train the network. In addition, the assumption that the in-plane HR information can supervise the through-plane HR generation is confirmed. We hope that this will inspire more research on this topic to further improve the through-plane resolution of medical images.
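The self-supervision scheme, learning an LR-to-HR mapping from axial images alone, implies generating training pairs by degrading HR axial images along one in-plane axis to mimic thick-slice sampling. The sketch below illustrates that idea; block-averaging as the degradation model and the function name `make_training_pair` are assumptions, not details taken from the paper:

```python
import numpy as np

def make_training_pair(axial_hr, factor=3):
    """Degrade a high-resolution axial image along one in-plane axis by
    block-averaging `factor` rows at a time, mimicking the through-plane
    sampling of thick slices. A network trained to invert this mapping
    can then be applied to LR sagittal/coronal reformations."""
    h, w = axial_hr.shape
    h_crop = (h // factor) * factor            # drop remainder rows
    hr = np.asarray(axial_hr, dtype=float)[:h_crop]
    lr = hr.reshape(h_crop // factor, factor, w).mean(axis=1)
    return lr, hr
```

The key property is that no through-plane ground truth is needed: supervision comes entirely from the in-plane HR content, matching the paper's stated assumption.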
Affiliation(s)
- Huiqiao Xie
- Department of Radiation Oncology, Emory University, Atlanta, GA, United States of America
- Yang Lei
- Department of Radiation Oncology, Emory University, Atlanta, GA, United States of America
- Tonghe Wang
- Department of Radiation Oncology, Emory University, Atlanta, GA, United States of America; Winship Cancer Institute, Emory University, Atlanta, GA, United States of America
- Zhen Tian
- Department of Radiation Oncology, Emory University, Atlanta, GA, United States of America; Winship Cancer Institute, Emory University, Atlanta, GA, United States of America
- Justin Roper
- Department of Radiation Oncology, Emory University, Atlanta, GA, United States of America; Winship Cancer Institute, Emory University, Atlanta, GA, United States of America
- Jeffrey D Bradley
- Department of Radiation Oncology, Emory University, Atlanta, GA, United States of America; Winship Cancer Institute, Emory University, Atlanta, GA, United States of America
- Walter J Curran
- Department of Radiation Oncology, Emory University, Atlanta, GA, United States of America; Winship Cancer Institute, Emory University, Atlanta, GA, United States of America
- Xiangyang Tang
- Winship Cancer Institute, Emory University, Atlanta, GA, United States of America; Department of Radiology and Imaging Sciences, Emory University, Atlanta, GA, United States of America
- Tian Liu
- Department of Radiation Oncology, Emory University, Atlanta, GA, United States of America; Winship Cancer Institute, Emory University, Atlanta, GA, United States of America
- Xiaofeng Yang
- Department of Radiation Oncology, Emory University, Atlanta, GA, United States of America; Winship Cancer Institute, Emory University, Atlanta, GA, United States of America
4
Lasiyah N, Anam C, Hidayanto E, Dougherty G. Automated procedure for slice thickness verification of computed tomography images: Variations of slice thickness, position from iso-center, and reconstruction filter. J Appl Clin Med Phys 2021; 22:313-321. [PMID: 34109738] [PMCID: PMC8292687] [DOI: 10.1002/acm2.13317]
Abstract
Purpose: The purpose of this study is to automate slice thickness verification on the AAPM CT performance phantom and validate it for variations of slice thickness, position from the iso-center, and reconstruction filter.
Methods: An automatic procedure for slice thickness verification on the AAPM CT performance phantom was developed using MATLAB R2015b. The stair object image within the phantom was segmented and the middle stair object located. Its angle was determined using the Hough transform, and the image was rotated accordingly. The profile through this object was obtained, and its full width at half maximum (FWHM), which indicates the slice thickness of the image, was automatically measured. The automated procedure was applied with variations in three independent parameters: the slice thickness, the distance from the phantom to the iso-center, and the reconstruction filter. The automated results were compared to manual measurements made using electronic calipers.
Results: The differences between the automated results and the nominal slice thicknesses were within 1.0 mm, and the automated results were comparable to those from the manual approach (differing by less than 12%). The automatic procedure accurately obtained the slice thickness even when the phantom was moved up to 4 cm above or below the iso-center, and the results were similar (to within 0.1 mm) for various reconstruction filters.
Conclusions: We successfully developed an automated procedure for slice thickness verification and confirmed that it provides accurate results. It is an easy and effective method of determining slice thickness.
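The FWHM step described in the Methods can be sketched as below. This is a generic Python illustration of measuring the full width at half maximum of a 1-D profile with linear interpolation at the half-maximum crossings, not the authors' MATLAB implementation:

```python
import numpy as np

def fwhm(profile, pixel_size_mm=1.0):
    """Full width at half maximum of a 1-D intensity profile through the
    stair object, in millimetres. The half-maximum crossing positions
    are found by linear interpolation between neighboring samples."""
    p = np.asarray(profile, dtype=float)
    p = p - p.min()                      # baseline correction
    half = p.max() / 2.0
    above = np.nonzero(p >= half)[0]
    left, right = int(above[0]), int(above[-1])

    def cross(i0, i1):
        # index where the profile crosses `half` between samples i0, i1
        return i0 + (half - p[i0]) / (p[i1] - p[i0]) * (i1 - i0)

    x_left = cross(left - 1, left) if left > 0 else float(left)
    x_right = cross(right, right + 1) if right < len(p) - 1 else float(right)
    return (x_right - x_left) * pixel_size_mm
```

With the profile taken perpendicular to the rotated stair object and `pixel_size_mm` read from the image's pixel spacing, the returned FWHM is the slice thickness estimate that the abstract compares against the nominal value.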
Affiliation(s)
- Nani Lasiyah
- Department of Physics, Faculty of Sciences and Mathematics, Diponegoro University, Semarang, Indonesia
- Choirul Anam
- Department of Physics, Faculty of Sciences and Mathematics, Diponegoro University, Semarang, Indonesia
- Eko Hidayanto
- Department of Physics, Faculty of Sciences and Mathematics, Diponegoro University, Semarang, Indonesia
- Geoff Dougherty
- Department of Applied Physics and Medical Imaging, California State University Channel Islands, Camarillo, CA, USA