1.
Duan T, Chen W, Ruan M, Zhang X, Shen S, Gu W. Unsupervised deep learning-based medical image registration: a survey. Phys Med Biol 2025;70:02TR01. PMID: 39667278. DOI: 10.1088/1361-6560/ad9e69.
Abstract
In recent decades, medical image registration technology has undergone significant development, becoming one of the core technologies in medical image analysis. With the rise of deep learning, deep learning-based medical image registration methods have achieved revolutionary improvements in processing speed and automation, showing great potential, especially in unsupervised learning. This paper briefly introduces the core concepts of deep learning-based unsupervised image registration, then discusses innovative network architectures in depth and reviews the corresponding studies, highlighting their unique contributions. It also examines commonly used loss functions, datasets, and evaluation metrics. Finally, we discuss the main challenges faced by each category of methods and propose potential future research topics. This survey of the latest advances in unsupervised deep-neural-network-based medical image registration aims to give readers interested in the field a thorough understanding of this exciting area.
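The unsupervised formulation this survey covers typically combines an image-similarity term with a smoothness regulariser on the predicted displacement field. Below is a minimal 1-D numpy sketch of such a loss; the function names, the toy linear-interpolation warp, and the weighting are illustrative assumptions, not any specific method from the survey:

```python
import numpy as np

def warp_1d(image, disp):
    # Warp a 1-D "moving image" by sampling it at x + disp(x) with
    # linear interpolation (a toy spatial transformer).
    x = np.clip(np.arange(image.size) + disp, 0, image.size - 1)
    lo = np.floor(x).astype(int)
    hi = np.minimum(lo + 1, image.size - 1)
    w = x - lo
    return (1 - w) * image[lo] + w * image[hi]

def unsupervised_loss(fixed, moving, disp, lam=0.1):
    # Similarity term: how well the warped moving image matches the fixed one.
    similarity = np.mean((fixed - warp_1d(moving, disp)) ** 2)
    # Regularisation term: penalise non-smooth displacement fields.
    smoothness = np.mean(np.diff(disp) ** 2)
    return similarity + lam * smoothness
```

No ground-truth deformation appears anywhere in the loss, which is what makes the training unsupervised: the network producing `disp` is driven only by image similarity and smoothness.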
Affiliation(s)
- Taisen Duan: School of Computer, Electronics and Information, Guangxi University, Nanning 530004, People's Republic of China; Guangxi Key Laboratory of Multimedia Communications and Network Technology, Guangxi University, Nanning 530004, People's Republic of China
- Wenkang Chen: School of Computer, Electronics and Information, Guangxi University, Nanning 530004, People's Republic of China; Guangxi Key Laboratory of Multimedia Communications and Network Technology, Guangxi University, Nanning 530004, People's Republic of China
- Meilin Ruan: School of Computer, Electronics and Information, Guangxi University, Nanning 530004, People's Republic of China
- Xuejun Zhang: School of Computer, Electronics and Information, Guangxi University, Nanning 530004, People's Republic of China; Guangxi Key Laboratory of Multimedia Communications and Network Technology, Guangxi University, Nanning 530004, People's Republic of China
- Shaofei Shen: School of Computer, Electronics and Information, Guangxi University, Nanning 530004, People's Republic of China; Guangxi Key Laboratory of Multimedia Communications and Network Technology, Guangxi University, Nanning 530004, People's Republic of China
- Weiyu Gu: School of Computer, Electronics and Information, Guangxi University, Nanning 530004, People's Republic of China; Guangxi Key Laboratory of Multimedia Communications and Network Technology, Guangxi University, Nanning 530004, People's Republic of China
2.
Verbakel J, Boot MR, van der Gaast N, Dunning H, Bakker M, Jaarsma RL, Doornberg JN, Edwards MJR, van de Groes SAW, Hermans E. Symmetry of the left and right tibial plafond; a comparison of 75 distal tibia pairs. Eur J Trauma Emerg Surg 2024;50:2877-2882. PMID: 38874625. PMCID: PMC11666608. DOI: 10.1007/s00068-024-02568-x.
Abstract
PURPOSE Tibial plafond (pilon) fractures are highly complex, making their surgical management challenging. Three-dimensional virtual planning (3DVP) can assist preoperative planning to achieve optimal fracture reduction. This study aimed to assess the symmetry of the left and right tibial plafond and whether left-right mirroring can be used reliably. METHODS Bilateral CT scans of the lower limbs of 75 patients without ankle problems or prior fractures of the lower limb were included. The CT images were segmented to create 3D surface models of the tibia. The left tibial models were then mirrored and superimposed onto the right tibial models using a Coherent Point Drift surface-matching algorithm. The tibias were cut to create bone models of the distal tibia with a height of 30 mm, and correspondence points were established. The Euclidean distance between correspondence points was calculated and visualized in a boxplot and heatmaps. The articulating surface was selected as a region of interest. RESULTS The median left-right difference was 0.57 mm (IQR, 0.38-0.85 mm) for the entire tibial plafond and 0.53 mm (IQR, 0.37-0.76 mm) for the articulating surface. The areas with the greatest left-right differences were the medial malleolus and the anterior tubercle of the tibial plafond. CONCLUSION The tibial plafond exhibits a high degree of bilateral symmetry. The mirrored unfractured tibial plafond may therefore be used as a template to optimize preoperative surgical reduction with 3DVP techniques in patients with pilon fractures.
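The mirror-and-compare step can be illustrated with a deliberately simplified stand-in. The study registered the mirrored left tibia to the right one with Coherent Point Drift before measuring correspondence-point distances; the sketch below (all names hypothetical) only reflects the point cloud across the sagittal plane and takes brute-force nearest-neighbour Euclidean distances:

```python
import numpy as np

def mirrored_leftright_distance(left_pts, right_pts):
    # Reflect the left-bone point cloud across the sagittal plane (x = 0).
    mirrored = left_pts * np.array([-1.0, 1.0, 1.0])
    # Brute-force nearest neighbour in the right-bone cloud, then the
    # median Euclidean distance, analogous to the paper's summary metric.
    diffs = mirrored[:, None, :] - right_pts[None, :, :]
    nearest = np.linalg.norm(diffs, axis=2).min(axis=1)
    return float(np.median(nearest))
```

In the actual pipeline a non-rigid registration aligns the mirrored model first, so distances reflect shape difference rather than pose difference; this sketch omits that step.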
Affiliation(s)
- Joy Verbakel: Department of Trauma Surgery, Radboud University Medical Center, Geert Grooteplein Zuid, 6525 GA Nijmegen, The Netherlands
- Miriam R Boot: Orthopaedic Research Laboratory, Radboud University Medical Center, Nijmegen, The Netherlands
- Nynke van der Gaast: Department of Trauma Surgery, Radboud University Medical Center, Geert Grooteplein Zuid, 6525 GA Nijmegen, The Netherlands
- Hans Dunning: Orthopaedic Research Laboratory, Radboud University Medical Center, Nijmegen, The Netherlands
- Max Bakker: Orthopaedic Research Laboratory, Radboud University Medical Center, Nijmegen, The Netherlands
- Ruurd L Jaarsma: Department of Orthopaedic & Trauma Surgery, Flinders University and Flinders Medical Centre, Adelaide, Australia
- Job N Doornberg: Department of Orthopaedic & Trauma Surgery, Flinders University and Flinders Medical Centre, Adelaide, Australia; Department of Orthopaedic Surgery, University Medical Center Groningen, Groningen, The Netherlands
- Michael J R Edwards: Department of Trauma Surgery, Radboud University Medical Center, Geert Grooteplein Zuid, 6525 GA Nijmegen, The Netherlands
- Erik Hermans: Department of Trauma Surgery, Radboud University Medical Center, Geert Grooteplein Zuid, 6525 GA Nijmegen, The Netherlands
3.
Kontio R, Wilkman T, Mesimäki K, Chepurnyi Y, Asikainen A, Haapanen A, Poutala A, Mikkonen M, Slobodianiuk A, Kopchak A. Automated 3-D Computer-Aided Measurement of the Bony Orbit: Evaluation of Correlations among Volume, Depth, and Surface Area. J Pers Med 2024;14:508. PMID: 38793092. PMCID: PMC11122174. DOI: 10.3390/jpm14050508.
Abstract
(1) The study aimed to measure the depth, volume, and surface area of the intact human orbit by applying an automated CT segmentation method and to evaluate the correlations among depth, volume, and surface area. Additionally, the relative increases in volume and surface area in proportion to the diagonal of the orbit were assessed. (2) CT data from 174 patients were analyzed. A ball-shaped mesh consisting of tetrahedral elements was inserted inside each orbit until it encountered the bony boundaries. Orbital volume, depth, surface area, and their correlations were measured. The intraclass correlation coefficient (ICC) was used for validation. (3) The differences between genders were significant (p < 10⁻⁷), but there were no differences between sides. When comparing larger with smaller orbits, a paired-sample t-test indicated a significant difference between groups (p < 10⁻¹⁰). A simple linear model (Volume ~ 1 + Gender + Depth + Gender:Depth) revealed that only depth had a significant effect on volume (p < 10⁻¹⁹). The ICCs were 1.0. (4) Orbital volume, depth, and surface area measurements based on an automated CT segmentation algorithm demonstrated high repeatability and reliability. Male orbits were larger on average by 14%. There were no differences between sides. The volume-to-surface-area ratio did not differ between genders and was approximately 0.75.
Affiliation(s)
- Risto Kontio: Department of Oral and Maxillofacial Surgery, Helsinki University Hospital, 00290 Helsinki, Finland; Institute of Oral and Maxillofacial Diseases, Helsinki University, 00014 Helsinki, Finland
- Tommy Wilkman: Department of Oral and Maxillofacial Surgery, Helsinki University Hospital, 00290 Helsinki, Finland
- Karri Mesimäki: Department of Oral and Maxillofacial Surgery, Helsinki University Hospital, 00290 Helsinki, Finland
- Yurii Chepurnyi: Department of Maxillofacial Surgery and Modern Dental Technologies, O.O. Bogomolets Medical University, 02000 Kyiv, Ukraine
- Antti Asikainen: Department of Oral and Maxillofacial Surgery, Helsinki University Hospital, 00290 Helsinki, Finland
- Aleksi Haapanen: Department of Oral and Maxillofacial Surgery, Helsinki University Hospital, 00290 Helsinki, Finland
- Arto Poutala: Disior, Maria 01, Building 2, Lapinlahdenkatu 16, 00180 Helsinki, Finland
- Marko Mikkonen: Disior, Maria 01, Building 2, Lapinlahdenkatu 16, 00180 Helsinki, Finland
- Alina Slobodianiuk: Department of Maxillofacial Surgery and Modern Dental Technologies, O.O. Bogomolets Medical University, 02000 Kyiv, Ukraine
- Andrii Kopchak: Department of Maxillofacial Surgery and Modern Dental Technologies, O.O. Bogomolets Medical University, 02000 Kyiv, Ukraine
4.
Ryu J, Trager SC, Wilkinson MHF. A Fast Alpha-Tree Algorithm for Extreme Dynamic Range Pixel Dissimilarities. IEEE Trans Pattern Anal Mach Intell 2024;46:3199-3212. PMID: 38090831. DOI: 10.1109/tpami.2023.3341721.
Abstract
The α-tree is a useful hierarchical representation technique that facilitates the analysis of images such as remote-sensing and medical images. Most α-tree algorithms use priority queues to process image edges in the correct order, but because traditional priority queues are inefficient for the extreme-dynamic-range pixel dissimilarities that arise in α-tree construction, these algorithms run slower than related ones such as component-tree algorithms. In this paper, we propose a novel hierarchical heap priority queue that processes α-tree edges much more efficiently than other state-of-the-art priority queues. Experiments on 48-bit Sentinel-2A remote-sensing images and on randomly generated images show that replacing the heap priority queue of the flooding α-tree algorithm with the proposed queue improves its timings by 1.68× (4-neighbour connectivity) and 2.41× (8-neighbour connectivity) on Sentinel-2A images, and by 2.56× and 4.43× on randomly generated images.
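The general idea of a hierarchical heap can be caricatured as a two-level bucket-plus-heap queue: entries are first binned by the high bits of a wide-range integer priority, and exact heap order is maintained only inside the currently lowest bin. The toy class below is an illustrative assumption about that structure, not the authors' algorithm:

```python
import heapq
from collections import defaultdict

class HierarchicalHeapQueue:
    """Toy two-level priority queue: coarse bins keyed by high priority
    bits, with heap ordering applied lazily to the lowest bin only.
    Bin width and lazy heapification are illustrative choices."""

    def __init__(self, shift=16):
        self.shift = shift            # how many low bits the bins ignore
        self.bins = defaultdict(list)

    def push(self, priority, item):
        # O(1): append to the coarse bin selected by the high bits.
        self.bins[priority >> self.shift].append((priority, item))

    def pop(self):
        # Only the lowest non-empty bin needs exact ordering.
        if not self.bins:
            raise IndexError("pop from empty queue")
        key = min(self.bins)
        heap = self.bins[key]
        heapq.heapify(heap)           # restore heap order after appends
        priority, item = heapq.heappop(heap)
        if not heap:
            del self.bins[key]
        return priority, item
```

The payoff of such layering is that most pushes never touch a heap at all: with extreme-dynamic-range priorities, full ordering is only ever computed for the small slice of entries that is about to be dequeued.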
5.
Edelmers E, Kazoka D, Bolocko K, Sudars K, Pilmane M. Automatization of CT Annotation: Combining AI Efficiency with Expert Precision. Diagnostics (Basel) 2024;14:185. PMID: 38248062. PMCID: PMC10814874. DOI: 10.3390/diagnostics14020185.
Abstract
The integration of artificial intelligence (AI), particularly through machine learning (ML) and deep learning (DL) algorithms, marks a transformative progression in medical imaging diagnostics. This technical note elucidates a novel methodology for semantic segmentation of the vertebral column in CT scans, exemplified by a dataset of 250 patients from Riga East Clinical University Hospital. Our approach centers on the accurate identification and labeling of individual vertebrae, ranging from C1 to the sacrum-coccyx complex. Patient selection was meticulously conducted, ensuring demographic balance in age and sex, and excluding scans with significant vertebral abnormalities to reduce confounding variables. This strategic selection bolstered the representativeness of our sample, thereby enhancing the external validity of our findings. Our workflow streamlined the segmentation process by eliminating the need for volume stitching, aligning seamlessly with the methodology we present. By leveraging AI, we have introduced a semi-automated annotation system that enables initial data labeling even by individuals without medical expertise. This phase is complemented by thorough manual validation against established anatomical standards, significantly reducing the time traditionally required for segmentation. This dual approach not only conserves resources but also expedites project timelines. While this method significantly advances radiological data annotation, it is not devoid of challenges, such as the necessity for manual validation by anatomically skilled personnel and reliance on specialized GPU hardware. Nonetheless, our methodology represents a substantial leap forward in medical data semantic segmentation, highlighting the potential of AI-driven approaches to revolutionize clinical and research practices in radiology.
Affiliation(s)
- Edgars Edelmers: Institute of Anatomy and Anthropology, Rīga Stradiņš University, LV-1010 Riga, Latvia
- Dzintra Kazoka: Institute of Anatomy and Anthropology, Rīga Stradiņš University, LV-1010 Riga, Latvia
- Katrina Bolocko: Department of Computer Graphics and Computer Vision, Riga Technical University, LV-1048 Riga, Latvia
- Kaspars Sudars: Institute of Electronics and Computer Science, LV-1006 Riga, Latvia
- Mara Pilmane: Institute of Anatomy and Anthropology, Rīga Stradiņš University, LV-1010 Riga, Latvia
6.
Conconi M, Pompili A, Sancisi N, Durante S, Leardini A, Belvedere C. Foot kinematics as a function of ground orientation and weightbearing. J Orthop Res 2024;42:148-163. PMID: 37442638. DOI: 10.1002/jor.25661.
Abstract
The foot is responsible for transferring body weight to the ground while adapting to different terrains and activities. Despite this fundamental role, knowledge about the intrinsic kinematics of the foot bones is still limited. The aim of this study is to provide a quantitative and systematic description of the kinematics of all bones in the foot over the full range of dorsi/plantar flexion and pronation/supination, in both weightbearing and nonweightbearing conditions. Bone kinematics was accurately reconstructed for three specimens from a series of computed tomography scans taken in a weightbearing configuration. The ground inclination was imposed through a set of wedges, varying the foot orientation in both the sagittal and coronal planes; the donor's body weight was applied or removed by a cable rig. A total of 32 scans per foot were acquired and segmented. Bone kinematics was expressed in anatomical reference systems optimized for describing foot kinematics. The results agree with previous literature where available. However, our analysis reveals that bones such as the calcaneus, navicular, intermediate cuneiform, and fourth and fifth metatarsals move more during foot pronation than during flexion. Weightbearing significantly increases the range of motion of almost all the bones. The cuneiforms and metatarsals move more in response to weightbearing than to ground inclination, showing their role in the load-acceptance phase. The data reported here represent a step toward a deeper understanding of foot behavior, which may help in defining better treatments and medical devices, as well as new biomechanical models of the foot.
Affiliation(s)
- Michele Conconi: Department of Industrial Engineering-DIN, University of Bologna, Bologna, Italy
- Alessandro Pompili: Department of Industrial Engineering-DIN, University of Bologna, Bologna, Italy
- Nicola Sancisi: Department of Industrial Engineering-DIN, University of Bologna, Bologna, Italy
- Stefano Durante: Area Tecnico Diagnostica Radiologica, IRCCS S. Orsola Malpighi Hospital, Bologna, Italy
- Alberto Leardini: Movement Analysis Laboratory, IRCCS Istituto Ortopedico Rizzoli, Bologna, Italy
- Claudio Belvedere: Movement Analysis Laboratory, IRCCS Istituto Ortopedico Rizzoli, Bologna, Italy
7.
Schnider E, Wolleb J, Huck A, Toranelli M, Rauter G, Müller-Gerbl M, Cattin PC. Improved distinct bone segmentation in upper-body CT through multi-resolution networks. Int J Comput Assist Radiol Surg 2023;18:2091-2099. PMID: 37338664. PMCID: PMC10589171. DOI: 10.1007/s11548-023-02957-4.
Abstract
PURPOSE Automated distinct bone segmentation from CT scans is widely used in planning and navigation workflows. U-Net variants are known to provide excellent results in supervised semantic segmentation. However, distinct bone segmentation from upper-body CTs requires a large field of view and a computationally taxing 3D architecture. This leads to low-resolution results lacking detail, or to localisation errors due to missing spatial context when high-resolution inputs are used. METHODS We propose to solve this problem with end-to-end trainable segmentation networks that combine several 3D U-Nets working at different resolutions. Our approach, which extends and generalizes HookNet and MRN, captures spatial information at a lower resolution and skips the encoded information to the target network, which operates on smaller high-resolution inputs. We evaluated the proposed architecture against single-resolution networks and performed an ablation study on information concatenation and the number of context networks. RESULTS Our best-performing network achieves a median DSC of 0.86 over all 125 segmented bone classes and reduces the confusion among similar-looking bones in different locations. These results outperform both our previously published 3D U-Net baseline on this task and the distinct bone segmentation results reported by other groups. CONCLUSION The presented multi-resolution 3D U-Nets address current shortcomings in bone segmentation from upper-body CT scans by capturing a larger field of view while avoiding the cubic growth of input voxels and intermediate computations that quickly outgrows computational capacities in 3D. The approach thus improves the accuracy and efficiency of distinct bone segmentation from upper-body CT.
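The median DSC quoted above is a per-class Dice coefficient aggregated over 125 bone classes. A generic numpy sketch of that metric (a hypothetical helper, not the authors' evaluation code):

```python
import numpy as np

def dice_per_class(pred, gt, cls):
    # Dice similarity coefficient (DSC) for one class label in two
    # integer label volumes of the same shape.
    p = pred == cls
    g = gt == cls
    denom = p.sum() + g.sum()
    if denom == 0:
        return 1.0          # class absent from both volumes
    return 2.0 * np.logical_and(p, g).sum() / denom
```

Computing this per class and taking the median, as the paper reports, is robust to a few badly segmented small bones dominating a mean.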
Affiliation(s)
- Eva Schnider: Department of Biomedical Engineering, University of Basel, Hegenheimermattweg 167B, 4123 Allschwil, Switzerland
- Julia Wolleb: Department of Biomedical Engineering, University of Basel, Hegenheimermattweg 167B, 4123 Allschwil, Switzerland
- Antal Huck: Department of Biomedical Engineering, University of Basel, Hegenheimermattweg 167B, 4123 Allschwil, Switzerland
- Mireille Toranelli: Department of Biomedicine, Musculoskeletal Research, University of Basel, Basel, Switzerland
- Georg Rauter: Department of Biomedical Engineering, University of Basel, Hegenheimermattweg 167B, 4123 Allschwil, Switzerland
- Magdalena Müller-Gerbl: Department of Biomedicine, Musculoskeletal Research, University of Basel, Basel, Switzerland
- Philippe C Cattin: Department of Biomedical Engineering, University of Basel, Hegenheimermattweg 167B, 4123 Allschwil, Switzerland
8.
van Veldhuizen WA, van der Wel H, Kuipers HY, Kraeima J, Ten Duis K, Wolterink JM, de Vries JPPM, Schuurmann RCL, IJpma FFA. Development of a Statistical Shape Model and Assessment of Anatomical Shape Variations in the Hemipelvis. J Clin Med 2023;12:3767. PMID: 37297962. DOI: 10.3390/jcm12113767.
Abstract
Knowledge about anatomical shape variations in the pelvis is essential for implant selection, fitting, positioning, and fixation in pelvic surgery. Current knowledge on pelvic shape variation mostly relies on point-to-point measurements on 2D X-ray images and computed tomography (CT) slices; three-dimensional region-specific assessments of pelvic morphology are scarce. Our aim was to develop a statistical shape model of the hemipelvis to assess its anatomical shape variations. CT scans of 200 patients (100 male and 100 female) were used to obtain segmentations. An iterative closest point algorithm was used to register these 3D segmentations, a principal component analysis (PCA) was performed, and a statistical shape model (SSM) of the hemipelvis was developed. The first 15 principal components (PCs) described 90% of the total shape variation, and the reconstruction ability of the SSM yielded a root mean square error of 1.58 (95% CI: 1.53-1.63) mm. In summary, an SSM of the hemipelvis was developed that describes the shape variations in a Caucasian population and is able to reconstruct an aberrant hemipelvis. The PCA demonstrated that, in a general population, anatomical shape variations are mostly related to differences in the size of the pelvis (e.g., PC1 describes 68% of the total shape variation, which is attributed to size). Differences between the male and female pelvis were most pronounced in the iliac wing and pubic rami regions, which are often subject to injuries. Future clinical applications of our newly developed SSM may include SSM-based semi-automatic virtual reconstruction of a fractured hemipelvis as part of preoperative planning. Lastly, the SSM may help implant manufacturers assess which sizes of pelvic implants to produce so that implants fit most of the population.
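The registration-then-PCA pipeline can be sketched in a few lines of numpy: given pre-registered, flattened shape vectors, compute the mean shape and keep the principal components that explain a chosen fraction of variance. Function names and the variance threshold below are illustrative assumptions; registration (the paper's iterative closest point step) is assumed done beforehand:

```python
import numpy as np

def build_ssm(shapes, var_kept=0.90):
    """Point-distribution statistical shape model via PCA.

    `shapes` is an (n_shapes, n_points * 3) array of flattened, already
    registered coordinates."""
    mean = shapes.mean(axis=0)
    centered = shapes - mean
    # PCA through the SVD of the centered data matrix.
    _, s, vt = np.linalg.svd(centered, full_matrices=False)
    var = s ** 2
    explained = np.cumsum(var) / var.sum()
    n_pc = int(np.searchsorted(explained, var_kept) + 1)
    return mean, vt[:n_pc], var[:n_pc]

def reconstruct(mean, components, coeffs):
    # Synthesise a shape instance from principal-component coefficients.
    return mean + coeffs @ components
```

With a threshold of 0.90, the paper's result corresponds to `n_pc = 15`; reconstructing a fractured hemipelvis then amounts to finding coefficients that fit the intact fragments.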
Affiliation(s)
- Hylke van der Wel: Department of Oral and Maxillofacial Surgery/3D Lab, University Medical Center Groningen, 9713 GZ Groningen, The Netherlands
- Hennie Y Kuipers: Department of Surgery, University Medical Center Groningen, 9713 GZ Groningen, The Netherlands
- Joep Kraeima: Department of Oral and Maxillofacial Surgery/3D Lab, University Medical Center Groningen, 9713 GZ Groningen, The Netherlands
- Kaj Ten Duis: Department of Surgery, University Medical Center Groningen, 9713 GZ Groningen, The Netherlands
- Jelmer M Wolterink: Department of Applied Mathematics, Technical Medical Centre, 7500 AE Enschede, The Netherlands
- Jean-Paul P M de Vries: Department of Surgery, University Medical Center Groningen, 9713 GZ Groningen, The Netherlands
- Richte C L Schuurmann: Department of Surgery, University Medical Center Groningen, 9713 GZ Groningen, The Netherlands; Multimodality Medical Imaging Group, Technical Medical Centre, University of Twente, 7500 AE Enschede, The Netherlands
- Frank F A IJpma: Department of Surgery, University Medical Center Groningen, 9713 GZ Groningen, The Netherlands
9.
Deng L, Zhang Y, Qi J, Huang S, Yang X, Wang J. Enhancement of cone beam CT image registration by super-resolution pre-processing algorithm. Math Biosci Eng 2023;20:4403-4420. PMID: 36896505. DOI: 10.3934/mbe.2023204.
Abstract
In order to enhance cone-beam computed tomography (CBCT) image information and improve the registration accuracy for image-guided radiation therapy, we propose a super-resolution (SR) image enhancement method. This method uses super-resolution techniques to pre-process the CBCT prior to registration. Three rigid registration methods (rigid transformation, affine transformation, and similarity transformation) and a deep learning deformed registration (DLDR) method with and without SR were compared. The five evaluation indices, the mean squared error (MSE), mutual information, Pearson correlation coefficient (PCC), structural similarity index (SSIM), and PCC + SSIM, were used to validate the results of registration with SR. Moreover, the proposed method SR-DLDR was also compared with the VoxelMorph (VM) method. In rigid registration with SR, the registration accuracy improved by up to 6% in the PCC metric. In DLDR with SR, the registration accuracy was improved by up to 5% in PCC + SSIM. When taking the MSE as the loss function, the accuracy of SR-DLDR is equivalent to that of the VM method. In addition, when taking the SSIM as the loss function, the registration accuracy of SR-DLDR is 6% higher than that of VM. SR is a feasible method to be used in medical image registration for planning CT (pCT) and CBCT. The experimental results show that the SR algorithm can improve the accuracy and efficiency of CBCT image alignment regardless of which alignment algorithm is used.
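Of the five evaluation indices listed, the Pearson correlation coefficient (PCC) is the simplest to state for two images. A generic numpy sketch (not the authors' implementation):

```python
import numpy as np

def pcc(a, b):
    # Pearson correlation coefficient between two images:
    # flatten, subtract the means, then normalised dot product.
    a = np.ravel(a) - np.mean(a)
    b = np.ravel(b) - np.mean(b)
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))
```

Because PCC is invariant to linear intensity shifts and scalings, it is a reasonable similarity index between planning CT and CBCT, whose intensity calibrations differ.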
Affiliation(s)
- Liwei Deng: Heilongjiang Provincial Key Laboratory of Complex Intelligent System and Integration, School of Automation, Harbin University of Science and Technology, Harbin 150080, China
- Yuanzhi Zhang: Heilongjiang Provincial Key Laboratory of Complex Intelligent System and Integration, School of Automation, Harbin University of Science and Technology, Harbin 150080, China
- Jingjing Qi: Heilongjiang Provincial Key Laboratory of Complex Intelligent System and Integration, School of Automation, Harbin University of Science and Technology, Harbin 150080, China
- Sijuan Huang: Department of Radiation Oncology, Sun Yat-sen University Cancer Center; State Key Laboratory of Oncology in South China; Collaborative Innovation Center for Cancer Medicine; Guangdong Key Laboratory of Nasopharyngeal Carcinoma Diagnosis and Therapy, Guangzhou 510060, China
- Xin Yang: Department of Radiation Oncology, Sun Yat-sen University Cancer Center; State Key Laboratory of Oncology in South China; Collaborative Innovation Center for Cancer Medicine; Guangdong Key Laboratory of Nasopharyngeal Carcinoma Diagnosis and Therapy, Guangzhou 510060, China
- Jing Wang: Faculty of Rehabilitation Medicine, Biofeedback Laboratory, Guangzhou Xinhua University, Guangzhou 510520, China
10.
Kuiper RJA, Sakkers RJB, van Stralen M, Arbabi V, Viergever MA, Weinans H, Seevinck PR. Efficient cascaded V-net optimization for lower extremity CT segmentation validated using bone morphology assessment. J Orthop Res 2022;40:2894-2907. PMID: 35239226. PMCID: PMC9790725. DOI: 10.1002/jor.25314.
Abstract
Semantic segmentation of bone from lower extremity computed tomography (CT) scans can improve and accelerate visualization, diagnosis, and surgical planning in orthopaedics. However, the large field of view of these scans makes automatic segmentation using deep-learning-based methods challenging, slow, and graphical processing unit (GPU) memory intensive. We investigated methods to represent anatomical context more efficiently for accurate and fast segmentation and compared these with state-of-the-art methodology. Six lower extremity bones from patients of two different datasets were manually segmented from CT scans and used to train and optimize a cascaded deep learning approach. We varied the number of resolution levels, receptive fields, patch sizes, and number of V-net blocks. The best-performing network used a multi-stage, cascaded V-net approach with 128³-64³-32³ voxel patches as input. The average Dice coefficient over all bones was 0.98 ± 0.01, the mean surface distance was 0.26 ± 0.12 mm, and the 95th percentile Hausdorff distance was 0.65 ± 0.28 mm. This was a significant improvement over the results of the state-of-the-art nnU-Net, with only approximately one-twelfth of the training time, one-third of the inference time, and one-quarter of the GPU memory required. Comparison of the morphometric measurements performed on automatic and manual segmentations showed good correlation (intraclass correlation coefficient [ICC] > 0.8) for the alpha angle and excellent correlation (ICC > 0.95) for the hip-knee-ankle angle, femoral inclination, femoral version, acetabular version, lateral centre-edge angle, and acetabular coverage. The segmentations were generally of sufficient quality for the tested clinical applications and were produced accurately and quickly compared with state-of-the-art methodology from the literature.
Affiliation(s)
- Ruurd J. A. Kuiper: Department of Orthopaedics, University Medical Center Utrecht, Utrecht, The Netherlands; Image Sciences Institute, University Medical Center Utrecht, Utrecht, The Netherlands
- Ralph J. B. Sakkers: Department of Orthopaedics, University Medical Center Utrecht, Utrecht, The Netherlands
- Marijn van Stralen: Image Sciences Institute, University Medical Center Utrecht, Utrecht, The Netherlands; MRIguidance B.V., Utrecht, The Netherlands
- Vahid Arbabi: Department of Orthopaedics, University Medical Center Utrecht, Utrecht, The Netherlands; Department of Mechanical Engineering, University of Birjand, Birjand, Iran
- Max A. Viergever: Image Sciences Institute, University Medical Center Utrecht, Utrecht, The Netherlands
- Harrie Weinans: Department of Orthopaedics, University Medical Center Utrecht, Utrecht, The Netherlands
- Peter R. Seevinck: Image Sciences Institute, University Medical Center Utrecht, Utrecht, The Netherlands; MRIguidance B.V., Utrecht, The Netherlands
11.
Schnider E, Huck A, Toranelli M, Rauter G, Müller-Gerbl M, Cattin PC. Improved distinct bone segmentation from upper-body CT using binary-prediction-enhanced multi-class inference. Int J Comput Assist Radiol Surg 2022;17:2113-2120. PMID: 35595948. PMCID: PMC9515055. DOI: 10.1007/s11548-022-02650-y.
Abstract
PURPOSE Automated distinct bone segmentation has many applications in planning and navigation tasks. 3D U-Nets have previously been used to segment distinct bones in the upper body, but their performance is not yet optimal. Their most substantial source of error lies not in confusing one bone for another, but in confusing background with bone-tissue. METHODS In this work, we propose binary-prediction-enhanced multi-class (BEM) inference, which takes into account an additional binary background/bone-tissue prediction, to improve the multi-class distinct bone segmentation. We evaluate the method using different ways of obtaining the binary prediction, contrasting a two-stage approach to four networks with two segmentation heads. We perform our experiments on two datasets: An in-house dataset comprising 16 upper-body CT scans with voxelwise labelling into 126 distinct classes, and a public dataset containing 50 synthetic CT scans, with 41 different classes. RESULTS The most successful network with two segmentation heads achieves a class-median Dice coefficient of 0.85 on cross-validation with the upper-body CT dataset. These results outperform both our previously published 3D U-Net baseline with standard inference, and previously reported results from other groups. On the synthetic dataset, we also obtain improved results when using BEM-inference. CONCLUSION Using a binary bone-tissue/background prediction as guidance during inference improves distinct bone segmentation from upper-body CT scans and from the synthetic dataset. The results are robust to multiple ways of obtaining the bone-tissue segmentation and hold for the two-stage approach as well as for networks with two segmentation heads.
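One plausible reading of BEM inference is that the binary head vetoes background voxels while the multi-class head chooses among the bone classes elsewhere. The exact fusion rule in the paper may differ; the numpy sketch below is an assumption that illustrates the idea:

```python
import numpy as np

def bem_inference(multiclass_logits, binary_logits):
    # Voxels the binary bone-tissue/background head deems background are
    # forced to class 0; elsewhere the most likely bone class (labels
    # 1..C from the multi-class head) wins, even if the multi-class
    # head's own argmax would have been background.
    is_bone = binary_logits > 0
    bone_class = multiclass_logits[..., 1:].argmax(axis=-1) + 1
    return np.where(is_bone, bone_class, 0)
```

The point of such guidance is exactly the failure mode named above: it cannot fix bone-vs-bone confusion, but it corrects voxels where the multi-class network confuses bone tissue with background.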
Collapse
Affiliation(s)
- Eva Schnider
- Department of Biomedical Engineering, University of Basel, Gewerbestrasse 14, Allschwil, 4123, Switzerland.
| | - Antal Huck
- Department of Biomedical Engineering, University of Basel, Gewerbestrasse 14, Allschwil, 4123, Switzerland
| | - Mireille Toranelli
- Department of Biomedicine, Musculoskeletal Research, University of Basel, Basel, Switzerland
| | - Georg Rauter
- Department of Biomedical Engineering, University of Basel, Gewerbestrasse 14, Allschwil, 4123, Switzerland
| | - Magdalena Müller-Gerbl
- Department of Biomedicine, Musculoskeletal Research, University of Basel, Basel, Switzerland
| | - Philippe C Cattin
- Department of Biomedical Engineering, University of Basel, Gewerbestrasse 14, Allschwil, 4123, Switzerland
| |
Collapse
|
12
|
Gan W, Sun Y, Eldeniz C, Liu J, An H, Kamilov US. Deformation-Compensated Learning for Image Reconstruction Without Ground Truth. IEEE TRANSACTIONS ON MEDICAL IMAGING 2022; 41:2371-2384. [PMID: 35344490 PMCID: PMC9497435 DOI: 10.1109/tmi.2022.3163018] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/14/2023]
Abstract
Deep neural networks for medical image reconstruction are traditionally trained using high-quality ground-truth images as training targets. Recent work on Noise2Noise (N2N) has shown the potential of using multiple noisy measurements of the same object as an alternative to having a ground-truth. However, existing N2N-based methods are not suitable for learning from the measurements of an object undergoing nonrigid deformation. This paper addresses this issue by proposing the deformation-compensated learning (DeCoLearn) method for training deep reconstruction networks by compensating for object deformations. A key component of DeCoLearn is a deep registration module, which is jointly trained with the deep reconstruction network without any ground-truth supervision. We validate DeCoLearn on both simulated and experimentally collected magnetic resonance imaging (MRI) data and show that it significantly improves imaging quality.
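The central idea of DeCoLearn, that a second noisy measurement only works as a Noise2Noise-style training target after the object deformation has been compensated, can be illustrated with a toy NumPy example; a known integer shift stands in for the learned registration module, so this is an illustration of the principle, not the DeCoLearn architecture:

```python
import numpy as np

rng = np.random.default_rng(0)
clean = rng.random((64, 64))
shift = 5  # known object motion between the two acquisitions

# Two noisy measurements of the same object, the second one deformed.
y1 = clean + 0.05 * rng.standard_normal((64, 64))
y2 = np.roll(clean, shift, axis=0) + 0.05 * rng.standard_normal((64, 64))

# A naive N2N-style loss ignores the deformation ...
naive_loss = np.mean((y1 - y2) ** 2)
# ... while compensating for it (here with the known shift) leaves only
# the noise term, which is what makes Noise2Noise-style training work.
compensated_loss = np.mean((np.roll(y1, shift, axis=0) - y2) ** 2)

assert compensated_loss < naive_loss
```

In DeCoLearn the shift is replaced by a deformation field predicted by a registration network that is trained jointly with the reconstruction network, without ground-truth supervision.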
Collapse
|
13
|
Clinical application of automated virtual orbital reconstruction for orbital fracture management with patient-specific implants: A prospective comparative study. J Craniomaxillofac Surg 2022; 50:686-691. [DOI: 10.1016/j.jcms.2022.05.006] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/25/2021] [Revised: 03/02/2022] [Accepted: 05/26/2022] [Indexed: 11/18/2022] Open
|
14
|
Abbasi S, Tavakoli M, Boveiri HR, Mosleh Shirazi MA, Khayami R, Khorasani H, Javidan R, Mehdizadeh A. Medical image registration using unsupervised deep neural network: A scoping literature review. Biomed Signal Process Control 2022. [DOI: 10.1016/j.bspc.2021.103444] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/26/2023]
|
15
|
Fu Y, Zhang H, Morris ED, Glide-Hurst CK, Pai S, Traverso A, Wee L, Hadzic I, Lønne PI, Shen C, Liu T, Yang X. Artificial Intelligence in Radiation Therapy. IEEE TRANSACTIONS ON RADIATION AND PLASMA MEDICAL SCIENCES 2022; 6:158-181. [PMID: 35992632 PMCID: PMC9385128 DOI: 10.1109/trpms.2021.3107454] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 02/03/2023]
Abstract
Artificial intelligence (AI) has great potential to transform the clinical workflow of radiotherapy. Since the introduction of deep neural networks, many AI-based methods have been proposed to address challenges in different aspects of radiotherapy. Commercial vendors have started to release AI-based tools that can be readily integrated into the established clinical workflow. To show the recent progress in AI-aided radiotherapy, we have reviewed AI-based studies in five major aspects of radiotherapy including image reconstruction, image registration, image segmentation, image synthesis, and automatic treatment planning. In each section, we summarized and categorized the recently published methods, followed by a discussion of the challenges, concerns, and future development. Given the rapid development of AI-aided radiotherapy, the efficiency and effectiveness of radiotherapy in the future could be substantially improved through intelligent automation of various aspects of radiotherapy.
Collapse
Affiliation(s)
- Yabo Fu
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA 30322, USA
| | - Hao Zhang
- Department of Medical Physics, Memorial Sloan Kettering Cancer Center, New York, NY 10065, USA
| | - Eric D. Morris
- Department of Radiation Oncology, University of California-Los Angeles, Los Angeles, CA 90095, USA
| | - Carri K. Glide-Hurst
- Department of Human Oncology, School of Medicine and Public Health, University of Wisconsin-Madison, Madison, WI 53792, USA
| | - Suraj Pai
- Maastricht University Medical Centre, Netherlands
| | | | - Leonard Wee
- Maastricht University Medical Centre, Netherlands
| | | | - Per-Ivar Lønne
- Department of Medical Physics, Oslo University Hospital, PO Box 4953 Nydalen, 0424 Oslo, Norway
| | - Chenyang Shen
- Department of Radiation Oncology, University of Texas Southwestern Medical Center, Dallas, TX 75002, USA
| | - Tian Liu
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA 30322, USA
| | - Xiaofeng Yang
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA 30322, USA
| |
Collapse
|
16
|
An Efficient Dynamic Regulated Fuzzy Neural Network for Human Motion Retrieval and Analysis. Symmetry (Basel) 2021. [DOI: 10.3390/sym13081317] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/16/2022] Open
Abstract
Human motion retrieval and analysis is a useful means of activity recognition for 3D human bodies. An efficient method is proposed to estimate human motion using symmetric joint points and limb features of various limb parts, formulated as a regression task. We first obtain the 3D coordinates of symmetric joint points based on the located waist and hip points. By introducing three critical feature points on the torso and matching symmetric joint points across motion video sequences, the 3D coordinates of symmetric joint points and their asymmetric limb features are not affected by shading or limb interference in different postures. With the asymmetric limb features of various human parts, a dynamic regulated fuzzy neural network (DRFNN) is proposed to estimate human motion for different asymmetric postures using a learning algorithm for network parameters and weights. Finally, human sequential actions corresponding to different asymmetric postures are presented according to the best retrieval results from the DRFNN on a 3D human action database. Experiments show that, compared with the traditional adaptive self-organizing fuzzy neural network (SOFNN) model and with existing human motion analysis algorithms, the proposed algorithm achieves higher estimation accuracy and better presentation results.
Collapse
|
17
|
Fu Y, Lei Y, Wang T, Curran WJ, Liu T, Yang X. A review of deep learning based methods for medical image multi-organ segmentation. Phys Med 2021; 85:107-122. [PMID: 33992856 PMCID: PMC8217246 DOI: 10.1016/j.ejmp.2021.05.003] [Citation(s) in RCA: 77] [Impact Index Per Article: 19.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 11/27/2020] [Revised: 03/12/2021] [Accepted: 05/03/2021] [Indexed: 12/12/2022] Open
Abstract
Deep learning has revolutionized image processing and achieved state-of-the-art performance in many medical image segmentation tasks. Many deep learning-based methods have been published to segment different parts of the body for different medical applications. It is necessary to summarize the current state of development for deep learning in the field of medical image segmentation. In this paper, we aim to provide a comprehensive review with a focus on multi-organ image segmentation, which is crucial for radiotherapy where the tumor and organs-at-risk need to be contoured for treatment planning. We grouped the surveyed methods into two broad categories which are 'pixel-wise classification' and 'end-to-end segmentation'. Each category was divided into subgroups according to their network design. For each type, we listed the surveyed works, highlighted important contributions and identified specific challenges. Following the detailed review, we discussed the achievements, shortcomings and future potential of each category. To enable direct comparison, we listed the performance of the surveyed works that used thoracic and head-and-neck benchmark datasets.
Collapse
Affiliation(s)
- Yabo Fu
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA, USA
| | - Yang Lei
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA, USA
| | - Tonghe Wang
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA, USA
| | - Walter J Curran
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA, USA
| | - Tian Liu
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA, USA
| | - Xiaofeng Yang
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA, USA.
| |
Collapse
|
18
|
Li-quan C, You L, Shen F, Shan Z, Chen J. Pose recognition in sports scenes based on deep learning skeleton sequence model. JOURNAL OF INTELLIGENT & FUZZY SYSTEMS 2021. [DOI: 10.3233/jifs-189834] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/15/2022]
Abstract
Human skeleton extraction is a basic problem in the field of computer vision. With the rapid progress of science and technology, it has become a hot issue in target detection tasks such as pedestrian recognition, behavior monitoring, and pedestrian gesture recognition. In recent years, due to the development of deep neural networks, modeling of human joints in acquired images has made progress in skeleton extraction. However, most models suffer from low modeling accuracy, poor real-time performance, and poor availability. To address this human target detection problem, this paper studies a gesture recognition method based on a deep learning skeleton sequence model in sports scenes, aiming to provide a method with strong noise resistance, good real-time performance, and an accurate model. Motion video frame images are used to train a VGG16 network; extracting skeleton information with this network strengthens the posture feature expression. HOG is used for feature extraction, and the Adam algorithm is used to optimize the network so that more posture features are extracted, improving the network's posture recognition accuracy. The hyperparameters and structure of the base network are then adjusted according to the training results, and the key poses in the sports scene are obtained through the final classifier.
Collapse
Affiliation(s)
- Chen Li-quan
- Department of Physical Education and Sports Science, Mudanjiang Normal University, Mudanjiang
| | - Li You
- Department of Physical Education and Sports Science, Mudanjiang Normal University, Mudanjiang
| | - Fengjun Shen
- College of Sport Science and Physical Education, Myongji University, Yongin-si, Republic of Korea
| | - Zhaoqimeng Shan
- Graduate School of Business Administration, The University of Suwon, Republic of Korea
| | - Jiaxuan Chen
- International Elite College, Yonsei University, Wonju, South Korea
| |
Collapse
|
19
|
Abstract
This paper presents a review of deep learning (DL)-based medical image registration methods. We summarized the latest developments and applications of DL-based registration methods in the medical field. These methods were classified into seven categories according to their methods, functions and popularity. A detailed review of each category was presented, highlighting important contributions and identifying specific challenges. A short assessment was presented following the detailed review of each category to summarize its achievements and future potential. We provided a comprehensive comparison among DL-based methods for lung and brain registration using benchmark datasets. Lastly, we analyzed the statistics of all the cited works from various aspects, revealing the popularity and future trend of DL-based medical image registration.
Collapse
Affiliation(s)
- Yabo Fu
- Department of Radiation Oncology, Emory University, Atlanta, GA, United States of America
| | | | | | | | | | | |
Collapse
|
20
|
Fu Y, Ippolito JE, Ludwig DR, Nizamuddin R, Li HH, Yang D. Technical Note: Automatic segmentation of CT images for ventral body composition analysis. Med Phys 2020; 47:5723-5730. [PMID: 32969050 DOI: 10.1002/mp.14465] [Citation(s) in RCA: 6] [Impact Index Per Article: 1.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/19/2020] [Revised: 08/28/2020] [Accepted: 09/04/2020] [Indexed: 11/12/2022] Open
Abstract
PURPOSE Body composition is known to be associated with many diseases including diabetes, cancers, and cardiovascular diseases. In this paper, we developed a fully automatic body tissue decomposition procedure to segment three major compartments that are related to body composition analysis - subcutaneous adipose tissue (SAT), visceral adipose tissue (VAT), and muscle. Three additional compartments - the ventral cavity, lung, and bones - were also segmented during the segmentation process to assist segmentation of the major compartments. METHODS A convolutional neural network (CNN) model with densely connected layers was developed to perform ventral cavity segmentation. An image processing workflow was developed to segment the ventral cavity in any patient's computed tomography (CT) using the CNN model, then further segment the body tissue into multiple compartments using hysteresis thresholding followed by morphological operations. It is important to segment the ventral cavity first to allow accurate separation of compartments with similar Hounsfield unit (HU) values inside and outside the ventral cavity. RESULTS The ventral cavity segmentation CNN model was trained and tested with manually labeled ventral cavities in 60 CTs. Dice scores (mean ± standard deviation) for ventral cavity segmentation were 0.966 ± 0.012. Tested on CT datasets with intravenous (IV) and oral contrast, the Dice scores were 0.96 ± 0.02, 0.94 ± 0.06, 0.96 ± 0.04, 0.95 ± 0.04, and 0.99 ± 0.01 for bone, VAT, SAT, muscle, and lung, respectively. The respective Dice scores were 0.97 ± 0.02, 0.94 ± 0.07, 0.93 ± 0.06, 0.91 ± 0.04, and 0.99 ± 0.01 for non-contrast CT datasets. CONCLUSION A body tissue decomposition procedure was developed to automatically segment multiple compartments of the ventral body. The proposed method enables fully automated quantification of three-dimensional (3D) ventral body composition metrics from CT images.
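Once the CNN delivers the ventral-cavity mask, the remaining decomposition reduces largely to HU windowing; a minimal sketch, assuming commonly cited HU windows (not necessarily the paper's exact thresholds) and omitting the hysteresis and morphological clean-up steps:

```python
import numpy as np

# Illustrative HU windows (common literature values; assumptions here,
# not necessarily the thresholds used in the paper).
ADIPOSE_HU = (-190, -30)
MUSCLE_HU = (-29, 150)

def decompose(ct_hu, cavity_mask):
    """Split adipose tissue into VAT/SAT using a ventral-cavity mask.

    ct_hu:       (X, Y, Z) CT volume in Hounsfield units.
    cavity_mask: boolean mask of the ventral cavity (e.g. from a CNN).
    """
    adipose = (ct_hu >= ADIPOSE_HU[0]) & (ct_hu <= ADIPOSE_HU[1])
    muscle = (ct_hu >= MUSCLE_HU[0]) & (ct_hu <= MUSCLE_HU[1])
    vat = adipose & cavity_mask    # fat inside the ventral cavity
    sat = adipose & ~cavity_mask   # fat outside the ventral cavity
    return {"VAT": vat, "SAT": sat, "muscle": muscle}
```

The cavity mask is what disambiguates VAT from SAT, since both fall in the same HU window; this is why the abstract stresses segmenting the ventral cavity first.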
Collapse
Affiliation(s)
- Yabo Fu
- Washington University School of Medicine, 660 S Euclid Ave, Campus, Box 8131, St Louis, MO, 63110, USA
| | - Joseph E Ippolito
- Washington University School of Medicine, 660 S Euclid Ave, Campus, Box 8131, St Louis, MO, 63110, USA
| | - Daniel R Ludwig
- Washington University School of Medicine, 660 S Euclid Ave, Campus, Box 8131, St Louis, MO, 63110, USA
| | - Rehan Nizamuddin
- Washington University School of Medicine, 660 S Euclid Ave, Campus, Box 8131, St Louis, MO, 63110, USA
| | - Harold H Li
- Washington University School of Medicine, 660 S Euclid Ave, Campus, Box 8131, St Louis, MO, 63110, USA
| | - Deshan Yang
- Washington University School of Medicine, 660 S Euclid Ave, Campus, Box 8131, St Louis, MO, 63110, USA
| |
Collapse
|
21
|
Dai X, Lei Y, Zhang Y, Qiu RLJ, Wang T, Dresser SA, Curran WJ, Patel P, Liu T, Yang X. Automatic multi-catheter detection using deeply supervised convolutional neural network in MRI-guided HDR prostate brachytherapy. Med Phys 2020; 47:4115-4124. [PMID: 32484573 DOI: 10.1002/mp.14307] [Citation(s) in RCA: 17] [Impact Index Per Article: 3.4] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/19/2020] [Revised: 05/19/2020] [Accepted: 05/24/2020] [Indexed: 12/19/2022] Open
Abstract
PURPOSE High-dose-rate (HDR) brachytherapy is an established technique to be used as a monotherapy option or focal boost in conjunction with external beam radiation therapy (EBRT) for treating prostate cancer. Radiation source path reconstruction is a critical procedure in HDR treatment planning. Manually identifying the source path is labor-intensive and time-consuming. In recent years, magnetic resonance imaging (MRI) has become a valuable imaging modality for image-guided HDR prostate brachytherapy due to its superb soft-tissue contrast for target delineation and normal tissue contouring. The purpose of this study is to investigate a deep-learning-based method to automatically reconstruct multiple catheters in MRI for prostate cancer HDR brachytherapy treatment planning. METHODS An attention-gated U-Net model incorporating total variation (TV) regularization was developed for multi-catheter segmentation in MRI. The attention gates were used to improve the accuracy of identifying small catheter points, while TV regularization was adopted to encode the natural spatial continuity of catheters into the model. The model was trained using the binary catheter annotation images offered by experienced physicists as ground truth paired with original MRI images. After the network was trained, MR images of a new prostate cancer patient receiving HDR brachytherapy were fed into the model to predict the locations and shapes of all the catheters. Quantitative assessments of our proposed method were based on catheter shaft and tip errors compared to the ground truth. RESULTS Our method detected 299 catheters from 20 patients receiving HDR prostate brachytherapy with a catheter tip error of 0.37 ± 1.68 mm and a catheter shaft error of 0.93 ± 0.50 mm. For detection of catheter tips, our method localized 87% of the catheter tips within an error of less than ±2.0 mm, and more than 71% of the tips within an absolute error of no more than 1.0 mm.
For catheter shaft localization, 97% of catheters were detected with an error of <2.0 mm, while 63% were within 1.0 mm. CONCLUSIONS In this study, we proposed a novel multi-catheter detection method to precisely localize the tips and shafts of catheters in three-dimensional MRI images of HDR prostate brachytherapy. It paves the way for elevating the quality and outcome of MRI-guided HDR prostate brachytherapy.
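A plausible form of the tip and shaft error metrics reported above (the abstract does not state whether the shaft distance is one-sided or symmetric, so the one-sided variant here is an assumption):

```python
import numpy as np

def tip_error(pred_tip, gt_tip):
    """Euclidean distance between predicted and ground-truth tip (mm)."""
    return float(np.linalg.norm(np.asarray(pred_tip) - np.asarray(gt_tip)))

def shaft_error(pred_pts, gt_pts):
    """Mean distance from each predicted shaft point to the closest
    ground-truth shaft point (a one-sided surface distance)."""
    pred = np.asarray(pred_pts, float)[:, None, :]  # (N, 1, 3)
    gt = np.asarray(gt_pts, float)[None, :, :]      # (1, M, 3)
    d = np.linalg.norm(pred - gt, axis=-1)          # (N, M) pairwise distances
    return float(d.min(axis=1).mean())
```

Coordinates are assumed to be in millimetres already; with voxel indices, each axis would first be scaled by the voxel spacing.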
Collapse
Affiliation(s)
- Xianjin Dai
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA, 30332, USA
| | - Yang Lei
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA, 30332, USA
| | - Yupei Zhang
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA, 30332, USA
| | - Richard L J Qiu
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA, 30332, USA
| | - Tonghe Wang
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA, 30332, USA
| | - Sean A Dresser
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA, 30332, USA
| | - Walter J Curran
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA, 30332, USA
| | - Pretesh Patel
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA, 30332, USA
| | - Tian Liu
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA, 30332, USA
| | - Xiaofeng Yang
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA, 30332, USA
| |
Collapse
|
22
|
Fu Y, Lei Y, Wang T, Tian S, Patel P, Jani AB, Curran WJ, Liu T, Yang X. Pelvic multi-organ segmentation on cone-beam CT for prostate adaptive radiotherapy. Med Phys 2020; 47:3415-3422. [PMID: 32323330 DOI: 10.1002/mp.14196] [Citation(s) in RCA: 39] [Impact Index Per Article: 7.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/03/2020] [Revised: 04/13/2020] [Accepted: 04/16/2020] [Indexed: 12/29/2022] Open
Abstract
BACKGROUND AND PURPOSE The purpose of this study is to develop a deep learning-based approach to simultaneously segment five pelvic organs including prostate, bladder, rectum, left and right femoral heads on cone-beam CT (CBCT), as required elements for prostate adaptive radiotherapy planning. MATERIALS AND METHODS We propose to utilize both CBCT and CBCT-based synthetic MRI (sMRI) for the segmentation of soft tissue and bony structures, as they provide complementary information for pelvic organ segmentation. CBCT images have superior bony structure contrast and sMRIs have superior soft tissue contrast. Prior to segmentation, sMRI was generated using a cycle-consistent adversarial network (CycleGAN), which was trained using paired CBCT-MR images. To combine the advantages of both CBCT and sMRI, we developed a cross-modality attention pyramid network with late feature fusion. Our method processes CBCT and sMRI inputs separately to extract CBCT-specific and sMRI-specific features prior to combining them in a late-fusion network for final segmentation. The network was trained and tested using 100 patients' datasets, with each dataset including the CBCT and manual physician contours. For comparison, we trained another two networks with different network inputs and architectures. The segmentation results were compared to manual contours for evaluation. RESULTS For the proposed method, dice similarity coefficients and mean surface distances between the segmentation results and the ground truth were 0.96 ± 0.03, 0.65 ± 0.67 mm; 0.91 ± 0.08, 0.93 ± 0.96 mm; 0.93 ± 0.04, 0.72 ± 0.61 mm; 0.95 ± 0.05, 1.05 ± 1.40 mm; and 0.95 ± 0.05, 1.08 ± 1.48 mm for bladder, prostate, rectum, left and right femoral heads, respectively. As compared to the other two competing methods, our method has shown superior performance in terms of segmentation accuracy.
CONCLUSION We developed a deep learning-based segmentation method to rapidly and accurately segment five pelvic organs simultaneously from daily CBCTs. The proposed method could be used in the clinic to support rapid target and organs-at-risk contouring for prostate adaptive radiation therapy.
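The Dice similarity coefficient reported above is straightforward to compute from binary masks (the mean surface distance additionally requires boundary extraction and is omitted from this sketch):

```python
import numpy as np

def dice(a, b):
    """Dice similarity coefficient between two boolean masks:
    2 * |A ∩ B| / (|A| + |B|), with the empty-vs-empty case defined as 1."""
    a, b = np.asarray(a, bool), np.asarray(b, bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0
```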
Collapse
Affiliation(s)
- Yabo Fu
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA, 30322, USA
| | - Yang Lei
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA, 30322, USA
| | - Tonghe Wang
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA, 30322, USA
| | - Sibo Tian
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA, 30322, USA
| | - Pretesh Patel
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA, 30322, USA
| | - Ashesh B Jani
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA, 30322, USA
| | - Walter J Curran
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA, 30322, USA
| | - Tian Liu
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA, 30322, USA
| | - Xiaofeng Yang
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA, 30322, USA
| |
Collapse
|
23
|
Lei Y, Fu Y, Wang T, Liu Y, Patel P, Curran WJ, Liu T, Yang X. 4D-CT deformable image registration using multiscale unsupervised deep learning. Phys Med Biol 2020; 65:085003. [PMID: 32097902 PMCID: PMC7775640 DOI: 10.1088/1361-6560/ab79c4] [Citation(s) in RCA: 36] [Impact Index Per Article: 7.2] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/17/2023]
Abstract
Deformable image registration (DIR) of 4D-CT images is important in multiple radiation therapy applications including motion tracking of soft tissue or fiducial markers, target definition, image fusion, dose accumulation and treatment response evaluations. It is very challenging to register abdominal 4D-CT images accurately and quickly due to their large appearance variance and bulky size. In this study, we proposed an accurate and fast multi-scale DIR network (MS-DIRNet) for abdominal 4D-CT registration. MS-DIRNet consists of a global network (GlobalNet) and a local network (LocalNet). GlobalNet was trained using down-sampled whole image volumes while LocalNet was trained using sampled image patches. MS-DIRNet follows a generator-discriminator design: the generator was trained to directly predict a deformation vector field (DVF) based on the moving and target images, and was implemented using convolutional neural networks with multiple attention gates, while the discriminator was trained to differentiate the deformed images from the target images to provide additional DVF regularization. The loss function of MS-DIRNet includes three parts: image similarity loss, adversarial loss and DVF regularization loss. MS-DIRNet was trained in a completely unsupervised manner, meaning that ground-truth DVFs are not needed. Different from traditional DIR methods that calculate the DVF iteratively, MS-DIRNet calculates the final DVF in a single forward prediction, which could significantly expedite the DIR process. MS-DIRNet was trained and tested on 25 patients' 4D-CT datasets using five-fold cross validation. For registration accuracy evaluation, target registration errors (TREs) of MS-DIRNet were compared to clinically used software.
Our results showed that the MS-DIRNet with an average TRE of 1.2 ± 0.8 mm outperformed the commercial software with an average TRE of 2.5 ± 0.8 mm in 4D-CT abdominal DIR, demonstrating the superior performance of our method in fiducial marker tracking and overall soft tissue alignment.
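The target registration error (TRE) used for evaluation above scores a predicted DVF at corresponding anatomical landmarks; a minimal sketch using nearest-voxel DVF lookup (trilinear interpolation would be the more accurate choice, and the coordinate conventions here are illustrative assumptions):

```python
import numpy as np

def target_registration_error(landmarks_moving, landmarks_fixed, dvf):
    """Per-landmark TRE given a predicted deformation vector field.

    landmarks_*: (N, 3) voxel coordinates of corresponding landmarks.
    dvf:         (X, Y, Z, 3) displacement at each voxel (voxel units).
    """
    m = np.asarray(landmarks_moving, float)
    f = np.asarray(landmarks_fixed, float)
    idx = np.round(m).astype(int)            # nearest-voxel DVF lookup
    warped = m + dvf[idx[:, 0], idx[:, 1], idx[:, 2]]
    return np.linalg.norm(warped - f, axis=1)
```

Multiplying each axis by the voxel spacing would convert the result to millimetres, matching the units reported in the abstract.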
Collapse
Affiliation(s)
- Yang Lei
- Department of Radiation Oncology, Winship Cancer Institute, Emory University, Atlanta, GA, 30322
| | | | | | | | | | | | | | | |
Collapse
|
24
|
Chepurnyi Y, Chernohorskyi D, Prykhodko D, Poutala A, Kopchak A. Reliability of orbital volume measurements based on computed tomography segmentation: Validation of different algorithms in orbital trauma patients. J Craniomaxillofac Surg 2020; 48:574-581. [PMID: 32291132 DOI: 10.1016/j.jcms.2020.03.007] [Citation(s) in RCA: 16] [Impact Index Per Article: 3.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/17/2019] [Revised: 02/08/2020] [Accepted: 03/19/2020] [Indexed: 10/24/2022] Open
Abstract
PURPOSE To compare the most common methods of segmentation for evaluation of the bony orbit in orbital trauma patients. MATERIALS AND METHODS Computed tomography scans (before and after treatment) from 15 patients with unilateral blowout fractures who underwent orbital reconstructions were randomly selected for this study. Orbital volume measurements, volume difference measurements, prolapsed soft tissue volumes, and bony defect areas were made using manual, semi-automated, and automated segmentation methods. RESULTS Volume difference values between intact and damaged orbits after surgery were 0.5 ± 0.3 cm3 using the manual mode, 0.5 ± 0.4 cm3 using the semi-automated method, and 0.76 ± 0.5 cm3 using automated segmentation (p = 0.216); the mean volumes (MVs) for prolapsed tissues were 3.0 ± 1.9 cm3, 3.0 ± 2.3 cm3, and 2.8 ± 3.9 cm3 (p = 0.152); and orbital wall defect areas were 4.7 ± 2.8 cm2, 4.75 ± 3.1 cm2, and 4.9 ± 3.3 cm2 (p = 0.674), respectively. CONCLUSIONS The analyzed segmentation methods had the same accuracy in evaluation of volume differences between the two orbits of the same patient, defect areas, and prolapsed soft tissue volumes, but not in absolute values of orbital volume, due to the existing diversity in determination of the anterior closing. The automated method is recommended for common clinical cases, as it is less time-consuming with high precision and reproducibility.
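Once a segmentation of each orbit exists, the volume and volume-difference measurements compared in this study reduce to voxel counting scaled by voxel size; a minimal sketch (function names and the spacing parameter are illustrative, not from the paper):

```python
import numpy as np

def orbit_volume_cm3(mask, spacing_mm):
    """Volume of a segmented orbit: voxel count x voxel volume.

    mask:       boolean segmentation of one bony orbit.
    spacing_mm: (sx, sy, sz) voxel spacing in millimetres.
    """
    voxel_mm3 = float(np.prod(spacing_mm))
    return mask.sum() * voxel_mm3 / 1000.0  # mm^3 -> cm^3

def volume_difference_cm3(intact_mask, damaged_mask, spacing_mm):
    """Intact-vs-damaged orbital volume difference, the quantity the
    study found consistent across segmentation methods."""
    return abs(orbit_volume_cm3(intact_mask, spacing_mm)
               - orbit_volume_cm3(damaged_mask, spacing_mm))
```

The study's point is that this difference is robust across segmentation methods, whereas the absolute volumes are not, because each method closes the anterior orbital opening differently.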
Collapse
Affiliation(s)
- Yurii Chepurnyi
- Department of Stomatology, Bogomolets National Medical University, T. Shevchenko Blvd., 13, 01601, Kyiv, Ukraine.
| | - Denys Chernohorskyi
- Department of Stomatology, Bogomolets National Medical University, T. Shevchenko Blvd., 13, 01601, Kyiv, Ukraine
| | - Danylo Prykhodko
- "Imatek Medical" Co., Prosp. Peremogy 123, 03179, Kyiv, Ukraine
| | - Arto Poutala
- "Disior Ltd", FI27875878, Terkko Health Hub, Haartmaninkatu 4, 00290, Helsinki, Finland
| | - Andriy Kopchak
- Department of Stomatology, Bogomolets National Medical University, T. Shevchenko Blvd., 13, 01601, Kyiv, Ukraine
| |
Collapse
|
25
|
Lenchik L, Heacock L, Weaver AA, Boutin RD, Cook TS, Itri J, Filippi CG, Gullapalli RP, Lee J, Zagurovskaya M, Retson T, Godwin K, Nicholson J, Narayana PA. Automated Segmentation of Tissues Using CT and MRI: A Systematic Review. Acad Radiol 2019; 26:1695-1706. [PMID: 31405724 PMCID: PMC6878163 DOI: 10.1016/j.acra.2019.07.006] [Citation(s) in RCA: 69] [Impact Index Per Article: 11.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/12/2019] [Revised: 07/17/2019] [Accepted: 07/17/2019] [Indexed: 01/10/2023]
Abstract
RATIONALE AND OBJECTIVES The automated segmentation of organs and tissues throughout the body using computed tomography and magnetic resonance imaging has been rapidly increasing. Research into many medical conditions has benefited greatly from these approaches by allowing the development of more rapid and reproducible quantitative imaging markers. These markers have been used to help diagnose disease, determine prognosis, select patients for therapy, and follow responses to therapy. Because some of these tools are now transitioning from research environments to clinical practice, it is important for radiologists to become familiar with various methods used for automated segmentation. MATERIALS AND METHODS The Radiology Research Alliance of the Association of University Radiologists convened an Automated Segmentation Task Force to conduct a systematic review of the peer-reviewed literature on this topic. RESULTS The systematic review presented here includes 408 studies and discusses various approaches to automated segmentation using computed tomography and magnetic resonance imaging for neurologic, thoracic, abdominal, musculoskeletal, and breast imaging applications. CONCLUSION These insights should help prepare radiologists to better evaluate automated segmentation tools and apply them not only to research, but eventually to clinical practice.
Affiliation(s)
- Leon Lenchik: Department of Radiology, Wake Forest School of Medicine, Medical Center Boulevard, Winston-Salem, NC 27157
- Laura Heacock: Department of Radiology, NYU Langone, New York, New York
- Ashley A Weaver: Department of Biomedical Engineering, Wake Forest School of Medicine, Winston-Salem, North Carolina
- Robert D Boutin: Department of Radiology, University of California Davis School of Medicine, Sacramento, California
- Tessa S Cook: Department of Radiology, University of Pennsylvania, Philadelphia, Pennsylvania
- Jason Itri: Department of Radiology, Wake Forest School of Medicine, Medical Center Boulevard, Winston-Salem, NC 27157
- Christopher G Filippi: Department of Radiology, Donald and Barbara Zucker School of Medicine at Hofstra/Northwell, Lenox Hill Hospital, New York, New York
- Rao P Gullapalli: Department of Radiology, University of Maryland School of Medicine, Baltimore, Maryland
- James Lee: Department of Radiology, University of Kentucky, Lexington, Kentucky
- Tara Retson: Department of Radiology, University of California San Diego, San Diego, California
- Kendra Godwin: Medical Library, Memorial Sloan Kettering Cancer Center, New York, New York
- Joey Nicholson: NYU Health Sciences Library, NYU School of Medicine, NYU Langone Health, New York, New York
- Ponnada A Narayana: Department of Diagnostic and Interventional Imaging, McGovern Medical School, University of Texas Health Science Center at Houston, Houston, Texas
26
Fu Y, Wu X, Thomas AM, Li HH, Yang D. Automatic large quantity landmark pairs detection in 4DCT lung images. Med Phys 2019; 46:4490-4501. [PMID: 31318989] [PMCID: PMC8311742] [DOI: 10.1002/mp.13726]
Abstract
PURPOSE: To automatically and precisely detect a large quantity of landmark pairs between two lung computed tomography (CT) images to support the evaluation of deformable image registration (DIR). We expect the generated landmark pairs to significantly augment the current lung CT benchmark datasets in both quantity and positional accuracy.
METHODS: A large number of landmark pairs were detected within the lung between the end-exhalation (EE) and end-inhalation (EI) phases of lung four-dimensional computed tomography (4DCT) datasets. Thousands of landmarks were detected by applying the Harris-Stephens corner detection algorithm to probability maps of the lung vasculature tree. A parametric image registration method (pTVreg) was used to establish initial landmark correspondence by registering the images at the EE and EI phases. A multi-stream pseudo-siamese (MSPS) network was then developed to further improve landmark pair positional accuracy by directly predicting three-dimensional (3D) shifts that optimally align the landmarks in EE with their counterparts in EI. Positional accuracies of the detected landmark pairs were evaluated using both digital phantoms and publicly available landmark pairs.
RESULTS: Dense sets of landmark pairs were detected for 10 4DCT lung datasets, with an average of 1886 landmark pairs per case. For 10 digital phantom cases, the mean and standard deviation of the target registration error (TRE) were 0.47 ± 0.45 mm, with 98% of landmark pairs having a TRE smaller than 2 mm. Tests using 300 manually labeled landmark pairs in 10 lung 4DCT benchmark datasets (DIRLAB) produced a TRE of 0.73 ± 0.53 mm, with 97% of landmark pairs having a TRE smaller than 2 mm.
CONCLUSION: A new method was developed to automatically and precisely detect a large quantity of landmark pairs between lung CT image pairs. The detected landmark pairs could serve as benchmark datasets for more accurate and informative quantitative evaluation of DIR algorithms.
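As a toy sketch of the corner-detection step described in this abstract (not the authors' implementation; the function names, thresholds, and the 2D restriction are illustrative assumptions), a Harris-Stephens response can be computed with NumPy/SciPy and its local maxima taken as candidate landmarks; the paper applies the same idea to 3D vasculature probability maps:

```python
import numpy as np
from scipy import ndimage

def harris_response(img, sigma=1.0, k=0.05):
    """Harris-Stephens corner response R = det(M) - k * trace(M)^2,
    where M is the Gaussian-smoothed structure tensor of the image."""
    ix = ndimage.sobel(img, axis=1, mode="reflect")   # horizontal gradient
    iy = ndimage.sobel(img, axis=0, mode="reflect")   # vertical gradient
    ixx = ndimage.gaussian_filter(ix * ix, sigma)
    iyy = ndimage.gaussian_filter(iy * iy, sigma)
    ixy = ndimage.gaussian_filter(ix * iy, sigma)
    det = ixx * iyy - ixy ** 2
    trace = ixx + iyy
    return det - k * trace ** 2

def detect_landmarks(prob_map, rel_threshold=0.01, min_distance=3):
    """Return (row, col) coordinates of local response maxima above a
    relative threshold; min_distance suppresses nearby duplicates."""
    r = harris_response(prob_map)
    footprint = np.ones((2 * min_distance + 1,) * 2)
    local_max = r == ndimage.maximum_filter(r, footprint=footprint)
    strong = r > rel_threshold * r.max()
    return np.argwhere(local_max & strong)
```

On a synthetic binary square the detected points cluster at the four corners, which is the behavior the landmark-detection step relies on; edges score negatively and flat regions score near zero, so only corner-like structures survive.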
Affiliation(s)
- Yabo Fu, Xue Wu, Allan M. Thomas, Harold H. Li, Deshan Yang: Department of Radiation Oncology, Washington University in Saint Louis, St. Louis, MO, USA
27
Taghizadeh E, Terrier A, Becce F, Farron A, Büchler P. Automated CT bone segmentation using statistical shape modelling and local template matching. Comput Methods Biomech Biomed Engin 2019; 22:1303-1310. [DOI: 10.1080/10255842.2019.1661391]
Affiliation(s)
- Elham Taghizadeh: ARTORG Center for Biomedical Engineering Research, University of Bern, Bern, Switzerland
- Alexandre Terrier: Laboratory of Biomechanical Orthopedics, Ecole Polytechnique Fédérale de Lausanne, Lausanne, Switzerland
- Fabio Becce: Department of Diagnostic and Interventional Radiology, Lausanne University Hospital and University of Lausanne, Lausanne, Switzerland
- Alain Farron: Service of Orthopedics and Traumatology, Lausanne University Hospital and University of Lausanne, Lausanne, Switzerland
- Philippe Büchler: ARTORG Center for Biomedical Engineering Research, University of Bern, Bern, Switzerland
28
Minnema J, van Eijnatten M, Kouw W, Diblen F, Mendrik A, Wolff J. CT image segmentation of bone for medical additive manufacturing using a convolutional neural network. Comput Biol Med 2018; 103:130-139. [DOI: 10.1016/j.compbiomed.2018.10.012]
29
Automatic analysis algorithm for acquiring standard dental and mandibular shape data using cone-beam computed tomography. Sci Rep 2018; 8:13516. [PMID: 30202001] [PMCID: PMC6131388] [DOI: 10.1038/s41598-018-31869-6]
Abstract
This study introduces a new algorithm, developed using retrospective cone-beam computed tomography (CBCT) data, to automatically obtain a standard dental and mandibular arch shape for an optimal panoramic focal trough. A custom-made program was developed to analyze the arch shapes of 30 randomly collected CBCT images. First, volumetric data of the mandible were binarized and projected in the axial direction to obtain two-dimensional arch images. Second, the 30 patients' mandibular arches were superimposed on the center of the bilateral distal contact points of the mandibular canines to generate an average arch shape. Third, the center and boundary of a panoramic focal trough were obtained using smoothing splines. As a result, the minimum thickness and transition of the focal trough could be obtained. If this new algorithm were applied to big data of retrospective CBCT images, standard focal troughs could be established by race, sex, and age group, which would improve the image quality of dental panoramic radiography.
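The smoothing-spline step can be illustrated with a minimal sketch (the arch samples and smoothing factor below are made up for illustration; the study fits splines to arch shapes superimposed from real CBCT data): given noisy samples of an averaged arch centreline, `scipy.interpolate.UnivariateSpline` recovers a smooth center curve for the focal trough.

```python
import numpy as np
from scipy.interpolate import UnivariateSpline

# Hypothetical noisy samples of an averaged mandibular arch centreline
# (a parabola-like curve stands in for the real superimposed arch data).
rng = np.random.default_rng(0)
x = np.linspace(-30.0, 30.0, 61)        # mm across the arch
y_true = 0.025 * x ** 2                 # idealized arch centreline
y_noisy = y_true + rng.normal(0.0, 0.5, x.size)

# Cubic smoothing spline; s ~ n * sigma^2 is a common starting point
# for the smoothing factor when the noise level sigma is known.
centre = UnivariateSpline(x, y_noisy, k=3, s=x.size * 0.5 ** 2)
y_fit = centre(x)
```

Offsetting the fitted center curve along its normal by half the desired trough thickness would then give the trough boundary described in the abstract.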
30
Fu Y, Liu S, Li HH, Li H, Yang D. An adaptive motion regularization technique to support sliding motion in deformable image registration. Med Phys 2018; 45:735-747. [DOI: 10.1002/mp.12734]
Affiliation(s)
- Yabo Fu, Shi Liu, H. Harold Li, Hua Li, Deshan Yang: Department of Radiation Oncology, School of Medicine, Washington University in Saint Louis, 4921 Parkview Place, St. Louis, MO 63110, USA
31
Teske H, Bartelheimer K, Meis J, Bendl R, Stoiber EM, Giske K. Construction of a biomechanical head and neck motion model as a guide to evaluation of deformable image registration. Phys Med Biol 2017; 62:N271-N284. [PMID: 28350540] [DOI: 10.1088/1361-6560/aa69b6]
Abstract
The use of deformable image registration methods in the context of adaptive radiotherapy leads to uncertainties in the simulation of the administered dose distributions during the treatment course. Evaluation of these methods is a prerequisite to decide if a plan adaptation will improve the individual treatment. Current approaches using manual references limit the validity of evaluation, especially for low-contrast regions. In particular, for the head and neck region, the highly flexible anatomy and low soft tissue contrast in control images pose a challenge to image registration and its evaluation. Biomechanical models promise to overcome this issue by providing anthropomorphic motion modelling of the patient. We introduce a novel biomechanical motion model for the generation and sampling of different postures of the head and neck anatomy. Motion propagation behaviour of the individual bones is defined by an underlying kinematic model. This model interconnects the bones by joints and thus is capable of providing a wide range of motion. Triggered by the motion of the individual bones, soft tissue deformation is described by an extended heterogeneous tissue model based on the chainmail approach. This extension, for the first time, allows the propagation of decaying rotations within soft tissue without the necessity for explicit tissue segmentation. Overall motion simulation and sampling of deformed CT scans including a basic noise model is achieved within 30 s. The proposed biomechanical motion model for the head and neck site generates displacement vector fields on a voxel basis, approximating arbitrary anthropomorphic postures of the patient. It was developed with the intention of providing input data for the evaluation of deformable image registration.
Affiliation(s)
- Hendrik Teske: Division of Medical Physics in Radiation Oncology, German Cancer Research Center (DKFZ), Heidelberg, Germany; National Center for Radiation Research in Oncology (NCRO), Heidelberg Institute for Radiation Oncology (HIRO), Heidelberg, Germany