1
Sarmadi H, Muñoz-Salinas R, Álvaro Berbís M, Luna A, Medina-Carnicer R. Joint scene and object tracking for cost-effective augmented reality guided patient positioning in radiation therapy. Comput Methods Programs Biomed 2021; 209:106296. [PMID: 34380076] [DOI: 10.1016/j.cmpb.2021.106296]
Abstract
BACKGROUND AND OBJECTIVE Research in the field of Augmented Reality (AR) for patient positioning in radiation therapy is scarce. We propose an efficient and cost-effective algorithm for tracking the scene and the patient, interactively assisting the positioning process by providing visual feedback to the operator. To our knowledge, this is the first framework that can be employed for mobile interactive AR guidance of patient positioning. METHODS We propose a point-cloud processing method that, combined with a fiducial marker-mapper algorithm and the generalized ICP algorithm, tracks the patient and the camera precisely and efficiently using only the CPU. The alignment between the 3D reference model and the body marker map is calculated with an efficient body reconstruction algorithm. RESULTS Our quantitative evaluation shows that the proposed method achieves a translational and rotational error of 4.17 mm/0.82° at 9 fps. Furthermore, the qualitative results demonstrate the usefulness of our algorithm for patient positioning on different human subjects. CONCLUSION Since our algorithm achieves a relatively high frame rate and accuracy on a regular laptop (without a dedicated GPU), it is a very cost-effective AR-based patient positioning method. It also opens the way for other researchers by introducing a framework that could be improved upon for better mobile interactive AR patient positioning solutions in the future.
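At the core of the generalized ICP step named in this abstract (and of any ICP variant) is a closed-form rigid alignment from point correspondences. The sketch below is not the authors' code; it is a minimal illustration of that inner step (the Kabsch solution), with synthetic points standing in for the tracked point clouds:

```python
import numpy as np

def rigid_align(src, dst):
    """Closed-form best-fit rotation R and translation t (Kabsch/Umeyama)
    so that R @ p + t maps each src point onto its dst correspondence."""
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    H = (src - mu_s).T @ (dst - mu_d)                 # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))            # guard against reflection
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = mu_d - R @ mu_s
    return R, t

# Inside ICP this solve is iterated with nearest-neighbour correspondences;
# here the correspondences are known, so a single solve recovers the pose.
rng = np.random.default_rng(0)
src = rng.normal(size=(50, 3))
theta = 0.3
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0, 0.0, 1.0]])
t_true = np.array([0.5, -1.0, 2.0])
dst = src @ R_true.T + t_true
R, t = rigid_align(src, dst)
```

In a full ICP loop the correspondences come from nearest-neighbour search and this solve is repeated until the pose converges.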
Affiliation(s)
- Hamid Sarmadi
- Instituto Maimónides de Investigación en Biomedicina (IMIBIC), Avenida Menéndez Pidal s/n, Córdoba, 14004, Spain.
- Rafael Muñoz-Salinas
- Computing and Numerical Analysis Department, Edificio Einstein, Campus de Rabanales, Córdoba University, Córdoba, 14071, Spain; Instituto Maimónides de Investigación en Biomedicina (IMIBIC), Avenida Menéndez Pidal s/n, Córdoba, 14004, Spain.
- M Álvaro Berbís
- HT Médica, Hospital San Juan de Dios, Avda Brillante 106, Córdoba, 14012, Spain.
- Antonio Luna
- HT Médica, Clínica las Nieves, Carmelo Torres 2, Jaén, 23007, Spain.
- R Medina-Carnicer
- Computing and Numerical Analysis Department, Edificio Einstein, Campus de Rabanales, Córdoba University, Córdoba, 14071, Spain; Instituto Maimónides de Investigación en Biomedicina (IMIBIC), Avenida Menéndez Pidal s/n, Córdoba, 14004, Spain.
2
Kyme AZ, Fulton RR. Motion estimation and correction in SPECT, PET and CT. Phys Med Biol 2021; 66. [PMID: 34102630] [DOI: 10.1088/1361-6560/ac093b]
Abstract
Patient motion impacts single photon emission computed tomography (SPECT), positron emission tomography (PET) and X-ray computed tomography (CT) by giving rise to projection data inconsistencies that can manifest as reconstruction artifacts, thereby degrading image quality and compromising accurate image interpretation and quantification. Methods to estimate and correct for patient motion in SPECT, PET and CT have attracted considerable research effort over several decades. The aims of this effort have been two-fold: to estimate relevant motion fields characterizing the various forms of voluntary and involuntary motion; and to apply these motion fields within a modified reconstruction framework to obtain motion-corrected images. The aims of this review are to outline the motion problem in medical imaging and to critically review published methods for estimating and correcting for the relevant motion fields in clinical and preclinical SPECT, PET and CT. Despite many similarities in how motion is handled between these modalities, utility and applications vary based on differences in temporal and spatial resolution. Technical feasibility has been demonstrated in each modality for both rigid and non-rigid motion, but clinical feasibility remains an important target. There is considerable scope for further developments in motion estimation and correction, and particularly in data-driven methods that will aid clinical utility. State-of-the-art machine learning methods may have a unique role to play in this context.
Affiliation(s)
- Andre Z Kyme
- School of Biomedical Engineering, The University of Sydney, Sydney, New South Wales, Australia
- Roger R Fulton
- Sydney School of Health Sciences, The University of Sydney, Sydney, New South Wales, Australia
3
Test-Retest, Inter-Rater and Intra-Rater Reliability for Spatiotemporal Gait Parameters Using SANE (an eaSy gAit aNalysis systEm) as Measuring Instrument. Appl Sci (Basel) 2020. [DOI: 10.3390/app10175781]
Abstract
Studies have demonstrated the validity of Kinect-based systems for measuring spatiotemporal parameters of gait. However, few studies have addressed test-retest, inter-rater and intra-rater reliability for spatiotemporal gait parameters. This study aims to assess the test-retest, inter-rater and intra-rater reliability of SANE (eaSy gAit aNalysis systEm) as a measuring instrument for spatiotemporal gait parameters. SANE comprises a depth sensor and software that automatically estimates spatiotemporal gait parameters from the distances between the ankles, without the need to manually indicate where each gait cycle begins and ends. Gait analysis was conducted by 2 evaluators on 12 healthy subjects over 4 sessions. Reliability was evaluated using Intraclass Correlation Coefficients (ICC); in addition, the Standard Error of Measurement (SEM) and the Smallest Detectable Change (SDC) were calculated. SANE showed acceptable to excellent test-retest, inter-rater and intra-rater reliability: test-retest reliability ranged from 0.62 to 0.81, inter-rater reliability from 0.70 to 0.95 and intra-rater reliability from 0.74 to 0.92. Subject behavior had a greater effect on the reliability of SANE than evaluator performance. The reliability values of SANE were comparable with those of similar studies. SANE, as a feasible and markerless system, has large potential for assessing spatiotemporal gait parameters.
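The SEM and SDC mentioned above follow directly from the reliability coefficient via standard formulas (SEM = SD·sqrt(1 − ICC), SDC = 1.96·sqrt(2)·SEM). A minimal sketch, using a purely illustrative, assumed standard deviation rather than any value from the study:

```python
import math

def sem_sdc(sd, icc):
    """Standard Error of Measurement and Smallest Detectable Change
    derived from a measurement SD and a reliability coefficient."""
    sem = sd * math.sqrt(1.0 - icc)        # SEM = SD * sqrt(1 - ICC)
    sdc = 1.96 * math.sqrt(2.0) * sem      # SDC = 1.96 * sqrt(2) * SEM
    return sem, sdc

# Hypothetical inputs: an assumed step-length SD of 0.10 m combined with the
# upper test-retest ICC of 0.81 reported in the abstract.
sem, sdc = sem_sdc(sd=0.10, icc=0.81)
```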
4
Lin Q, Cai K, Yang R, Xiao W, Huang J, Zhan Y, Zhuang J. Geometric calibration of markerless optical surgical navigation system. Int J Med Robot 2019; 15:e1978. [PMID: 30556944] [DOI: 10.1002/rcs.1978]
Abstract
BACKGROUND Patient-to-image registration is required for image-guided surgical navigation, but marker-based registration is time consuming and subject to manual error. Markerless registration is an alternative solution that avoids these issues. METHODS This study designs a calibration board and proposes a geometric calibration method that calibrates the near-infrared tracking and structured-light components of the proposed optical surgical navigation system simultaneously. RESULTS A planar board and a cylinder are used to evaluate the accuracy of the calibration. The mean error for the board experiment is 0.035 mm, and the diameter error for the cylinder experiment is 0.119 mm. A calibration board is reconstructed to evaluate the accuracy of the calibration, with a measured mean error of 0.012 mm. A head phantom is reconstructed and tracked by the proposed optical surgical navigation system; the tracking error is less than 0.3 mm. CONCLUSIONS Experimental results show that the proposed method achieves high accessibility and accuracy and satisfies the application requirements.
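The planar-board error reported in such evaluations is typically a point-to-plane residual. As an illustration only (not the authors' code), a plane can be fit to reconstructed board points by SVD and the mean absolute residual computed as follows:

```python
import numpy as np

def mean_plane_error(points):
    """Fit a plane to (N,3) points by SVD and return the mean absolute
    point-to-plane distance, the residual a planar-board check reports."""
    centered = points - points.mean(axis=0)
    _, _, Vt = np.linalg.svd(centered)
    normal = Vt[-1]                        # direction of least variance
    return float(np.abs(centered @ normal).mean())

# Hypothetical near-planar board: a 5x5 grid of corner points with small
# out-of-plane noise (std 0.001, arbitrary units).
rng = np.random.default_rng(1)
grid = np.stack(np.meshgrid(np.arange(5.0), np.arange(5.0)), -1).reshape(-1, 2)
pts = np.c_[grid, 0.001 * rng.standard_normal(len(grid))]
err = mean_plane_error(pts)
```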
Affiliation(s)
- Qinyong Lin
- School of Medicine, South China University of Technology, Guangzhou, China
- Ken Cai
- School of Basic Medical Sciences, Southern Medical University, Guangzhou, China; College of Automation, Zhongkai University of Agriculture and Engineering, Guangzhou, China
- Rongqian Yang
- Department of Biomedical Engineering, South China University of Technology, Guangzhou, China; School of Medicine, Yale University, New Haven, Connecticut; Guangdong Engineering Technology Research Center for Translational Medicine of Mental Disorders, Guangzhou, China
- Weihu Xiao
- Department of Biomedical Engineering, South China University of Technology, Guangzhou, China
- Jinhua Huang
- Department of Minimally Invasive Interventional Radiology, Sun Yat-sen University Cancer Center, Guangzhou, China
- Yinwei Zhan
- School of Computer Science and Technology, Guangdong University of Technology, Guangzhou, China
- Jian Zhuang
- Department of Cardiac Surgery, Guangdong Cardiovascular Institute, Guangdong General Hospital, Guangdong Academy of Medical Science, Guangzhou, China
5
Depth accuracy of the RealSense F200: Low-cost 4D facial imaging. Sci Rep 2017; 7:16263. [PMID: 29176666] [PMCID: PMC5701257] [DOI: 10.1038/s41598-017-16608-7]
Abstract
The RealSense F200 represents a new generation of economically viable four-dimensional (4D) imaging systems for home use. However, its 3D geometric (depth) accuracy has not been clinically tested. Therefore, this study determined the depth accuracy of the RealSense in a cohort of patients with a unilateral facial palsy (n = 34), using the clinically validated 3dMD system as a gold standard. The patients were recorded simultaneously with both systems, capturing six Sunnybrook poses. This study showed that the RealSense depth accuracy was not affected by a facial palsy (1.48 ± 0.28 mm) compared to a healthy face (1.46 ± 0.26 mm). Furthermore, the Sunnybrook poses did not influence the RealSense depth accuracy (p = 0.76). However, the distance of the patients to the RealSense did affect the accuracy of the system: the highest depth accuracy of 1.07 mm was measured at a distance of 35 cm. Overall, this study showed that the RealSense can provide reliable and accurate depth data when recording a range of facial movements. Therefore, when the portability, low cost, and availability of the RealSense are taken into consideration, the camera is a viable option for 4D close-range imaging in telehealth.
6
Singh V, Ma K, Tamersoy B, Chang YJ, Wimmer A, O’Donnell T, Chen T. DARWIN: Deformable Patient Avatar Representation With Deep Image Network. Lect Notes Comput Sci 2017. [DOI: 10.1007/978-3-319-66185-8_56]
7
Abstract
Microsoft Kinect is a three-dimensional (3D) sensor originally designed for gaming that has received growing interest as a cost-effective and safe device for healthcare imaging. Recent applications of Kinect in health monitoring, screening, rehabilitation, assistance systems, and intervention support are reviewed here. The suitability of available technologies for healthcare imaging applications is assessed. The performance of Kinect I, based on structured light technology, is compared with that of the more recent Kinect II, which uses time-of-flight measurement, under conditions relevant to healthcare applications. The accuracy, precision, and resolution of 3D images generated with Kinect I and Kinect II are evaluated using flat cardboard models representing different skin colors (pale, medium, and dark) at distances ranging from 0.5 to 1.2 m and measurement angles of up to 75°. Both sensors demonstrated high accuracy (majority of measurements <2 mm) and precision (mean point to plane error <2 mm) at an average resolution of at least 390 points per cm². Kinect I is capable of imaging at shorter measurement distances, but Kinect II enables structures angled at over 60° to be evaluated. Kinect II showed significantly higher precision and Kinect I showed significantly higher resolution (both p < 0.001). The choice of object color can influence measurement range and precision. Although Kinect is not a medical imaging device, both sensor generations show performance adequate for a range of healthcare imaging applications. Kinect I is more appropriate for short-range imaging and Kinect II is more appropriate for imaging highly curved surfaces such as the face or breast.
8
Mewes A, Hensen B, Wacker F, Hansen C. Touchless interaction with software in interventional radiology and surgery: a systematic literature review. Int J Comput Assist Radiol Surg 2016; 12:291-305. [PMID: 27647327] [DOI: 10.1007/s11548-016-1480-6]
Abstract
PURPOSE In this article, we systematically examine the current state of research on systems for touchless human-computer interaction in operating rooms and interventional radiology suites. We further discuss the drawbacks of current solutions and highlight promising technologies for future development. METHODS A systematic literature search was performed for scientific papers that deal with touchless control of medical software in the immediate environment of the operating room and interventional radiology suite. This includes methods for touchless gesture interaction, voice control and eye tracking. RESULTS Fifty-five research papers were identified and analyzed in detail, including 33 journal publications. Most of the identified literature (62%) deals with the control of medical image viewers. The others present interaction techniques for laparoscopic assistance (13%), telerobotic assistance and operating room control (9% each) as well as robotic operating room assistance and intraoperative registration (3.5% each). Only 8 systems (14.5%) were tested in a real clinical environment, and 7 (12.7%) were not evaluated at all. CONCLUSION In the last 10 years, many advancements have led to robust touchless interaction approaches. However, only a few have been systematically evaluated in real operating room settings. Further research is required to cope with the current limitations of touchless software interfaces in clinical environments. The main challenges for future research are the improvement and evaluation of the usability and intuitiveness of touchless human-computer interaction, full integration into productive systems, the reduction of necessary interaction steps and the further development of hands-free interaction.
Affiliation(s)
- André Mewes
- Faculty of Computer Science, University of Magdeburg, Magdeburg, Germany.
- Bennet Hensen
- Institute for Diagnostic and Interventional Radiology, Medical School Hanover, Hanover, Germany
- Frank Wacker
- Institute for Diagnostic and Interventional Radiology, Medical School Hanover, Hanover, Germany
- Christian Hansen
- Faculty of Computer Science, University of Magdeburg, Magdeburg, Germany
9
Xiao D, Luo H, Jia F, Zhang Y, Li Y, Guo X, Cai W, Fang C, Fan Y, Zheng H, Hu Q. A Kinect™ camera-based navigation system for percutaneous abdominal puncture. Phys Med Biol 2016; 61:5687-705. [DOI: 10.1088/0031-9155/61/15/5687]
10
Precise 3D/2D calibration between a RGB-D sensor and a C-arm fluoroscope. Int J Comput Assist Radiol Surg 2016; 11:1385-95. [PMID: 26811080] [DOI: 10.1007/s11548-015-1347-2]
Abstract
PURPOSE Calibration and registration are the first steps for augmented reality and mixed reality applications. In the medical field, the calibration between an RGB-D camera and a C-arm fluoroscope is a new topic that introduces challenges. METHOD A convenient and efficient calibration phantom is designed by combining the traditional calibration object for X-ray images with a checkerboard plane. After localizing the 2D marker points in the X-ray images and the corresponding 3D points from the RGB-D images, we calculate the projection matrix from the RGB-D sensor coordinates to the X-ray image, instead of estimating the extrinsic and intrinsic parameters simultaneously. VALIDATION In order to evaluate the effect of every step of our calibration process, we performed five experiments combining different steps leading to the calibration. We also compared our calibration method to Tsai's method to evaluate the advancement of our solution. Finally, we simulated the process of estimating the rotation movement of the RGB-D camera in MATLAB and demonstrate that calculating the projection matrix reduces the angle error of the rotation. RESULTS An RMS reprojection error of 0.5 mm is achieved using our calibration method, which is promising for surgical applications. Our calibration method is more accurate than Tsai's method. Lastly, the simulation result shows that using a projection matrix yields a lower error than using intrinsic and extrinsic parameters in the rotation estimation. CONCLUSIONS We designed and evaluated a 3D/2D calibration method for the combination of an RGB-D camera and a C-arm fluoroscope.
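A 3D-to-2D projection matrix of the kind described above can be estimated with the direct linear transform (DLT). The sketch below is a generic DLT, not the paper's implementation; it recovers a known, made-up projection matrix from synthetic point correspondences:

```python
import numpy as np

def dlt_projection(X, x):
    """Estimate a 3x4 projection matrix P (up to scale), with x ~ P [X;1],
    via the direct linear transform from (N,3) points X and (N,2) pixels x."""
    A = []
    for (Xw, Yw, Zw), (u, v) in zip(X, x):
        A.append([Xw, Yw, Zw, 1, 0, 0, 0, 0, -u * Xw, -u * Yw, -u * Zw, -u])
        A.append([0, 0, 0, 0, Xw, Yw, Zw, 1, -v * Xw, -v * Yw, -v * Zw, -v])
    _, _, Vt = np.linalg.svd(np.asarray(A))
    return Vt[-1].reshape(3, 4)            # right null-space vector

# Synthetic check: project points through a known matrix, then recover it.
P_true = np.array([[800.0, 0.0, 320.0, 10.0],
                   [0.0, 800.0, 240.0, 20.0],
                   [0.0, 0.0, 1.0, 5.0]])
rng = np.random.default_rng(2)
X = rng.uniform(1.0, 4.0, size=(12, 3))
proj = np.c_[X, np.ones(12)] @ P_true.T    # homogeneous projection
x = proj[:, :2] / proj[:, 2:3]             # dehomogenize to pixels
P = dlt_projection(X, x)
P = P / P[2, 3] * P_true[2, 3]             # resolve the arbitrary scale
```

With noisy real measurements, coordinate normalization and more than the minimal six correspondences would be needed for a stable estimate.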
11
Towards markerless navigation for percutaneous needle insertions. Int J Comput Assist Radiol Surg 2015; 11:107-17. [PMID: 26018847] [DOI: 10.1007/s11548-015-1156-7]
Abstract
PURPOSE Percutaneous needle insertions are increasingly used for the diagnosis and treatment of abdominal lesions. The challenging part of computed tomography (CT)-guided punctures is the transfer of the insertion trajectory planned in the CT image to the patient. Conventionally, this often results in several needle repositionings and control CT scans. To address this issue, several navigation systems for percutaneous needle insertions have been presented; however, none of them has thus far become widely accepted in clinical routine: their benefit for the patient has not outweighed the additional costs and the increased complexity in terms of bulky tracking systems and specialized markers for registration and tracking. METHODS We present the first markerless and trackerless navigation concept for real-time patient localization and instrument guidance. It has been specifically designed to integrate smoothly into the clinical workflow and does not require markers or an external tracking system. The main idea is the utilization of a range imaging device that allows for contactless and radiation-free acquisition of both range and color information, used for patient localization and instrument guidance. RESULTS A first feasibility study in phantom and porcine models yielded median targeting accuracies of 6.9 and 19.4 mm, respectively. CONCLUSIONS Although system performance remains to be improved for clinical use, expected advances in camera technology, as well as consideration of respiratory motion and automation of the individual steps, will make this approach an interesting alternative for guiding percutaneous needle insertions.
12
Jiang L, Zhang S, Yang J, Zhuang X, Zhang L, Gu L. A robust automated markerless registration framework for neurosurgery navigation. Int J Med Robot 2014; 11:436-47. [PMID: 25328118] [DOI: 10.1002/rcs.1626]
Abstract
BACKGROUND The registration of a pre-operative image with the intra-operative patient is a crucial aspect for the success of navigation in neurosurgery. METHODS First, the intra-operative face is reconstructed, using a structured light technique, while the pre-operative face is segmented from head CT/MRI images. In order to perform neurosurgery navigation, a markerless surface registration method is designed by aligning the intra-operative face to the pre-operative face. We propose an efficient and robust registration approach based on the scale invariant feature transform (SIFT), and compare it with iterative closest point (ICP) and coherent point drift (CPD) through a new evaluation standard. RESULTS Our registration method was validated by studies of 10 volunteers and one synthetic model. The average symmetrical surface distances (ASDs) for ICP, CPD and our registration method were 2.24 ± 0.53, 2.18 ± 0.41 and 2.30 ± 0.69 mm, respectively. The average running times of ICP, CPD and our registration method were 343.46, 3847.56 and 0.58 s, respectively. CONCLUSION Our system can quickly reconstruct the intra-operative face, and then efficiently and accurately align it to the pre-operative image, meeting the registration requirements in neurosurgery navigation. It avoids a tedious set-up process for surgeons.
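The average symmetric surface distance (ASD) used as the evaluation metric above averages each point's distance to the nearest point of the other surface, over both directions. A brute-force point-set illustration (not the authors' code; real surfaces would use a KD-tree for the nearest-neighbour search):

```python
import numpy as np

def asd(A, B):
    """Average symmetric surface distance between point sets A (N,3), B (M,3):
    each point's distance to its nearest neighbour in the other set,
    averaged over both directions."""
    d = np.linalg.norm(A[:, None, :] - B[None, :, :], axis=-1)   # pairwise
    return 0.5 * (d.min(axis=1).mean() + d.min(axis=0).mean())

# Tiny hand-checkable example.
A = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]])
B = np.array([[0.0, 0.0, 0.0], [1.0, 1.0, 0.0]])
```

For A and B above: A-to-B nearest distances are 0 and 1 (mean 0.5), B-to-A are 0 and 1 (mean 0.5), so the ASD is 0.5.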
Affiliation(s)
- Long Jiang
- School of Biomedical Engineering, Shanghai Jiao Tong University, People's Republic of China
- Shaoting Zhang
- Department of Computer Science, University of North Carolina at Charlotte, NC, USA
- Jie Yang
- School of Biomedical Engineering, Shanghai Jiao Tong University, People's Republic of China
- Xiahai Zhuang
- Shanghai Advanced Research Institute, Chinese Academy of Sciences, People's Republic of China
- Lixia Zhang
- School of Biomedical Engineering, Shanghai Jiao Tong University, People's Republic of China
- Lixu Gu
- School of Biomedical Engineering, Shanghai Jiao Tong University, People's Republic of China
13
Mobile markerless augmented reality and its application in forensic medicine. Int J Comput Assist Radiol Surg 2014; 10:573-86. [PMID: 25149272] [DOI: 10.1007/s11548-014-1106-9]
Abstract
PURPOSE During autopsy, forensic pathologists today mostly rely on visible indication, tactile perception and experience to determine the cause of death. Although computed tomography (CT) data is often available for the bodies under examination, these data are rarely used due to the lack of radiological workstations in the pathological suite. The data may prevent the forensic pathologist from damaging evidence by allowing him to associate, for example, external wounds to internal injuries. To facilitate this, we propose a new multimodal approach for intuitive visualization of forensic data and evaluate its feasibility. METHODS A range camera is mounted on a tablet computer and positioned in a way such that the camera simultaneously captures depth and color information of the body. A server estimates the camera pose based on surface registration of CT and depth data to allow for augmented reality visualization of the internal anatomy directly on the tablet. Additionally, projection of color information onto the CT surface is implemented. RESULTS We validated the system in a postmortem pilot study using fiducials attached to the skin for quantification of a mean target registration error of [Formula: see text] mm. CONCLUSIONS The system is mobile, markerless, intuitive and real-time capable with sufficient accuracy. It can support the forensic pathologist during autopsy with augmented reality and textured surfaces. Furthermore, the system enables multimodal documentation for presentation in court. Despite its preliminary prototype status, it has high potential due to its low price and simplicity.
14
Wilms M, Werner R, Blendowski M, Ortmüller J, Handels H. Simulation of range imaging-based estimation of respiratory lung motion: influence of noise, signal dimensionality and sampling patterns. Methods Inf Med 2014; 53:257-63. [PMID: 24993030] [DOI: 10.3414/me13-01-0137]
Abstract
OBJECTIVES A major problem associated with the irradiation of thoracic and abdominal tumors is respiratory motion. In clinical practice, motion compensation approaches are frequently steered by low-dimensional breathing signals (e.g., spirometry) and patient-specific correspondence models, which are used to estimate the sought internal motion given a signal measurement. Recently, the use of multidimensional signals derived from range images of the moving skin surface has been proposed to better account for complex motion patterns. In this work, a simulation study is carried out to investigate the motion estimation accuracy of such multidimensional signals and the influence of noise, the signal dimensionality, and different sampling patterns (points, lines, regions). METHODS A diffeomorphic correspondence modeling framework is employed to relate multidimensional breathing signals derived from simulated range images to internal motion patterns represented by diffeomorphic non-linear transformations. Furthermore, an automatic approach for the selection of optimal signal combinations/patterns within this framework is presented. RESULTS This simulation study focuses on lung motion estimation and is based on 28 4D CT data sets. The results show that the use of multidimensional signals instead of one-dimensional signals significantly improves the motion estimation accuracy, which is, however, highly affected by noise. Only small differences exist between different multidimensional sampling patterns (lines and regions). Automatically determined optimal combinations of points and lines do not lead to accuracy improvements compared to results obtained by using all points or lines. CONCLUSIONS Our results show the potential of multidimensional breathing signals derived from range images for the model-based estimation of respiratory motion in radiation therapy.
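The correspondence model described above maps a multidimensional surrogate signal (here derived from range images) to internal motion. The paper uses a diffeomorphic framework, so the following is only an illustrative linear stand-in, fit by least squares on made-up data:

```python
import numpy as np

# Toy stand-in: S holds n surrogate samples of dimension k (e.g. depths at k
# sampled skin-surface points), M the corresponding internal motion
# parameters (3 per sample). All values below are synthetic.
rng = np.random.default_rng(3)
k, n = 4, 100
S = rng.normal(size=(n, k))
W_true = rng.normal(size=(3, k))        # ground-truth linear model (made up)
b_true = np.array([1.0, -2.0, 0.5])
M = S @ W_true.T + b_true               # noiseless observations

# Fit motion ≈ W s + b by ordinary least squares with a bias column.
Sh = np.c_[S, np.ones(n)]
coef, *_ = np.linalg.lstsq(Sh, M, rcond=None)
W, b = coef[:k].T, coef[k]              # recovered weights and bias
```

At estimation time, a new surrogate measurement s would be pushed through the fitted model to predict the internal motion steering the compensation.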
Affiliation(s)
- M Wilms
- Institute of Medical Informatics, University of Lübeck, Ratzeburger Allee 160, 23562 Lübeck, Germany.