1
Rashmi S, Srinath S, Patil K, Murthy PS, Deshmukh S. Lateral Cephalometric Landmark Annotation Using Histogram Oriented Gradients Extracted from Region of Interest Patches. J Maxillofac Oral Surg 2023; 22:806-812. [PMID: 38105853] [PMCID: PMC10719201] [DOI: 10.1007/s12663-023-02025-z]
Abstract
Introduction Two-dimensional cephalometric image analysis plays a crucial role in orthodontic diagnosis and treatment planning. While deep learning-based algorithms have emerged to automate the laborious task of anatomical landmark annotation, their effectiveness is hampered by the challenges of acquiring and labelling clinical data. In this study, we propose a model that leverages conventional machine learning techniques to enhance the accuracy of landmark detection using a limited dataset. Materials and methods Our methodology involves coarse localization through region of interest (ROI) extraction and fine localization using histogram of oriented gradients (HOG) features. The image patch containing landmark pixels is classified using the light gradient boosting machine (LGBM) algorithm. To evaluate our model's performance, we conducted rigorous tests on the ISBI Cephalometric dataset and the Dental Cepha dataset, aiming for accuracy within a 2 mm radial precision range. We also employed cross-validation to provide a robust assessment of our approach. Results On the ISBI Cephalometric dataset, our model achieved an accuracy of 77.11% within the desired 2 mm radial precision range. The cross-validation results further confirmed the effectiveness of our approach, yielding a mean accuracy of 78.17%. On the Dental Cepha dataset, our model achieved a landmark detection accuracy of 84%. Conclusion The results demonstrate that traditional machine learning techniques can be effective for accurate landmark detection in cephalometric images, even with limited data. Our findings highlight the potential of these techniques for clinical applications, where large datasets of labelled images may not be available.
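The fine-localization step above rests on HOG descriptors computed over ROI patches. As a hedged illustration (cell size, bin count, and per-cell L2 normalisation are assumptions, not the paper's settings), a minimal HOG extractor for a grayscale patch can be written in plain numpy; in the paper's pipeline the resulting feature vectors would then be fed to an LGBM patch classifier:

```python
import numpy as np

def hog_features(patch, cell=8, bins=9):
    """Histogram of oriented gradients for a grayscale patch (H, W)."""
    gy, gx = np.gradient(patch.astype(float))
    mag = np.hypot(gx, gy)
    ang = np.rad2deg(np.arctan2(gy, gx)) % 180.0   # unsigned orientations in [0, 180)
    h, w = patch.shape
    feats = []
    for i in range(0, h - cell + 1, cell):          # non-overlapping cells
        for j in range(0, w - cell + 1, cell):
            a = ang[i:i + cell, j:j + cell].ravel()
            m = mag[i:i + cell, j:j + cell].ravel()
            hist, _ = np.histogram(a, bins=bins, range=(0, 180), weights=m)
            feats.append(hist / (np.linalg.norm(hist) + 1e-9))  # per-cell L2 norm
    return np.concatenate(feats)

patch = np.random.default_rng(0).random((32, 32))
f = hog_features(patch)   # 4 x 4 cells x 9 bins = 144 features
```

For a 32×32 patch with 8-pixel cells and 9 orientation bins this yields a 144-dimensional descriptor; the LGBM classifier would then label each patch as containing the landmark pixel or not.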
Affiliation(s)
- S. Rashmi
- Department of Computer Science and Engineering, Sri Jayachamarajendra College of Engineering, JSS Science and Technology University, Mysuru, India
- S. Srinath
- Department of Computer Science and Engineering, Sri Jayachamarajendra College of Engineering, JSS Science and Technology University, Mysuru, India
- Karthikeya Patil
- Department of Oral Medicine and Radiology, JSS Dental College & Hospital, JSS Academy of Higher Education & Research, Mysuru, India
- Prashanth Sadashiva Murthy
- Department of Pediatric & Preventive Dentistry, JSS Dental College & Hospital, JSS Academy of Higher Education & Research, Mysuru, India
- Seema Deshmukh
- Department of Pediatric & Preventive Dentistry, JSS Dental College & Hospital, JSS Academy of Higher Education & Research, Mysuru, India
2
Wang C, Cui Z, Yang J, Han M, Carneiro G, Shen D. BowelNet: Joint Semantic-Geometric Ensemble Learning for Bowel Segmentation From Both Partially and Fully Labeled CT Images. IEEE Trans Med Imaging 2023; 42:1225-1236. [PMID: 36449590] [DOI: 10.1109/tmi.2022.3225667]
Abstract
Accurate bowel segmentation is essential for the diagnosis and treatment of bowel cancers. Unfortunately, segmenting the entire bowel in CT images is challenging due to unclear boundaries; large variations in shape, size, and appearance; and diverse filling status within the bowel. In this paper, we present a novel two-stage framework, named BowelNet, to handle the challenging task of bowel segmentation in CT images, with two stages: 1) jointly localizing all types of the bowel, and 2) finely segmenting each type of the bowel. Specifically, in the first stage, we learn a unified localization network from both partially- and fully-labeled CT images to robustly detect all types of the bowel. To better capture unclear bowel boundaries and learn complex bowel shapes, in the second stage, we propose to jointly learn semantic information (i.e., the bowel segmentation mask) and geometric representations (i.e., the bowel boundary and bowel skeleton) for fine bowel segmentation in a multi-task learning scheme. Moreover, we further propose to learn a meta segmentation network via pseudo labels to improve segmentation accuracy. Evaluated on a large abdominal CT dataset, our proposed BowelNet method achieves Dice scores of 0.764, 0.848, 0.835, 0.774, and 0.824 in segmenting the duodenum, jejunum-ileum, colon, sigmoid, and rectum, respectively. These results demonstrate the effectiveness of the proposed BowelNet framework in segmenting the entire bowel from CT images.
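The Dice scores reported above are the standard overlap measure between a predicted and a reference segmentation mask; for reference, a minimal implementation:

```python
import numpy as np

def dice(a, b):
    """Dice similarity coefficient between two binary masks."""
    a, b = np.asarray(a, bool), np.asarray(b, bool)
    denom = a.sum() + b.sum()
    # convention: two empty masks are a perfect match
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

pred = np.zeros((4, 4), bool); pred[:2, :] = True   # 8 voxels
ref = np.zeros((4, 4), bool); ref[:2, :2] = True    # 4 voxels, all inside pred
score = dice(pred, ref)                             # 2*4 / (8+4) = 2/3
```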
3
Strait J, Chkrebtii O, Kurtek S. Parallel tempering strategies for model-based landmark detection on shapes. Commun Stat Simul Comput 2022; 51:1415-1435. [PMID: 35755486] [PMCID: PMC9216184] [DOI: 10.1080/03610918.2019.1670843]
Abstract
In the field of shape analysis, landmarks are defined as a low-dimensional, representative set of important features of an object's shape that can be used to identify regions of interest along its outline. An important problem is to infer the number and arrangement of landmarks, given a set of shapes drawn from a population. One proposed approach defines a posterior distribution over landmark locations by associating each landmark configuration with a linear reconstruction of the shape. In practice, sampling from the resulting posterior density is challenging using standard Markov chain Monte Carlo (MCMC) methods because multiple configurations of landmarks can describe a complex shape similarly well, manifesting in a multi-modal posterior with well-separated modes. Standard MCMC methods traverse multi-modal posteriors poorly and, even when multiple modes are identified, the relative amount of time spent in each one can be misleading. We apply recent advances from the parallel tempering literature to the problem of landmark detection, providing implementation guidance that generalizes to other applications within shape analysis. Proposal adaptation is used during burn-in to ensure efficient traversal of the parameter space while maintaining computational efficiency. We demonstrate the algorithm on simulated data and on common shapes obtained from computer vision scenes.
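A toy version of the sampler family discussed above (not the authors' adaptive scheme) shows the core parallel-tempering moves: within-chain Metropolis updates at each temperature, plus occasional state swaps between adjacent chains, which let the cold chain escape well-separated modes. The bimodal target, temperature ladder, and proposal scale here are arbitrary choices for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)

def log_target(x):
    # bimodal: unnormalised mixture of N(-4, 1) and N(4, 1), well-separated modes
    return np.logaddexp(-0.5 * (x + 4) ** 2, -0.5 * (x - 4) ** 2)

temps = [1.0, 3.0, 9.0, 27.0]            # temperature ladder (T=1 is the target)
x = np.zeros(len(temps))                 # one chain per temperature
cold = []                                # samples from the cold (T=1) chain
for step in range(20000):
    for k, T in enumerate(temps):        # Metropolis update within each chain
        prop = x[k] + rng.normal(0.0, 1.0)
        if np.log(rng.random()) < (log_target(prop) - log_target(x[k])) / T:
            x[k] = prop
    k = rng.integers(len(temps) - 1)     # propose swapping an adjacent pair
    logr = (1 / temps[k] - 1 / temps[k + 1]) * (log_target(x[k + 1]) - log_target(x[k]))
    if np.log(rng.random()) < logr:
        x[k], x[k + 1] = x[k + 1], x[k]
    cold.append(x[0])
cold = np.array(cold[5000:])             # discard burn-in
```

A single Metropolis chain with this proposal scale would typically stay stuck in one mode; the hot chains cross the barrier easily and feed both modes down the ladder.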
4
Brosch T, Peters J, Groth A, Weber FM, Weese J. Model-based segmentation using neural network-based boundary detectors: Application to prostate and heart segmentation in MR images. Mach Learn Appl 2021. [DOI: 10.1016/j.mlwa.2021.100078]
5
He T, Yao J, Tian W, Yi Z, Tang W, Guo J. Cephalometric landmark detection by considering translational invariance in the two-stage framework. Neurocomputing 2021. [DOI: 10.1016/j.neucom.2021.08.042]
6
7
Liang X, Zhao W, Hristov DH, Buyyounouski MK, Hancock SL, Bagshaw H, Zhang Q, Xie Y, Xing L. A deep learning framework for prostate localization in cone beam CT-guided radiotherapy. Med Phys 2020; 47:4233-4240. [PMID: 32583418] [PMCID: PMC10823910] [DOI: 10.1002/mp.14355]
Abstract
PURPOSE To develop a deep learning-based model for prostate planning target volume (PTV) localization on cone beam computed tomography (CBCT) to improve the workflow of CBCT-guided patient setup. METHODS A two-step task-based residual network (T2 RN) is proposed to automatically identify inherent landmarks in the prostate PTV. The input to the T2 RN is the pretreatment CBCT images of the patient, and the output is the deep learning-identified landmarks in the PTV. To ensure robust PTV localization, the T2 RN model is trained on over a thousand sets of CT images with labeled landmarks, each corresponding to a different scenario of patient position and/or anatomy distribution generated by synthetically transforming the planning CT (pCT) image. The changes, including translation, rotation, and deformation, represent a wide range of possible anatomy variations during a course of radiation therapy (RT). The trained patient-specific T2 RN model is tested using 240 CBCTs from six patients. The testing CBCTs consist of 120 original CBCTs and 120 synthetic CBCTs, the latter generated by applying rotation/translation transformations to each of the original CBCTs. RESULTS The systematic/random setup errors between the model prediction and the reference are found to be <0.25/2.46 mm and 0.14/1.41° in the translation and rotation dimensions, respectively. Pearson's correlation coefficient between model prediction and the reference is higher than 0.94 in the translation and rotation dimensions. The Bland-Altman plots show good agreement between the two techniques. CONCLUSIONS A novel T2 RN deep learning technique is established to localize the prostate PTV for RT patient setup. Our results show that highly accurate marker-less prostate setup is achievable by leveraging a state-of-the-art deep learning strategy.
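The translational and rotational setup errors above are defined by the rigid transform that best maps predicted landmarks onto reference landmarks. A standard way to recover such a transform is a least-squares fit via the Kabsch/SVD method, shown here as an illustrative sketch rather than the paper's exact procedure:

```python
import numpy as np

def rigid_align(P, Q):
    """Least-squares rigid transform (R, t) with R @ p + t ~= q for paired landmarks."""
    P, Q = np.asarray(P, float), np.asarray(Q, float)
    cp, cq = P.mean(0), Q.mean(0)
    H = (P - cp).T @ (Q - cq)                 # cross-covariance of centred point sets
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))    # guard against reflections
    D = np.diag([1.0] * (P.shape[1] - 1) + [d])
    R = Vt.T @ D @ U.T
    return R, cq - R @ cp

# recover a known rotation about z and translation from six noiseless landmarks
th = 0.3
R0 = np.array([[np.cos(th), -np.sin(th), 0.0],
               [np.sin(th),  np.cos(th), 0.0],
               [0.0,         0.0,        1.0]])
t0 = np.array([1.0, -2.0, 0.5])
P = np.random.default_rng(0).random((6, 3))
R, t = rigid_align(P, P @ R0.T + t0)
```

The recovered rotation matrix can then be converted to the rotation-angle errors reported in the abstract.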
Affiliation(s)
- Xiaokun Liang
- Department of Radiation Oncology, Stanford University, Stanford, CA 94305, USA
- Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, Guangdong 518055, China
- Wei Zhao
- Department of Radiation Oncology, Stanford University, Stanford, CA 94305, USA
- Dimitre H. Hristov
- Department of Radiation Oncology, Stanford University, Stanford, CA 94305, USA
- Steven L. Hancock
- Department of Radiation Oncology, Stanford University, Stanford, CA 94305, USA
- Hilary Bagshaw
- Department of Radiation Oncology, Stanford University, Stanford, CA 94305, USA
- Qin Zhang
- Department of Radiation Oncology, Stanford University, Stanford, CA 94305, USA
- Yaoqin Xie
- Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, Guangdong 518055, China
- Lei Xing
- Department of Radiation Oncology, Stanford University, Stanford, CA 94305, USA
8
Zhang J, Liu M, Wang L, Chen S, Yuan P, Li J, Shen SGF, Tang Z, Chen KC, Xia JJ, Shen D. Context-guided fully convolutional networks for joint craniomaxillofacial bone segmentation and landmark digitization. Med Image Anal 2019; 60:101621. [PMID: 31816592] [DOI: 10.1016/j.media.2019.101621]
Abstract
Cone-beam computed tomography (CBCT) scans are commonly used in diagnosing and planning surgical or orthodontic treatment to correct craniomaxillofacial (CMF) deformities. Based on CBCT images, it is clinically essential to generate an accurate 3D model of CMF structures (e.g., midface, and mandible) and digitize anatomical landmarks. This process often involves two tasks, i.e., bone segmentation and anatomical landmark digitization. Because landmarks usually lie on the boundaries of segmented bone regions, the tasks of bone segmentation and landmark digitization could be highly associated. Also, the spatial context information (e.g., displacements from voxels to landmarks) in CBCT images is intuitively important for accurately indicating the spatial association between voxels and landmarks. However, most of the existing studies simply treat bone segmentation and landmark digitization as two standalone tasks without considering their inherent relationship, and rarely take advantage of the spatial context information contained in CBCT images. To address these issues, we propose a Joint bone Segmentation and landmark Digitization (JSD) framework via context-guided fully convolutional networks (FCNs). Specifically, we first utilize displacement maps to model the spatial context information in CBCT images, where each element in the displacement map denotes the displacement from a voxel to a particular landmark. An FCN is learned to construct the mapping from the input image to its corresponding displacement maps. Using the learned displacement maps as guidance, we further develop a multi-task FCN model to perform bone segmentation and landmark digitization jointly. We validate the proposed JSD method on 107 subjects, and the experimental results demonstrate that our method is superior to the state-of-the-art approaches in both tasks of bone segmentation and landmark digitization.
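The displacement maps described above store, at every voxel, the vector pointing from that voxel to a given landmark (one map per landmark). A minimal sketch of their construction:

```python
import numpy as np

def displacement_map(shape, landmark):
    """Per-voxel displacement from each voxel to a landmark (one channel per axis)."""
    grid = np.stack(np.meshgrid(*[np.arange(s) for s in shape], indexing="ij"), axis=-1)
    return np.asarray(landmark) - grid

dm = displacement_map((4, 4), (1, 2))   # 2D example; works identically for 3D volumes
```

In the JSD framework the FCN regresses these maps from the image, and the learned maps then guide the joint segmentation/digitization network.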
Affiliation(s)
- Jun Zhang
- Department of Radiology and BRIC, University of North Carolina, Chapel Hill, NC 27599, USA
- Mingxia Liu
- Department of Radiology and BRIC, University of North Carolina, Chapel Hill, NC 27599, USA
- Li Wang
- Department of Radiology and BRIC, University of North Carolina, Chapel Hill, NC 27599, USA
- Si Chen
- Department of Orthodontics, Peking University School and Hospital of Stomatology, Beijing 100191, China
- Peng Yuan
- Surgical Planning Laboratory, Department of Oral and Maxillofacial Surgery, Houston Methodist Research Institute, Houston, TX 77030, USA
- Jianfu Li
- Surgical Planning Laboratory, Department of Oral and Maxillofacial Surgery, Houston Methodist Research Institute, Houston, TX 77030, USA
- Steve Guo-Fang Shen
- Surgical Planning Laboratory, Department of Oral and Maxillofacial Surgery, Houston Methodist Research Institute, Houston, TX 77030, USA
- Zhen Tang
- Surgical Planning Laboratory, Department of Oral and Maxillofacial Surgery, Houston Methodist Research Institute, Houston, TX 77030, USA
- Ken-Chung Chen
- Surgical Planning Laboratory, Department of Oral and Maxillofacial Surgery, Houston Methodist Research Institute, Houston, TX 77030, USA
- James J Xia
- Surgical Planning Laboratory, Department of Oral and Maxillofacial Surgery, Houston Methodist Research Institute, Houston, TX 77030, USA
- Dinggang Shen
- Department of Radiology and BRIC, University of North Carolina, Chapel Hill, NC 27599, USA; Department of Brain and Cognitive Engineering, Korea University, Seoul 02841, Republic of Korea
9
Das PK, Meher S, Panda R, Abraham A. A Review of Automated Methods for the Detection of Sickle Cell Disease. IEEE Rev Biomed Eng 2019; 13:309-324. [PMID: 31107662] [DOI: 10.1109/rbme.2019.2917780]
Abstract
Detection of sickle cell disease is a crucial task in medical image analysis. Accurate detection, followed by classification of irregularities, plays a vital role in sickle cell disease diagnosis, treatment planning, and treatment outcome evaluation. Proper segmentation of complex cell clusters makes sickle cell detection more accurate and robust. Cell morphology has a key role in the detection of the sickle cell because the shapes of the normal blood cell and the sickle cell differ significantly. This review covers state-of-the-art methods and recent advances in detection, segmentation, and classification of sickle cell disease. We discuss key challenges encountered during the segmentation of overlapping blood cells. Moreover, standard validation measures that have been employed in the performance analysis of various methods are also discussed. The methodologies and experiments in this review will be useful to further research and work in this area.
10
Strait J, Chkrebtii O, Kurtek S. Automatic Detection and Uncertainty Quantification of Landmarks on Elastic Curves. J Am Stat Assoc 2019; 114:1002-1017. [PMID: 31595098] [DOI: 10.1080/01621459.2018.1527224]
Abstract
A population quantity of interest in statistical shape analysis is the location of landmarks, which are points that aid in reconstructing and representing shapes of objects. We provide an automated, model-based approach to inferring landmarks given a sample of shape data. The model is formulated based on a linear reconstruction of the shape, passing through the specified points, and a Bayesian inferential approach is described for estimating unknown landmark locations. The question of how many landmarks to select is addressed in two different ways: (1) a criterion-based approach, and (2) joint estimation of the number of landmarks along with their locations. Efficient methods for posterior sampling are also discussed. We motivate our approach using several simulated examples, as well as data obtained from applications in computer vision, biology, and medical imaging.
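The linear-reconstruction idea above can be made concrete with a small sketch: score a candidate landmark set by the squared error of the piecewise-linear curve drawn through the landmarks. The scoring below is an illustrative stand-in, not the authors' Bayesian model:

```python
import numpy as np

def reconstruction_error(curve, idx):
    """Squared error of reconstructing a closed curve (N x 2 array) by
    piecewise-linear interpolation through the landmark indices idx."""
    n, idx = len(curve), sorted(idx)
    err = 0.0
    for a, b in zip(idx, idx[1:] + [idx[0] + n]):   # consecutive landmark pairs (wraps)
        seg = np.arange(a, b + 1)
        t = (seg - a) / max(b - a, 1)
        recon = np.outer(1 - t, curve[a % n]) + np.outer(t, curve[b % n])
        err += ((curve[seg % n] - recon) ** 2).sum()
    return err

# eight points around a square: corners reconstruct it exactly, two landmarks cannot
square = np.array([(0, 0), (1, 0), (2, 0), (2, 1), (2, 2), (1, 2), (0, 2), (0, 1)], float)
corners = reconstruction_error(square, [0, 2, 4, 6])
two_pts = reconstruction_error(square, [0, 4])
```

A criterion-based selection would then grow the landmark set until this error falls below a tolerance, analogous in spirit to approach (1) above.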
11
Wärmländer SKTS, Garvin H, Guyomarc'h P, Petaros A, Sholts SB. Landmark Typology in Applied Morphometrics Studies: What's the Point? Anat Rec (Hoboken) 2018; 302:1144-1153. [PMID: 30365240] [DOI: 10.1002/ar.24005]
Abstract
Landmarks, discrete anatomical points of correspondence, are the hallmark of biological shape analysis. Various systems have been developed for their classification. In the most widely used system, developed by Bookstein in the 1990s, landmarks are divided into three distinct types based on their anatomical locations and biological significance. As Bookstein and others have argued that different landmark types possess different qualities, e.g., that Type 3 landmarks contain deficient information about shape variation and are less reliably measured, researchers began using landmark types as justification for selecting or avoiding particular landmarks for measurement or analysis. Here, we demonstrate considerable variation in landmark classifications among 17 studies using geometric morphometrics (GM), due to disagreement in the application of both Bookstein's landmark typology and individual landmark definitions. A review of the literature furthermore shows little correlation between landmark type and measurement reproducibility, especially when factors such as differences in measurement tools (calipers, digitizer, or computer software) and data sources (dry crania, 3D models, or 2D images) are considered. Although landmark typology is valuable when teaching biological shape analysis, we find that employing it in research design introduces confusion without providing useful information. Instead, researchers should choose landmark configurations based on their ability to test specific research hypotheses, and research papers should include justifications of landmark choices along with landmark definitions, details on landmark collection methods, and appropriate interobserver and intraobserver analyses. Hence, while the landmarks themselves are crucial for GM, we argue that their typology is of little use in applied studies.
Affiliation(s)
- Sebastian K T S Wärmländer
- Department of Biochemistry and Biophysics, Stockholm University, 106 91 Stockholm, Sweden; UCLA/Getty Conservation Programme, Cotsen Institute of Archaeology, UCLA, Los Angeles, California; Division of Commercial and Business Law, Linköping University, 581 83 Linköping, Sweden
- Heather Garvin
- Department of Anatomy, Des Moines University, Des Moines, Iowa
- Pierre Guyomarc'h
- UMR 5199 PACEA, Université de Bordeaux, Allée Geoffroy St Hilaire, B8, 33615 Pessac, France
- Anja Petaros
- Department of Forensic Medicine, National Board of Forensic Medicine, Artillerigatan 12, 587 58 Linköping, Sweden
- Sabrina B Sholts
- Department of Anthropology, National Museum of Natural History, Smithsonian Institution, Washington, District of Columbia
12
Zheng G, Hommel H, Akcoltekin A, Thelen B, Stifter J, Peersman G. A novel technology for 3D knee prosthesis planning and treatment evaluation using 2D X-ray radiographs: a clinical evaluation. Int J Comput Assist Radiol Surg 2018; 13:1151-1158. [PMID: 29785589] [DOI: 10.1007/s11548-018-1789-4]
Abstract
PURPOSE To present a clinical validation of a novel technology called "3X", which allows for 3D prosthesis planning and treatment evaluation in total knee arthroplasty (TKA) using only 2D X-ray radiographs. MATERIALS AND METHODS After local institutional review board approvals, 3X was evaluated on 43 cases (23 for preoperative planning and 20 for postoperative treatment evaluation). All the patients underwent CT scans according to a standard protocol. The results measured on the CT data were regarded as the ground truth. Additionally, two X-ray images were acquired for each affected leg and were used by the 3X technology to derive patient-specific measurements of the leg. In total, we compared seven parameters for planning TKA and five parameters for postoperative prosthesis alignment. RESULTS Our experimental results demonstrated that the mean distances between the surface models reconstructed from 2D X-rays and the associated surface models obtained from 3D CT data were smaller than 1.5 mm. The average differences for all angular parameters were smaller than [Formula: see text]. In over 78% of cases, the 3X technology derived the same femoral component size as the CT-based ground truth; this value dropped to 70% when 3X was used to predict the size of the tibial component. CONCLUSION 3X is a technology that allows for true 3D preoperative planning and postoperative treatment evaluation based on 2D X-ray radiographs.
Affiliation(s)
- Guoyan Zheng
- Institute for Surgical Technology and Biomechanics, University of Bern, Bern, Switzerland
- Hagen Hommel
- Clinic for Orthopedics, Sports Medicine and Rehabilitation, Krankenhaus Märkisch-Oderland GmbH, Wriezen, Germany
- Alper Akcoltekin
- Institute for Surgical Technology and Biomechanics, University of Bern, Bern, Switzerland
- Benedikt Thelen
- Institute for Surgical Technology and Biomechanics, University of Bern, Bern, Switzerland
- Geert Peersman
- Institute for Orthopaedic Research and Training, KU Leuven, Campus Pellenberg, Louvain, Belgium
13
X-ray-transform Invariant Anatomical Landmark Detection for Pelvic Trauma Surgery. In: Medical Image Computing and Computer Assisted Intervention – MICCAI 2018. 2018. [DOI: 10.1007/978-3-030-00937-3_7]
14
Xiong J, Shao Y, Ma J, Ren Y, Wang Q, Zhao J. Lung field segmentation using weighted sparse shape composition with robust initialization. Med Phys 2017; 44:5916-5929. [PMID: 28875551] [DOI: 10.1002/mp.12561]
Abstract
PURPOSE Lung field segmentation for chest radiography is critical to pulmonary disease diagnosis. In this paper, we propose a new deformable model using weighted sparse shape composition with robust initialization to achieve robust and accurate lung field segmentation. METHODS Our method consists of three steps: initialization, deformation, and regularization. The deformation and regularization steps are iterated until convergence. First, since a deformable model is sensitive to the initial shape, a robust initialization is obtained by using a novel voting strategy, which allows the reliable patches on the image to vote for each landmark of the initial shape. Then, each point of the initial shape independently deforms to the lung boundary under the guidance of the appearance model, which can distinguish lung tissues from non-lung tissues near the boundary. Finally, the deformed shape is regularized by the weighted sparse shape composition (SSC) model, which is constrained by both boundary information and the correlations between points of the deformed shape. RESULTS Our method has been evaluated on 247 chest radiographs from the well-known Japanese Society of Radiological Technology (JSRT) dataset and achieved high overlap scores (0.955 ± 0.021). CONCLUSIONS The experimental results show that the proposed deformable segmentation model is more robust and accurate than the traditional appearance and shape model on the JSRT database. Our method also shows higher accuracy than most state-of-the-art methods.
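Sparse shape composition regularizes a deformed shape by approximating it as a sparse linear combination of training shapes. A minimal unweighted sketch (plain ISTA on an l1-penalised least-squares fit, with an arbitrary penalty and iteration count, not the paper's weighted formulation):

```python
import numpy as np

def ssc_project(D, s, lam=0.01, iters=2000):
    """Approximate shape vector s as a sparse combination D @ w of the
    training shapes stored in the columns of D, via ISTA iterations."""
    L = np.linalg.norm(D, 2) ** 2                  # Lipschitz constant of the gradient
    w = np.zeros(D.shape[1])
    for _ in range(iters):
        w = w - D.T @ (D @ w - s) / L              # gradient step on the data term
        w = np.sign(w) * np.maximum(np.abs(w) - lam / L, 0.0)  # soft-thresholding (l1)
    return w

rng = np.random.default_rng(0)
D = rng.normal(size=(10, 5))
D /= np.linalg.norm(D, axis=0)     # unit-norm training shapes as columns
w = ssc_project(D, D[:, 2])        # a training shape should be recovered sparsely
```

The sparsity suppresses implausible deformations: the regularized shape is forced back onto the span of a few training shapes.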
Affiliation(s)
- Junfeng Xiong
- School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai 200240, China
- Yeqin Shao
- School of Transportation, Nantong University, Jiangsu 226019, China
- Jingchen Ma
- School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai 200240, China
- Yacheng Ren
- School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai 200240, China
- Qian Wang
- School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai 200240, China; SJTU-UIH Institute for Medical Imaging Technology, Shanghai Jiao Tong University, Shanghai 200240, China; MED-X Research Institute, Shanghai Jiao Tong University, Shanghai 200240, China
- Jun Zhao
- School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai 200240, China; SJTU-UIH Institute for Medical Imaging Technology, Shanghai Jiao Tong University, Shanghai 200240, China; MED-X Research Institute, Shanghai Jiao Tong University, Shanghai 200240, China
15
Zhang J, Liu M, Shen D. Detecting Anatomical Landmarks From Limited Medical Imaging Data Using Two-Stage Task-Oriented Deep Neural Networks. IEEE Trans Image Process 2017; 26:4753-4764. [PMID: 28678706] [PMCID: PMC5729285] [DOI: 10.1109/tip.2017.2721106]
Abstract
One of the major challenges in anatomical landmark detection based on deep neural networks is the limited availability of medical imaging data for network learning. To address this problem, we present a two-stage task-oriented deep learning method to detect large-scale anatomical landmarks simultaneously in real time, using limited training data. Our method consists of two deep convolutional neural networks (CNNs), each focusing on one specific task. To alleviate the problem of limited training data, in the first stage, we propose a CNN-based regression model using millions of image patches as input, aiming to learn inherent associations between local image patches and target anatomical landmarks. To further model the correlations among image patches, in the second stage, we develop another CNN model, which includes a) a fully convolutional network that shares the same architecture and network weights as the CNN used in the first stage, and b) several extra layers to jointly predict coordinates of multiple anatomical landmarks. Importantly, our method can jointly detect large-scale (e.g., thousands of) landmarks in real time. We have conducted various experiments for detecting 1200 brain landmarks from 3D T1-weighted magnetic resonance images of 700 subjects, and 7 prostate landmarks from 3D computed tomography images of 73 subjects. The experimental results show the effectiveness of our method in terms of both accuracy and efficiency in anatomical landmark detection.
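In patch-based landmark regression of this kind, each patch predicts the displacement from its own centre to the landmark, and the individual predictions are then fused. A minimal sketch of the fusion step (simple averaging of per-patch votes, an assumption rather than the paper's exact aggregation):

```python
import numpy as np

def aggregate_landmark(centers, offsets):
    """Fuse per-patch predictions: patch i at centers[i] predicts the
    displacement offsets[i] from its own centre to the landmark."""
    votes = np.asarray(centers, float) + np.asarray(offsets, float)
    return votes.mean(axis=0)

landmark = np.array([10.0, 20.0])
centers = np.random.default_rng(0).random((50, 2)) * 30   # hypothetical patch centres
est = aggregate_landmark(centers, landmark - centers)     # ideal (noise-free) offsets
```

With noisy per-patch regressors, averaging many votes reduces the variance of the final landmark estimate.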
16
17
Dora L, Agrawal S, Panda R, Abraham A. State-of-the-Art Methods for Brain Tissue Segmentation: A Review. IEEE Rev Biomed Eng 2017. [PMID: 28622675] [DOI: 10.1109/rbme.2017.2715350]
Abstract
Brain tissue segmentation is one of the most sought-after research areas in medical image processing. It provides detailed quantitative brain analysis for accurate disease diagnosis, detection, and classification of abnormalities, and it plays an essential role in discriminating healthy tissues from lesion tissues. Accurate disease diagnosis and treatment planning therefore depend largely on the performance of the segmentation method used. In this review, we have studied recent advances in brain tissue segmentation methods and their state of the art in neuroscience research. The review also highlights the major challenges faced during tissue segmentation of the brain. An effective comparison is made among state-of-the-art brain tissue segmentation methods, and a study of validation measures for evaluating different segmentation methods is also discussed. The methodologies and experiments presented in this review are encouraging enough to attract researchers working in this field.
18
Zhang J, Liu M, Gao Y, Shen D. Alzheimer's Disease Diagnosis Using Landmark-Based Features From Longitudinal Structural MR Images. IEEE J Biomed Health Inform 2017; 21:1607-1616. [PMID: 28534798] [DOI: 10.1109/jbhi.2017.2704614]
Abstract
Structural magnetic resonance imaging (MRI) has been proven to be an effective tool for Alzheimer's disease (AD) diagnosis. While conventional MRI-based AD diagnosis typically uses images acquired at a single time point, a longitudinal study is more sensitive in detecting early pathological changes of AD, making it more favorable for accurate diagnosis. In general, there are two challenges faced in MRI-based diagnosis. First, extracting features from structural MR images requires time-consuming nonlinear registration and tissue segmentation, and a longitudinal study involving more scans further exacerbates the computational costs. Moreover, inconsistent longitudinal scans (i.e., differing scanning time points and total numbers of scans) hinder the extraction of unified feature representations in longitudinal studies. In this paper, we propose a landmark-based feature extraction method for AD diagnosis using longitudinal structural MR images, which does not require nonlinear registration or tissue segmentation in the application stage and is also robust to inconsistencies among longitudinal scans. Specifically, first, discriminative landmarks are automatically discovered from the whole brain using training images and then efficiently localized using a fast landmark detection method for testing images, without the involvement of any nonlinear registration or tissue segmentation; second, high-level statistical spatial features and contextual longitudinal features are further extracted based on those detected landmarks, which can characterize spatial structural abnormalities and longitudinal landmark variations. Using these spatial and longitudinal features, a linear support vector machine is finally adopted to distinguish AD subjects or mild cognitive impairment (MCI) subjects from healthy controls (HCs). Experimental results on the Alzheimer's Disease Neuroimaging Initiative database demonstrate the superior performance and efficiency of the proposed method, with classification accuracies of 88.30% for AD versus HC and 79.02% for MCI versus HC, respectively.
|
19
|
Norajitra T, Maier-Hein KH. 3D Statistical Shape Models Incorporating Landmark-Wise Random Regression Forests for Omni-Directional Landmark Detection. IEEE Trans Med Imaging 2017; 36:155-168. [PMID: 27541630 DOI: 10.1109/tmi.2016.2600502] [Citation(s) in RCA: 15] [Impact Index Per Article: 2.1] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/06/2023]
Abstract
3D Statistical Shape Models (3D-SSM) are widely used for medical image segmentation. During segmentation, however, they typically perform a very limited, unidirectional search for suitable landmark positions, relying on weak learners or use-case-specific appearance models that take only local image information into account. As a consequence, segmentation errors arise, and results generally depend on the accuracy of a prior model initialization. Furthermore, these methods require tedious, use-case-dependent parameter tuning to obtain optimized results. To overcome these limitations, we propose an extension of 3D-SSM with landmark-wise random regression forests that perform an enhanced omni-directional search for landmark positions, taking rich non-local image information into account. In addition, we provide long-distance model fitting based on a multi-scale approach that allows accurate and reproducible segmentation even from distant image positions, enabling application without model initialization. Finally, translation of the proposed method to different organs is straightforward and requires no adaptation of the training process. In segmentation experiments on 45 clinical CT volumes, the proposed omni-directional search significantly increased accuracy and displayed great precision regardless of model initialization. Furthermore, for liver, spleen, and kidney segmentation in a competitive multi-organ labeling challenge on publicly available data, the proposed method achieved similar or better results than the state of the art. Finally, liver segmentation results were obtained that successfully compete with specialized state-of-the-art methods from the well-known liver segmentation challenge SLIVER.
|
20
|
Zhang J, Gao Y, Gao Y, Munsell BC, Shen D. Detecting Anatomical Landmarks for Fast Alzheimer's Disease Diagnosis. IEEE Trans Med Imaging 2016; 35:2524-2533. [PMID: 27333602 PMCID: PMC5153382 DOI: 10.1109/tmi.2016.2582386] [Citation(s) in RCA: 72] [Impact Index Per Article: 9.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/19/2023]
Abstract
Structural magnetic resonance imaging (MRI) is a very popular and effective technique used to diagnose Alzheimer's disease (AD). The success of computer-aided diagnosis methods using structural MRI data depends largely on two time-consuming steps: 1) nonlinear registration across subjects, and 2) brain tissue segmentation. To overcome this limitation, we propose a landmark-based feature extraction method that requires neither nonlinear registration nor tissue segmentation. In the training stage, in order to distinguish AD subjects from healthy controls (HCs), group comparisons based on local morphological features are first performed to identify brain regions with significant group differences. The centers of the identified regions become landmark locations (AD landmarks for short) capable of differentiating AD subjects from HCs. In the testing stage, the learned AD landmarks are detected in a testing image using an efficient technique based on a shape-constrained regression-forest algorithm. To improve detection accuracy, an additional set of salient and consistent landmarks is also identified to guide AD landmark detection. Based on the identified AD landmarks, morphological features are extracted to train a support vector machine (SVM) classifier capable of predicting the AD condition. In the experiments, our method is evaluated on landmark detection and AD classification sequentially. Specifically, the landmark detection error (manually annotated versus automatically detected) of the proposed landmark detector is 2.41 mm, and our landmark-based AD classification accuracy is 83.7%. Lastly, the AD classification performance of our method is comparable to, or even better than, that achieved by existing region-based and voxel-based methods, while the proposed method is approximately 50 times faster.
Affiliation(s)
- Jun Zhang
- Department of Radiology and BRIC, University of North Carolina, Chapel Hill, NC, USA
- Yue Gao
- Department of Radiology and BRIC, University of North Carolina, Chapel Hill, NC, USA
- Yaozong Gao
- Department of Radiology and BRIC, University of North Carolina, Chapel Hill, NC, USA; Department of Computer Science, University of North Carolina, Chapel Hill, NC, USA
- Brent C. Munsell
- Department of Computer Science, College of Charleston, Charleston, SC, USA
- Dinggang Shen
- Department of Radiology and BRIC, University of North Carolina, Chapel Hill, NC, USA; Department of Brain and Cognitive Engineering, Korea University, Seoul 02841, Republic of Korea
|
21
|
Zhang J, Gao Y, Wang L, Tang Z, Xia JJ, Shen D. Automatic Craniomaxillofacial Landmark Digitization via Segmentation-Guided Partially-Joint Regression Forest Model and Multiscale Statistical Features. IEEE Trans Biomed Eng 2016; 63:1820-1829. [PMID: 26625402 PMCID: PMC4879598 DOI: 10.1109/tbme.2015.2503421] [Citation(s) in RCA: 31] [Impact Index Per Article: 3.9] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/09/2022]
Abstract
OBJECTIVE The goal of this paper is to digitize craniomaxillofacial (CMF) landmarks automatically, efficiently, and accurately from cone-beam computed tomography (CBCT) images, addressing the challenges posed by large morphological variations across patients and by CBCT image artifacts. METHODS We propose a segmentation-guided partially-joint regression forest (S-PRF) model to automatically digitize CMF landmarks. In this model, a regression voting strategy is first adopted to localize each landmark by aggregating evidence from context locations, thus relieving the problem caused by image artifacts near the landmark. Second, CBCT image segmentation is utilized to remove uninformative voxels caused by morphological variations across patients. Third, a partially-joint model is proposed to localize landmarks separately, based on the coherence of landmark positions, to improve digitization reliability. In addition, we propose a fast vector quantization method to extract high-level multiscale statistical features describing a voxel's appearance; these features have low dimensionality and high efficiency, and are invariant to the local inhomogeneity caused by artifacts. RESULTS Mean digitization errors for 15 landmarks, in comparison to the ground truth, are all less than 2 mm. CONCLUSION Our model addresses the challenges of both interpatient morphological variations and imaging artifacts. Experiments on a CBCT dataset show that our approach achieves clinically acceptable accuracy for landmark digitization. SIGNIFICANCE Our automatic landmark digitization method can be used clinically to reduce labor cost and improve digitization consistency.
Affiliation(s)
- Jun Zhang
- Department of Radiology and BRIC, University of North Carolina, Chapel Hill, NC, USA
- Yaozong Gao
- Department of Radiology and BRIC, University of North Carolina, Chapel Hill, NC, USA
- Department of Computer Science, University of North Carolina, Chapel Hill, NC, USA
- Li Wang
- Department of Radiology and BRIC, University of North Carolina, Chapel Hill, NC, USA
- Zhen Tang
- Houston Methodist Hospital, Houston, TX, USA
- James J. Xia
- Houston Methodist Hospital, Houston, TX, USA
- Weill Medical College, Cornell University, New York, USA
- Dinggang Shen
- Department of Radiology and BRIC, University of North Carolina, Chapel Hill, NC, USA
- Department of Brain and Cognitive Engineering, Korea University, Seoul 02841, Republic of Korea
|
22
|
Laurent P, Cresson T, Vazquez C, Hagemeister N, de Guise JA. A multi-criteria evaluation platform for segmentation algorithms. Annu Int Conf IEEE Eng Med Biol Soc 2016; 2016:6441-6444. [PMID: 28269721 DOI: 10.1109/embc.2016.7592203] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/06/2023]
Abstract
The purpose of this paper is to present a platform for evaluating segmentation algorithms that detect anatomical structures in medical images. Because structure detection is subject to human interpretation, we first describe a method to define a ground-truth model, i.e., a generated bronze standard, that serves as the reference for subsequent analysis. This bronze standard is characterized in order to retrieve its confidence level, which is later used to normalize the algorithm evaluation. We then describe how the developed platform helps evaluate algorithm performance according to five criteria: accuracy, reliability, robustness, under-/over-segmentation sensitivity, and outlier sensitivity. First, we explain how to extract these evaluation criteria using specific normalized metrics commonly found in the literature; then we present how to combine all the information to obtain a global evaluation of segmentation algorithms. Lastly, a radar-style graph analysis is presented for easy multi-criteria interpretation.
|
23
|
Shao Y, Gao Y, Wang Q, Yang X, Shen D. Locally-constrained boundary regression for segmentation of prostate and rectum in the planning CT images. Med Image Anal 2015; 26:345-56. [PMID: 26439938 DOI: 10.1016/j.media.2015.06.007] [Citation(s) in RCA: 31] [Impact Index Per Article: 3.4] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/31/2014] [Revised: 04/17/2015] [Accepted: 06/17/2015] [Indexed: 11/24/2022]
Abstract
Automatic and accurate segmentation of the prostate and rectum in planning CT images is a challenging task due to low image contrast, unpredictable (relative) organ position, and the uncertain presence of bowel gas across patients. Recently, regression forests were adopted for deformable organ segmentation on 2D medical images by training one landmark detector for each point on the shape model. However, it is impractical for a regression forest to guide 3D deformable segmentation as a landmark detector, due to the large number of vertices in the 3D shape model and the difficulty of building accurate 3D vertex correspondences for each landmark detector. In this paper, we propose a novel boundary detection method that exploits the power of regression forests for prostate and rectum segmentation. The contributions of this paper are as follows: (1) we introduce the regression forest as a local boundary regressor that votes for the entire boundary of a target organ, avoiding the need to train a large number of landmark detectors and to build accurate 3D vertex correspondences for each of them; (2) an auto-context model is integrated with the regression forest to improve the accuracy of boundary regression; (3) we further combine a deformable segmentation method with the proposed local boundary regressor for the final organ segmentation, integrating organ shape priors. Our method is evaluated on a planning CT dataset of 70 images from 70 different patients. The experimental results show that our boundary regression method outperforms the conventional boundary classification method in guiding the deformable model for prostate and rectum segmentation. Compared with other state-of-the-art methods, our method also shows competitive performance.
Affiliation(s)
- Yeqin Shao
- Institute of Image Processing & Pattern Recognition, Shanghai Jiao Tong University, Shanghai 200240, China; Nantong University, Jiangsu 226019, China
- Yaozong Gao
- Department of Radiology and BRIC, University of North Carolina at Chapel Hill, NC 27599, United States; Department of Computer Science, University of North Carolina at Chapel Hill, NC 27599, United States
- Qian Wang
- Med-X Institute, School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai 200240, China
- Xin Yang
- Institute of Image Processing & Pattern Recognition, Shanghai Jiao Tong University, Shanghai 200240, China
- Dinggang Shen
- Department of Radiology and BRIC, University of North Carolina at Chapel Hill, NC 27599, United States; Department of Brain and Cognitive Engineering, Korea University, Seoul, Republic of Korea
|
24
|
Wang CW, Huang CT, Hsieh MC, Li CH, Chang SW, Li WC, Vandaele R, Marée R, Jodogne S, Geurts P, Chen C, Zheng G, Chu C, Mirzaalian H, Hamarneh G, Vrtovec T, Ibragimov B. Evaluation and Comparison of Anatomical Landmark Detection Methods for Cephalometric X-Ray Images: A Grand Challenge. IEEE Trans Med Imaging 2015; 34:1890-900. [PMID: 25794388 DOI: 10.1109/tmi.2015.2412951] [Citation(s) in RCA: 83] [Impact Index Per Article: 9.2] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/26/2023]
Abstract
Cephalometric analysis is an essential clinical and research tool in orthodontics for diagnosis and treatment planning. This paper presents the evaluation of the methods submitted to the Automatic Cephalometric X-Ray Landmark Detection Challenge, held at the IEEE International Symposium on Biomedical Imaging 2014 with an on-site competition. The challenge was set up to explore and compare automatic landmark detection methods applied to cephalometric X-ray images. Methods were evaluated on a common database of cephalograms from 300 patients aged six to 60 years, collected from the Dental Department, Tri-Service General Hospital, Taiwan, with anatomical landmarks manually marked by two experienced medical doctors as the ground-truth data. Quantitative evaluation was performed to compare the results of a representative selection of the methods submitted to the challenge. Experimental results show that three methods achieve detection rates greater than 80% within the 4 mm precision range, but only one method achieves a detection rate greater than 70% within the 2 mm precision range, which is the acceptable precision range in clinical practice. The study provides insights into the performance of different landmark detection approaches under real-world conditions and highlights the achievements and limitations of current image analysis techniques.
|
25
|
Chen C, Belavy D, Yu W, Chu C, Armbrecht G, Bansmann M, Felsenberg D, Zheng G. Localization and Segmentation of 3D Intervertebral Discs in MR Images by Data Driven Estimation. IEEE Trans Med Imaging 2015; 34:1719-1729. [PMID: 25700441 DOI: 10.1109/tmi.2015.2403285] [Citation(s) in RCA: 21] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/04/2023]
Abstract
This paper addresses the problem of fully automatic localization and segmentation of 3D intervertebral discs (IVDs) from MR images. Our method consists of two steps: we first localize the center of each IVD, and then segment the IVDs by classifying image pixels around each disc center as foreground (disc) or background. Disc localization is done by estimating the image displacements from a set of randomly sampled 3D image patches to the disc center. The displacements are estimated by jointly optimizing the training and test displacement values in a data-driven way, taking into consideration both the training data and the geometric constraint on the test image. After the disc centers are localized, we segment the discs in a similar data-driven manner, but in this case we aim to estimate the foreground/background probability of each pixel instead of the image displacements. In addition, an extra neighborhood smoothness constraint is introduced to enforce the local smoothness of the label field. Our method is validated on 3D T2-weighted turbo spin echo MR images of 35 patients from two different studies. Experiments show that, compared to the state of the art, our method achieves better or comparable results. Specifically, we achieve a mean localization error of 1.6-2.0 mm, and for segmentation a mean Dice metric of 85%-88% and a mean surface distance of 1.3-1.4 mm.
|
26
|
Xie W, Franke J, Chen C, Grützner PA, Schumann S, Nolte LP, Zheng G. A complete-pelvis segmentation framework for image-free total hip arthroplasty (THA): methodology and clinical study. Int J Med Robot 2014; 11:166-80. [PMID: 25258044 DOI: 10.1002/rcs.1619] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/12/2014] [Revised: 08/25/2014] [Accepted: 08/27/2014] [Indexed: 11/09/2022]
Abstract
BACKGROUND Complete-pelvis segmentation in antero-posterior pelvic radiographs is required to create a patient-specific three-dimensional pelvis model for surgical planning and postoperative assessment in image-free navigation of total hip arthroplasty. METHODS A fast and robust framework for accurately segmenting the complete pelvis is presented, consisting of two consecutive modules. In the first module, a three-stage method was developed to delineate the left hemi-pelvis based on statistical appearance and shape models. To handle complex pelvic structures, anatomy-specific information-processing techniques were employed. As input to the second module, the delineated left hemi-pelvis was then reflected about an estimated symmetry line of the radiograph to initialize the right hemi-pelvis segmentation, and the right hemi-pelvis was segmented by the same three-stage method. RESULTS Two experiments, conducted on 143 and 40 AP radiographs respectively, demonstrated a mean segmentation accuracy of 1.61±0.68 mm. A clinical study investigating postoperative assessment of acetabular cup orientation based on the proposed framework revealed an average accuracy of 1.2°±0.9° for anteversion and 1.6°±1.4° for inclination. Delineation of each radiograph takes less than one minute. CONCLUSIONS Although further validation is needed, these preliminary results suggest the clinical applicability of the proposed framework for image-free THA.
Affiliation(s)
- Weiguo Xie
- Institute for Surgical Technology and Biomechanics, University of Bern, Stauffacherstrasse 78, CH-3014, Bern, Switzerland; Graduate School for Cellular and Biomedical Sciences, University of Bern, Switzerland; BG Trauma Centre Ludwigshafen at Heidelberg University Hospital, Ludwigshafen, Germany
- Jochen Franke
- BG Trauma Centre Ludwigshafen at Heidelberg University Hospital, Ludwigshafen, Germany
- Cheng Chen
- Institute for Surgical Technology and Biomechanics, University of Bern, Stauffacherstrasse 78, CH-3014, Bern, Switzerland
- Paul A Grützner
- BG Trauma Centre Ludwigshafen at Heidelberg University Hospital, Ludwigshafen, Germany
- Steffen Schumann
- Institute for Surgical Technology and Biomechanics, University of Bern, Stauffacherstrasse 78, CH-3014, Bern, Switzerland
- Lutz-P Nolte
- Institute for Surgical Technology and Biomechanics, University of Bern, Stauffacherstrasse 78, CH-3014, Bern, Switzerland
- Guoyan Zheng
- Institute for Surgical Technology and Biomechanics, University of Bern, Stauffacherstrasse 78, CH-3014, Bern, Switzerland
|