1.
Computer-aided design and 3-dimensional artificial/convolutional neural network for digital partial dental crown synthesis and validation. Sci Rep 2023; 13:1561. PMID: 36709380; PMCID: PMC9884213; DOI: 10.1038/s41598-023-28442-1.
Abstract
This multiphase, in vitro study developed and validated a 3-dimensional convolutional neural network (3D-CNN) to generate partial dental crowns (PDC) for use in restorative dentistry. The effectiveness of desktop laser and intraoral scanners in generating data for the 3D-CNN was first evaluated (phase 1). There were no significant differences in surface area [t-stat(df) = -0.01 (10), mean difference = -0.058, P > 0.99] or volume [t-stat(df) = 0.357 (10)]. However, the intraoral scans were chosen for phase 2 because they captured greater volumetric detail (343.83 ± 43.52 mm3) than desktop laser scanning (322.70 ± 40.15 mm3). In phase 2, 120 tooth preparations were digitally synthesized from intraoral scans, and two clinicians designed the respective PDCs using computer-aided design (CAD) workflows on a personal computer setup. Statistical comparison by 3-factor ANOVA demonstrated significant differences in surface area (P < 0.001), volume (P < 0.001), and spatial overlap (P < 0.001), so only the most accurate PDCs (n = 30) were selected to train the neural network (phase 3). The resulting 3D-CNN produced a validation accuracy of 60%, a validation loss of 0.68-0.87, a sensitivity of 1.00, and a precision of 0.50-0.83, serving as a proof of concept that a 3D-CNN can predict and generate PDC prostheses in CAD for restorative dentistry.
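The sensitivity and precision figures quoted above follow the standard confusion-matrix definitions; a minimal sketch with hypothetical counts (not taken from the study's data):

```python
def sensitivity_precision(tp, fp, fn):
    """Sensitivity (recall) and precision from confusion-matrix counts:
    sensitivity = TP / (TP + FN), precision = TP / (TP + FP)."""
    sensitivity = tp / (tp + fn)
    precision = tp / (tp + fp)
    return sensitivity, precision

# Hypothetical counts: 5 true positives, 5 false positives, 0 false negatives
# reproduce a sensitivity of 1.00 with a precision of 0.50, matching the low
# end of the range reported in the abstract.
print(sensitivity_precision(5, 5, 0))  # (1.0, 0.5)
```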
2.
Dong G, Dai J, Li N, Zhang C, He W, Liu L, Chan Y, Li Y, Xie Y, Liang X. 2D/3D Non-Rigid Image Registration via Two Orthogonal X-ray Projection Images for Lung Tumor Tracking. Bioengineering (Basel) 2023; 10:144. PMID: 36829638; PMCID: PMC9951849; DOI: 10.3390/bioengineering10020144.
Abstract
Two-dimensional (2D)/three-dimensional (3D) registration is critical in clinical applications, but existing methods suffer from long alignment times and high radiation doses. In this paper, a non-rigid 2D/3D registration method based on deep learning with orthogonal-angle projections is proposed. The method can quickly achieve alignment using only two orthogonal-angle projections. We tested it on lung data (with and without tumors) and phantom data. The results show that the Dice coefficient and normalized cross-correlation are greater than 0.97 and 0.92, respectively, and that the registration time is less than 1.2 seconds. In addition, the proposed model can track lung tumors, highlighting its clinical potential.
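The Dice coefficient and normalized cross-correlation reported above are standard similarity measures; a minimal sketch of both, operating on flat lists for illustration (real implementations work on image arrays):

```python
import math

def dice(a, b):
    """Dice coefficient between two binary masks given as flat 0/1 lists:
    2 * |A ∩ B| / (|A| + |B|)."""
    inter = sum(x and y for x, y in zip(a, b))
    return 2.0 * inter / (sum(a) + sum(b))

def ncc(a, b):
    """Normalized cross-correlation between two intensity vectors:
    mean-centered dot product divided by the product of the norms."""
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    num = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    den = math.sqrt(sum((x - ma) ** 2 for x in a) *
                    sum((y - mb) ** 2 for y in b))
    return num / den
```

A perfectly linear relationship gives `ncc` of 1.0; identical masks give `dice` of 1.0.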
Affiliation(s)
- Guoya Dong
- School of Health Sciences and Biomedical Engineering, Hebei University of Technology, Tianjin 300130, China
- Hebei Key Laboratory of Bioelectromagnetics and Neural Engineering, Tianjin 300130, China
- Tianjin Key Laboratory of Bioelectromagnetic Technology and Intelligent Health, Tianjin 300130, China
- Jingjing Dai
- School of Health Sciences and Biomedical Engineering, Hebei University of Technology, Tianjin 300130, China
- Hebei Key Laboratory of Bioelectromagnetics and Neural Engineering, Tianjin 300130, China
- Tianjin Key Laboratory of Bioelectromagnetic Technology and Intelligent Health, Tianjin 300130, China
- Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen 518055, China
- Na Li
- Department of Biomedical Engineering, Guangdong Medical University, Dongguan 523808, China
- Chulong Zhang
- Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen 518055, China
- Wenfeng He
- Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen 518055, China
- Lin Liu
- Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen 518055, China
- Yinping Chan
- Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen 518055, China
- Yunhui Li
- Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen 518055, China
- Yaoqin Xie
- Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen 518055, China
- Xiaokun Liang
- Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen 518055, China
3.
Ha HG, Han G, Lee S, Nam K, Joung S, Park I, Hong J. Robot-patient registration for optical tracker-free robotic fracture reduction surgery. Comput Methods Programs Biomed 2023; 228:107239. PMID: 36410266; DOI: 10.1016/j.cmpb.2022.107239.
Abstract
BACKGROUND AND OBJECTIVE Image-guided robotic surgery for fracture reduction is a procedure in which surgeons control a surgical robot to align fractured bones using a navigation system that displays the rotation and distance of bone movement. In such robotic surgeries, the relationship between the robot and the patient (bone) must be estimated, a task known as robot-patient registration, to realize the navigation. Through this registration, the real-world fracture state can be simulated in the virtual space of the navigation system. METHODS This paper proposes an approach to robot-patient registration for an optical-tracker-free robotic fracture-reduction system. Instead of an optical tracker, which is a three-dimensional position localizer, X-ray images are used to perform the registration, combining the relationships of both the robot and the patient with respect to the C-arm. The proposed method consists of two registration steps: an initial registration followed by a refined registration that adopts particle swarm optimization with a minimum cross-reprojection error based on bidirectional X-ray images. To address features rendered unrecognizable by interference between the robot and the bone, we also developed attachable robot features. These features could be clearly extracted from the X-ray images, and precise registration could be achieved through the particle swarm optimization. RESULTS The proposed method was evaluated in phantom and ex vivo experiments involving a caprine cadaver. In the phantom experiments, the average translational and rotational errors were 1.88 mm and 2.45°, respectively; the corresponding errors in the ex vivo experiments were 2.64 mm and 3.32°. The results demonstrate the effectiveness of the proposed robot-patient registration.
CONCLUSIONS The proposed method makes it possible to estimate the three-dimensional relationship between fractured bones in the real world using only two-dimensional images, and that relationship is accurately simulated in the virtual navigation space. A reduction procedure for successful treatment of bone fractures in image-guided robotic surgery can therefore be expected with the aid of the proposed registration method.
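The refined registration step above pairs particle swarm optimization with a cross-reprojection error. A minimal, generic PSO sketch follows; the cost function, bounds, and hyperparameters are illustrative stand-ins, not the paper's implementation (there, the parameter vector would encode a rigid transform and the cost would be the cross-reprojection error over bidirectional X-ray images):

```python
import random

def pso_minimize(cost, dim, bounds, n_particles=30, iters=100,
                 w=0.7, c1=1.5, c2=1.5):
    """Minimize `cost` over a `dim`-dimensional box via particle swarm
    optimization: each particle is pulled toward its personal best and the
    swarm's global best, with inertia weight `w`."""
    lo, hi = bounds
    pos = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_cost = [cost(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_cost[i])
    gbest, gbest_cost = pbest[g][:], pbest_cost[g]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            c = cost(pos[i])
            if c < pbest_cost[i]:
                pbest[i], pbest_cost[i] = pos[i][:], c
                if c < gbest_cost:
                    gbest, gbest_cost = pos[i][:], c
    return gbest, gbest_cost
```

On a smooth low-dimensional cost such as a reprojection error, a swarm of this size typically converges without needing gradients, which is the appeal of PSO in 2D/3D registration.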
Affiliation(s)
- Ho-Gun Ha
- Division of Intelligent Robot, DGIST, 333 Techno Jungang-daero, Hyeonpung-myeon, Dalseong-gun, Daegu 42988, Republic of Korea
- Gukyeong Han
- Department of Robotics and Mechatronics Engineering, DGIST, 333 Techno Jungang-daero, Hyeonpung-myeon, Dalseong-gun, Daegu 42988, Republic of Korea
- Seongpung Lee
- R&D Center, Curexo Inc., 4-5, Yanghyeon-ro 405 Beon-gil, Jungwon-gu, Seongnam-si, Gyeonggi-do 13438, Republic of Korea
- Kwonsun Nam
- R&D Center, SAMICK THK Co., Ltd., Jinwi2sandan-ro, Jinwi-myeon, Pyeongtaek-si, Gyeonggi-do 17708, Republic of Korea
- Sanghyun Joung
- Medical Device and Robot Institute of Park, Kyungpook National University, Global Plaza 1006, 80, Daehak-ro, Buk-gu, Daegu 41566, Republic of Korea
- Ilhyung Park
- Medical Device and Robot Institute of Park, Kyungpook National University, Global Plaza 1006, 80, Daehak-ro, Buk-gu, Daegu 41566, Republic of Korea; Department of Orthopaedic Surgery, School of Medicine, Kyungpook National University Hospital, 130 Dongdeok-ro, Jung-gu, Daegu 41944, Republic of Korea
- Jaesung Hong
- Department of Robotics and Mechatronics Engineering, DGIST, 333 Techno Jungang-daero, Hyeonpung-myeon, Dalseong-gun, Daegu 42988, Republic of Korea
4.
Lee JM, Baek SH, Lee YS. Vital protocols for PolyWare™ measurement reliability and accuracy. Front Surg 2022; 9:997848. PMID: 36632526; PMCID: PMC9826794; DOI: 10.3389/fsurg.2022.997848.
Abstract
Background and objective PolyWare™ software (PW) has been used almost exclusively in polyethylene wear studies of total hip arthroplasty (THA). PW measurements can be significantly inaccurate and unrepeatable, depending on imaging conditions and subjective manipulation choices. This study therefore aims to identify the conditions needed to achieve the best accuracy and reliability of PW measurements. Methods The experiment examined how PW measurements fluctuated under several measurement conditions. X-ray images of in vitro THA prostheses were acquired under a clinical X-ray scanning condition. A linear wear rate of 6.67 mm was simulated in combination with an acetabular lateral inclination of 36.6° and anteversion of 9.0°. Results Among all imported X-ray images, those with a resolution of 1,076 × 1,076 exhibited the smallest standard deviation in wear measurements (as small as 0.01 mm) and the lowest frequency of blurriness. An edge-detection area specified as non-square and off the femoral head center exhibited the most blurriness. An X-ray image scanning a femoral head placed eccentrically, 15 cm superior to the X-ray beam center, led to a maximum acetabular anteversion measurement error of 5.3°. Conclusion Because PW has been the only polyethylene wear measurement tool in common use, identifying its sources of error and devising countermeasures are of the utmost importance. The results call for PW users to observe the following measurement protocols: (1) the original X-ray image must be a 1,076 × 1,076 square; (2) the edge-detection area must be specified as a square with edge lengths of 5 times the diameter of the femoral head, centered at the femoral head center; and (3) the femoral head center or acetabular center must be positioned as close to the center line of the X-ray beam as possible when scanning.
Affiliation(s)
- Jong Min Lee
- Department of BioMedical Engineering, School of BioMedical Science, Daegu Catholic University, Gyungbuk, South Korea
- Seung-Hoon Baek
- Department of Orthopedic Surgery, School of Medicine, Kyungpook National University, Kyungpook National University Hospital, Daegu, South Korea
- Yeon Soo Lee
- Department of BioMedical Engineering, School of BioMedical Science, Daegu Catholic University, Gyungbuk, South Korea
5.
Zhang Y, Qin H, Li P, Pei Y, Guo Y, Xu T, Zha H. Deformable registration of lateral cephalogram and cone-beam computed tomography image. Med Phys 2021; 48:6901-6915. PMID: 34496039; DOI: 10.1002/mp.15214.
Abstract
PURPOSE This study aimed to design and evaluate a novel method for the registration of 2D lateral cephalograms and 3D craniofacial cone-beam computed tomography (CBCT) images, providing patient-specific 3D structures from a 2D lateral cephalogram without additional radiation exposure. METHODS We developed a cross-modal deformable registration model based on a deep convolutional neural network. Our approach took advantage of a low-dimensional deformation-field encoding and an iterative feedback scheme to infer coarse-to-fine volumetric deformations. In particular, we constructed a statistical subspace of deformation fields and parameterized the nonlinear mapping from an image pair, consisting of the target 2D lateral cephalogram and the reference volumetric CBCT, to a latent encoding of the deformation field. Instead of one-shot registration by the learned mapping function, a feedback scheme was introduced to progressively update the reference volumetric image and to infer coarse-to-fine deformation fields, accounting for the shape variations of anatomical structures. A total of 220 clinically obtained CBCTs were used to train and validate the proposed model, among which 120 CBCTs were used to generate a training dataset of 24k paired synthetic lateral cephalograms and CBCTs. The proposed approach was evaluated on the deformable 2D-3D registration of clinically obtained lateral cephalograms and CBCTs from growing and adult orthodontic patients. RESULTS Strong structural consistency was observed between the deformed CBCT and the target lateral cephalogram in all criteria. The proposed method achieved state-of-the-art performance, with mean contour deviations of 0.41 ± 0.12 mm on the anterior cranial base, 0.48 ± 0.17 mm on the mandible, and 0.35 ± 0.08 mm on the maxilla.
The mean surface mesh deviation ranged from 0.78 to 0.97 mm on various craniofacial structures, and the landmark registration errors (LREs) ranged from 0.83 to 1.24 mm on the growing datasets with respect to 14 landmarks. The proposed iterative feedback scheme handled structural details and improved the registration. The resultant deformed volumetric image was consistent with the target lateral cephalogram in both the 2D projective planes and 3D volumetric space across the multicategory craniofacial structures. CONCLUSIONS The results suggest that the deep learning-based 2D-3D registration model enables the deformable alignment of 2D lateral cephalograms and CBCTs and estimates patient-specific 3D craniofacial structures.
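The iterative feedback scheme described above can be sketched as a simple loop in which each pass predicts a low-dimensional deformation code from the (target, current reference) pair, decodes it to a dense field, warps the reference, and feeds the result back in. All four callables below are hypothetical stand-ins for the paper's network components:

```python
def iterative_registration(target_2d, reference_3d,
                           predict_code, decode, warp, n_iters=3):
    """Coarse-to-fine feedback registration loop (schematic).

    predict_code : maps (target image, current reference) -> latent code
    decode       : maps latent code -> dense deformation field
    warp         : applies a deformation field to the moving image
    """
    moving = reference_3d
    for _ in range(n_iters):
        code = predict_code(target_2d, moving)   # infer residual deformation
        field = decode(code)                     # expand to a dense field
        moving = warp(moving, field)             # update reference for next pass
    return moving
```

With stand-ins that each close half the remaining gap, the loop visibly converges toward the target over successive passes, which is the intuition behind replacing one-shot regression with feedback.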
Affiliation(s)
- Yungeng Zhang
- Key Laboratory of Machine Perception (MOE), Department of Machine Intelligence, Peking University, Beijing, China
- Haifang Qin
- Key Laboratory of Machine Perception (MOE), Department of Machine Intelligence, Peking University, Beijing, China
- Peixin Li
- Key Laboratory of Machine Perception (MOE), Department of Machine Intelligence, Peking University, Beijing, China
- Yuru Pei
- Key Laboratory of Machine Perception (MOE), Department of Machine Intelligence, Peking University, Beijing, China
- Yuke Guo
- Luoyang Institute of Science and Technology, Luoyang, China
- Tianmin Xu
- School of Stomatology, Stomatology Hospital, Peking University, Beijing, China
- Hongbin Zha
- Key Laboratory of Machine Perception (MOE), Department of Machine Intelligence, Peking University, Beijing, China
6.
Postolka B, List R, Thelen B, Schütz P, Taylor WR, Zheng G. Evaluation of an intensity-based algorithm for 2D/3D registration of natural knee videofluoroscopy data. Med Eng Phys 2020; 77:107-113. PMID: 31980316; DOI: 10.1016/j.medengphy.2020.01.002.
Abstract
The accurate quantification of in-vivo tibio-femoral kinematics is essential for understanding joint functionality, but determination of the 3D pose of bones from 2D single-plane fluoroscopic images remains challenging. We aimed to evaluate the accuracy, reliability and repeatability of an intensity-based 2D/3D registration algorithm. The accuracy was evaluated using fluoroscopic images of 2 radiopaque bones in 18 different poses, compared against a gold-standard fiducial calibration device. In addition, 3 natural femora and 3 natural tibiae were used to examine registration reliability and repeatability. Both manual fitting and intensity-based registration exhibited a mean absolute error of <1 mm in-plane. Overall, intensity-based registration of the femoral bone model revealed significantly higher translational and rotational errors than manual fitting, while no statistical differences (except for y-axis translation) were found for the tibial bone model. The repeatability of 108 intensity-based registrations showed mean in-plane standard deviations of 0.23-0.56 mm, but out-of-plane position repeatability was lower (mean SD: femur 7.98 mm, tibia 6.96 mm). SDs for rotations averaged 0.77-2.52°. While the algorithm registered some images extremely well, other images clearly required manual intervention. When the algorithm registered the bones repeatably, it was also accurate, suggesting an approach that includes manual intervention could become practical for efficient and accurate registration.
Affiliation(s)
- Barbara Postolka
- ETH Zürich, Institute for Biomechanics, Leopold-Ruzicka-Weg 4, 8093 Zürich, Switzerland
- Renate List
- ETH Zürich, Institute for Biomechanics, Leopold-Ruzicka-Weg 4, 8093 Zürich, Switzerland
- Benedikt Thelen
- University of Berne, Institute for Surgical Technology & Biomechanics, Stauffacherstrasse 78, 3014 Bern, Switzerland
- Pascal Schütz
- ETH Zürich, Institute for Biomechanics, Leopold-Ruzicka-Weg 4, 8093 Zürich, Switzerland
- William R Taylor
- ETH Zürich, Institute for Biomechanics, Leopold-Ruzicka-Weg 4, 8093 Zürich, Switzerland
- Guoyan Zheng
- University of Berne, Institute for Surgical Technology & Biomechanics, Stauffacherstrasse 78, 3014 Bern, Switzerland
7.
Reyneke CJF, Luthi M, Burdin V, Douglas TS, Vetter T, Mutsvangwa TEM. Review of 2-D/3-D Reconstruction Using Statistical Shape and Intensity Models and X-Ray Image Synthesis: Toward a Unified Framework. IEEE Rev Biomed Eng 2018; 12:269-286. PMID: 30334808; DOI: 10.1109/rbme.2018.2876450.
Abstract
Patient-specific three-dimensional (3-D) bone models are useful for a number of clinical applications such as surgery planning, postoperative evaluation, as well as implant and prosthesis design. Two-dimensional-to-3-D (2-D/3-D) reconstruction, also known as model-to-modality or atlas-based 2-D/3-D registration, provides a means of obtaining a 3-D model of a patient's bones from their 2-D radiographs when 3-D imaging modalities are not available. The preferred approach for estimating both shape and density information (that would be present in a patient's computed tomography data) for 2-D/3-D reconstruction makes use of digitally reconstructed radiographs and deformable models in an iterative, non-rigid, intensity-based approach. Based on a large number of state-of-the-art 2-D/3-D bone reconstruction methods, a unified mathematical formulation of the problem is proposed in a common conceptual framework, using unambiguous terminology. In addition, shortcomings, recent adaptations, and persisting challenges are discussed along with insights for future research.
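Digitally reconstructed radiographs (DRRs), central to the intensity-based approaches reviewed above, are synthetic projections of a volume onto an image plane. A deliberately simplified orthographic ray-sum sketch follows; real DRR renderers cast perspective rays through a CT volume and apply attenuation models, but an axis-aligned sum illustrates the principle:

```python
def drr_ray_sum(volume):
    """Orthographic DRR: sum attenuation values along the viewing (z) axis.

    `volume` is a nested list indexed as [z][y][x]; the result is a 2D
    image indexed as [y][x] whose pixels are the line integrals of the
    rays perpendicular to the image plane.
    """
    nz, ny, nx = len(volume), len(volume[0]), len(volume[0][0])
    return [[sum(volume[z][y][x] for z in range(nz)) for x in range(nx)]
            for y in range(ny)]
```

In a 2-D/3-D reconstruction loop, such a projection of the current model estimate is compared against the patient's radiograph, and the model parameters are updated to reduce the dissimilarity.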
8.
Abstract
Computer-assisted orthopedic surgery (CAOS) was recently introduced, developed, and implemented in musculoskeletal tumor surgery to enhance surgical precision in resecting malignant and benign tumors. The origins of computer-assisted surgery lie in other subspecialties, including maxillofacial surgery, spine surgery, and arthroplasty. Early studies have shown that CAOS can also be used safely for bone tumor resection surgery, and additional technological improvements may allow its use in soft tissue tumor surgery. It has the potential to improve surgical precision and accuracy, but more study is needed to evaluate clinical efficacy and long-term results.
Affiliation(s)
- Robert L Satcher
- Department of Orthopaedic Oncology, MD Anderson Cancer Center, 1400 Pressler Street, Unit 1448, Houston, TX 77030, USA
9.
Zvonarev PS, Farrell TJ, Hunter R, Wierzbicki M, Hayward JE, Sur RK. 2D/3D registration algorithm for lung brachytherapy. Med Phys 2013; 40:021913. DOI: 10.1118/1.4788663.
10.
Mouse atlas registration with non-tomographic imaging modalities-a pilot study based on simulation. Mol Imaging Biol 2012; 14:408-19. PMID: 21983855; DOI: 10.1007/s11307-011-0519-x.
Abstract
PURPOSE This study investigates methodologies for the estimation of small animal anatomy from non-tomographic modalities, such as planar X-ray projections, optical cameras, and surface scanners. The key goal is to register a digital mouse atlas to a combination of non-tomographic modalities, in order to provide organ-level anatomical references of small animals in 3D. PROCEDURES A 2D/3D registration method was developed to register the 3D atlas to the combination of non-tomographic imaging modalities. Eleven combinations of three non-tomographic imaging modalities were simulated, and the registration accuracy of each combination was evaluated. RESULTS Comparing the 11 combinations, the top-view X-ray projection combined with the side-view optical camera yielded the best overall registration accuracy of all organs. The use of a surface scanner improved the registration accuracy of skin, spleen, and kidneys. CONCLUSIONS The methodologies and evaluation presented in this study should provide helpful information for designing preclinical atlas-based anatomical data acquisition systems.
11.
Abstract
This paper presents a new approach for reconstructing a patient-specific shape model and internal relative intensity distribution of the proximal femur from a limited number (e.g., 2) of calibrated C-arm images or X-ray radiographs. Our approach uses independent shape and appearance models that are learned from a set of training data to encode the a priori information about the proximal femur. An intensity-based non-rigid 2D-3D registration algorithm is then proposed to deformably fit the learned models to the input images. The fitting is conducted iteratively by minimizing the dissimilarity between the input images and the associated digitally reconstructed radiographs of the learned models together with regularization terms encoding the strain energy of the forward deformation and the smoothness of the inverse deformation. Comprehensive experiments conducted on images of cadaveric femurs and on clinical datasets demonstrate the efficacy of the present approach.
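Schematically, and using assumed notation rather than the paper's own symbols, the model fitting described above minimizes an image dissimilarity plus the two stated regularizers:

```latex
\min_{\phi}\; \mathcal{D}\!\left(I,\, \mathrm{DRR}\!\left(M \circ \phi\right)\right)
\;+\; \lambda_{1}\, E_{\mathrm{strain}}(\phi)
\;+\; \lambda_{2}\, S\!\left(\phi^{-1}\right)
```

where $I$ denotes the input image(s), $M$ the learned shape/appearance model, $\phi$ the forward deformation, $\mathcal{D}$ the dissimilarity between the input images and the model's digitally reconstructed radiographs, $E_{\mathrm{strain}}$ the strain energy of the forward deformation, $S$ the smoothness of the inverse deformation, and $\lambda_{1}, \lambda_{2}$ weighting terms (the symbols are illustrative, not taken from the paper).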