26. de Oliveira ADSB, Leonel LCPC, LaHood ER, Hallak H, Link MJ, Maleszewski JJ, Pinheiro-Neto CD, Morris JM, Peris-Celda M. Foundations and guidelines for high-quality three-dimensional models using photogrammetry: A technical note on the future of neuroanatomy education. Anat Sci Educ 2023; 16:870-883. PMID: 36934316. DOI: 10.1002/ase.2274.
Abstract
Hands-on dissection of cadaveric tissue for neuroanatomical education is not readily available in many educational institutions due to financial, safety, and ethical factors. Supplementary pedagogical tools, such as 3D models of anatomical specimens acquired with photogrammetry, offer an efficient alternative for democratizing 3D anatomical data. The aim of this study was to describe a technical guideline for acquiring realistic 3D anatomic models with photogrammetry and thereby improve the teaching and learning of neuroanatomy. Seven cadaveric specimens of different sizes, tissue types, and textures were used to demonstrate step-by-step instructions for specimen preparation, photogrammetry setup, post-processing, and display of the 3D model. The photogrammetry setup consists of three cameras arranged vertically, facing the specimen to be scanned. Producing high-quality 3D models and optimal images requires careful, often challenging adjustment of the specimen's position within the scanner, as well as of the turntable, custom specimen holders, cameras, lighting, and computer hardware and software. MeshLab® software was used for editing the 3D models before exporting them to the MedReality® (Thyng, Chicago, IL) and SketchFab® (Epic, Cary, NC) platforms. Both allow manipulation of the models at various angles and magnifications and are easily accessed on mobile, immersive, and personal computer devices, free of charge for viewers. Photogrammetry scans offer a 360° view of the 3D models, ubiquitously accessible on any device regardless of operating system, and should be considered a tool for optimizing and democratizing the teaching of neuroanatomy.
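The three-camera turntable arrangement described above fixes the photograph count per specimen once a turntable step is chosen; a trivial sketch of that arithmetic (the 10° step is an assumed illustrative value, not one stated in the abstract):

```python
import math

def capture_plan(n_cameras: int, step_deg: float):
    """Turntable stops per full revolution and total photographs captured
    by a vertical array of cameras (one photo per camera per stop)."""
    if not 0 < step_deg <= 360:
        raise ValueError("step_deg must be in (0, 360]")
    stops = math.ceil(360 / step_deg)
    return stops, stops * n_cameras

stops, photos = capture_plan(n_cameras=3, step_deg=10)
print(stops, photos)  # 36 108
```

Halving the angular step doubles the capture time but also doubles the image overlap available to the reconstruction.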
27. Bertazzo TL, D'Ornellas MC. Protocol for capturing 3D facial meshes for rhinoseptoplasty planning. Braz J Otorhinolaryngol 2023; 89:101289. PMID: 37467657. PMCID: PMC10372377. DOI: 10.1016/j.bjorl.2023.101289.
Abstract
OBJECTIVES To present and execute a protocol for capturing 3D facial images with photogrammetry using the open-access software Blender and its add-on OrtogOnBlender (OOB), and to evaluate the compatibility of the generated 3D meshes with computed tomography (CT) of the sinuses. METHODS Individuals >18 years old, candidates for rhinoseptoplasty in a tertiary hospital, underwent a photographic session following the standardized protocol. In the session, divided into 3 phases, sequential photos were taken for photogrammetric processing in OOB and production of 3D meshes of the face. The photogrammetry reconstructions were compared with the reference mesh of the soft tissue surface from the sinus CT scan to assess compatibility. RESULTS Twenty-one patients were included, 67% female. Three photogrammetry meshes and one CT reference mesh were generated, which demonstrated matching compatibility, as most of the mean distances between cloud points were <1.48 mm. Phase 3 of the session, with the highest number of photos (54.36 ± 15.05), generated the most satisfactory mesh with the best resolution. CONCLUSIONS The proposed protocol is reproducible and feasible in clinical practice and generated satisfactory 3D meshes of the face, making it a potential tool for surgical planning and comparison of results. Validation of this method is still required before photogrammetry can be implemented for 3D anthropometry. LEVEL OF EVIDENCE: 3. OCEBM Levels of Evidence Working Group. "The Oxford 2011 Levels of Evidence". Oxford Centre for Evidence-Based Medicine. http://www.cebm.net/index.aspx?o=5653.
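The cloud-to-cloud comparison reported above (mean distances between cloud points) reduces to a nearest-neighbour distance between two point sets. A minimal brute-force sketch with NumPy, run on toy data rather than the study's meshes:

```python
import numpy as np

def mean_cloud_distance(a: np.ndarray, b: np.ndarray) -> float:
    """Mean nearest-neighbour distance from each point of cloud `a` to
    cloud `b` (brute force; adequate for small toy clouds)."""
    # pairwise Euclidean distances between every point of a and every point of b
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=2)
    return float(d.min(axis=1).mean())

a = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [0.0, 1.0, 0.0]])
b = a + np.array([0.5, 0.0, 0.0])  # b is a copy of a shifted 0.5 along x
print(mean_cloud_distance(a, b))   # 0.5
```

Real mesh-comparison tools use spatial indices (k-d trees) for the nearest-neighbour step, but the distance being averaged is the same.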
28. Hussein MO. Photogrammetry technology in implant dentistry: A systematic review. J Prosthet Dent 2023; 130:318-326. PMID: 34801243. DOI: 10.1016/j.prosdent.2021.09.015.
Abstract
STATEMENT OF PROBLEM Photogrammetry technology may be useful in implant dentistry, but a systematic review is lacking and is indicated before routine use in clinical practice. PURPOSE The purpose of this systematic review was to assess the role of photogrammetry technology in implant dentistry and determine its validity as an accurate tool with clinical applications. MATERIAL AND METHODS Four major databases, PubMed MEDLINE, Google Scholar, Scopus, and Web of Science, were searched for articles published from January 2011 to February 2021 based on custom criteria; the search was augmented by a manual search. After screening of the collected articles, data including study design and setting, type of application, digitizer used, reference body, method of evaluation, and overall outcomes were extracted. RESULTS Twenty articles were included based on the selection criteria. Most of the articles confirmed that the use of photogrammetry is promising as an implant coordinate transfer system. However, few articles showed its use for 3-dimensional scanning, which might require more development. CONCLUSIONS Initial reports considered photogrammetry a valid and reliable clinical tool in implant dentistry. More studies to develop the technology and to assess the results with evidence-based research are recommended to enhance its application in different clinical situations.
29. Krijt LL, Kapetanović A, Sijmons WJL, Bruggink R, Baan F, Bergé SJ, Noverraz RRM, Xi T, Schols JGJH. What is the impact of miniscrew-assisted rapid palatal expansion on the midfacial soft tissues? A prospective three-dimensional stereophotogrammetry study. Clin Oral Investig 2023; 27:5343-5351. PMID: 37507601. PMCID: PMC10492756. DOI: 10.1007/s00784-023-05154-4.
Abstract
OBJECTIVES To evaluate midfacial soft tissue changes in patients treated with miniscrew-assisted rapid palatal expansion (MARPE). MATERIALS AND METHODS 3D facial images and intra-oral scans (IOS) were obtained before expansion (T0), immediately after completion of expansion (T1), and 1 year after expansion (T2). The 3D images were superimposed and two 3D distance maps were generated to measure the midfacial soft tissue changes: immediate effects between T0 and T1, and overall effects between T0 and T2. Changes in alar width were also measured, and dental expansion was measured as the interpremolar width (IPW) on IOS. RESULTS Twenty-nine patients (22 women, 7 men; mean age 25.9 years) were enrolled. The soft tissue in the regions of the nose, left of the philtrum, right of the philtrum, and upper lip tubercle demonstrated statistically significant anterior movement of 0.30 mm, 0.93 mm, 0.74 mm, and 0.81 mm, respectively (p < 0.01), immediately after expansion (T0-T1). These changes persisted as an overall effect (T0-T2). The alar width initially increased by 1.59 mm and then decreased by 0.08 mm after 1 year, but this effect was not significant. The IPW increased by 4.58 mm and remained stable 1 year later. There was no significant correlation between the increases in IPW and alar width (r = 0.35, p = 0.06). CONCLUSIONS Our findings indicate that MARPE results in significant but small soft tissue changes in the peri-oral and nasal regions; however, the clinical importance of these findings is limited. CLINICAL RELEVANCE MARPE is an effective treatment modality for expanding the maxilla, incurring only minimal and clinically insignificant changes to the midfacial soft tissues.
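The IPW-versus-alar-width association above (r = 0.35) is a plain Pearson correlation; a minimal NumPy sketch on hypothetical per-patient values (the study's raw measurements are not reported in the abstract):

```python
import numpy as np

def pearson_r(x, y) -> float:
    """Plain Pearson correlation coefficient (no p-value)."""
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    xc, yc = x - x.mean(), y - y.mean()
    return float((xc @ yc) / np.sqrt((xc @ xc) * (yc @ yc)))

# hypothetical per-patient gains in mm, for illustration only
ipw_gain  = [4.1, 4.6, 5.0, 4.3, 4.9]
alar_gain = [1.2, 1.8, 1.5, 1.4, 1.9]
print(round(pearson_r(ipw_gain, alar_gain), 2))
```

The p-value reported in the study additionally requires a significance test (e.g. a t-test on r), which statistical packages provide.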
30. Krogager ME, Fugleholm K, Mathiesen TI, Spiriev T. Simplified Easy-Accessible Smartphone-Based Photogrammetry for 3-Dimensional Anatomy Presentation Exemplified With a Photorealistic Cadaver-Based Model of the Intracranial and Extracranial Course of the Facial Nerve. Oper Neurosurg (Hagerstown) 2023; 25:e71-e77. PMID: 37321193. DOI: 10.1227/ons.0000000000000748.
Abstract
BACKGROUND AND OBJECTIVES Smartphone-based photogrammetry (SMPhP) was recently presented as a practical and simple algorithm for creating photorealistic 3-dimensional (3D) models that benefit from volumetric presentation of real anatomic dissections. Consequently, there is a need to adapt the technique for realistic depiction of layered anatomic structures, such as the course of cranial nerves and deep intracranial structures, and its feasibility must be tested empirically. This study sought to adapt and test the technique for visualizing the combined intracranial and extracranial course of the facial nerve's complex anatomy and to analyze feasibility and limitations. METHODS We dissected 1 latex-injected cadaver head to depict the facial nerve from the meatal to the extracranial portion. A smartphone camera alone was used to photograph the specimen, and dynamic lighting was applied to improve presentation of deep anatomic structures. Three-dimensional models were created with a cloud-based photogrammetry application. RESULTS Four 3D models were generated. Two showed the extracranial portions of the facial nerve before and after removal of the parotid gland; 1 showed the facial nerve in the fallopian canal after mastoidectomy, and 1 showed the intratemporal segments. Relevant anatomic structures were annotated through a web-viewer platform. The photographic quality of the 3D models provided sufficient resolution for imaging of the extracranial and mastoid portions of the facial nerve, whereas only imaging of the meatal segment lacked sufficient precision and resolution. CONCLUSION A simple and accessible SMPhP algorithm allows 3D visualization of complex intracranial and extracranial neuroanatomy with sufficient detail to realistically depict superficial and deeper anatomic structures.
31. El Menshawy A, Omar W, El Adawy S. Preservation of heritage buildings in Alexandria, Egypt: an application of heritage digitisation process phases and new documentation methods. F1000Res 2023; 11:1044. PMID: 36999087. PMCID: PMC10043631. DOI: 10.12688/f1000research.123158.2.
Abstract
Background: Throughout the history of the city, the architecture of Alexandria, Egypt, has been in contact with world cultures, especially those of the Mediterranean sphere. Alexandria is rich in cultural features dating back seven thousand years, yet the heritage value of the city has decreased since the beginning of the third millennium CE because no suitable digital documentation system exists for these more recent assets. The development of a new technique for preserving heritage buildings is therefore required; image-based techniques, for example, can gather data using photography, panoramic photography, and close-range photogrammetry. In this research, we primarily seek to implement Heritage Digitisation Process Phases (HDPP), introducing both the Building Information Modelling (BIM) environment and point clouds to achieve a Historic Building Information Modelling (HBIM) model, and to establish new documentation methods in architectural conservation and built-heritage preservation, namely Virtual Reality (VR) and Website Heritage Documentation (WHD). Methods: The methodology is designed to preserve and manage cultural heritage using HDPP for the promotion of heritage building preservation in Alexandria. Results: The application of HDPP led to the creation of a digital database of the Société Immobilière building, which was chosen as the case study for this research. Conclusions: Implementation of HDPP and use of the new documentation methods (VR and WHD) creates a digital path that helps strengthen the city's image, connects the place to its users, and provides recreational spaces for communicating and exploring the city's architectural history.
32. Bartella AK, Laser J, Kamal M, Krause M, Neuhaus M, Pausch NC, Sander AK, Lethaus B, Zimmerer R. Accuracy of low-cost alternative facial scanners: a prospective cohort study. Oral Maxillofac Surg 2023; 27:33-41. PMID: 35249150. PMCID: PMC9938030. DOI: 10.1007/s10006-022-01050-5.
Abstract
INTRODUCTION Three-dimensional facial scans have recently begun to play an increasingly important role in the peri-therapeutic management of oral and maxillofacial and head and neck surgery cases. Face scan images can be generated by optical facial scanners utilizing line-laser, stereophotography, or structured-light modalities, as well as from volumetric data, for example from cone beam computed tomography (CBCT). This study aimed to evaluate whether two low-cost procedures for creating three-dimensional face scan images were capable of producing sufficiently accurate data sets for clinical analysis. MATERIALS AND METHODS Fifty healthy volunteers were included in the study. Two test objects with defined dimensions (Lego bricks) were attached to the forehead and the left cheek of each volunteer. Facial anthropometric values (i.e., the distances between the medial canthi, the lateral canthi, the nasal alae, and the angles of the mouth) were first measured manually. Subsequently, face scans were performed with a smart device and with manual photogrammetry, and the values obtained were compared with the manually measured data sets. RESULTS The anthropometric distances deviated, on average, 2.17 mm from the manual measurements (smart-device scanning deviation 3.01 mm, photogrammetry deviation 1.34 mm), with seven of the eight deviations being statistically significant. Of the 32 Lego-brick angles measured, 19 differed significantly from the original 90° angles; the average deviation was 6.5° (smart-device scanning deviation 10.1°, photogrammetry deviation 2.8°). CONCLUSION Manual photogrammetry demonstrated greater accuracy when creating three-dimensional face scan images; however, smart devices are more user-friendly. Dental professionals should monitor technical improvements in cameras and smart devices carefully when choosing an adequate technique for 3D scanning.
33. Leménager M, Burkiewicz J, Schoen DJ, Joly S. Studying flowers in 3D using photogrammetry. New Phytol 2023; 237:1922-1933. PMID: 36263728. DOI: 10.1111/nph.18553.
Abstract
Flowers are intricate and integrated three-dimensional (3D) structures predominantly studied in 2D due to the difficulty in quantitatively characterising their morphology in 3D. Given the recent development of analytical methods for high-dimensional data, the reconstruction of flower models in three dimensions represents the limiting factor to studying flowers in 3D. We developed a floral photogrammetry protocol to reconstruct 3D models of flowers based on images taken with a digital single-lens reflex camera, a turntable and a portable lightbox. We demonstrate that photogrammetry allows a rapid and accurate reconstruction of 3D models of flowers from 2D images. It can reconstruct all visible parts of flowers and has the advantage of keeping colour information. We illustrated its use by studying the shape and colour of 18 Gesneriaceae species. Photogrammetry is an affordable alternative to micro-computed tomography (micro-CT) that requires minimal investment and equipment, allowing it to be used directly in the field. It has the potential to stimulate research on the evolution and ecology of flowers by providing a simple way to access 3D morphological data from a variety of flower types.
34. Al-Rudainy D, Adel Al-Lami H, Yang L. Validity and reliability of three-dimensional modeling of orthodontic dental casts using smartphone-based photogrammetric technology. J World Fed Orthod 2023; 12:9-14. PMID: 36528481. DOI: 10.1016/j.ejwf.2022.11.002.
Abstract
BACKGROUND The development of intraoral scanning technology has effectively enhanced the digital documentation of orthodontic dental casts; however, the expense of this technology is its main limitation. The purpose of the present study was to assess the validity and reliability of virtual three-dimensional (3D) models of orthodontic dental casts constructed using smartphone-based 3D photogrammetry. METHODS A smartphone was used to capture a set of two-dimensional images of 30 orthodontic dental casts. The captured images were processed to construct 3D virtual models using the Agisoft and 3DF Zephyr software programs. To evaluate the accuracy of the virtual 3D models obtained by the two programs, the models were compared with cone-beam computed tomography scans of the 30 dental casts. Colored maps were used to express the absolute distances between the points of each pair of compared surfaces; the means of the 100%, 95th-percentile, and 90th-percentile absolute distances were then calculated. A Wilcoxon signed-rank test was applied to detect any significant differences. RESULTS The differences between the constructed 3D images and the cone-beam computed tomography scans were not statistically significant and were clinically acceptable. The deviations were mostly in the interproximal areas and in the occlusal details (sharp cusps and deep pits and fissures). CONCLUSIONS This study found that smartphone-based stereophotogrammetry is an accurate and reliable method for 3D modeling of orthodontic dental casts, with errors less than the clinically detectable threshold of 0.5 mm. Smartphone photogrammetry succeeded in presenting occlusal details, but it was difficult to accurately reproduce interproximal areas.
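The 100% / 95th / 90th summaries above are trimmed means of the absolute surface distances: the mean is recomputed after discarding the worst 0%, 5%, and 10% of distances. A minimal NumPy sketch on synthetic data (the study's distance maps are not available):

```python
import numpy as np

def trimmed_abs_means(distances, levels=(100, 95, 90)):
    """Mean absolute surface distance after trimming to each percentile,
    mirroring the 100% / 95th / 90th summaries described above."""
    d = np.abs(np.asarray(distances, dtype=float))
    result = {}
    for p in levels:
        cut = np.percentile(d, p)        # keep distances up to the p-th percentile
        result[p] = float(d[d <= cut].mean())
    return result

rng = np.random.default_rng(0)
d = rng.normal(0.0, 0.3, size=5000)      # synthetic signed distances, mm
means = trimmed_abs_means(d)
print({p: round(m, 3) for p, m in means.items()})
```

Trimming makes the summary robust to the few large outliers that typically occur in hard-to-reconstruct regions such as interproximal areas.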
35. de Albuquerque PMNM, de Oliveira DA, do Nascimento Alves LI, da Silva Alves Gomes VM, Bezerra LMR, de Souza Melo TM, de Alencar GG, da Silva Tenório A, de Siqueira GR. The accuracy of computerized biophotogrammetry in diagnosing changes in the cervical spine and its reliability for the cervical lordosis angle. J Back Musculoskelet Rehabil 2023; 36:187-198. PMID: 35964169. DOI: 10.3233/bmr-210375.
Abstract
BACKGROUND Accuracy studies of biophotogrammetry protocols require standardization similar to that of radiography. OBJECTIVE To estimate the diagnostic accuracy of a biophotogrammetric assessment protocol for cervical hyperlordosis compared with radiography, and its intra- and inter-examiner reliability for measuring the cervical lordosis angle. METHODS A diagnostic-accuracy study in women complaining of cervical pain. Two photos were taken using the CorelDraw biophotogrammetric protocol and one radiograph using the Cobb C1-C7 method. Intra- and inter-examiner reliability was calculated using the Kappa index and the intraclass correlation coefficient (ICC); a Bland-Altman plot and a ROC curve were also produced. RESULTS The sample consisted of 19 women. The accuracy of biophotogrammetry was 94.73%, and the reliability between biophotogrammetry and radiography showed an ICC of 0.84 and a Kappa of 0.87. Excellent intra-examiner (ICC = 0.94) and inter-examiner (ICC = 0.86) reliability of the biophotogrammetry was confirmed. The area under the ROC curve was 93.5%. The Bland-Altman plot indicated differences between the two instruments close to the mean (1.5°). CONCLUSION The biophotogrammetric protocol proved accurate in diagnosing cervical hyperlordosis, with excellent reliability between the biophotogrammetric and radiographic assessments. It also demonstrated excellent intra- and inter-examiner reliability in measuring the cervical lordosis angle.
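The Bland-Altman analysis mentioned above reduces to the mean of the paired differences (the bias) and the bias ± 1.96 standard deviations (the 95% limits of agreement). A minimal sketch on hypothetical paired lordosis angles (the study's raw angles are not reported in the abstract):

```python
import numpy as np

def bland_altman(method_a, method_b):
    """Bias and 95% limits of agreement between two paired measurement methods."""
    diff = np.asarray(method_a, dtype=float) - np.asarray(method_b, dtype=float)
    bias = float(diff.mean())
    sd = float(diff.std(ddof=1))       # sample standard deviation of the differences
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)

# hypothetical paired cervical lordosis angles in degrees, illustration only
photo = [35.2, 40.1, 28.9, 33.3, 38.7]
xray  = [33.5, 39.0, 27.8, 32.0, 36.9]
bias, (lo, hi) = bland_altman(photo, xray)
print(round(bias, 2), round(lo, 2), round(hi, 2))
```

The plot itself scatters each pair's difference against its mean, with horizontal lines at the bias and the two limits.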
36. Jasińska A, Pyka K, Pastucha E, Midtiby HS. A Simple Way to Reduce 3D Model Deformation in Smartphone Photogrammetry. Sensors (Basel) 2023; 23:728. PMID: 36679525. PMCID: PMC9860635. DOI: 10.3390/s23020728.
Abstract
Recently, the term smartphone photogrammetry has gained popularity, suggesting that photogrammetry may become a simple measurement tool available to virtually every smartphone user. This research was undertaken to clarify whether it is appropriate to use the Structure from Motion-Multi-View Stereo (SfM-MVS) procedure with self-calibration, as is done in Uncrewed Aerial Vehicle photogrammetry. First, the geometric stability of smartphone cameras was tested: fourteen smartphones were calibrated on a checkerboard test field, and the process was repeated multiple times. Two observations emerged: (1) most smartphone cameras have lower stability of the internal orientation parameters than a Digital Single-Lens Reflex (DSLR) camera, and (2) the principal distance and the position of the principal point change constantly. Then, based on images from two selected smartphones, 3D models of a small sculpture were developed using the SfM-MVS method, in self-calibration and pre-calibration variants. Comparing the resulting models with a reference DSLR-created model showed that introducing calibration obtained on the test field, instead of self-calibration, improves the geometry of the 3D models; in particular, deformations of local concavities and convexities decreased. In conclusion, there is real potential in smartphone photogrammetry, but it also has its limits.
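The reported instability of the principal distance matters because even a small focal-length drift shifts every off-centre projection. A minimal pinhole-projection sketch with illustrative intrinsics (the values below are assumptions, not the paper's calibration results):

```python
import numpy as np

def project(K, X):
    """Pinhole projection of camera-frame 3D points X (N x 3) into pixels."""
    x = (K @ X.T).T
    return x[:, :2] / x[:, 2:3]

# illustrative intrinsics: 3000 px focal length, principal point at the
# centre of a 4000 x 3000 sensor
K_cal = np.array([[3000.0, 0.0, 2000.0],
                  [0.0, 3000.0, 1500.0],
                  [0.0, 0.0, 1.0]])
K_drift = K_cal.copy()
K_drift[0, 0] = K_drift[1, 1] = 3015.0   # +0.5% principal-distance drift

X = np.array([[0.1, 0.05, 1.0]])         # a point 1 m in front of the camera
shift = project(K_drift, X) - project(K_cal, X)
print(shift)  # pixel displacement caused by the drift alone
```

Self-calibration tries to absorb such drift per project; the paper's point is that a stable test-field calibration can do better when the camera's geometry is not stable.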
37. Verykokou S, Ioannidis C. An Overview on Image-Based and Scanner-Based 3D Modeling Technologies. Sensors (Basel) 2023; 23:596. PMID: 36679393. PMCID: PMC9861742. DOI: 10.3390/s23020596.
Abstract
Advances in the scientific fields of photogrammetry and computer vision have led to the development of automated multi-image methods that solve the problem of 3D reconstruction. Simultaneously, 3D scanners have become a common source of data acquisition for 3D modeling of real objects/scenes/human bodies. This article presents a comprehensive overview of different 3D modeling technologies that may be used to generate 3D reconstructions of outer or inner surfaces of different kinds of targets. In this context, it covers the topics of 3D modeling using images via different methods, it provides a detailed classification of 3D scanners by additionally presenting the basic operating principles of each type of scanner, and it discusses the problem of generating 3D models from scans. Finally, it outlines some applications of 3D modeling, beyond well-established topographic ones.
38. Beri A, Pisulkar SK, Bagde AD, Bansod A, Dahihandekar C, Paikrao B. Evaluation of accuracy of photogrammetry with 3D scanning and conventional impression method for craniomaxillofacial defects using a software analysis. Trials 2022; 23:1048. PMID: 36575547. PMCID: PMC9793656. DOI: 10.1186/s13063-022-07005-1.
Abstract
BACKGROUND Facial mutilation and deformities can be caused by cancer, tumours, injuries, infections, and inherited or acquired conditions, and have the potential to degrade quality of life by interfering with fundamental tasks such as communication, breathing, feeding, and aesthetics. Depending on the type of defect, producing maxillofacial prostheses for the rehabilitation of patients can be challenging and complex. The prosthesis replaces missing or damaged parts of the cranium and face, such as the nose, auricle, orbit, and surrounding tissues, as well as missing areas of soft and hard tissue, with the primary goal of increasing the patient's quality of life by restoring oral functions such as speech, swallowing, and mastication. Traditional maxillofacial impression and fabrication processes involve a number of complicated steps that are costly, time-consuming, and uncomfortable for the patient, and rely on the expertise of the maxillofacial team, dental clinicians, and the maxillofacial technician. The impression is the keystone of prosthesis creation, yet it is the most time-consuming and difficult chair-side operation in maxillofacial prosthesis manufacturing, as it requires prolonged interaction with the patient. The digital revolution is transforming prosthesis fabrication: digital technology allows more accurate impression data to be gathered in less time (3 to 5 min) than traditional methods, lowering patient anxiety, eliminating messy impression materials, and bypassing traditional gypsum model fabrication, thereby removing the disparity caused by dimensional distortion of the impression material and gypsum setting expansion. Traditional dental impression processes leave room for errors such as voids, air bubbles, or deformities, while current technology for prosthesis planning has emerged as an alternative means of improving patient acceptability and satisfaction, both because the end result is a precisely fitted restoration and because fewer chair-side adjustments are required. The most frequent approach to creating 3D virtual models is 3D scanning, in which the subject is scanned in three dimensions and the point cloud data is used to create a virtual digital model. METHODS A hospital-based randomised controlled trial, carried out at the Department of Prosthodontics, Sharad Pawar Dental College, Sawangi (Meghe), Wardha, a part of Datta Meghe Institute of Medical Sciences (Deemed University). A total of 45 patients will be selected from the outpatient department (OPD) of the Department of Prosthodontics. All patients will provide written consent before participation. METHODOLOGY (1) Patients will be screened and allocated in a randomised manner to three techniques: the conventional manual method, the photogrammetry method, and 3D scanning. (2) The impression of the defect will be recorded by each of the three methods. (3) The defect will be modelled in three ways: from the manual dimensions taken on the patient, from photographic images organised to laboratory standards, and from point cloud data plotted to generate a virtual 3D model. (4) For photogrammetric prosthesis design, photographs will be taken at multiple angles to build the 3D virtual design; with a minimum number of photographs, 3D modelling can be performed using freeware and a mould obtained. (5) CAD software will be used to design the prosthesis, and the final negative mould can be printed using additive manufacturing. (6) The moulds fabricated by all three methods will be analysed by software using reverse-engineering technology. Study design: Randomised controlled trial. Duration: 2 years. Sample size: 45 patients. DISCUSSION Salazar-Gamarra et al. (2016) described, as part of a method for manufacturing facial prostheses using a mobile device, free software, and a photo-capture protocol, how 2D captures of the anatomy of a patient with a facial defect were converted into a 3D model using monoscopic photogrammetry; the visual and technical integrity of the resulting digital models was assessed, and the approach was evaluated for technical and clinical value. Revilla-León et al. (2020) used a coordinate measuring machine to assess the accuracy of complete-arch implant impression processes utilising conventional, photogrammetry, and intraoral scanning methods. Cristache et al. (2021) provided an update on defect data acquisition, editing, and design using open-source and commercially available software in the digital maxillofacial prosthodontics workflow; their review covered randomised clinical trials, case reports, case series, technical notes, letters to the editor, and reviews written in English that included detailed information on data acquisition, data-processing software, and maxillofacial prosthetic design. TRIAL REGISTRATION CTRI/2022/08/044524. Registered on September 16, 2022.
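The protocol allocates 45 patients across three impression techniques in a randomised manner; the abstract does not describe the actual randomisation scheme, so the following is only a generic balanced-allocation sketch:

```python
import random

ARMS = ("conventional", "photogrammetry", "3d_scanning")

def allocate(n_patients=45, arms=ARMS, seed=1):
    """Balanced random allocation of patients to the three impression
    techniques (a sketch; the trial's actual scheme is not stated)."""
    if n_patients % len(arms):
        raise ValueError("n_patients must divide evenly across arms")
    slots = list(arms) * (n_patients // len(arms))
    random.Random(seed).shuffle(slots)   # fixed seed for reproducibility
    return slots

alloc = allocate()
print({arm: alloc.count(arm) for arm in ARMS})  # 15 patients per arm
```

Registered trials typically use concealed, pre-generated allocation sequences rather than on-the-fly shuffling, but the balancing idea is the same.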
39. Lin KY, Tseng YH, Chiang KW. Interpretation and Transformation of Intrinsic Camera Parameters Used in Photogrammetry and Computer Vision. Sensors (Basel) 2022; 22:9602. PMID: 36559969. PMCID: PMC9787778. DOI: 10.3390/s22249602.
Abstract
The precision modelling of intrinsic camera geometry is a common issue in the fields of photogrammetry (PH) and computer vision (CV). However, in both fields, intrinsic camera geometry has been modelled differently, which has led researchers to adopt different definitions of intrinsic camera parameters (ICPs), including focal length, principal point, radial distortion, decentring distortion, affinity and shear. These ICPs are indispensable for vision-based measurements, and the differences between the definitions can confuse researchers from one field when using ICPs obtained from a camera calibration software package developed in the other field. This paper clarifies the ICP definitions used in each field and proposes an ICP transformation algorithm. The originality of this study lies in its use of least-squares adjustment, applying image points involving ICPs defined in PH and CV image frames to convert a complete set of ICPs. This ICP transformation method is more rigorous than the simplified formulas used in conventional methods, and selecting suitable image points can increase the accuracy of the generated adjustment model. In addition, the proposed ICP transformation method enables users to apply mixed software from the fields of PH and CV. To validate the transformation algorithm, two cameras with different view angles were calibrated using typical camera calibration software packages applied in each field to obtain ICPs. Experimental results demonstrate that our proposed transformation algorithm can be used to convert ICPs derived from different software packages. Both the PH-to-CV and CV-to-PH transformation processes were executed using complete mathematical camera models. We also compared the rectified images and distortion plots generated using different ICPs. Furthermore, by comparing our method with the state-of-the-art method, we confirm the performance improvement of ICP conversions between PH and CV models.
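The core idea — converting distortion coefficients between the PH convention (a polynomial in image-plane radius) and the CV convention (the same polynomial in normalised radius) by least-squares fitting over image points — can be sketched as below. The focal length, coefficient values, and radial-only model are illustrative assumptions, not the paper's full camera model:

```python
import numpy as np
from scipy.optimize import least_squares

# PH-style radial distortion operates on image coordinates measured from the
# principal point; CV-style operates on normalised coordinates (x / f).
# Fit CV coefficients that reproduce a PH-distorted point grid.
f = 2800.0                          # focal length in px (hypothetical)
k1_ph, k2_ph = 2.1e-9, -3.4e-16     # PH radial terms (per px^2, px^4)

# grid of undistorted image points centred on the principal point (px)
u, v = np.meshgrid(np.linspace(-2000, 2000, 21), np.linspace(-1500, 1500, 15))
pts = np.stack([u.ravel(), v.ravel()], axis=1)

r2 = (pts ** 2).sum(axis=1)
distorted = pts * (1 + k1_ph * r2 + k2_ph * r2 ** 2)[:, None]

def residual(k_cv):
    # CV model: same polynomial applied to normalised coordinates x / f
    xn = pts / f
    r2n = (xn ** 2).sum(axis=1)
    pred = xn * (1 + k_cv[0] * r2n + k_cv[1] * r2n ** 2)[:, None] * f
    return (pred - distorted).ravel()

fit = least_squares(residual, x0=[0.0, 0.0])
k1_cv, k2_cv = fit.x
```

In this radial-only case the conversion also has a closed form (k1_cv = k1_ph·f², k2_cv = k2_ph·f⁴); the point-based least-squares fit is what generalises to complete models with decentring, affinity, and shear, where no simple formula is convenient.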
|
40
|
Trojnacki M, Dąbek P, Jaroszek P. Analysis of the Influence of the Geometrical Parameters of the Body Scanner on the Accuracy of Reconstruction of the Human Figure Using the Photogrammetry Technique. SENSORS (BASEL, SWITZERLAND) 2022; 22:9181. [PMID: 36501882 PMCID: PMC9739902 DOI: 10.3390/s22239181] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 11/07/2022] [Revised: 11/21/2022] [Accepted: 11/23/2022] [Indexed: 06/17/2023]
Abstract
This article concerns research on the HUBO full-body scanner, including the analysis and selection of the scanner's geometrical parameters in order to obtain the highest possible accuracy in the reconstruction of a human figure. In the scanner version analyzed in this paper, smartphone cameras are used as sensors. In order to process the collected photos into a 3D model, the photogrammetry technique is applied. As part of the work, dependencies between the geometrical parameters of the scanner are derived, which makes it possible to significantly reduce the number of degrees of freedom in the selection of its geometrical parameters. Based on these dependencies, a numerical analysis is carried out, as a result of which the initial values of the geometrical parameters are pre-selected and the distribution of scanner cameras is visualized. As part of the experimental research, the influence of selected scanner parameters on the scanning accuracy is analyzed. For the experimental research, a specially prepared dummy was used instead of a real human participant, which ensured the constancy of the scanned object. The accuracy of the object reconstruction was assessed in relation to the reference 3D model obtained with a scanner of superior measurement uncertainty. On the basis of the conducted research, a method for the selection of the scanner's geometrical parameters was finally verified, leading to an arrangement of cameras around a human that guarantees high accuracy of the reconstruction. Additionally, to quantify the results, quality rates were used, taking into account not only the obtained measurement uncertainty of the scanner but also the processing time and the resulting efficiency.
|
41
|
Cao Y, Ding B, Chen J, Liu W, Guo P, Huang L, Yang J. Photometric-Stereo-Based Defect Detection System for Metal Parts. SENSORS (BASEL, SWITZERLAND) 2022; 22:s22218374. [PMID: 36366075 PMCID: PMC9655976 DOI: 10.3390/s22218374] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 10/06/2022] [Revised: 10/27/2022] [Accepted: 10/27/2022] [Indexed: 05/27/2023]
Abstract
Automated inspection technology based on computer vision is now widely used in the manufacturing industry with high speed and accuracy. However, metal parts frequently show high gloss or shadows on their surfaces, resulting in overexposure of the captured images. It is necessary to adjust the light direction and viewpoint to keep defects out of overexposed and shadowed areas; however, making these adjustments for the wide variety of part geometries is tedious. To address this problem, we design a photometric-stereo-based defect detection system (PSBDDS), which combines photometric stereo with defect detection to eliminate the interference of highlights and shadows. Based on the PSBDDS, we introduce a photometric-stereo-based defect detection framework, which takes images captured under multiple directional lights as input and obtains the normal map through the photometric stereo model. The detection model then uses the normal map as input to locate and classify defects. Existing learning-based photometric stereo methods and defect detection methods have achieved good performance in their respective fields. However, photometric stereo datasets and defect detection datasets are not sufficient for training and testing photometric-stereo-based defect detection methods; thus, we create a photometric stereo defect detection (PSDD) dataset using our PSBDDS to eliminate gaps between learning-based photometric stereo and defect detection methods. Furthermore, experimental results prove the effectiveness of the proposed PSBDDS and PSDD dataset.
|
42
|
El Menshawy A, Omar W, El Adawy S. Preservation of heritage buildings in Alexandria, Egypt: an application of heritage digitisation process phases and new documentation methods. F1000Res 2022; 11:1044. [PMID: 36999087 PMCID: PMC10043631 DOI: 10.12688/f1000research.123158.1] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Accepted: 08/24/2022] [Indexed: 11/20/2022] Open
Abstract
Background: Throughout the history of the city, the architecture of Alexandria, Egypt, has been in contact with world cultures, especially those of the Mediterranean sphere. Alexandria is rich with cultural features dating back seven thousand years. In Alexandria, the heritage value of the city has decreased because there is no suitable documentation system for these more recent assets. The development of a new technique for preserving heritage buildings is required. For example, image-based techniques can gather data using photography, panoramic photography, and close-range photogrammetry. In this research, we primarily seek to implement Heritage Digitisation Process Phases (HDPP) and establish new documentation methods in architectural conservation and built-heritage preservation, i.e., Virtual Reality (VR) and Website Heritage Documentation (WHD). Methods: The methodology is designed to preserve and manage cultural heritage using HDPP for the promotion of heritage building preservation in Alexandria. Results: The results show that the application of HDPP has led to the creation of a digital database about the Société Immobilière building, which was chosen as a case study for this research. Conclusions: Implementation of HDPP and usage of the new documentation methods, i.e., VR and WHD, creates a digital path that helps strengthen the city's image and connect the place to its users; recreational areas are created to communicate and explore the city's architectural history.
|
43
|
Dilian O, Kimmel R, Tezmah-Shahar R, Agmon M. Can We Quantify Aging-Associated Postural Changes Using Photogrammetry? A Systematic Review. SENSORS (BASEL, SWITZERLAND) 2022; 22:6640. [PMID: 36081099 PMCID: PMC9459795 DOI: 10.3390/s22176640] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 08/16/2022] [Revised: 08/28/2022] [Accepted: 08/30/2022] [Indexed: 06/15/2023]
Abstract
BACKGROUND Aging is widely known to be associated with changes in standing posture. Recent advancements in the field of computerized image processing have allowed for improved analyses of several health conditions using photographs. However, photogrammetry's potential for assessing aging-associated postural changes is yet unclear. Thus, the aim of this review is to evaluate the potential of photogrammetry in quantifying age-related postural changes. MATERIALS AND METHODS We searched the databases PubMed Central, Scopus, Embase, and SciELO from the beginning of records to March 2021. Inclusion criteria were: (a) participants were older adults aged ≥60; (b) standing posture was assessed by photogrammetric means. PRISMA guidelines were followed. We used the Newcastle-Ottawa Scale to assess methodological quality. RESULTS Of 946 articles reviewed, after screening and the removal of duplicates, 11 reports were found eligible for full-text assessment, of which 5 full studies met the inclusion criteria. Significant changes occurring with aging included deepening of thoracic kyphosis, flattening of lumbar lordosis, and increased sagittal inclination. CONCLUSIONS These changes agree with commonly described aging-related postural changes. However, detailed quantification of these changes was not found; the photogrammetrical methods used were often unvalidated and did not adhere to known protocols. These methodological difficulties call for further studies using validated photogrammetrical methods and improved research methodologies.
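As a minimal illustration of the kind of photogrammetric quantification the review examines, a sagittal inclination angle can be computed from two digitised landmarks on a side-view photograph; the landmark choice (C7 and S1) and the pixel coordinates below are hypothetical:

```python
import numpy as np

def sagittal_inclination(c7, s1):
    """Angle (deg) between the C7-S1 line and the vertical, from side-view
    photo landmarks given as (x, y) pixel coordinates (y grows downward)."""
    dx = c7[0] - s1[0]          # forward displacement of C7 relative to S1
    dy = s1[1] - c7[1]          # flip y so that 'up' is positive
    return np.degrees(np.arctan2(dx, dy))

# hypothetical digitised markers from a calibrated side-view photograph
print(sagittal_inclination(c7=(412, 180), s1=(380, 640)))  # forward lean
```

Real protocols add scale calibration, validated marker placement, and multiple angles (kyphosis, lordosis), which is precisely the methodological rigour the review finds lacking.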
|
44
|
Tocci F, Figorilli S, Vasta S, Violino S, Pallottino F, Ortenzi L, Costa C. Advantages in Using Colour Calibration for Orthophoto Reconstruction. SENSORS (BASEL, SWITZERLAND) 2022; 22:6490. [PMID: 36080948 PMCID: PMC9460411 DOI: 10.3390/s22176490] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 07/19/2022] [Revised: 08/23/2022] [Accepted: 08/26/2022] [Indexed: 06/15/2023]
Abstract
UAVs are sensor platforms increasingly used in precision agriculture, especially for crop and environmental monitoring using photogrammetry. In this work, light drone flights were performed on three consecutive days (with different weather conditions) over an experimental agricultural field to evaluate the effect of colour calibration on photogrammetric performance. Thirty random reconstructions from the three days and six different areas of the field were performed. The results showed that calibrated orthophotos appeared greener and brighter than the uncalibrated ones, better representing the actual colours of the scene. Parameter reporting errors were always lower in the calibrated reconstructions, while the other quantitative parameters were always lower in the non-calibrated ones; in particular, significant differences were observed in the percentage of camera stations out of the total number of images and in the reprojection error. These results showed that, by means of a calibration algorithm, it is possible to obtain better orthophotos that correct for the atmospheric conditions affecting the images. The proposed colour calibration protocol could be useful when integrated into robotic platforms and sensors for the exploration and monitoring of different environments.
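One common way to implement such a colour calibration — not necessarily the algorithm used in the study — is to fit a linear correction matrix mapping photographed colour-chart patches to their known reference values; all patch values below are made up for illustration:

```python
import numpy as np

# Raw camera RGB of six colour-chart patches and their known reference RGB
# (hypothetical values; a real chart has 24+ patches with published targets).
raw = np.array([[52, 48, 50], [180, 62, 55], [60, 170, 70],
                [58, 66, 175], [200, 198, 60], [230, 228, 232]], float)
ref = np.array([[50, 50, 50], [175, 54, 60], [70, 148, 73],
                [56, 61, 150], [231, 199, 31], [243, 243, 242]], float)

# Least-squares fit of a 3x3 colour-correction matrix: raw @ M ≈ ref
M, *_ = np.linalg.lstsq(raw, ref, rcond=None)
corrected = raw @ M                         # apply to the raw patches
rmse = np.sqrt(((corrected - ref) ** 2).mean())
```

The same matrix would then be applied to every image before orthophoto reconstruction, so all flights share a common colour reference regardless of lighting.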
|
45
|
Kaneda A, Nakagawa T, Tamura K, Noshita K, Nakao H. A proposal of a new automated method for SfM/MVS 3D reconstruction through comparisons of 3D data by SfM/MVS and handheld laser scanners. PLoS One 2022; 17:e0270660. [PMID: 35857749 PMCID: PMC9299387 DOI: 10.1371/journal.pone.0270660] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/30/2021] [Accepted: 06/14/2022] [Indexed: 11/26/2022] Open
Abstract
SfM/MVS photogrammetry has received increasing attention due to its convenience, broadening the range of its applications into archaeology and anthropology. Because the accuracy of SfM/MVS depends on photography, one important issue is that incorrect or low-density point clouds are found in 3D models due to poor overlap between images. A systematic way of taking photographs could solve these problems, though such a method has not been well established and, with some exceptions, its accuracy has not been examined. The present study aims to (i) develop an efficient method for recording pottery using an automated turntable and (ii) assess its accuracy through a comparison with 3D models made by laser scanning. We recorded relatively simple pottery manufactured by prehistoric farmers in the Japanese archipelago using SfM/MVS photogrammetry and laser scanning. Further, by measuring the Hausdorff distance between 3D models made using these two methods, we show that their difference is negligibly small, suggesting that our method is sufficiently accurate to record pottery.
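The comparison metric used in the study, the (symmetric) Hausdorff distance between two vertex sets, can be computed directly with SciPy; the random point clouds below merely stand in for SfM/MVS and laser-scan mesh vertices:

```python
import numpy as np
from scipy.spatial.distance import directed_hausdorff

rng = np.random.default_rng(0)
scan_a = rng.random((500, 3))                           # stand-in vertices
scan_b = scan_a + rng.normal(0, 0.002, scan_a.shape)    # slightly perturbed copy

# Symmetric Hausdorff distance: worst-case nearest-neighbour gap, both ways
d = max(directed_hausdorff(scan_a, scan_b)[0],
        directed_hausdorff(scan_b, scan_a)[0])
print(f"Hausdorff distance: {d:.4f}")   # small for near-identical models
```

In practice the two models must first be scaled and rigidly aligned (e.g. via ICP registration) before the distance is meaningful.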
|
46
|
Chang HY, Lin TH. Portrait imaging relighting system based on a simplified photometric stereo method. APPLIED OPTICS 2022; 61:4379-4386. [PMID: 36256275 DOI: 10.1364/ao.451662] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 12/20/2021] [Accepted: 04/26/2022] [Indexed: 06/16/2023]
Abstract
This study proposes a portrait image relighting system based on a simplified photometric stereo method. The system, comprising a controllable digital single-lens reflex camera and five polarized flashlights, can obtain a colour shade-less image and synthesize a normal map from shaded images. When calibrating the photometric stereo, the normal map is taken as a linear combination of shaded images and clamped with respect to specific normal directions on a white-coated sphere. The relit images were generated through inverse rendering in a predefined virtual environment. To evaluate personal preference, 24 adult subjects were recruited for subjective assessments comparing our results with those of the deep portrait relighting method. From experiments on different scenarios, we conclude that the proposed system based on simplified photometric stereo performs acceptably for relighting portrait images.
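For context, the textbook Lambertian photometric stereo estimate — solving the linear system relating known light directions to observed intensities — can be sketched as below. The five light directions, albedo, and surface normal are synthetic, and the paper's simplified method instead uses a calibrated, clamped linear combination of the shaded images:

```python
import numpy as np

# Five unit light directions, loosely mimicking a ring of flashlights
L = np.array([[0.0,  0.0, 1.0],
              [0.5,  0.0, np.sqrt(0.75)],
              [0.0,  0.5, np.sqrt(0.75)],
              [-0.5, 0.0, np.sqrt(0.75)],
              [0.0, -0.5, np.sqrt(0.75)]])

true_n = np.array([0.2, -0.1, 0.97])
true_n /= np.linalg.norm(true_n)
albedo = 0.8
I = albedo * L @ true_n          # synthetic per-pixel shaded intensities

# Least-squares solve L @ g = I; g encodes albedo * normal
g, *_ = np.linalg.lstsq(L, I, rcond=None)
n_hat = g / np.linalg.norm(g)    # recovered unit normal
```

With at least three non-coplanar lights the system is overdetermined per pixel, which is what lets a fixed five-light rig recover a dense normal map from a handful of exposures.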
|
47
|
Göldner D, Karakostis FA, Falcucci A. Practical and technical aspects for the 3D scanning of lithic artefacts using micro-computed tomography techniques and laser light scanners for subsequent geometric morphometric analysis. Introducing the StyroStone protocol. PLoS One 2022; 17:e0267163. [PMID: 35446900 PMCID: PMC9022823 DOI: 10.1371/journal.pone.0267163] [Citation(s) in RCA: 4] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/30/2021] [Accepted: 04/03/2022] [Indexed: 11/18/2022] Open
Abstract
Here, we present a new method to scan a large number of lithic artefacts using three-dimensional scanning technology. Despite the rising use of high-resolution 3D surface scanners in archaeological sciences, no virtual studies have focused on the 3D digitization and analysis of small lithic implements such as bladelets, microblades, and microflakes. This is mostly due to difficulties in creating reliable 3D meshes of these artefacts resulting from several inherent features (i.e., size, translucency, and acute edge angles), which compromise the efficiency of structured light or laser scanners and photogrammetry. Our new protocol, StyroStone, addresses this problem by proposing a step-by-step procedure relying on micro-computed tomographic technology, which is able to capture the 3D shape of small lithic implements in high detail. We tested a system that enables us to scan hundreds of artefacts together within a single scanning session lasting a few hours. As larger lithic artefacts (i.e., blades) are also present in our sample, this protocol is complemented by a short guide on how to effectively scan such artefacts using a structured light scanner (Artec Space Spider). Furthermore, we estimate the accuracy of our scanning protocol using principal component analysis of 3D Procrustes shape coordinates on a sample of bladelet meshes obtained with both micro-computed tomography and another scanning device (i.e., Artec Micro). A comprehensive review of the use of 3D geometric morphometrics in lithic analysis and other computer-based approaches is provided in the introductory chapter to show the advantages of improving 3D scanning protocols and increasing the digitization of our prehistoric human heritage.
|
48
|
Cerasoni JN, do Nascimento Rodrigues F, Tang Y, Hallett EY. Do-It-Yourself digital archaeology: Introduction and practical applications of photography and photogrammetry for the 2D and 3D representation of small objects and artefacts. PLoS One 2022; 17:e0267168. [PMID: 35427405 PMCID: PMC9012351 DOI: 10.1371/journal.pone.0267168] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/29/2021] [Accepted: 04/03/2022] [Indexed: 11/29/2022] Open
Abstract
Photography and photogrammetry have recently become among the most widespread and preferred visualisation methods for the representation of small objects and artefacts. People want to see the past, not only know about it, and the ability to visualise objects as virtually realistic representations is fundamental for researchers, students and educators. Here, we present two new methods, the 'Small Object and Artefact Photography' ('SOAP') and the 'High Resolution "DIY" Photogrammetry' ('HRP') protocols. The 'SOAP' protocol involves the photographic application of modern digital techniques for the representation of any small object. The 'HRP' protocol involves the photographic capturing, digital reconstruction and three-dimensional representation of small objects. These protocols follow optimised step-by-step explanations for the production of high-resolution two- and three-dimensional object imaging, achievable with minimal practice and access to basic equipment and software. These methods were developed to allow anyone to easily and inexpensively produce high-quality images and models for any use, from simple graphic visualisations to complex analytical, statistical and spatial analyses.
|
49
|
Barreto MA, Perez-Gonzalez J, Herr HM, Huegel JC. ARACAM: A RGB-D Multi-View Photogrammetry System for Lower Limb 3D Reconstruction Applications. SENSORS 2022; 22:s22072443. [PMID: 35408058 PMCID: PMC9003530 DOI: 10.3390/s22072443] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 02/04/2022] [Revised: 03/01/2022] [Accepted: 03/11/2022] [Indexed: 12/19/2022]
Abstract
Worldwide, there is a growing need for lower limb prostheses due to a rising number of amputations caused primarily by diabetic foot. Researchers enable functional and comfortable prostheses by integrating new technologies into the traditional handcrafted fabrication method that is still in use, which makes computer vision a promising tool for the integration of 3D reconstruction into prosthetic design. This work aims to design, prototype, and test a functional system to scan plaster cast molds, which may serve as a platform for future technologies for lower limb reconstruction applications. The image capture system comprises 5 stereoscopic color and depth cameras, each with 4 DOF mountings on an enveloping frame, as well as algorithms for calibration, segmentation, registration, and surface reconstruction. The segmentation metrics of Dice coefficient and Hausdorff distance (HD) show strong visual similarity, with an average similarity of 87% and an average error of 6.40 mm, respectively. Moving forward, the system was tested on a known 3D printed model obtained from a computer tomography scan, for which HD comparisons show an average error of ≤1.93 mm, thereby making the system competitive against the systems reviewed from the state of the art.
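Of the two segmentation metrics reported, the Dice coefficient is the simpler to reproduce; a minimal sketch on hypothetical binary masks (the mask shapes below are made up):

```python
import numpy as np

def dice(a, b):
    """Dice similarity between two boolean masks: 2|A∩B| / (|A| + |B|)."""
    inter = np.logical_and(a, b).sum()
    return 2 * inter / (a.sum() + b.sum())

# hypothetical ground-truth and predicted segmentation masks
gt = np.zeros((64, 64), bool); gt[10:50, 20:44] = True
pr = np.zeros((64, 64), bool); pr[12:52, 20:44] = True

print(round(dice(gt, pr), 3))   # 0.95
```

Dice rewards overlap in area, while the Hausdorff distance penalises the worst boundary outlier, which is why the two are usually reported together.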
|
50
|
Cote DJ, Strickland BA, Ruzevick J, Zada G. Commentary: Qlone®: A Simple Method to Create 360-Degree Photogrammetry-Based 3-Dimensional Model of Cadaveric Specimens. Oper Neurosurg (Hagerstown) 2022; 22:e101. [PMID: 35007211 DOI: 10.1227/ons.0000000000000037] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/28/2021] [Accepted: 09/13/2021] [Indexed: 11/19/2022] Open
|