1
Cruz RLJ, Ross MT, Nightingale R, Pickering E, Allenby MC, Woodruff MA, Powell SK. An automated parametric ear model to improve frugal 3D scanning methods for the advanced manufacturing of high-quality prosthetic ears. Comput Biol Med 2023; 162:107033. [PMID: 37271110 DOI: 10.1016/j.compbiomed.2023.107033]
Abstract
Ear prostheses are commonly used for restoring aesthetics to those with missing or malformed external ears. Traditional fabrication of these prostheses is labour-intensive and requires expert skill from a prosthetist. Advanced manufacturing including 3D scanning, modelling and 3D printing has the potential to improve this process, although more work is required before it is ready for routine clinical use. In this paper, we introduce a parametric modelling technique capable of producing high-quality 3D models of the human ear from low-fidelity, frugal, patient scans, significantly reducing time, complexity and cost. Our ear model can be tuned to fit the frugal low-fidelity 3D scan through (a) manual tuning or (b) our automated particle-filter approach. This potentially enables low-cost smartphone photogrammetry-based 3D scanning for high-quality personalised 3D-printed ear prostheses. In comparison to standard photogrammetry, our parametric model improves completeness, from (81 ± 5)% to (87 ± 4)%, with only a modest reduction in accuracy, with root mean square error (RMSE) increasing from (1.0 ± 0.2) mm to (1.5 ± 0.2) mm (relative to metrology-rated reference 3D scans, n = 14). Despite this reduction in the RMS accuracy, our parametric model improves the overall quality, realism, and smoothness. Our automated particle filter method differs only modestly compared to manual adjustments. Overall, our parametric ear model can significantly improve quality, smoothness and completeness of 3D models produced from 30-photograph photogrammetry. This enables frugal high-quality 3D ear models to be produced for use in the advanced manufacturing of ear prostheses.
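The completeness and RMSE figures above compare a reconstructed mesh against a metrology-rated reference scan. As an illustrative sketch only (not the authors' pipeline), both metrics can be computed from point clouds with brute-force nearest-neighbour distances; the tolerance value and toy coordinates below are hypothetical:

```python
import math

def nearest_dist(p, cloud):
    # distance from point p to its nearest neighbour in cloud (brute force)
    return min(math.dist(p, q) for q in cloud)

def rmse(scan, reference):
    # root-mean-square of scan-to-reference nearest-neighbour distances
    d = [nearest_dist(p, reference) for p in scan]
    return math.sqrt(sum(x * x for x in d) / len(d))

def completeness(scan, reference, tol=1.0):
    # fraction of reference points with a scan point within `tol` mm
    covered = sum(1 for q in reference if nearest_dist(q, scan) <= tol)
    return covered / len(reference)

# toy 3D point clouds (mm); the scan misses the last reference region
reference = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (2.0, 0.0, 0.0), (3.0, 0.0, 0.0)]
scan = [(0.1, 0.0, 0.0), (1.1, 0.0, 0.0), (2.1, 0.0, 0.0)]
```

In practice a spatial index (e.g. a k-d tree) replaces the brute-force search, but the metric definitions are the same.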
Affiliation(s)
- Rena L J Cruz
- QUT Centre for Biomedical Technologies, School of Mechanical, Medical, and Process Engineering, Faculty of Engineering, Queensland University of Technology (QUT), Brisbane, Qld, Australia
- Maureen T Ross
- QUT Centre for Biomedical Technologies, School of Mechanical, Medical, and Process Engineering, Faculty of Engineering, Queensland University of Technology (QUT), Brisbane, Qld, Australia
- Renee Nightingale
- QUT Centre for Biomedical Technologies, School of Mechanical, Medical, and Process Engineering, Faculty of Engineering, Queensland University of Technology (QUT), Brisbane, Qld, Australia
- Edmund Pickering
- QUT Centre for Biomedical Technologies, School of Mechanical, Medical, and Process Engineering, Faculty of Engineering, Queensland University of Technology (QUT), Brisbane, Qld, Australia
- Mark C Allenby
- QUT Centre for Biomedical Technologies, School of Mechanical, Medical, and Process Engineering, Faculty of Engineering, Queensland University of Technology (QUT), Brisbane, Qld, Australia
- Maria A Woodruff
- QUT Centre for Biomedical Technologies, School of Mechanical, Medical, and Process Engineering, Faculty of Engineering, Queensland University of Technology (QUT), Brisbane, Qld, Australia
- Sean K Powell
- QUT Centre for Biomedical Technologies, School of Mechanical, Medical, and Process Engineering, Faculty of Engineering, Queensland University of Technology (QUT), Brisbane, Qld, Australia
2
Tran VD, Nguyen TN, Ballit A, Dao TT. Novel Baseline Facial Muscle Database Using Statistical Shape Modeling and In Silico Trials toward Decision Support for Facial Rehabilitation. Bioengineering (Basel) 2023; 10:737. [PMID: 37370668 DOI: 10.3390/bioengineering10060737]
Abstract
Background and Objective: Facial palsy is a complex pathophysiological condition affecting the personal and professional lives of the involved patients. Sudden muscle weakness or paralysis needs to be rehabilitated to recover a symmetric and expressive face. Computer-aided decision support systems for facial rehabilitation have been developed. However, there is a lack of facial muscle baseline data to evaluate the patient states and guide as well as optimize the rehabilitation strategy. In this study, we aimed to develop a novel baseline facial muscle database (static and dynamic behaviors) by coupling statistical shape modeling with in silico trial approaches. Methods: 10,000 virtual subjects (5000 males and 5000 females) were generated from a statistical shape modeling (SSM) head model. Skull and muscle networks were defined so that they statistically fit the head shapes. Two standard mimics, smiling and kissing, were generated. Muscle lengths in the neutral and mimic positions, and the corresponding muscle strains, were computed and recorded using the muscle insertion and attachment points on the animated head and skull meshes. For validation, five head and skull meshes were reconstructed from five computed tomography (CT) image sets. Skull and muscle networks were then predicted from the reconstructed head meshes. The predicted skull meshes were compared with the reconstructed skull meshes based on mesh-to-mesh distance metrics. The predicted muscle lengths were also compared with those manually defined on the reconstructed head and skull meshes. Moreover, the computed muscle lengths and strains were compared with those in our previous studies and the literature. Results: The skull prediction's median deviations from the CT-based models were 2.2236 mm, 2.1371 mm, and 2.1277 mm for the skull shape, skull mesh, and muscle attachment point regions, respectively. The median deviation of the muscle lengths was 4.8940 mm. The computed muscle strains were compatible with the values reported in our previous Kinect-based method and the literature. Conclusions: The development of our novel facial muscle database opens new avenues to accurately evaluate the facial muscle states of facial palsy patients. Based on the evaluated results, specific types of facial mimic rehabilitation exercises can also be selected optimally to train the target muscles. In perspective, the database of computed muscle lengths and strains will be integrated into our available clinical decision support system for automatically detecting malfunctioning muscles and proposing patient-specific rehabilitation serious games.
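The database stores muscle lengths in the neutral and mimic positions together with the strains derived from them. A minimal sketch of that derivation, assuming engineering strain relative to the neutral length and a polyline approximation of muscle length between insertion and attachment points (the toy coordinates are hypothetical):

```python
import math

def polyline_length(points):
    # muscle length approximated by the polyline through its 3D path points
    return sum(math.dist(a, b) for a, b in zip(points, points[1:]))

def muscle_strain(neutral_len, mimic_len):
    # engineering strain relative to the neutral (rest) length;
    # negative values indicate shortening, positive values elongation
    return (mimic_len - neutral_len) / neutral_len

# hypothetical insertion-to-attachment paths (mm) in neutral and smiling poses
neutral_path = [(0.0, 0.0, 0.0), (30.0, 40.0, 0.0)]
smiling_path = [(0.0, 0.0, 0.0), (27.0, 36.0, 0.0)]
strain = muscle_strain(polyline_length(neutral_path), polyline_length(smiling_path))
```

Here the muscle shortens by 10% while smiling, consistent with the sign convention used for mimic muscles.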
Affiliation(s)
- Vi-Do Tran
- Faculty of Electrical and Electronics Engineering, Ho Chi Minh City University of Technology and Education, Thu Duc City 71300, Ho Chi Minh City, Vietnam
- Tan-Nhu Nguyen
- School of Engineering, Eastern International University, Thu Dau Mot City 75100, Binh Duong Province, Vietnam
- Abbass Ballit
- Univ. Lille, CNRS, Centrale Lille, UMR 9013-LaMcube-Laboratoire de Mécanique, Multiphysique, Multiéchelle, F-59000 Lille, France
- Tien-Tuan Dao
- Univ. Lille, CNRS, Centrale Lille, UMR 9013-LaMcube-Laboratoire de Mécanique, Multiphysique, Multiéchelle, F-59000 Lille, France
3
Mai HN, Win TT, Tong MS, Lee CH, Lee KB, Kim SY, Lee HW, Lee DH. Three-dimensional morphometric analysis of facial units in virtual smiling facial images with different smile expressions. J Adv Prosthodont 2023; 15:1-10. [PMID: 36908751 PMCID: PMC9992697 DOI: 10.4047/jap.2023.15.1.1]
Abstract
PURPOSE Accuracy of image matching between resting and smiling facial models is affected by the stability of the reference surfaces. This study aimed to investigate the morphometric variations in subdivided facial units during resting, posed and spontaneous smiling. MATERIALS AND METHODS The posed and spontaneous smiling faces of 33 adults were digitized and registered to the resting faces. The morphological changes of subdivided facial units at the forehead (upper and lower central, upper and lower lateral, and temple), nasal (dorsum, tip, lateral wall, and alar lobules), and chin (central and lateral) regions were assessed by measuring the 3D mesh deviations between the smiling and resting facial models. One-way analysis of variance, Duncan post hoc tests, and Student's t-tests were used to determine the differences among the groups (α = .05). RESULTS The smallest morphometric changes were observed at the upper and central forehead and nasal dorsum, while the largest deviation was found at the nasal alar lobules in both the posed and spontaneous smiles (P < .001). The spontaneous smile generally resulted in larger facial unit changes than the posed smile, and significant differences were observed at the alar lobules, central chin, and lateral chin units (P < .001). CONCLUSION The upper and central forehead and nasal dorsum are reliable areas for image matching between resting and smiling 3D facial images. The central chin area can be considered an additional reference area for posed smiles; however, special caution should be taken when selecting this area as a reference for spontaneous smiles.
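The unit-wise deviations above come from comparing registered smiling and resting meshes. Assuming vertex-wise correspondence after registration (a simplification of the surface-based deviation analysis; the vertex coordinates and unit indices below are hypothetical), a per-unit mean deviation can be sketched as:

```python
import math

def unit_deviation(rest, smile, unit_indices):
    # mean vertex displacement (mm) of one facial unit between registered
    # resting and smiling meshes with vertex-wise correspondence
    d = [math.dist(rest[i], smile[i]) for i in unit_indices]
    return sum(d) / len(d)

# toy meshes: three corresponding vertices (mm)
rest = [(0.0, 0.0, 0.0), (10.0, 0.0, 0.0), (20.0, 0.0, 0.0)]
smile = [(0.0, 0.0, 0.2), (10.0, 0.0, 0.4), (20.0, 0.0, 0.0)]
mobile_unit = unit_deviation(rest, smile, [0, 1])   # moves while smiling
stable_unit = unit_deviation(rest, smile, [2])      # candidate reference area
```

Units with consistently small deviations (like the forehead and nasal dorsum in the study) are the ones suited to serve as registration references.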
Affiliation(s)
- Hang-Nga Mai
- Institute for Translational Research in Dentistry, School of Dentistry, Kyungpook National University, Daegu, Republic of Korea
- Dental School, Hanoi University of Business and Technology, Hanoi, Vietnam
- Thaw Thaw Win
- Department of Prosthodontics, School of Dentistry, Kyungpook National University, Daegu, Republic of Korea
- Minh Son Tong
- School of Dentistry, Hanoi Medical University, Hanoi, Vietnam
- Cheong-Hee Lee
- Department of Prosthodontics, School of Dentistry, Kyungpook National University, Daegu, Republic of Korea
- Kyu-Bok Lee
- Department of Prosthodontics, School of Dentistry, Kyungpook National University, Daegu, Republic of Korea
- So-Yeun Kim
- Department of Prosthodontics, School of Dentistry, Kyungpook National University, Daegu, Republic of Korea
- Hyun-Woo Lee
- Department of Oral and Maxillofacial Surgery, Uijeongbu Eulji Medical Center, Eulji University School of Dentistry, Uijeongbu, Republic of Korea
- Du-Hyeong Lee
- Institute for Translational Research in Dentistry, School of Dentistry, Kyungpook National University, Daegu, Republic of Korea
- Department of Prosthodontics, School of Dentistry, Kyungpook National University, Daegu, Republic of Korea
4
Prasad S, Arunachalam S, Boillat T, Ghoneima A, Gandedkar N, Diar-Bakirly S. Wearable Orofacial Technology and Orthodontics. Dent J (Basel) 2023; 11:24. [PMID: 36661561 PMCID: PMC9858298 DOI: 10.3390/dj11010024]
Abstract
Wearable technologies that augment traditional approaches are increasingly being added to the arsenals of treatment providers. Wearable technology generally refers to electronic systems, devices, or sensors that are worn on or in close proximity to the human body. Wearables may be stand-alone or integrated into materials that are worn on the body. What sets medical wearables apart from other systems is their ability to collect, store, and relay information regarding an individual's current body status to other devices operating on compatible networks in naturalistic settings. The last decade has witnessed a steady increase in the use of wearables specific to the orofacial region. Applications include supplementing diagnosis, tracking treatment progress, monitoring patient compliance, and better understanding the jaw's functional and parafunctional activities. Orofacial wearable devices may be unimodal or incorporate multiple sensing modalities. The objective data collected continuously, in real time, in naturalistic settings using these orofacial wearables provide opportunities to formulate accurate and personalized treatment strategies. In the not-too-distant future, it is anticipated that information about an individual's current oral health status will enable patient-centric personalized care to prevent, diagnose, and treat oral diseases, with wearables playing a key role. In this review, we assess the progress achieved, summarize applications of orthodontic relevance, and examine the future potential of orofacial wearables.
Affiliation(s)
- Sabarinath Prasad
- Department of Orthodontics, Hamdan Bin Mohammed College of Dental Medicine, Mohammed Bin Rashid University of Medicine and Health Sciences, Dubai 50505, United Arab Emirates
- Sivakumar Arunachalam
- Orthodontics and Dentofacial Orthopedics, School of Dentistry, International Medical University, Kuala Lumpur 57000, Malaysia
- Thomas Boillat
- Design Lab, College of Medicine, Mohammed Bin Rashid University of Medicine and Health Sciences, Dubai 50505, United Arab Emirates
- Ahmed Ghoneima
- Department of Orthodontics, Hamdan Bin Mohammed College of Dental Medicine, Mohammed Bin Rashid University of Medicine and Health Sciences, Dubai 50505, United Arab Emirates
- Narayan Gandedkar
- Discipline of Orthodontics & Paediatric Dentistry, School of Dentistry, University of Sydney, Sydney, NSW 2006, Australia
- Samira Diar-Bakirly
- Department of Orthodontics, Hamdan Bin Mohammed College of Dental Medicine, Mohammed Bin Rashid University of Medicine and Health Sciences, Dubai 50505, United Arab Emirates
5
Population affinity and variation of sexual dimorphism in three-dimensional facial forms: comparisons between Turkish and Japanese populations. Sci Rep 2021; 11:16634. [PMID: 34404851 PMCID: PMC8371176 DOI: 10.1038/s41598-021-96029-9]
Abstract
Examining the extent to which sex differences in three-dimensional (3D) facial soft tissue configurations are similar across diverse populations could suggest the source of the indirect evolutionary benefits of facial sexual dimorphism traits. To explore this idea, we selected two geographically distinct populations. Three-dimensional model faces were derived from 272 Turkish and Japanese men and women; their facial morphologies were evaluated using landmark and surface-based analyses. We found four common facial features related to sexual dimorphism. Both Turkish and Japanese females had a shorter lower face height, a flatter forehead, greater sagittal cheek protrusion in the infraorbital region but less prominence of the cheek in the parotid-masseteric region, and an antero-posteriorly smaller nose when compared with their male counterparts. The results indicated a possible phylogenetic contribution of masticatory organ function and morphogenesis to sexual dimorphism of the human face, in addition to previously reported biological and psychological characteristics, including sexual maturity, reproductive potential, mating success, general health, immune response, age, and personality.
6
Lee D, Tanikawa C, Yamashiro T. Impairment in facial expression generation in patients with repaired unilateral cleft lip: Effects of the physical properties of facial soft tissues. PLoS One 2021; 16:e0249961. [PMID: 33886591 PMCID: PMC8061991 DOI: 10.1371/journal.pone.0249961]
Abstract
Patients with repaired unilateral cleft lip with palate (UCLP) often show dysmorphology and distorted facial motion clinically, which can cause psychological issues. However, no report has clarified the details of this distorted facial motion or its possible causative factors. In this study, we hypothesized that the physical properties of the scar and surrounding facial soft tissue might affect facial displacement while smiling in patients with UCLP (Cleft group). We thus examined the three-dimensional (3D) facial displacement while smiling in the Cleft and Control groups to determine whether the physical properties of facial soft tissues differ between the two groups, and to examine the relationship between those physical properties and 3D facial displacement while smiling. Three-dimensional images at rest and while smiling, as well as the facial physical properties (e.g. viscoelasticity) of both groups, were recorded. Differences in physical properties and in facial displacement while smiling between the two groups were examined. To examine the relationship between facial surface displacement while smiling and physical properties, a canonical correlation analysis (CCA) was conducted. As a result, three typical abnormal features of smiling in the Cleft group compared with the Control group were noted: less upward and backward displacement in the scar area, downward movement of the lower lip, and greater asymmetric displacement, including greater lateral displacement of the subalar on the cleft side while smiling and greater alar backward displacement on the non-cleft side. The Cleft group also showed a greater elastic modulus at the upper lip on the cleft side, suggesting hardened soft tissue at the scar. The CCA showed that this hard scar significantly affected facial displacement, inducing less upward and backward displacement in the scar area and downward movement of the lower lip in patients with UCLP (correlation coefficient = 0.82, p = 0.04); however, there was no significant relationship between greater nasal alar lateral movement and the physical properties of the skin at the scar. Based on these results, personalizing treatment options for dysfunction in facial expression generation may require quantification of the 3D facial morphology and the physical properties of facial soft tissues.
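The study relates tissue properties to displacement via canonical correlation analysis (CCA), which correlates whole sets of variables. As a deliberately simpler stand-in (plain Pearson correlation between one stiffness measure and one displacement measure, not the authors' CCA; all values below are hypothetical), the basic association computation looks like:

```python
def pearson_r(x, y):
    # Pearson correlation coefficient between two equal-length samples
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

# hypothetical per-subject scar elastic moduli vs upward lip displacement (mm):
# stiffer scars paired with smaller upward movement, as the abstract suggests
modulus = [12.0, 15.0, 18.0, 21.0]
upward_disp = [3.1, 2.6, 2.0, 1.4]
r = pearson_r(modulus, upward_disp)
```

CCA generalizes this idea by finding linear combinations of many property variables and many displacement variables whose correlation is maximal.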
Affiliation(s)
- Donghoon Lee
- Graduate School of Dentistry, Osaka University, Suita, Osaka, Japan
- Chihiro Tanikawa
- Graduate School of Dentistry, Osaka University, Suita, Osaka, Japan
- Center for Advanced Medical Engineering and Informatics, Osaka University, Suita, Osaka, Japan
7
Nguyen TN, Dakpe S, Ho Ba Tho MC, Dao TT. Kinect-driven Patient-specific Head, Skull, and Muscle Network Modelling for Facial Palsy Patients. Comput Methods Programs Biomed 2021; 200:105846. [PMID: 33279251 DOI: 10.1016/j.cmpb.2020.105846]
Abstract
BACKGROUND AND OBJECTIVE Facial palsy negatively affects both the professional and personal lives of affected patients. Classical facial rehabilitation strategies can restore facial mimics to their normal, symmetrical movements and appearances. However, there is a lack of objective, quantitative, in-vivo facial texture and muscle activation bio-feedback for personalizing rehabilitation programs and assessing recovery progress. Consequently, this study proposed a novel patient-specific modelling method for generating a full patient-specific head model from a visual sensor and then computing the facial texture and muscle activation in real time for further clinical decision making. METHODS The modeling workflow includes (1) Kinect-to-head, (2) head-to-skull, and (3) muscle network definition & generation processes. In the Kinect-to-head process, subject-specific data acquired from a new user in the neutral mimic were used for generating his/her geometrical head model with facial texture. In particular, a template head model was deformed to optimally fit high-definition facial points acquired by the Kinect sensor. Moreover, the facial texture was merged from his/her facial images in the left, right, and center points of view. In the head-to-skull process, a generic skull model was deformed so that its shape statistically fitted his/her geometrical head model. In the muscle network definition & generation process, a muscle network was defined from the head and skull models for computing muscle strains during facial movements. Muscle insertion points and muscle attachment points were defined as vertex positions on the head model and the skull model, respectively, based on standard facial anatomy. Three healthy subjects and two facial palsy patients were selected for validating the proposed method. In neutral positions, magnetic resonance imaging (MRI)-based head and skull models were compared with Kinect-based head and skull models. In mimic positions, infrared depth-based head models in smiling and [u]-pronouncing mimics were compared with the corresponding animated Kinect-driven head models. The Hausdorff distance metric was used for these comparisons. Moreover, computed muscle lengths and strains in the tested facial mimics were validated against values reported in the literature. RESULTS With the current hardware configuration, the patient-specific head model with skull and muscle network could be generated within 17.16 ± 0.37 s and animated in real time at a frame rate of 40 fps. In neutral positions, the best mean error was 1.91 mm for the head models and 3.21 mm for the skull models. On facial regions, the best mean errors were 1.53 mm and 2.82 mm for head and skull models, respectively. On muscle insertion/attachment point regions, the best mean errors were 1.09 mm and 2.16 mm for head and skull models, respectively. In mimic positions, the head model errors on facial regions were 2.02 mm in smiling mimics and 2.00 mm in [u]-pronouncing mimics. All of the above error values were computed in a one-time validation procedure. Facial muscles exhibited shortening during smiling and elongation during pronunciation of the sound [u]. Extracted muscle features (i.e. muscle length and strain) are in agreement with experimental and literature data. CONCLUSIONS This study proposed a novel modeling method for quickly generating and animating a patient-specific biomechanical head model with facial texture and muscle activation bio-feedback. The Kinect-driven muscle strains could be applied to further real-time muscle-oriented facial paralysis grading and other facial analysis applications.
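The Hausdorff distance used for the mesh comparisons above can be sketched with a brute-force implementation over vertex sets (the toy vertices below are hypothetical; production code would use spatial indexing for meshes of realistic size):

```python
import math

def directed_hausdorff(a, b):
    # largest distance from a point of `a` to its nearest neighbour in `b`
    return max(min(math.dist(p, q) for q in b) for p in a)

def hausdorff(a, b):
    # symmetric Hausdorff distance between two vertex sets
    return max(directed_hausdorff(a, b), directed_hausdorff(b, a))

# toy vertex sets (mm): two meshes that differ near one vertex
mesh_a = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0)]
mesh_b = [(0.0, 0.0, 0.0), (1.0, 0.5, 0.0)]
```

Because it takes a maximum rather than a mean, the Hausdorff distance is sensitive to the single worst-matching region, which makes it a conservative mesh-agreement metric.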
Affiliation(s)
- Tan-Nhu Nguyen
- Université de technologie de Compiègne, Alliance Sorbonne Universités, CNRS, UMR 7338 Biomécanique and Bioengineering, Centre de recherche Royallieu, CS 60 319 Compiègne, France
- Stéphanie Dakpe
- Department of maxillo-facial surgery, CHU Amiens-Picardie, Amiens, France; CHIMERE Team, University of Picardie Jules Verne, 80000 Amiens, France
- Marie-Christine Ho Ba Tho
- Université de technologie de Compiègne, Alliance Sorbonne Universités, CNRS, UMR 7338 Biomécanique and Bioengineering, Centre de recherche Royallieu, CS 60 319 Compiègne, France
- Tien-Tuan Dao
- Université de technologie de Compiègne, Alliance Sorbonne Universités, CNRS, UMR 7338 Biomécanique and Bioengineering, Centre de recherche Royallieu, CS 60 319 Compiègne, France; Univ. Lille, CNRS, Centrale Lille, UMR 9013-LaMcube-Laboratoire de Mécanique, Multiphysique, Multiéchelle, F-59000 Lille, France
8
Gibelli D, Tarabbia F, Restelli S, Allevi F, Dolci C, Dell'Aversana Orabona G, Cappella A, Codari M, Sforza C, Biglioli F. Three-dimensional assessment of restored smiling mobility after reanimation of unilateral facial palsy by triple innervation technique. Int J Oral Maxillofac Surg 2020; 49:536-542. [DOI: 10.1016/j.ijom.2019.07.015]
9
Hwang HW, Park JH, Moon JH, Yu Y, Kim H, Her SB, Srinivasan G, Aljanabi MNA, Donatelli RE, Lee SJ. Automated identification of cephalometric landmarks: Part 2 - Might it be better than human? Angle Orthod 2019; 90:69-76. [PMID: 31335162 DOI: 10.2319/022019-129.1]
Abstract
OBJECTIVES To compare detection patterns of 80 cephalometric landmarks identified by an automated identification system (AI) based on a recently proposed deep-learning method, You-Only-Look-Once version 3 (YOLOv3), with those identified by human examiners. MATERIALS AND METHODS The YOLOv3 algorithm was implemented with custom modifications and trained on 1028 cephalograms. A total of 80 landmarks, comprising two vertical reference points, 46 hard tissue landmarks, and 32 soft tissue landmarks, were identified. On the 283 test images, the same 80 landmarks were identified twice, by AI and by human examiners. Statistical analyses were conducted to detect whether any significant differences existed between AI and human examiners, and the influence of image factors on those differences was also investigated. RESULTS Upon repeated trials, AI always detected identical positions for each landmark, while human intraexaminer variability across repeated manual detections showed a detection error of 0.97 ± 1.03 mm. The mean detection error between AI and human examiners was 1.46 ± 2.97 mm; the mean difference between human examiners was 1.50 ± 1.48 mm. In general, differences in detection errors between AI and human examiners were less than 0.9 mm, which did not seem to be clinically significant. CONCLUSIONS AI identified cephalometric landmarks as accurately as human examiners did, and might be a viable option for repeatedly identifying multiple cephalometric landmarks.
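The detection errors reported above are mean distances between predicted and reference landmark positions. A minimal sketch of that comparison, using 2D toy landmarks with hypothetical coordinates:

```python
import math

def mean_detection_error(pred, truth):
    # mean Euclidean distance between predicted and reference landmarks
    errs = [math.dist(p, t) for p, t in zip(pred, truth)]
    return sum(errs) / len(errs)

# toy cephalometric landmarks (mm): model prediction vs human reference
predicted = [(10.0, 10.0), (23.0, 34.0)]
reference = [(10.0, 10.0), (20.0, 30.0)]
error = mean_detection_error(predicted, reference)
```

In the study the same measure is computed over 80 landmarks per image, both between repeated human detections (intraexaminer variability) and between AI and human detections.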
10
Tanikawa C, Takata S, Takano R, Yamanami H, Edlira Z, Takada K. Functional decline in facial expression generation in older women: A cross-sectional study using three-dimensional morphometry. PLoS One 2019; 14:e0219451. [PMID: 31291323 PMCID: PMC6636602 DOI: 10.1371/journal.pone.0219451]
Abstract
Elderly people show a decline in the ability to decode facial expressions, but they also experience age-related changes in facial structure that may render their facial expressions harder to decode. However, to date there is no empirical evidence to support the latter mechanism. The objective of this study was to assess the effects of age on facial morphology at rest and during smiling in younger (n = 100; age range, 18-32 years) and older (n = 30; age range, 55-65 years) Japanese women. Three-dimensional images of each subject's face at rest and during smiling were obtained, and wire mesh fitting was performed on each image to quantify the facial surface morphology. The mean node coordinates in each facial posture were compared between the groups using t-tests. Further, the node coordinates of the fitted mesh were entered into a principal component analysis (PCA) and a multifactor analysis of variance (MANOVA) to examine the direct interactions of aging and facial posture on 3D facial morphology. The results indicated significant age-related 3D facial changes in facial expression generation: the transition from resting to smiling produced a smaller amount of soft tissue movement in the older group than in the younger group. Further, 185 surface configuration variables were extracted and used to create four discriminant functions: age-group discrimination for each facial expression, and facial expression discrimination for each age group. For facial expression discrimination, the older group showed 80% accuracy with 2 of 66 significant variables, whereas the younger group showed 99% accuracy with 15 of 144 significant variables. These results indicate that for both facial expressions the facial morphology differed distinctly between the younger and older subjects, and that in the older group the facial morphology during smiling could not be discriminated from the morphology at rest as easily as in the younger group. These results may help to explain one aspect of the communication dysfunction observed in older people.
Affiliation(s)
- Chihiro Tanikawa
- Department of Orthodontics and Dentofacial Orthopedics, Graduate School of Dentistry, Osaka University, Suita, Osaka, Japan
- Center for Advanced Medical Engineering and Informatics, Osaka University, Suita, Osaka, Japan
- Sadaki Takata
- Department of Fashion & Beauty Sciences, Osaka Shoin Women's University, Higashi-Osaka, Osaka, Japan
- Ruriko Takano
- Corporate Culture Department, Shiseido Co., Ltd., Tokyo, Japan
- Haruna Yamanami
- Shiseido Global Innovation Center, Shiseido Co., Ltd., Yokohama, Kanagawa, Japan
- Zere Edlira
- Department of Orthodontics and Dentofacial Orthopedics, Graduate School of Dentistry, Osaka University, Suita, Osaka, Japan
- Kenji Takada
- Center for Advanced Medical Engineering and Informatics, Osaka University, Suita, Osaka, Japan
- Faculty of Dentistry, National University of Singapore, Singapore, Republic of Singapore