1
Park J, Yoon S, Kim H, Kim Y, Lee U, Yu H. Clinical validity and precision of deep learning-based cone-beam computed tomography automatic landmarking algorithm. Imaging Sci Dent 2024; 54:240-250. PMID: 39371307; PMCID: PMC11450405; DOI: 10.5624/isd.20240009.
Abstract
Purpose This study was performed to assess the clinical validity and accuracy of a deep learning-based automatic landmarking algorithm for cone-beam computed tomography (CBCT). Three-dimensional (3D) CBCT head measurements obtained through manual and automatic landmarking were compared. Materials and Methods A total of 80 CBCT scans were divided into 3 groups: non-surgical (39 cases); surgical without hardware, namely surgical plates and mini-screws (9 cases); and surgical with hardware (32 cases). Each CBCT scan was analyzed to obtain 53 measurements, comprising 27 lengths, 21 angles, and 5 ratios, which were determined based on 65 landmarks identified using either a manual or a 3D automatic landmark detection method. Results In comparing measurement values derived from manual and artificial intelligence landmarking, 6 items displayed significant differences: R U6CP-L U6CP, R L3CP-L L3CP, S-N, Or_R-R U3CP, L1L to Me-GoL, and GoR-Gn/S-N (P<0.05). Of the 3 groups, the surgical scans without hardware exhibited the lowest error, reflecting the smallest difference in measurements between human- and artificial intelligence-based landmarking. The time required to identify 65 landmarks was approximately 40-60 minutes per CBCT volume when done manually, compared to 10.9 seconds for the artificial intelligence method (PC specifications: GeForce 2080Ti, 64GB RAM, and an Intel i7 CPU at 3.6 GHz). Conclusion Measurements obtained with a deep learning-based CBCT automatic landmarking algorithm were similar in accuracy to values derived from manually determined points. By decreasing the time required to calculate these measurements, the efficiency of diagnosis and treatment may be improved.
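The 53 measurements in this study are lengths, angles, and ratios computed from landmark coordinates. As a rough illustration of how such values fall out of 3D coordinates, here is a minimal sketch; the landmark coordinates and the particular measurements are invented for the example, not taken from the paper:

```python
import math

def dist(p, q):
    """Euclidean distance between two 3D landmarks."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(p, q)))

def angle(p, vertex, q):
    """Angle (degrees) at `vertex` formed by landmarks p and q."""
    v1 = [a - b for a, b in zip(p, vertex)]
    v2 = [a - b for a, b in zip(q, vertex)]
    dot = sum(a * b for a, b in zip(v1, v2))
    n1 = math.sqrt(sum(a * a for a in v1))
    n2 = math.sqrt(sum(a * a for a in v2))
    # Clamp against floating-point drift before acos
    c = max(-1.0, min(1.0, dot / (n1 * n2)))
    return math.degrees(math.acos(c))

# Hypothetical coordinates (mm) standing in for sella (S), nasion (N), menton (Me):
S, N, Me = (0.0, 0.0, 0.0), (60.0, 0.0, 0.0), (60.0, -80.0, 0.0)
sn_length = dist(S, N)           # a length measurement such as S-N
sn_me_angle = angle(S, N, Me)    # an angle measurement at N
ratio = dist(N, Me) / sn_length  # a ratio measurement
```

Once the 65 landmarks are located (manually or by the algorithm), every one of the 27 lengths, 21 angles, and 5 ratios reduces to arithmetic of this kind.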
Affiliation(s)
- Jungeun Park: Department of Orthodontics, College of Dentistry, Yonsei University, Seoul, Korea
- Seongwon Yoon: College of Dentistry, Seoul National University, Seoul, Korea; Imagoworks Incorporated, Seoul, Korea
- Hannah Kim: Imagoworks Incorporated, Seoul, Korea; Center for Bionics, Korea Institute of Science and Technology, Seoul, Korea
- Youngjun Kim: Imagoworks Incorporated, Seoul, Korea; Center for Bionics, Korea Institute of Science and Technology, Seoul, Korea
- Uilyong Lee: Department of Oral and Maxillofacial Surgery, College of Dentistry, Chungang University Hospital, Seoul, Korea
- Hyungseog Yu: Department of Orthodontics, The Institute of Craniofacial Deformity, College of Dentistry, Yonsei University, Seoul, Korea
2
Wang N, Dong G, Qiao R, Yin X, Lin S. Bringing Artificial Intelligence (AI) into Environmental Toxicology Studies: A Perspective of AI-Enabled Zebrafish High-Throughput Screening. Environ Sci Technol 2024; 58:9487-9499. PMID: 38691763; DOI: 10.1021/acs.est.4c00480.
Abstract
The booming development of artificial intelligence (AI) has brought excitement to many research fields that could benefit from its big data analysis capability for causative relationship establishment and knowledge generation. In toxicology studies using zebrafish, the microscopic images and videos that illustrate the developmental stages, phenotypic morphologies, and animal behaviors possess great potential to facilitate rapid hazard assessment and dissection of the toxicity mechanism of environmental pollutants. However, the traditional manual observation approach is both labor-intensive and time-consuming. In this Perspective, we aim to summarize the current AI-enabled image and video analysis tools to realize the full potential of AI. For image analysis, AI-based tools allow fast and objective determination of morphological features and extraction of quantitative information from images of various sorts. The advantages of providing accurate and reproducible results while avoiding human intervention play a critical role in speeding up the screening process. For video analysis, AI-based tools enable the tracking of dynamic changes in both microscopic cellular events and macroscopic animal behaviors. The subtle changes revealed by video analysis could serve as sensitive indicators of adverse outcomes. With AI-based toxicity analysis in its infancy, exciting developments and applications are expected to appear in the years to come.
Affiliation(s)
- Nan Wang, Gongqing Dong, Ruxia Qiao, Xiang Yin, Sijie Lin: College of Environmental Science and Engineering, Biomedical Multidisciplinary Innovation Research Institute, Shanghai East Hospital, Tongji University, Shanghai 200092, People's Republic of China; Key Laboratory of Yangtze River Water Environment, Ministry of Education, Shanghai Institute of Pollution Control and Ecological Security, Shanghai 200092, People's Republic of China
3
Rashmi S, Srinath S, Murthy PS, Deshmukh S. Landmark annotation through feature combinations: a comparative study on cephalometric images with in-depth analysis of model's explainability. Dentomaxillofac Radiol 2024; 53:115-126. PMID: 38166356; DOI: 10.1093/dmfr/twad011.
Abstract
OBJECTIVES The objectives of this study are to explore and evaluate the automation of anatomical landmark localization in cephalometric images using machine learning techniques, with a focus on feature extraction and combinations, contextual analysis, and model interpretability through Shapley Additive exPlanations (SHAP) values. METHODS We conducted extensive experimentation on a private dataset of 300 lateral cephalograms to study the annotation results obtained using pixel feature descriptors including raw pixel, gradient magnitude, gradient direction, and histogram of oriented gradients (HOG) values. The study evaluates and compares these feature descriptors calculated at three contexts, namely local, pyramid, and global. The feature descriptor obtained from each combination is used to discern between landmark and non-landmark pixels using a classification method. Additionally, this study addresses the opacity of Light Gradient Boosting Machine (LGBM) ensemble tree models across landmarks, introducing SHAP values to enhance interpretability. RESULTS The performance of the feature combinations was assessed using metrics such as mean radial error, standard deviation, success detection rate (SDR) within 2 mm, and test time. Among all the combinations explored, both the HOG and gradient direction operations performed strongly across all contexts. At the contextual level, the global texture outperformed the others, although at the cost of increased test time. The HOG descriptor in the local context emerged as the top performer, with an SDR of 75.84%. CONCLUSIONS The presented analysis enhances the understanding of the significance of different features and their combinations for landmark annotation, and paves the way for further exploration of landmark-specific feature combination methods, facilitated by explainability.
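Gradient magnitude, gradient direction, and HOG-style descriptors of the kind compared in this study can be illustrated compactly. The sketch below computes per-pixel gradients by central differences and accumulates a coarse 8-bin orientation histogram; it is a toy stand-in for the paper's descriptors, not their implementation:

```python
import math

def gradient_features(img):
    """For a 2D grayscale image (list of rows), return per-pixel gradient
    magnitudes and directions via central differences, plus a coarse 8-bin,
    magnitude-weighted orientation histogram (a HOG-like summary)."""
    h, w = len(img), len(img[0])
    hist = [0.0] * 8
    mags, dirs = [], []
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            gx = img[y][x + 1] - img[y][x - 1]
            gy = img[y + 1][x] - img[y - 1][x]
            m = math.hypot(gx, gy)
            d = math.atan2(gy, gx) % (2 * math.pi)
            mags.append(m)
            dirs.append(d)
            hist[int(d / (2 * math.pi) * 8) % 8] += m  # magnitude-weighted vote
    return mags, dirs, hist
```

A real HOG descriptor adds cell/block pooling and normalization, but the landmark-versus-non-landmark classifier described above consumes feature vectors built from exactly these kinds of per-pixel quantities.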
Affiliation(s)
- Rashmi S: Dept. of Computer Science and Engineering, Sri Jayachamarajendra College of Engineering, JSS Science and Technology University, Mysuru 570006, India
- Srinath S: Dept. of Computer Science and Engineering, Sri Jayachamarajendra College of Engineering, JSS Science and Technology University, Mysuru 570006, India
- Prashanth S Murthy: Dept. of Pediatric & Preventive Dentistry, JSS Dental College & Hospital, JSS Academy of Higher Education & Research, Mysuru 570015, India
- Seema Deshmukh: Dept. of Pediatric & Preventive Dentistry, JSS Dental College & Hospital, JSS Academy of Higher Education & Research, Mysuru 570015, India
4
Chen J, Che H, Sun J, Rao Y, Wu J. An automatic cephalometric landmark detection method based on heatmap regression and Monte Carlo dropout. Annu Int Conf IEEE Eng Med Biol Soc 2023; 2023:1-4. PMID: 38083204; DOI: 10.1109/embc40787.2023.10341102.
Abstract
Cephalometric analysis plays an important role in orthodontic diagnosis and treatment planning. It depends on the detection of multiple landmarks, a process that is time-consuming and tedious. Although some deep learning-based automatic landmark detection algorithms have achieved excellent performance, most adopt multi-stage models that increase complexity and detection time. Meanwhile, few studies have examined the uncertainty of detection results, ignoring its significant clinical value. In this paper, we propose a novel approach based on heatmap regression for landmark detection, which achieves competitive accuracy and good robustness in a single step. Furthermore, by applying Monte Carlo dropout to a U-shaped convolutional neural network, we obtain not only the coordinates of each landmark but also a simple corresponding uncertainty, so that doctors can pay more attention to landmarks with higher uncertainty. On the IEEE ISBI 2015 Test Dataset 1, the mean radial error was 1.39 ± 1.06 mm and the successful detection rates were 79.65% within 2 mm and 97.22% within 4 mm; on Test Dataset 2, the corresponding values were 1.33 ± 0.93 mm, 80.05%, and 97.53%. Our method has the potential to become an assistive tool in clinical practice: automatic and accurate detection with uncertainty analysis is expected to help guide the doctor's judgment.
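Monte Carlo dropout, as used here, keeps dropout active at inference and treats the spread of repeated stochastic predictions as an uncertainty estimate. A minimal sketch, with a toy stochastic "model" standing in for the U-shaped network (all names and numbers below are illustrative, not from the paper):

```python
import random
import statistics

def mc_dropout_predict(stochastic_forward, n_passes=50, seed=0):
    """Run a stochastic model (dropout left ON at inference) n_passes times;
    return the mean prediction and its standard deviation, the latter serving
    as a simple per-landmark uncertainty."""
    rng = random.Random(seed)
    samples = [stochastic_forward(rng) for _ in range(n_passes)]
    return statistics.fmean(samples), statistics.stdev(samples)

# Toy stand-in for a dropout network: a coordinate that jitters around 12.0 mm.
toy_forward = lambda rng: 12.0 + rng.gauss(0.0, 0.5)
mean, std = mc_dropout_predict(toy_forward)
```

Landmarks whose `std` comes out large are exactly the ones the abstract suggests flagging for closer human review.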
5
Geldenhuys DS, Josias S, Brink W, Makhubele M, Hui C, Landi P, Bingham J, Hargrove J, Hazelbag MC. Deep learning approaches to landmark detection in tsetse wing images. PLoS Comput Biol 2023; 19:e1011194. PMID: 37363914; PMCID: PMC10328335; DOI: 10.1371/journal.pcbi.1011194.
Abstract
Morphometric analysis of wings has been suggested for identifying and controlling isolated populations of tsetse (Glossina spp), vectors of human and animal trypanosomiasis in Africa. Single-wing images were captured from an extensive data set of field-collected tsetse wings of species Glossina pallidipes and G. m. morsitans. Morphometric analysis required locating 11 anatomical landmarks on each wing. The manual location of landmarks is time-consuming, prone to error, and infeasible for large data sets. We developed a two-tier method using deep learning architectures to classify images and make accurate landmark predictions. The first tier used a classification convolutional neural network to remove most wings that were missing landmarks. The second tier provided landmark coordinates for the remaining wings. We compared direct coordinate regression using a convolutional neural network and segmentation using a fully convolutional network for the second tier. For the resulting landmark predictions, we evaluate shape bias using Procrustes analysis. We pay particular attention to consistent labelling to improve model performance. For an image size of 1024 × 1280, data augmentation reduced the mean pixel distance error from 8.3 (95% confidence interval [4.4,10.3]) to 5.34 (95% confidence interval [3.0,7.0]) for the regression model. For the segmentation model, data augmentation did not alter the mean pixel distance error of 3.43 (95% confidence interval [1.9,4.4]). Segmentation had a higher computational complexity and some large outliers. Both models showed minimal shape bias. We deployed the regression model on the complete unannotated data consisting of 14,354 pairs of wing images since this model had a lower computational cost and more stable predictions than the segmentation model. The resulting landmark data set was provided for future morphometric analysis. 
The methods we have developed could provide a starting point to studying the wings of other insect species. All the code used in this study has been written in Python and open sourced.
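The mean pixel distance error used to compare the regression and segmentation models above is simply the average Euclidean distance between predicted and annotated landmark positions. A minimal sketch (landmark values invented for the example):

```python
import math

def mean_pixel_distance_error(pred, true):
    """Mean Euclidean distance (in pixels) between corresponding predicted
    and ground-truth landmark coordinates."""
    errs = [math.dist(p, t) for p, t in zip(pred, true)]
    return sum(errs) / len(errs)
```

With 11 landmarks per wing, this scalar per image (and its confidence interval over a test set) is what the reported 8.3 → 5.34 improvement for the regression model refers to.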
Affiliation(s)
- Dylan S. Geldenhuys: South African Department of Science and Innovation-National Research Foundation (DSI-NRF) South African Centre for Epidemiological Modelling and Analysis (SACEMA), Stellenbosch University, Stellenbosch, South Africa; Department of Mathematical Sciences, Stellenbosch University, Stellenbosch, South Africa
- Shane Josias: Department of Mathematical Sciences, Stellenbosch University, Stellenbosch, South Africa; School for Data Science and Computational Thinking, Stellenbosch University, Stellenbosch, South Africa
- Willie Brink: Department of Mathematical Sciences, Stellenbosch University, Stellenbosch, South Africa
- Mulanga Makhubele: Department of Mathematical Sciences, Stellenbosch University, Stellenbosch, South Africa
- Cang Hui: Department of Mathematical Sciences, Stellenbosch University, Stellenbosch, South Africa; Mathematical Biosciences Group, African Institute for Mathematical Sciences, Muizenberg, South Africa
- Pietro Landi: Department of Mathematical Sciences, Stellenbosch University, Stellenbosch, South Africa
- Jeremy Bingham: DSI-NRF SACEMA, Stellenbosch University, Stellenbosch, South Africa; Department of Mathematical Sciences, Stellenbosch University, Stellenbosch, South Africa
- John Hargrove: DSI-NRF SACEMA, Stellenbosch University, Stellenbosch, South Africa; Department of Mathematical Sciences, Stellenbosch University, Stellenbosch, South Africa
- Marijn C. Hazelbag: DSI-NRF SACEMA, Stellenbosch University, Stellenbosch, South Africa; ExploreAI (Pty) Ltd., Cape Town, South Africa
6
Suhail S, Harris K, Sinha G, Schmidt M, Durgekar S, Mehta S, Upadhyay M. Learning Cephalometric Landmarks for Diagnostic Features Using Regression Trees. Bioengineering (Basel) 2022; 9:617. PMID: 36354530; PMCID: PMC9687964; DOI: 10.3390/bioengineering9110617.
Abstract
Lateral cephalograms provide important information regarding dental, skeletal, and soft-tissue parameters that are critical for orthodontic diagnosis and treatment planning. Several machine learning methods have previously been used for the automated localization of diagnostically relevant landmarks on lateral cephalograms. In this study, we applied an ensemble of regression trees to solve this problem. We found that despite the limited size of manually labeled images, we can improve the performance of landmark detection by augmenting the training set using a battery of simple image transforms. We further demonstrated the calculation of second-order features encoding the relative locations of landmarks, which are diagnostically more important than individual landmarks.
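Augmenting a small labeled training set with simple image transforms, as described above, requires applying the same transform to the landmark annotations so image and labels stay consistent. A sketch of the coordinate side of such augmentation (the specific transforms and parameters here are illustrative, not the paper's battery):

```python
import math

def augment_landmarks(landmarks, angle_deg=0.0, flip_x=False, dx=0.0, dy=0.0):
    """Apply a simple transform (optional horizontal flip, rotation about the
    origin, then translation) to a list of (x, y) landmarks, mirroring what
    would be applied to the training image itself."""
    a = math.radians(angle_deg)
    out = []
    for x, y in landmarks:
        if flip_x:
            x = -x
        xr = x * math.cos(a) - y * math.sin(a) + dx
        yr = x * math.sin(a) + y * math.cos(a) + dy
        out.append((xr, yr))
    return out
```

Second-order features of the kind the study highlights (relative locations of landmark pairs) can then be computed on either the original or the augmented coordinate sets.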
Affiliation(s)
- Sameera Suhail: Department of Engineering Technologies, Swinburne University of Technology, Hawthorn, VIC 3122, Australia
- Gaurav Sinha: Departments of Computer Science & Statistics, University of British Columbia (Alumni), Vancouver, BC V6T 1Z4, Canada
- Maayan Schmidt: School of Dental Medicine, University of Connecticut Health, Farmington, CT 06030, USA
- Sujala Durgekar: Department of Orthodontics, KLES' Institute of Dental Sciences, Bangalore 560022, India
- Shivam Mehta: Department of Developmental Sciences/Orthodontics, Marquette University, Milwaukee, WI 53202, USA
- Madhur Upadhyay (corresponding author): Division of Orthodontics, University of Connecticut Health, Farmington, CT 06030, USA
7
Mitteroecker P, Schaefer K. Thirty years of geometric morphometrics: Achievements, challenges, and the ongoing quest for biological meaningfulness. Am J Biol Anthropol 2022; 178(Suppl 74):181-210. PMID: 36790612; PMCID: PMC9545184; DOI: 10.1002/ajpa.24531.
Abstract
The foundations of geometric morphometrics were worked out about 30 years ago and have continually been refined and extended. What has remained as a central thrust and source of debate in the morphometrics community is the shared goal of meaningful biological inference through a tight connection between biological theory, measurement, multivariate biostatistics, and geometry. Here we review the building blocks of modern geometric morphometrics: the representation of organismal geometry by landmarks and semilandmarks, the computation of shape or form variables via superimposition, the visualization of statistical results as actual shapes or forms, the decomposition of shape variation into symmetric and asymmetric components and into different spatial scales, the interpretation of various geometries in shape or form space, and models of the association between shape or form and other variables, such as environmental, genetic, or behavioral data. We focus on recent developments and current methodological challenges, especially those arising from the increasing number of landmarks and semilandmarks, and emphasize the importance of thorough exploratory multivariate analyses rather than single scalar summary statistics. We outline promising directions for further research and for the evaluation of new developments, such as "landmark-free" approaches. To illustrate these methods, we analyze three-dimensional human face shape based on data from the Avon Longitudinal Study of Parents and Children (ALSPAC).
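The superimposition step named above (Procrustes alignment) removes translation, scale, and rotation before shapes are compared. A compact NumPy sketch of the textbook procedure, using the SVD-based (Kabsch) rotation fit; this illustrates the standard method, not code from the review:

```python
import numpy as np

def procrustes_align(X, Y):
    """Superimpose landmark configuration Y onto X: remove translation
    (centering), scale (unit centroid size), and rotation (SVD/Kabsch fit).
    Returns the aligned copy of Y and its Procrustes distance to X."""
    Xc = X - X.mean(axis=0)          # center both configurations
    Yc = Y - Y.mean(axis=0)
    Xc = Xc / np.linalg.norm(Xc)     # scale to unit centroid size
    Yc = Yc / np.linalg.norm(Yc)
    U, _, Vt = np.linalg.svd(Yc.T @ Xc)
    R = U @ Vt                        # rotation minimizing ||Yc R - Xc||
    Y_aligned = Yc @ R
    return Y_aligned, np.linalg.norm(Xc - Y_aligned)
```

After this step, the residual coordinates are the "shape variables" on which the multivariate analyses discussed in the review operate (a full analysis would align all configurations to an iteratively updated mean shape, and may guard against reflections).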
Affiliation(s)
- Philipp Mitteroecker: Department of Evolutionary Biology, Unit for Theoretical Biology, University of Vienna, Vienna, Austria
- Katrin Schaefer: Department of Evolutionary Anthropology, University of Vienna, Vienna, Austria; Human Evolution and Archaeological Sciences (HEAS), University of Vienna, Vienna, Austria
8
Hong M, Kim I, Cho JH, Kang KH, Kim M, Kim SJ, Kim YJ, Sung SJ, Kim YH, Lim SH, Kim N, Baek SH. Accuracy of artificial intelligence-assisted landmark identification in serial lateral cephalograms of Class III patients who underwent orthodontic treatment and two-jaw orthognathic surgery. Korean J Orthod 2022; 52:287-297. PMID: 35719042; PMCID: PMC9314217; DOI: 10.4041/kjod21.248.
Abstract
Objective To investigate the pattern of accuracy change in artificial intelligence-assisted landmark identification (LI) using a convolutional neural network (CNN) algorithm in serial lateral cephalograms (Lat-cephs) of Class III (C-III) patients who underwent two-jaw orthognathic surgery. Methods A total of 3,188 Lat-cephs of C-III patients were allocated into the training and validation sets (3,004 Lat-cephs of 751 patients) and the test set (184 Lat-cephs of 46 patients, subdivided into genioplasty and non-genioplasty groups, n = 23 per group) for LI. Each C-III patient in the test set had four Lat-cephs: initial (T0); pre-surgery (T1, presence of orthodontic brackets [OBs]); post-surgery (T2, presence of OBs and surgical plates and screws [S-PS]); and debonding (T3, presence of S-PS and fixed retainers [FR]). After the mean errors of 20 landmarks between the human gold standard and the CNN model were calculated, statistical analysis was performed. Results The total mean error was 1.17 mm, without significant difference among the four time-points (T0, 1.20 mm; T1, 1.14 mm; T2, 1.18 mm; T3, 1.15 mm). In the comparison of time-points ([T0, T1] vs. [T2, T3]), ANS, A point, and B point showed an increase in error (p < 0.01, 0.05, and 0.01, respectively), while Mx6D and Md6D showed a decrease in error (all p < 0.01). No difference in errors existed at B point, Pogonion, Menton, Md1C, and Md1R between the genioplasty and non-genioplasty groups. Conclusions The CNN model can be used for LI in serial Lat-cephs despite the presence of OBs, S-PS, FR, genioplasty, and bone remodeling.
Affiliation(s)
- Mihee Hong: Department of Orthodontics, School of Dentistry, Dental Research Institute, Seoul National University, Seoul, Korea; Department of Orthodontics, School of Dentistry, Kyungpook National University, Daegu, Korea
- Inhwan Kim: Department of Convergence Medicine, Asan Medical Institute of Convergence Science and Technology, Asan Medical Center, University of Ulsan College of Medicine, Seoul, Korea
- Jin-Hyoung Cho: Department of Orthodontics, Chonnam National University School of Dentistry, Gwangju, Korea
- Kyung-Hwa Kang: Department of Orthodontics, School of Dentistry, Wonkwang University, Iksan, Korea
- Minji Kim: Department of Orthodontics, College of Medicine, Ewha Womans University, Seoul, Korea
- Su-Jung Kim: Department of Orthodontics, Kyung Hee University School of Dentistry, Seoul, Korea
- Yoon-Ji Kim: Department of Orthodontics, Asan Medical Center, University of Ulsan College of Medicine, Seoul, Korea
- Sang-Jin Sung: Department of Orthodontics, Asan Medical Center, University of Ulsan College of Medicine, Seoul, Korea
- Young Ho Kim: Department of Orthodontics, Institute of Oral Health Science, Ajou University School of Medicine, Suwon, Korea
- Sung-Hoon Lim: Department of Orthodontics, College of Dentistry, Chosun University, Gwangju, Korea
- Namkug Kim: Department of Convergence Medicine, Asan Medical Center, University of Ulsan College of Medicine, Seoul, Korea
- Seung-Hak Baek: Department of Orthodontics, School of Dentistry, Dental Research Institute, Seoul National University, Seoul, Korea
10
Noshita K, Murata H, Kirie S. Model-based plant phenomics on morphological traits using morphometric descriptors. Breed Sci 2022; 72:19-30. PMID: 36045892; PMCID: PMC8987841; DOI: 10.1270/jsbbs.21078.
Abstract
The morphological traits of plants contribute to many important functional features such as radiation interception, lodging tolerance, gas exchange efficiency, spatial competition between individuals and/or species, and disease resistance. Although the importance of plant phenotyping techniques is increasing with advances in molecular breeding strategies, there are barriers to its advancement, including the gap between measured data and phenotypic values, low quantitativity, and low throughput caused by the lack of models for representing morphological traits. In this review, we introduce morphological descriptors that can be used for phenotyping plant morphological traits. Geometric morphometric approaches pave the way to a general-purpose method applicable to single units. Hierarchical structures composed of an indefinite number of multiple elements, which is often observed in plants, can be quantified in terms of their multi-scale topological characteristics using topological data analysis. Theoretical morphological models capture specific anatomical structures, if recognized. These morphological descriptors provide us with the advantages of model-based plant phenotyping, including robust quantification of limited datasets. Moreover, we discuss the future possibilities that a system of model-based measurement and model refinement would solve the lack of morphological models and the difficulties in scaling out the phenotyping processes.
Affiliation(s)
- Koji Noshita: Department of Biology, Kyushu University, Fukuoka, Fukuoka 819-0395, Japan; Plant Frontier Research Center, Kyushu University, Fukuoka, Fukuoka 819-0395, Japan
- Hidekazu Murata: Department of Biology, Kyushu University, Fukuoka, Fukuoka 819-0395, Japan
- Shiryu Kirie: metaPhorest (Bioaesthetics Platform), Department of Electrical Engineering and Bioscience, Waseda University, TWIns, Tokyo 162-8480, Japan
11
Three-Dimensional Human Head Reconstruction Using Smartphone-Based Close-Range Video Photogrammetry. Appl Sci (Basel) 2021. DOI: 10.3390/app12010229.
Abstract
Creation of 3D head models from videos or pictures of the head using close-range photogrammetry techniques has many applications in clinical, commercial, industrial, artistic, and entertainment areas. This work aims to create a methodology for improving 3D head reconstruction, with a focus on using selfie videos as the data source, and uses that methodology to propose changes to a general-purpose 3D reconstruction algorithm. We define improvement of 3D head reconstruction as an increase in reconstruction quality (lower reconstruction errors of the head and less semantic noise) together with a reduction in computational load. The proposed algorithm improvements increase reconstruction quality by removing image backgrounds and by selecting diverse, high-quality frames. The modifications were evaluated on videos of a mannequin head; the results show that the baseline reconstruction is improved 12-fold owing to the reduction of semantic noise and head reconstruction errors. The computational demand was reduced by lowering the number of frames to process, the number of image matches to perform, and the average number of feature points per image, while still providing the highest precision of head reconstruction.
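Selecting "diverse and high-quality frames" implies a per-frame quality score. One common proxy, which is an assumption here rather than necessarily the authors' criterion, is the variance of a discrete Laplacian response: blurred frames have weak edges and thus low variance. A minimal sketch:

```python
def sharpness(img):
    """Variance of a 4-neighbour discrete Laplacian over the interior of a
    2D grayscale image (list of rows); low values indicate blur."""
    h, w = len(img), len(img[0])
    vals = []
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            lap = (img[y - 1][x] + img[y + 1][x] + img[y][x - 1]
                   + img[y][x + 1] - 4 * img[y][x])
            vals.append(lap)
    m = sum(vals) / len(vals)
    return sum((v - m) ** 2 for v in vals) / len(vals)

def select_frames(frames, k):
    """Keep the k sharpest frames (one simple notion of 'high quality';
    a diversity criterion, e.g. camera-pose spread, would be layered on top)."""
    return sorted(frames, key=sharpness, reverse=True)[:k]
```

Reducing the frame count this way directly targets both goals stated above: fewer frames to process and fewer image matches to perform.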
12
Bellin N, Calzolari M, Callegari E, Bonilauri P, Grisendi A, Dottori M, Rossi V. Geometric morphometrics and machine learning as tools for the identification of sibling mosquito species of the Maculipennis complex (Anopheles). Infect Genet Evol 2021; 95:105034. PMID: 34384936; DOI: 10.1016/j.meegid.2021.105034.
Abstract
Geometric morphometrics allows researchers to use specialized software to quantify and visualize morphological differences between taxa from insect wings. Our objective was to assess wing geometry to distinguish four Anopheles sibling species of the Maculipennis complex, An. maculipennis s. s., An. daciae sp. inq., An. atroparvus, and An. melanoon, found in Northern Italy. We combined the geometric morphometric approach with different machine learning algorithms: support vector machine (SVM), random forest (RF), artificial neural network (ANN), and an ensemble model (EN). Centroid size was smaller in An. atroparvus than in An. maculipennis s. s. and An. daciae sp. inq. Principal component analysis (PCA) explained only 33% of the total variance and was not very useful for discriminating among species, in particular between An. maculipennis s. s. and An. daciae sp. inq. The performance of the four machine learning algorithms using procrustes coordinates of wing shape as predictors was evaluated. All models showed ROC-AUC and PRC-AUC values higher than a random classifier, but the SVM algorithm maximized the most metrics on the test set. The SVM with a radial basis function correctly classified 83% of An. maculipennis s. s. and 79% of An. daciae sp. inq. ROC-AUC analysis showed that three landmarks, 11, 16, and 15, provided the most important procrustes coordinates in the mean wing shape comparison between An. maculipennis s. s. and An. daciae sp. inq. The pattern of these coordinates in three-dimensional space showed a clearer differentiation between the two species than the PCA. Our study demonstrates that machine learning algorithms, combined with the wing geometric morphometric approach, can be a useful identification tool.
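Centroid size, the size measure compared across species above, is the square root of the summed squared distances of the landmarks from their centroid, and is the standard size variable in geometric morphometrics. A minimal sketch (landmark values invented for the example):

```python
import math

def centroid_size(landmarks):
    """Centroid size of a 2D landmark configuration: sqrt of the summed
    squared distances of all landmarks from their centroid."""
    n = len(landmarks)
    cx = sum(x for x, _ in landmarks) / n
    cy = sum(y for _, y in landmarks) / n
    return math.sqrt(sum((x - cx) ** 2 + (y - cy) ** 2 for x, y in landmarks))
```

In the pipeline described above, this scalar captures size, while the Procrustes coordinates fed to the SVM and other classifiers capture the remaining shape information.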
Affiliation(s)
- Nicolò Bellin
- University of Parma, Department of Chemistry, Life Sciences and Environmental Sustainability, Parco Area delle Scienze, 11/A, 43124 Parma, Italy.
- Mattia Calzolari
- Istituto Zooprofilattico Sperimentale della Lombardia e dell'Emilia Romagna "B. Ubertini" (IZSLER), Brescia, Italy
- Emanuele Callegari
- Istituto Zooprofilattico Sperimentale della Lombardia e dell'Emilia Romagna "B. Ubertini" (IZSLER), Brescia, Italy
- Paolo Bonilauri
- Istituto Zooprofilattico Sperimentale della Lombardia e dell'Emilia Romagna "B. Ubertini" (IZSLER), Brescia, Italy
- Annalisa Grisendi
- Istituto Zooprofilattico Sperimentale della Lombardia e dell'Emilia Romagna "B. Ubertini" (IZSLER), Brescia, Italy
- Michele Dottori
- Istituto Zooprofilattico Sperimentale della Lombardia e dell'Emilia Romagna "B. Ubertini" (IZSLER), Brescia, Italy
- Valeria Rossi
- University of Parma, Department of Chemistry, Life Sciences and Environmental Sustainability, Parco Area delle Scienze, 11/A, 43124 Parma, Italy
13
Guo J, Mu Y, Xue D, Li H, Chen J, Yan H, Xu H, Wang W. Automatic analysis system of calcaneus radiograph: Rotation-invariant landmark detection for calcaneal angle measurement, fracture identification and fracture region segmentation. Comput Methods Programs Biomed 2021; 206:106124. [PMID: 34004502 DOI: 10.1016/j.cmpb.2021.106124] [Citation(s) in RCA: 6] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 07/29/2020] [Accepted: 04/19/2021] [Indexed: 06/12/2023]
Abstract
BACKGROUND AND OBJECTIVE The calcaneus is the largest tarsal bone and withstands the daily stresses of weight-bearing. Calcaneal fractures are the most common tarsal bone fractures. When a fracture is suspected, plain radiographs should be taken first. Bohler's Angle (BA) and the Critical Angle of Gissane (CAG), measured from four anatomic landmarks on the lateral foot radiograph, can guide fracture diagnosis and facilitate operative recovery of the fractured calcaneus. This study aims to develop an analysis system that can automatically locate the four anatomic landmarks, measure BA and CAG for fracture assessment, identify a fractured calcaneus, and segment fractured regions. METHODS For landmark detection, we proposed a coarse-to-fine Rotation-Invariant Regression-Voting (RIRV) method based on a regressive Multi-Layer Perceptron (MLP) and the Scale Invariant Feature Transform (SIFT) patch descriptor, which addresses the variable rotation of the calcaneus. By implementing a novel normalization approach, the RIRV method is explicitly rotation-invariant, unlike traditional regression methods. For fracture identification and segmentation, a convolutional neural network (CNN) based on U-Net with an auxiliary classification head (U-Net-CH) was designed. The input ROIs of the CNN are normalized by the detected landmarks to a uniform view, orientation, and scale. The advantage of this approach is multi-task learning that combines classification and segmentation. RESULTS Our system measures BA and CAG with mean angle errors of 3.8° and 6.2°, respectively. For fracture identification and fracture region segmentation, it performs well, with an F1-score of 96.55%, recall of 94.99%, and segmentation IoU-score of 0.586. CONCLUSION A calcaneal radiograph analysis system covering anatomical angle measurement, fracture identification, and fracture segmentation can be built. The proposed system can help orthopedists improve the efficiency and accuracy of calcaneus fracture diagnosis.
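Once the four landmarks are located, BA and CAG reduce to angles between lines defined by landmark pairs. A minimal sketch of that final measurement step (hypothetical coordinates; the actual landmark pairs follow the clinical definitions of the two angles):

```python
import math

def angle_deg(p1, p2, p3, p4):
    """Angle in degrees between the line p1->p2 and the line p3->p4.

    Each point is an (x, y) pair, e.g. a detected landmark position
    in radiograph pixel coordinates.
    """
    v1 = (p2[0] - p1[0], p2[1] - p1[1])
    v2 = (p4[0] - p3[0], p4[1] - p3[1])
    dot = v1[0] * v2[0] + v1[1] * v2[1]
    cos_t = dot / (math.hypot(*v1) * math.hypot(*v2))
    # Clamp to [-1, 1] to guard against floating-point drift.
    return math.degrees(math.acos(max(-1.0, min(1.0, cos_t))))
```

With the landmark coordinates fixed by the detector, the reported mean angle errors translate directly into landmark localization errors along these lines.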
Affiliation(s)
- Jia Guo
- Beijing Institute of Technology, Beijing 100081, China
- Yuxuan Mu
- Beijing Institute of Technology, Beijing 100081, China
- Dong Xue
- The First Affiliated Hospital of Jinzhou Medical University, Jinzhou 121001, China
- Huiqi Li
- Beijing Institute of Technology, Beijing 100081, China
- Junxian Chen
- Beijing Institute of Technology, Beijing 100081, China
- Huanxin Yan
- Zhejiang University of Science & Technology, Zhejiang 310032, China
- Hailin Xu
- Peking University People's Hospital, Beijing 100044, China
- Wei Wang
- The First Affiliated Hospital of Jinzhou Medical University, Jinzhou 121001, China
14
Kim J, Kim I, Kim YJ, Kim M, Cho JH, Hong M, Kang KH, Lim SH, Kim SJ, Kim YH, Kim N, Sung SJ, Baek SH. Accuracy of automated identification of lateral cephalometric landmarks using cascade convolutional neural networks on lateral cephalograms from nationwide multi-centres. Orthod Craniofac Res 2021; 24 Suppl 2:59-67. [PMID: 33973341 DOI: 10.1111/ocr.12493] [Citation(s) in RCA: 15] [Impact Index Per Article: 5.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/25/2021] [Revised: 04/16/2021] [Accepted: 04/27/2021] [Indexed: 11/28/2022]
Abstract
OBJECTIVE To investigate the accuracy of automated identification of cephalometric landmarks using cascade convolutional neural networks (CNNs) on lateral cephalograms acquired from nationwide multi-centres. SETTINGS AND SAMPLE POPULATION A total of 3150 lateral cephalograms were acquired from 10 university hospitals in South Korea for training. MATERIALS AND METHODS We evaluated the accuracy of the developed model on an independent set of 100 lateral cephalograms as an external validation. Two orthodontists independently identified the anatomic landmarks of the test data set using the V-ceph software (version 8.0, Osstem, Seoul, Korea). The mean positions of the landmarks identified by the two orthodontists were regarded as the gold standard. The performance of the CNN model was evaluated by calculating the mean absolute distance between the gold standard and the automatically detected positions. Factors associated with landmark detection accuracy were analysed using linear regression models. RESULTS The mean inter-examiner difference was 1.31 ± 1.13 mm. The overall automated detection error was 1.36 ± 0.98 mm. The mean detection error for each landmark ranged from 0.46 ± 0.37 mm (maxillary incisor crown tip) to 2.09 ± 1.91 mm (distal root tip of the mandibular first molar). Detection accuracy differed significantly among cephalograms according to hospital (P = .011), sensor type (P < .01), and cephalography machine model (P < .01). CONCLUSION The automated cephalometric landmark detection model may aid in preliminary screening for patient diagnosis and mid-treatment assessment, independent of the type of radiography machine tested.
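The evaluation protocol described above, a gold standard taken as the mean of two examiners' positions and a mean absolute distance to the detected positions, can be sketched as follows (illustration only, not the study's code; coordinates assumed to be in mm):

```python
import numpy as np

def gold_standard(obs_a, obs_b):
    """Gold standard: the mean position of two examiners' landmark picks."""
    return (np.asarray(obs_a, float) + np.asarray(obs_b, float)) / 2.0

def mean_radial_error(pred, gold):
    """Mean and SD of Euclidean distances (mm) between predicted and
    gold-standard landmark positions; both arrays are (n_landmarks, 2)."""
    d = np.linalg.norm(np.asarray(pred, float) - np.asarray(gold, float), axis=1)
    return float(d.mean()), float(d.std())
```

The same distance computed between the two examiners' own picks gives the inter-examiner difference that the automated error is benchmarked against.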
Affiliation(s)
- Jaerong Kim
- Department of Orthodontics, Asan Medical Center, University of Ulsan College of Medicine, Seoul, Korea
- Inhwan Kim
- Department of Convergence Medicine, Asan Medical Institute of Convergence Science and Technology, Asan Medical Center, University of Ulsan College of Medicine, Seoul, Korea
- Yoon-Ji Kim
- Department of Orthodontics, Asan Medical Center, University of Ulsan College of Medicine, Seoul, Korea
- Minji Kim
- Department of Orthodontics, College of Medicine, Ewha Woman's University, Seoul, Korea
- Jin-Hyoung Cho
- Department of Orthodontics, Chonnam National University School of Dentistry, Gwangju, Korea
- Mihee Hong
- Department of Orthodontics, School of Dentistry, Kyungpook National University, Daegu, Korea
- Kyung-Hwa Kang
- Department of Orthodontics, School of Dentistry, Wonkwang University, Iksan, Korea
- Sung-Hoon Lim
- Department of Orthodontics, College of Dentistry, Chosun University, Gwangju, Korea
- Su-Jung Kim
- Department of Orthodontics, Kyung Hee University School of Dentistry, Seoul, Korea
- Young Ho Kim
- Department of Orthodontics, Institute of Oral Health Science, Ajou University School of Medicine, Suwon, Korea
- Namkug Kim
- Department of Convergence Medicine, Asan Medical Institute of Convergence Science and Technology, Asan Medical Center, University of Ulsan College of Medicine, Seoul, Korea
- Sang-Jin Sung
- Department of Orthodontics, Asan Medical Center, University of Ulsan College of Medicine, Seoul, Korea
- Seung-Hak Baek
- Department of Orthodontics, School of Dentistry, Dental Research Institute, Seoul National University, Seoul, Korea
15
Le VL, Beurton-Aimar M, Zemmari A, Marie A, Parisey N. Automated landmarking for insects morphometric analysis using deep neural networks. ECOL INFORM 2020. [DOI: 10.1016/j.ecoinf.2020.101175] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/29/2022]
16
Kim H, Shim E, Park J, Kim YJ, Lee U, Kim Y. Web-based fully automated cephalometric analysis by deep learning. Comput Methods Programs Biomed 2020; 194:105513. [PMID: 32403052 DOI: 10.1016/j.cmpb.2020.105513] [Citation(s) in RCA: 49] [Impact Index Per Article: 12.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 11/07/2019] [Revised: 04/17/2020] [Accepted: 04/18/2020] [Indexed: 06/11/2023]
Abstract
BACKGROUND AND OBJECTIVE An accurate lateral cephalometric analysis is vital in orthodontic diagnosis. Identification of anatomic landmarks on lateral cephalograms is tedious, and errors may occur depending on the doctor's experience. Several attempts have been made to automate this time-consuming process through machine learning; however, they dealt only with small amounts of data from a single institute. This study aims to develop a fully automated cephalometric analysis method using deep learning and a corresponding web-based application that can be used without high-specification hardware. METHODS We built our own dataset comprising 2,075 lateral cephalograms and ground truth positions of 23 landmarks from two institutes and trained a two-stage automated algorithm with a stacked hourglass deep learning model specialized for detecting landmarks in images. Additionally, a web-based application with the proposed algorithm for fully automated cephalometric analysis was developed for better accessibility regardless of the user's computer hardware, which is essential for a deep learning-based method. RESULTS The algorithm was evaluated with datasets from various devices and institutes, including a widely used open dataset, and achieved a point-to-point error of 1.37 ± 1.79 mm against ground truth positions for the 23 cephalometric landmarks. Based on the predicted positions, anatomical types of the subjects were automatically classified and compared with the ground truth, and the automated algorithm achieved a successful classification rate of 88.43%. CONCLUSIONS We expect that this fully automated cephalometric analysis algorithm and the web-based application can be widely used in various medical environments to save the time and effort of manual marking and diagnosis.
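A stacked hourglass model of the kind used here predicts one heatmap per landmark, and the landmark coordinate is read off as the heatmap's peak. A minimal decoding sketch (an assumption about the standard decoding step, not the authors' implementation):

```python
import numpy as np

def decode_heatmaps(heatmaps):
    """Turn a (n_landmarks, H, W) stack of predicted heatmaps into
    (n_landmarks, 2) integer (x, y) pixel coordinates by taking the
    per-channel argmax."""
    n, h, w = heatmaps.shape
    flat = heatmaps.reshape(n, -1).argmax(axis=1)   # peak index per channel
    ys, xs = np.unravel_index(flat, (h, w))         # back to row/column
    return np.stack([xs, ys], axis=1)
```

Point-to-point error is then the Euclidean distance between these decoded coordinates and the annotated ground truth, after converting pixels to millimetres.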
Affiliation(s)
- Hannah Kim
- Center for Bionics, Korea Institute of Science and Technology, 5, Hwarang-ro 14-gil, Seongbuk-gu, Seoul, 02792, Republic of Korea; Division of Bio-Medical Science & Technology, KIST School, Korea University of Science and Technology, 5, Hwarang-ro 14-gil, Seongbuk-gu, Seoul, 02792, Republic of Korea.
- Eungjune Shim
- Center for Bionics, Korea Institute of Science and Technology, 5, Hwarang-ro 14-gil, Seongbuk-gu, Seoul, 02792, Republic of Korea
- Jungeun Park
- Department of Orthodontics, Graduate School, Yonsei University College of Dentistry, 50-1, Yonseiro, Seodaemun-gu, Seoul, 03722, Republic of Korea
- Yoon-Ji Kim
- Department of Orthodontics, Korea University Anam Hospital, 73 Inchon-ro, Seongbuk-gu, Seoul, 02841, Republic of Korea
- Uilyong Lee
- Department of Oral and Maxillofacial Surgery, Chungang University Hospital, 102, Heukseok-ro, Dongjak-gu, Seoul, 06973, Republic of Korea; Tooth Bioengineering National Research Laboratory, BK21, School of Dentistry, Seoul National University, Daehak-ro 101, Jongno-gu, Seoul, 03080, Republic of Korea
- Youngjun Kim
- Center for Bionics, Korea Institute of Science and Technology, 5, Hwarang-ro 14-gil, Seongbuk-gu, Seoul, 02792, Republic of Korea; Division of Bio-Medical Science & Technology, KIST School, Korea University of Science and Technology, 5, Hwarang-ro 14-gil, Seongbuk-gu, Seoul, 02792, Republic of Korea
17
Dot G, Rafflenbeul F, Arbotto M, Gajny L, Rouch P, Schouman T. Accuracy and reliability of automatic three-dimensional cephalometric landmarking. Int J Oral Maxillofac Surg 2020; 49:1367-1378. [DOI: 10.1016/j.ijom.2020.02.015] [Citation(s) in RCA: 15] [Impact Index Per Article: 3.8] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/06/2019] [Revised: 11/28/2019] [Accepted: 02/24/2020] [Indexed: 10/24/2022]
18
Rubens U, Mormont R, Paavolainen L, Bäcker V, Pavie B, Scholz LA, Michiels G, Maška M, Ünay D, Ball G, Hoyoux R, Vandaele R, Golani O, Stanciu SG, Sladoje N, Paul-Gilloteaux P, Marée R, Tosi S. BIAFLOWS: A Collaborative Framework to Reproducibly Deploy and Benchmark Bioimage Analysis Workflows. Patterns (N Y) 2020; 1:100040. [PMID: 33205108 PMCID: PMC7660398 DOI: 10.1016/j.patter.2020.100040] [Citation(s) in RCA: 15] [Impact Index Per Article: 3.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 02/14/2020] [Revised: 04/04/2020] [Accepted: 04/27/2020] [Indexed: 01/26/2023]
Abstract
Image analysis is key to extracting quantitative information from scientific microscopy images, but the methods involved are now often so refined that they can no longer be unambiguously described by written protocols. We introduce BIAFLOWS, an open-source web tool for reproducibly deploying and benchmarking bioimage analysis workflows from any software ecosystem. A curated instance of BIAFLOWS populated with 34 image analysis workflows and 15 microscopy image datasets recapitulating common bioimage analysis problems is available online. The workflows can be launched and assessed remotely by comparing their performance visually and according to standard benchmark metrics. We illustrate these features by comparing seven nuclei segmentation workflows, including deep learning methods. BIAFLOWS makes it possible to benchmark and share bioimage analysis workflows, hence safeguarding research results and promoting high-quality standards in image analysis. The platform is thoroughly documented and ready to gather annotated microscopy datasets and workflows contributed by the bioimaging community.
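Among the standard benchmark metrics used to compare segmentation workflows such as the nuclei segmentation example above is intersection-over-union. A minimal sketch (illustrative only, not BIAFLOWS code):

```python
import numpy as np

def iou(mask_a, mask_b):
    """Intersection-over-union of two boolean segmentation masks."""
    a = np.asarray(mask_a, bool)
    b = np.asarray(mask_b, bool)
    union = np.logical_or(a, b).sum()
    if union == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return float(np.logical_and(a, b).sum() / union)
```

Running the same metric over every workflow's output against a shared annotated dataset is what makes the remote, side-by-side benchmarking reproducible.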
Affiliation(s)
- Ulysse Rubens
- Montefiore Institute, University of Liège, 4000 Liège, Belgium
- Romain Mormont
- Montefiore Institute, University of Liège, 4000 Liège, Belgium
- Volker Bäcker
- MRI, BioCampus Montpellier, Montpellier 34094, France
- Devrim Ünay
- Faculty of Engineering, İzmir Demokrasi University, 35330 Balçova, Turkey
- Graeme Ball
- Dundee Imaging Facility, School of Life Sciences, University of Dundee, Dundee DD1 5EH, UK
- Rémy Vandaele
- Montefiore Institute, University of Liège, 4000 Liège, Belgium
- Ofra Golani
- Life Sciences Core Facilities, Weizmann Institute of Science, Rehovot 7610001, Israel
- Natasa Sladoje
- Uppsala University, P.O. Box 256, 751 05 Uppsala, Sweden
- Perrine Paul-Gilloteaux
- Structure Fédérative de Recherche François Bonamy, Université de Nantes, CNRS, INSERM, Nantes Cedex 1 13522 44035, France
- Raphaël Marée
- Montefiore Institute, University of Liège, 4000 Liège, Belgium
- Sébastien Tosi
- Institute for Research in Biomedicine, IRB Barcelona, Barcelona Institute of Science and Technology, BIST, 08028 Barcelona, Spain
19
Porto A, Voje KL. ML‐morph: A fast, accurate and general approach for automated detection and landmarking of biological structures in images. Methods Ecol Evol 2020. [DOI: 10.1111/2041-210x.13373] [Citation(s) in RCA: 18] [Impact Index Per Article: 4.5] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/28/2023]
Affiliation(s)
- Arthur Porto
- Centre for Ecological and Evolutionary Synthesis, University of Oslo, Oslo, Norway
- Kjetil L. Voje
- Centre for Ecological and Evolutionary Synthesis, University of Oslo, Oslo, Norway
20
Automatic vocal tract landmark localization from midsagittal MRI data. Sci Rep 2020; 10:1468. [PMID: 32001739 PMCID: PMC6992757 DOI: 10.1038/s41598-020-58103-6] [Citation(s) in RCA: 9] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/03/2019] [Accepted: 01/09/2020] [Indexed: 11/29/2022] Open
Abstract
The various speech sounds of a language are produced by varying the shape and position of the articulators surrounding the vocal tract. Analyzing their variations is crucial for understanding speech production, diagnosing speech disorders and planning therapy. Identifying key anatomical landmarks of these structures on medical images is a prerequisite for any quantitative analysis, and the rising amount of data generated in the field calls for an automatic solution. The challenge lies in the high inter- and intra-speaker variability, the mutual interaction between the articulators and the moderate quality of the images. This study addresses the issue for the first time by means of deep learning. It proposes a dedicated network architecture named Flat-net, whose performance is evaluated and compared with eleven state-of-the-art methods from the literature. The dataset contains midsagittal anatomical magnetic resonance images for 9 speakers sustaining 62 articulations, with 21 annotated anatomical landmarks per image. Results show that the Flat-net approach outperforms the former methods, leading to an overall root mean square error of 3.6 pixels/0.36 cm obtained in a leave-one-out procedure over the speakers. The implementation code is shared publicly on GitHub.
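The overall error figure comes from pooling point-to-point errors across leave-one-out folds, one fold per held-out speaker. A minimal sketch of that aggregation (illustrative, not the released code):

```python
import math

def pooled_rmse(fold_errors):
    """Overall RMSE pooled across leave-one-out folds.

    `fold_errors` is a list of lists, each holding the per-landmark
    point-to-point errors measured on one held-out speaker.
    """
    sq = [e * e for fold in fold_errors for e in fold]
    return math.sqrt(sum(sq) / len(sq))
```

Pooling squared errors before taking the root weights every prediction equally, rather than averaging per-speaker RMSEs, which would weight small and large folds the same.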
21
Teo BG, Dhillon SK. An automated 3D modeling pipeline for constructing 3D models of MONOGENEAN HARDPART using machine learning techniques. BMC Bioinformatics 2019; 20:658. [PMID: 31870297 PMCID: PMC6929343 DOI: 10.1186/s12859-019-3210-x] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.2] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/23/2019] [Accepted: 11/12/2019] [Indexed: 11/23/2022] Open
Abstract
BACKGROUND Studying the structural and functional morphology of small organisms such as monogeneans is difficult due to the lack of visualization in three dimensions. One way to resolve this issue is to create digital 3D models, which may aid researchers in studying the morphology and function of monogeneans. However, developing 3D models is tedious, as the entire complicated modelling process must be repeated for every new target 3D shape in comprehensive 3D modelling software. This study was designed to develop an alternative 3D modelling approach for building 3D models of monogenean anchors, which can be used to understand these morphological structures in three dimensions, while avoiding repetition of the tedious modelling procedure for every single target 3D model from scratch. RESULT An automated 3D modelling pipeline driven by an Artificial Neural Network (ANN) was developed. The pipeline enables automated deformation of a generic 3D model of a monogenean anchor into another target 3D anchor. It automated the generation of 8 target 3D models of monogenean anchors (representing 8 species: Dactylogyrus primaries, Pellucidhaptor merus, Dactylogyrus falcatus, Dactylogyrus vastator, Dactylogyrus pterocleidus, Dactylogyrus falciunguis, Chauhanellus auriculatum and Chauhanellus caelatus) from the respective 2D illustration inputs without repeating the tedious modelling procedure. CONCLUSIONS Despite some constraints and limitations, the automated 3D modelling pipeline developed in this study demonstrates a working application of a machine learning approach to 3D modelling. The study has not only developed an automated 3D modelling pipeline but also demonstrated a cross-disciplinary research design that integrates machine learning into a specific domain of study such as 3D modelling of biological structures.
Affiliation(s)
- Bee Guan Teo
- School of Engineering, Monash University Malaysia, Kuala Lumpur, Malaysia
- Data Science and Bioinformatics Laboratory, Institute of Biological Sciences, Faculty of Science, University of Malaya, Kuala Lumpur, Malaysia
- Sarinder Kaur Dhillon
- Data Science and Bioinformatics Laboratory, Institute of Biological Sciences, Faculty of Science, University of Malaya, Kuala Lumpur, Malaysia
22
Automatic Analysis of Lateral Cephalograms Based on Multiresolution Decision Tree Regression Voting. J Healthc Eng 2018; 2018:1797502. [PMID: 30581546 PMCID: PMC6276415 DOI: 10.1155/2018/1797502] [Citation(s) in RCA: 26] [Impact Index Per Article: 4.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 06/22/2018] [Accepted: 10/22/2018] [Indexed: 11/20/2022]
Abstract
Cephalometric analysis is a standard tool for assessment and prediction of craniofacial growth, orthodontic diagnosis, and oral-maxillofacial treatment planning. The aim of this study is to develop a fully automatic system for cephalometric analysis, including cephalometric landmark detection and cephalometric measurement on lateral cephalograms, for malformation classification and assessment of dental growth and soft tissue profile. First, a novel method of multiscale decision tree regression voting using SIFT-based patch features is proposed for automatic landmark detection in lateral cephalometric radiographs. Then, clinical measurements are calculated from the detected landmark positions. Finally, two databases are tested: the benchmark database of 300 lateral cephalograms from the 2015 ISBI Challenge, and our own database of 165 lateral cephalograms. Experimental results show that the performance of the proposed method is satisfactory for landmark detection and measurement analysis in lateral cephalograms.
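In regression voting of this kind, each sampled patch predicts an offset from its own position to the landmark, and the votes are accumulated so the landmark estimate is the accumulator's peak. A minimal sketch (illustrative; the study's method additionally uses multiscale SIFT features and decision tree regressors to produce the offsets):

```python
import numpy as np

def regression_vote(patch_centers, offsets, image_shape):
    """Accumulate landmark votes: each sampled patch casts one vote at
    its centre plus the regressor's predicted (dy, dx) offset; the
    landmark estimate is the accumulator's peak (row, col)."""
    acc = np.zeros(image_shape, dtype=int)
    for (cy, cx), (dy, dx) in zip(patch_centers, offsets):
        y, x = int(round(cy + dy)), int(round(cx + dx))
        if 0 <= y < image_shape[0] and 0 <= x < image_shape[1]:
            acc[y, x] += 1
    return np.unravel_index(acc.argmax(), image_shape)
```

Because many patches vote independently, a few bad offset predictions are outvoted, which is what makes the scheme robust to local appearance noise.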