1. dos Santos PV, Scoczynski Ribeiro Martins M, Amorim Nogueira S, Gonçalves C, Maffei Loureiro R, Pacheco Calixto W. Unsupervised model for structure segmentation applied to brain computed tomography. PLoS One 2024; 19:e0304017. [PMID: 38870119] [PMCID: PMC11175403] [DOI: 10.1371/journal.pone.0304017]
Abstract
This article presents an unsupervised method for segmenting brain computed tomography scans. The proposed methodology involves image feature extraction and the application of similarity and continuity constraints to generate segmentation maps of the anatomical head structures. Designed for real-world datasets, the approach applies a spatial continuity scoring function tailored to the desired number of structures. The primary objective is to assist medical experts in diagnosis by identifying regions with specific abnormalities. Results indicate a simplified and accessible solution that reduces computational effort, training time, and financial cost. Moreover, the method shows potential for expediting the interpretation of abnormal scans, thereby impacting clinical practice. The proposed approach may serve as a practical tool for segmenting brain computed tomography scans and could make a significant contribution to the analysis of medical images in both research and clinical settings.
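The abstract does not spell out the scoring function, but the general recipe (per-pixel features, clustering, and a spatial continuity score computed for the desired number of structures) can be sketched. This is a minimal illustration with assumed ingredients (raw intensity features, K-means, a neighbor-agreement score), not the authors' implementation:

```python
# Minimal sketch of unsupervised segmentation with a spatial-continuity score.
# Intensity-only features, KMeans, and the neighbor-agreement score below are
# illustrative assumptions; the paper's exact formulation is not given here.
import numpy as np
from sklearn.cluster import KMeans

def continuity_score(labels: np.ndarray) -> float:
    """Fraction of 4-neighbor pixel pairs sharing the same cluster label."""
    same_h = labels[:, :-1] == labels[:, 1:]
    same_v = labels[:-1, :] == labels[1:, :]
    return (same_h.sum() + same_v.sum()) / (same_h.size + same_v.size)

def segment(ct_slice: np.ndarray, n_structures: int) -> np.ndarray:
    """Cluster voxel intensities and return a label map for one CT slice."""
    feats = ct_slice.reshape(-1, 1).astype(np.float64)
    labels = KMeans(n_clusters=n_structures, n_init=10).fit_predict(feats)
    return labels.reshape(ct_slice.shape)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    fake_ct = rng.normal(40.0, 10.0, (64, 64))  # stand-in for an HU slice
    fake_ct[16:48, 16:48] += 60.0               # one bright "structure"
    seg = segment(fake_ct, n_structures=2)
    print("continuity:", round(continuity_score(seg), 3))
```

Sweeping n_structures and keeping the labeling with the best continuity score is one simple way to tailor the output to a desired number of structures.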
Affiliation(s)
- Paulo Victor dos Santos
  - Electrical, Mechanical & Computer Engineering School, Federal University of Goias, Goiania, Brazil
  - Department of Radiology, Hospital Israelita Albert Einstein, Sao Paulo, Sao Paulo, Brazil
  - Technology Research and Development Center (GCITE), Federal Institute of Goias, Goiania, Brazil
- Marcella Scoczynski Ribeiro Martins
  - Electrical, Mechanical & Computer Engineering School, Federal University of Goias, Goiania, Brazil
  - Federal University of Technology - Parana, Ponta Grossa, Parana, Brazil
- Solange Amorim Nogueira
  - Electrical, Mechanical & Computer Engineering School, Federal University of Goias, Goiania, Brazil
  - Department of Radiology, Hospital Israelita Albert Einstein, Sao Paulo, Sao Paulo, Brazil
- Rafael Maffei Loureiro
  - Department of Radiology, Hospital Israelita Albert Einstein, Sao Paulo, Sao Paulo, Brazil
- Wesley Pacheco Calixto
  - Electrical, Mechanical & Computer Engineering School, Federal University of Goias, Goiania, Brazil
  - Technology Research and Development Center (GCITE), Federal Institute of Goias, Goiania, Brazil
2. Rana S, Gerbino S, Barretta D, Carillo P, Crimaldi M, Cirillo V, Maggio A, Sarghini F. RafanoSet: Dataset of raw, manually, and automatically annotated Raphanus Raphanistrum weed images for object detection and segmentation. Data Brief 2024; 54:110430. [PMID: 38698801] [PMCID: PMC11063987] [DOI: 10.1016/j.dib.2024.110430]
Abstract
The rationale for this data article is to provide resources that facilitate studies focused on weed detection and segmentation in precision farming using computer vision. We curated multispectral (MS) images over crop fields of Triticum aestivum containing a heterogeneous mix of Raphanus raphanistrum in both uniform and random crop spacing. The dataset is designed to facilitate weed detection and segmentation based on manually and automatically annotated Raphanus raphanistrum, commonly known as wild radish. It is publicly available through the Zenodo data library and provides annotated pixel-level information that is crucial for registration and segmentation purposes. The dataset consists of 85 original MS images captured over 17 scenes in five spectral bands: Blue, Green, Red, NIR (near-infrared), and RedEdge. Each image measures 1280 × 960 pixels and serves as the basis for weed detection and segmentation. Manual annotations were performed using the Visual Geometry Group Image Annotator (VIA), and the results were saved in Common Objects in Context (COCO) segmentation format. To ease this resource-intensive annotation task, a Grounding DINO + Segment Anything Model (SAM) framework was trained with the manually annotated data to obtain automated Pascal Visual Object Classes (PASCAL VOC) XML annotations for 80 MS images. The dataset emphasizes quality control: both the 'manual' and 'automated' repositories were validated by extracting and evaluating binary masks, and the codes used for these processes are accessible to ensure transparency and reproducibility. This dataset is a first-of-its-kind public resource providing manually and automatically annotated weed information over close-range MS images in a heterogeneous agricultural environment. Researchers and practitioners in precision agriculture and computer vision can use it to improve MS image registration and segmentation in close-range photogrammetry with a focus on wild radish. The dataset not only helps with intra-subject registration to improve segmentation accuracy, but also provides valuable spectral information for training and refining machine learning models.
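As a concrete illustration of the mask-extraction step used for quality control, the sketch below loads a COCO-format annotation file with pycocotools and derives one binary weed mask per image. The file name is a placeholder; the dataset's actual file layout may differ:

```python
# Sketch of extracting binary masks from COCO-format manual annotations.
# "manual_annotations.json" is a hypothetical path, not the dataset's
# actual file name.
import numpy as np
from pycocotools.coco import COCO

coco = COCO("manual_annotations.json")
for img_id in coco.getImgIds():
    anns = coco.loadAnns(coco.getAnnIds(imgIds=img_id))
    if not anns:
        continue
    # Union of all instance masks -> one binary weed mask per image.
    mask = np.zeros_like(coco.annToMask(anns[0]), dtype=bool)
    for ann in anns:
        mask |= coco.annToMask(ann).astype(bool)
    print(img_id, "weed pixels:", int(mask.sum()))
```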
Affiliation(s)
- Shubham Rana
  - Department of Engineering, University of Campania "L. Vanvitelli", Via Roma 29, Aversa (CE) 81031, Italy
- Salvatore Gerbino
  - Department of Engineering, University of Campania "L. Vanvitelli", Via Roma 29, Aversa (CE) 81031, Italy
- Domenico Barretta
  - Department of Engineering, University of Campania "L. Vanvitelli", Via Roma 29, Aversa (CE) 81031, Italy
- Petronia Carillo
  - Department of Biological and Pharmaceutical Environmental Sciences and Technologies, University of Campania "L. Vanvitelli", Via Antonio Vivaldi 43, 81100 Caserta (CE), Italy
- Mariano Crimaldi
  - Department of Agricultural Sciences, University of Naples Federico II, 80055 Portici, Italy
- Valerio Cirillo
  - Department of Agricultural Sciences, University of Naples Federico II, 80055 Portici, Italy
- Albino Maggio
  - Department of Agricultural Sciences, University of Naples Federico II, 80055 Portici, Italy
- Fabrizio Sarghini
  - Department of Agricultural Sciences, University of Naples Federico II, 80055 Portici, Italy
3. Rana S, Crimaldi M, Barretta D, Carillo P, Cirillo V, Maggio A, Sarghini F, Gerbino S. GobhiSet: Dataset of raw, manually, and automatically annotated RGB images across phenology of Brassica oleracea var. Botrytis. Data Brief 2024; 54:110506. [PMID: 38813239] [PMCID: PMC11134536] [DOI: 10.1016/j.dib.2024.110506]
Abstract
This research introduces an extensive dataset of unprocessed aerial RGB images and orthomosaics of Brassica oleracea crops, captured with a DJI Phantom 4. The publicly accessible dataset comprises 244 raw RGB images, acquired on six distinct dates in October and November 2020, as well as six orthomosaics, from an experimental farm located in Portici, Italy. The images, uniformly distributed across crop spaces, have been annotated both manually and automatically to facilitate the detection, segmentation, and growth modelling of crops. Manual annotations were performed with bounding boxes via the Visual Geometry Group Image Annotator (VIA) and exported in the Common Objects in Context (COCO) segmentation format. The automated annotations were generated using a Grounding DINO + Segment Anything Model (SAM) framework aided by YOLOv8x-seg pretrained weights obtained after training on the manually annotated images dated 8 October, 21 October, and 29 October 2020; they were archived in Pascal Visual Object Classes (PASCAL VOC) format. Seven classes, designated Row 1 through Row 7, have been identified for crop labelling. Additional attributes, such as individual crop ID and the repetitiveness of individual crop specimens, are provided in the Comma Separated Values (CSV) version of the manual annotation. The dataset not only furnishes annotation information but also assists in the refinement of various machine learning models, thereby contributing significantly to the field of smart agriculture. The transparency and reproducibility of the processes are ensured by making the utilized codes accessible. This research marks a significant stride in leveraging technology for vision-based crop growth monitoring.
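For readers who want to consume the automated annotations, the sketch below parses a PASCAL VOC XML file with Python's standard library. It assumes the standard VOC tag layout and a hypothetical file name:

```python
# Sketch of reading one automatically generated PASCAL VOC XML annotation
# file. The tag layout follows the standard VOC schema; the dataset's
# files may differ in detail. The file name is a placeholder.
import xml.etree.ElementTree as ET

def load_voc_boxes(xml_path: str):
    """Return (class_name, xmin, ymin, xmax, ymax) tuples from a VOC file."""
    root = ET.parse(xml_path).getroot()
    boxes = []
    for obj in root.iter("object"):
        name = obj.findtext("name")  # e.g. "Row 1" ... "Row 7"
        bb = obj.find("bndbox")
        boxes.append((name,
                      int(float(bb.findtext("xmin"))),
                      int(float(bb.findtext("ymin"))),
                      int(float(bb.findtext("xmax"))),
                      int(float(bb.findtext("ymax")))))
    return boxes

print(load_voc_boxes("crop_row_annotation.xml"))  # hypothetical file name
```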
Affiliation(s)
- Shubham Rana
  - Department of Engineering, University of Campania “L. Vanvitelli”, Via Roma 29, Aversa (CE) 81031, Italy
- Mariano Crimaldi
  - Department of Agricultural Sciences, University of Naples “Federico II”, Via Università 100, Portici (NA) 80055, Italy
- Domenico Barretta
  - Department of Engineering, University of Campania “L. Vanvitelli”, Via Roma 29, Aversa (CE) 81031, Italy
- Petronia Carillo
  - Department of Biological and Pharmaceutical Environmental Sciences and Technologies, University of Campania “L. Vanvitelli”, Via Antonio Vivaldi 43, 81100 Caserta (CE), Italy
- Valerio Cirillo
  - Department of Agricultural Sciences, University of Naples “Federico II”, Via Università 100, Portici (NA) 80055, Italy
- Albino Maggio
  - Department of Agricultural Sciences, University of Naples “Federico II”, Via Università 100, Portici (NA) 80055, Italy
- Fabrizio Sarghini
  - Department of Agricultural Sciences, University of Naples “Federico II”, Via Università 100, Portici (NA) 80055, Italy
- Salvatore Gerbino
  - Department of Engineering, University of Campania “L. Vanvitelli”, Via Roma 29, Aversa (CE) 81031, Italy
4. Su H, Kamanda DB, Han T, Guo C, Li R, Liu Z, Su F, Shang L. Enhanced YOLO v3 for precise detection of apparent damage on bridges amidst complex backgrounds. Sci Rep 2024; 14:8627. [PMID: 38622182] [PMCID: PMC11018769] [DOI: 10.1038/s41598-024-58707-2]
Abstract
A bridge disease identification approach based on an enhanced YOLO v3 algorithm is proposed to increase the accuracy of apparent disease detection on concrete bridges under complex backgrounds. First, the YOLO v3 network structure is enhanced to better accommodate the dense distribution and large scale variation of disease characteristics: the detection layer incorporates a squeeze-and-excitation (SE) attention module and a spatial pyramid pooling module to strengthen semantic feature extraction. Second, CIoU, which offers better localization ability, is selected as the training loss function. Finally, the K-means algorithm is used for anchor box clustering on the bridge surface defect dataset. To test the efficacy of the proposed algorithm, a dataset of 1363 images covering exposed reinforcement, spalling, and water erosion damage of bridges is produced, and the network is trained after manual labelling and data augmentation. According to the trial results, the enhanced YOLO v3 model improves on the original model in precision, recall, average precision (AP), and other indicators, and its overall mean average precision (mAP) grows by 5.5%. On an RTX 2080 Ti graphics card, the detection frame rate reaches 84 frames per second, enabling more precise, real-time detection of bridge diseases.
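The anchor clustering step is standard enough to sketch. The snippet below implements YOLO-style K-means over box widths and heights using 1 − IoU as the distance, on synthetic boxes standing in for the 1363-image defect dataset; the paper's exact procedure may differ in details such as initialization:

```python
# Sketch of K-means anchor clustering in the YOLO convention: boxes are
# compared by width/height only, as if co-centered, and nearest means
# highest IoU. The random boxes are stand-ins for real defect labels.
import numpy as np

def wh_iou(boxes: np.ndarray, anchors: np.ndarray) -> np.ndarray:
    """IoU between (N,2) box sizes and (K,2) anchor sizes, co-centered."""
    inter = (np.minimum(boxes[:, None, 0], anchors[None, :, 0]) *
             np.minimum(boxes[:, None, 1], anchors[None, :, 1]))
    union = ((boxes[:, 0] * boxes[:, 1])[:, None] +
             (anchors[:, 0] * anchors[:, 1])[None, :] - inter)
    return inter / union

def kmeans_anchors(boxes: np.ndarray, k: int, iters: int = 50) -> np.ndarray:
    rng = np.random.default_rng(0)
    anchors = boxes[rng.choice(len(boxes), k, replace=False)]
    for _ in range(iters):
        assign = wh_iou(boxes, anchors).argmax(axis=1)  # nearest = max IoU
        for j in range(k):
            if (assign == j).any():
                anchors[j] = np.median(boxes[assign == j], axis=0)
    return anchors

boxes = np.random.default_rng(1).uniform(10, 300, (1363, 2))  # w, h in px
print(np.sort(kmeans_anchors(boxes, k=9), axis=0))
```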
Affiliation(s)
- Huifeng Su
  - College of Transportation, Shandong University of Science and Technology, Qingdao, 266590, China
- David Bonfils Kamanda
  - College of Transportation, Shandong University of Science and Technology, Qingdao, 266590, China
- Tao Han
  - Shandong Expressway Qingdao Development Co., Ltd., Qingdao, 266000, China
- Cheng Guo
  - College of Transportation, Shandong University of Science and Technology, Qingdao, 266590, China
- Rongzhao Li
  - College of Transportation, Shandong University of Science and Technology, Qingdao, 266590, China
- Zhilei Liu
  - College of Transportation, Shandong University of Science and Technology, Qingdao, 266590, China
- Fengzhao Su
  - College of Transportation, Shandong University of Science and Technology, Qingdao, 266590, China
- Liuhong Shang
  - College of Transportation, Shandong University of Science and Technology, Qingdao, 266590, China
5. Irrera O, Marchesin S, Silvello G. MetaTron: advancing biomedical annotation empowering relation annotation and collaboration. BMC Bioinformatics 2024; 25:112. [PMID: 38486137] [PMCID: PMC10941452] [DOI: 10.1186/s12859-024-05730-9]
Abstract
BACKGROUND The constant growth of biomedical data is accompanied by the need for new methodologies to effectively and efficiently extract machine-readable knowledge for training and testing purposes. A crucial aspect in this regard is creating large, often manually or semi-manually annotated corpora, vital for developing effective and efficient methods for tasks like relation extraction, topic recognition, and entity linking. However, manual annotation is expensive and time-consuming, especially if not assisted by interactive, intuitive, and collaborative computer-aided tools. To support healthcare experts in the annotation process and foster the creation of annotated corpora, we present MetaTron. MetaTron is an open-source, free-to-use, web-based annotation tool for annotating biomedical data interactively and collaboratively; it supports both mention-level and document-level annotations and integrates automatic built-in predictions. Moreover, MetaTron enables relation annotation with the support of ontologies, a functionality often overlooked by off-the-shelf annotation tools. RESULTS We conducted a qualitative analysis comparing MetaTron with a set of manual annotation tools, including TeamTat, INCEpTION, LightTag, MedTAG, and brat, on three sets of criteria: technical, data, and functional. A quantitative evaluation allowed us to assess MetaTron's performance in terms of the time and number of clicks needed to annotate a set of documents. The results indicated that MetaTron fulfills almost all the selected criteria and achieves the best performance. CONCLUSIONS MetaTron stands out as one of the few annotation tools targeting the biomedical domain that support the annotation of relations, and it is fully customizable, handling documents in several formats, PDF included, as well as abstracts retrieved from PubMed, Semantic Scholar, and OpenAIRE. To meet any user need, we released MetaTron both as an online instance and as a locally deployable Docker image.
Affiliation(s)
- Ornella Irrera
  - Department of Information Engineering, University of Padova, Padua, Italy
- Stefano Marchesin
  - Department of Information Engineering, University of Padova, Padua, Italy
- Gianmaria Silvello
  - Department of Information Engineering, University of Padova, Padua, Italy
6. Habart D, Koza A, Leontovyc I, Kosinova L, Berkova Z, Kriz J, Zacharovova K, Brinkhof B, Cornelissen DJ, Magrane N, Bittenglova K, Capek M, Valecka J, Habartova A, Saudek F. IsletSwipe, a mobile platform for expert opinion exchange on islet graft images. Islets 2023; 15:2189873. [PMID: 36987915] [PMCID: PMC10064927] [DOI: 10.1080/19382014.2023.2189873]
Abstract
We previously developed a deep learning-based web service (IsletNet) for automated counting of isolated pancreatic islets. Training of the neural network is limited by the absence of consensus on ground truth annotations. Here, we present a platform (IsletSwipe) for the exchange of graphical opinions among experts to facilitate consensus formation. The platform consists of a web interface and a mobile application. In a small pilot study, we demonstrate the functionalities and use case scenarios of the platform. Nine experts from three centers validated the drawing tools, tested the precision and consistency of expert contour drawing, and evaluated the user experience. Eight experts from two centers proceeded to evaluate additional images to demonstrate the following two use case scenarios. The Validation scenario involves an automated selection of images and islets for expert scrutiny. It is scalable (more experts, images, and islets may readily be added) and can be applied to independent validation of islet contours from various sources. The Inquiry scenario serves the ground truth-generating expert in seeking assistance from peers to achieve consensus on challenging cases during preparation for IsletNet training. This scenario is limited to a small number of manually selected images and islets. The experts gained an opportunity to influence IsletNet training and to compare other experts' opinions with their own. The ground truth-generating expert obtained feedback for future IsletNet training. IsletSwipe is a suitable tool for consensus finding. Experts from additional centers are welcome to participate.
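One plausible way to quantify the consistency of expert contour drawing tested in the pilot is the Dice overlap between two experts' filled contours; the abstract does not name the metric, so the following is only an illustrative sketch:

```python
# Illustrative agreement metric between two experts' islet outlines:
# Dice overlap between their filled contours. This is an assumed metric,
# not necessarily the one used by the IsletSwipe authors.
import numpy as np

def dice(mask_a: np.ndarray, mask_b: np.ndarray) -> float:
    """Dice overlap between two boolean masks (1.0 = identical)."""
    inter = np.logical_and(mask_a, mask_b).sum()
    total = mask_a.sum() + mask_b.sum()
    return 2.0 * inter / total if total else 1.0

# Two slightly different circular "islet" contours, filled into masks.
yy, xx = np.mgrid[:128, :128]
expert_1 = (yy - 64) ** 2 + (xx - 64) ** 2 <= 30 ** 2
expert_2 = (yy - 66) ** 2 + (xx - 63) ** 2 <= 29 ** 2
print(f"Dice agreement: {dice(expert_1, expert_2):.3f}")
```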
Affiliation(s)
- David Habart
  - Laboratory of Pancreatic Islets, Center of Experimental Medicine, Institute for Clinical and Experimental Medicine (IKEM), Prague, Czech Republic
  - Contact: David Habart, Laboratory of Pancreatic Islets, Center of Experimental Medicine, Institute for Clinical and Experimental Medicine, Videnska 1958/9, Prague 4, 140 21, Czech Republic
- Adam Koza
  - Dino School & Novy PORG, Prague, Czech Republic
- Ivan Leontovyc
  - Laboratory of Pancreatic Islets, Center of Experimental Medicine, Institute for Clinical and Experimental Medicine (IKEM), Prague, Czech Republic
- Lucie Kosinova
  - Laboratory of Pancreatic Islets, Center of Experimental Medicine, Institute for Clinical and Experimental Medicine (IKEM), Prague, Czech Republic
- Zuzana Berkova
  - Laboratory of Pancreatic Islets, Center of Experimental Medicine, Institute for Clinical and Experimental Medicine (IKEM), Prague, Czech Republic
- Jan Kriz
  - Diabetes Center, Institute for Clinical and Experimental Medicine, Prague, Czech Republic
- Klara Zacharovova
  - Laboratory of Pancreatic Islets, Center of Experimental Medicine, Institute for Clinical and Experimental Medicine (IKEM), Prague, Czech Republic
- Bas Brinkhof
  - Department of Internal Medicine, Leiden University Medical Center (LUMC), Leiden, Netherlands
- Dirk-Jan Cornelissen
  - Department of Internal Medicine, Leiden University Medical Center (LUMC), Leiden, Netherlands
- Nicholas Magrane
  - Nuffield Department of Surgical Sciences, Oxford Consortium for Islet Transplantation, Oxford, UK
- Katerina Bittenglova
  - Diabetes Center, Institute for Clinical and Experimental Medicine, Prague, Czech Republic
- Martin Capek
  - Light Microscopy Laboratory, Institute of Molecular Genetics of the Czech Academy of Sciences, Prague, Czech Republic
  - Laboratory of Biomathematics, Institute of Physiology of the Czech Academy of Sciences, Prague, Czech Republic
- Jan Valecka
  - Laboratory of Biomathematics, Institute of Physiology of the Czech Academy of Sciences, Prague, Czech Republic
- Alena Habartova
  - Redox Photochemistry Lab, Institute of Organic Chemistry and Biochemistry of the Czech Academy of Sciences, Prague, Czech Republic
- František Saudek
  - Diabetes Center, Institute for Clinical and Experimental Medicine, Prague, Czech Republic
7. Kim MJ, Martin CA, Kim J, Jablonski MM. Computational methods in glaucoma research: Current status and future outlook. Mol Aspects Med 2023; 94:101222. [PMID: 37925783] [PMCID: PMC10842846] [DOI: 10.1016/j.mam.2023.101222]
Abstract
Advancements in computational techniques have transformed glaucoma research, providing a deeper understanding of genetics, disease mechanisms, and potential therapeutic targets. Systems genetics integrates genomic and clinical data, aiding in identifying drug targets, comprehending disease mechanisms, and personalizing treatment strategies for glaucoma. Molecular dynamics (MD) simulations offer valuable molecular-level insights into the behavior of glaucoma-related biomolecules and their drug interactions, guiding experimental studies and drug discovery efforts. Artificial intelligence (AI) technologies hold promise for revolutionizing glaucoma research by enhancing disease diagnosis, target identification, and drug candidate selection. Generalized protocols for systems genetics, MD simulations, and AI model development are included as a guide for glaucoma researchers. These computational methods do not operate in isolation, however; they work together to discover novel ways to combat glaucoma. Ongoing research and progress in genomics technologies, MD simulations, and AI methodologies position computational methods to become an integral part of glaucoma research in the future.
Affiliation(s)
- Minjae J Kim
  - Department of Ophthalmology, The Hamilton Eye Institute, The University of Tennessee Health Science Center, Memphis, TN, 38163, USA
- Cole A Martin
  - Department of Ophthalmology, The Hamilton Eye Institute, The University of Tennessee Health Science Center, Memphis, TN, 38163, USA
- Jinhwa Kim
  - Graduate School of Artificial Intelligence, Graduate School of Metaverse, Department of Management Information Systems, Sogang University, 1 Shinsoo-Dong, Mapo-Gu, Seoul, South Korea
- Monica M Jablonski
  - Department of Ophthalmology, The Hamilton Eye Institute, The University of Tennessee Health Science Center, Memphis, TN, 38163, USA
8. Solomon C, Shmueli O, Shrot S, Blumenfeld-Katzir T, Radunsky D, Omer N, Stern N, Reichman DBA, Hoffmann C, Salti M, Greenspan H, Ben-Eliezer N. Psychophysical Evaluation of Visual vs. Computer-Aided Detection of Brain Lesions on Magnetic Resonance Images. J Magn Reson Imaging 2023; 58:642-649. [PMID: 36495014] [DOI: 10.1002/jmri.28559]
Abstract
BACKGROUND Magnetic resonance imaging (MRI) diagnosis is usually performed by analyzing contrast-weighted images, where pathology is detected once it reaches a certain visual threshold. Computer-aided diagnosis (CAD) has been proposed as a way of achieving higher sensitivity to early pathology. PURPOSE To compare conventional (i.e., visual) MRI assessment of artificially generated multiple sclerosis (MS) lesions in the brain's white matter to CAD based on a deep neural network. STUDY TYPE Prospective. POPULATION A total of 25 neuroradiologists (15 males, age 39 ± 9 years, 9 ± 9.8 years of experience) independently assessed all synthetic lesions. FIELD STRENGTH/SEQUENCE 3.0 T; T2-weighted multi-echo spin-echo (MESE) sequence. ASSESSMENT MS lesions of varying severity were artificially generated in healthy volunteer MRI scans by manipulating T2 values. Radiologists and a neural network were tasked with detecting these lesions in a series of 48 MR images. Sixteen images presented healthy anatomy and the rest contained a single lesion at one of eight increasing severity levels (6%, 9%, 12%, 15%, 18%, 21%, 25%, and 30% elevation in T2). True positive (TP) rates, false positive (FP) rates, and odds ratios (ORs) were compared between radiological diagnosis and CAD across the range of lesion severity levels. STATISTICAL TESTS Diagnostic performance of the two approaches was compared using z-tests on TP rates, FP rates, and the logarithm of ORs across severity levels. A P-value <0.05 was considered statistically significant. RESULTS ORs of identifying pathology were significantly higher for CAD than for visual inspection at all lesion severity levels. For a 6% change in T2 value (the lowest severity), radiologists' TP and FP rates were not significantly different (P = 0.12), while the corresponding CAD results remained statistically significant. DATA CONCLUSION CAD is capable of detecting the presence or absence of subtler lesions with greater precision than the representative group of 25 radiologists chosen in this study. LEVEL OF EVIDENCE: 1. TECHNICAL EFFICACY: Stage 3.
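The proportion comparison described under STATISTICAL TESTS can be reproduced in outline with statsmodels; the counts below are invented placeholders, not the study's data:

```python
# Sketch of a two-proportion z-test on true-positive rates (CAD vs.
# radiologists) at one severity level. All counts are hypothetical,
# for illustration only.
from statsmodels.stats.proportion import proportions_ztest

cad_hits, cad_trials = 30, 32        # hypothetical CAD TPs / lesions shown
reader_hits, reader_trials = 18, 32  # hypothetical radiologist TPs / lesions

z, p = proportions_ztest([cad_hits, reader_hits],
                         [cad_trials, reader_trials])
print(f"z = {z:.2f}, p = {p:.4f}")   # p < 0.05 -> significant difference
```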
Affiliation(s)
- Chen Solomon
  - Department of Biomedical Engineering, Tel-Aviv University, Tel-Aviv, Israel
- Omer Shmueli
  - Department of Biomedical Engineering, Tel-Aviv University, Tel-Aviv, Israel
- Shai Shrot
  - Sackler School of Medicine, Tel-Aviv University, Tel-Aviv, Israel
  - Department of Diagnostic Imaging, Sheba Medical Center, Ramat-Gan, Israel
- Dvir Radunsky
  - Department of Biomedical Engineering, Tel-Aviv University, Tel-Aviv, Israel
- Noam Omer
  - Department of Biomedical Engineering, Tel-Aviv University, Tel-Aviv, Israel
- Neta Stern
  - Department of Biomedical Engineering, Tel-Aviv University, Tel-Aviv, Israel
- Chen Hoffmann
  - Sackler School of Medicine, Tel-Aviv University, Tel-Aviv, Israel
  - Department of Diagnostic Imaging, Sheba Medical Center, Ramat-Gan, Israel
- Moti Salti
  - Brain Imaging Research Center (BIRC), Ben-Gurion University, Beer-Sheva, Israel
  - Zlotowski Center for Neuroscience, Ben-Gurion University of the Negev, Beer-Sheva, Israel
- Hayit Greenspan
  - Department of Biomedical Engineering, Tel-Aviv University, Tel-Aviv, Israel
- Noam Ben-Eliezer
  - Department of Biomedical Engineering, Tel-Aviv University, Tel-Aviv, Israel
  - Zlotowski Center for Neuroscience, Ben-Gurion University of the Negev, Beer-Sheva, Israel
  - Center for Advanced Imaging Innovation and Research (CAI2R), New York University, New York, New York, USA
  - Sagol School of Neuroscience, Tel Aviv University, Tel Aviv, Israel
9. A digital workflow for pair matching of maxillary anterior teeth using a 3D segmentation technique for esthetic implant restorations. Sci Rep 2022; 12:14356. [PMID: 35999338] [PMCID: PMC9399247] [DOI: 10.1038/s41598-022-18652-4]
Abstract
We investigated a state-of-the-art algorithm for 3D reconstruction with a pair-matching technique, which enabled the fabrication of individualized implant restorations in the esthetic zone. The method compared 3D mirror images of crowns and emergence profiles between symmetric tooth pairs in the anterior maxilla using digital slicewise DICOM segmentation and the superimposition of STL data. Outlines were extracted from each segment in 100 patients, and the Hausdorff distance (HD) between the two point sets was calculated to quantify their similarity. Using HD thresholds as a pair-matching criterion, the true positive rates for crowns were 100, 98, and 98%, while the false negative rates were 0, 2, and 2% for central incisors, lateral incisors, and canines, respectively, indicating high pair-matching accuracy (>99%) and sensitivity (>98%). The true positive rates for emergence profiles were 99, 100, and 98%, while the false negative rates were 1, 0, and 2% for central incisors, lateral incisors, and canines, respectively, again indicating high pair-matching accuracy (>99%) and sensitivity (>98%). Therefore, digitally flipped contours of crowns and emergence profiles can be successfully transferred for implant reconstruction in the maxillary anterior region to optimize esthetics and function.
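The HD-threshold matching criterion translates directly into code. The sketch below uses SciPy's directed Hausdorff distance on synthetic mirrored contours; the contours and the threshold value are illustrative only:

```python
# Sketch of HD-based pair matching: a symmetric Hausdorff distance between
# a tooth outline and the mirrored outline of its antimere, thresholded to
# decide a match. Contours and threshold are invented for illustration.
import numpy as np
from scipy.spatial.distance import directed_hausdorff

def symmetric_hausdorff(a: np.ndarray, b: np.ndarray) -> float:
    """Max of the two directed Hausdorff distances between point sets."""
    return max(directed_hausdorff(a, b)[0], directed_hausdorff(b, a)[0])

t = np.linspace(0, 2 * np.pi, 200)
left = np.c_[np.cos(t), np.sin(t)]           # outline of one incisor
right = np.c_[-np.cos(t), np.sin(t)] * 1.02  # contralateral outline
mirrored = right * np.array([-1.0, 1.0])     # flip across the midline

hd = symmetric_hausdorff(left, mirrored)
print("match" if hd < 0.1 else "no match", f"(HD = {hd:.3f})")
```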