1. Hou H, Yang A, Li X, Zhu K, Zhao Y, Wu Z. Dental bur detection system based on asymmetric double convolution and adaptive feature fusion. Sci Rep 2024;14:31874. PMID: 39738621. DOI: 10.1038/s41598-024-83241-6.
Abstract
This study aims to improve the detection of dental burs, which often go undetected because of their minuscule size, slender profile, and substantial manufacturing output. The present study introduces You Only Look Once-Dental bur (YOLO-DB), a deep learning-driven methodology for the accurate detection and counting of dental burs. A Lightweight Asymmetric Dual Convolution module (LADC) was devised to diminish the detrimental effects of extraneous features on the model's precision, thereby enhancing the feature extraction network. Moreover, to improve the efficiency of feature integration and reduce computational demands, a novel fusion network combining SlimNeck with BiFPN-Concat was introduced, effectively merging shallow spatial details with deep semantic features. A specialized platform was developed for the detection and counting of dental burs, and rigorous experimental assessments were performed. YOLO-DB yielded a Mean Average Precision (mAP@0.5) of 99.3% on the dental bur dataset, with a notable 3.2% increase in mAP@0.5:0.95 and a sustained detection speed of 128 frames per second. The model also achieved a 14.4% reduction in parameter count and a 17.9% decrease in computational cost, while maintaining a counting accuracy of 100%. Our approach outperforms current detection algorithms in detection capability and efficiency, presenting a new method for the precise detection and counting of elongated objects such as dental burs.
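As an illustration of the BiFPN-Concat fusion described in the abstract, below is a minimal PyTorch sketch of weighted concatenation of a shallow, high-resolution feature map with an upsampled deep, semantic one. The module name, the fast normalized weighting, and all tensor shapes are assumptions for illustration; the authors' YOLO-DB code is not reproduced here.

```python
import torch
import torch.nn as nn

class BiFPNConcat(nn.Module):
    """Fuse same-resolution feature maps with learnable, normalized weights."""
    def __init__(self, num_inputs: int, eps: float = 1e-4):
        super().__init__()
        self.weights = nn.Parameter(torch.ones(num_inputs))  # one weight per input
        self.eps = eps

    def forward(self, features):
        w = torch.relu(self.weights)          # keep fusion weights non-negative
        w = w / (w.sum() + self.eps)          # BiFPN-style fast normalized fusion
        weighted = [w[i] * f for i, f in enumerate(features)]
        return torch.cat(weighted, dim=1)     # concatenate along the channel axis

# Usage: merge a shallow, high-resolution map with an upsampled deep map.
shallow = torch.randn(1, 128, 80, 80)
deep = nn.functional.interpolate(torch.randn(1, 128, 40, 40), scale_factor=2)
fused = BiFPNConcat(num_inputs=2)([shallow, deep])   # shape (1, 256, 80, 80)
```

The learnable weights let the network decide, per fusion point, how much the shallow spatial detail should contribute relative to the deep semantic features.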
Affiliation(s)
- HongLing Hou: School of Mechanical Engineering, Shaanxi University of Technology, Hanzhong, 723001, China; School of Mechanical and Precision Instrumental Engineering, Xi'an University of Technology, Xi'an, 710048, China
- Ao Yang: School of Mechanical Engineering, Shaanxi University of Technology, Hanzhong, 723001, China
- Xiangyao Li: School of Mechanical Engineering, Shaanxi University of Technology, Hanzhong, 723001, China
- Kangkai Zhu: School of Mechanical Engineering, Shaanxi University of Technology, Hanzhong, 723001, China
- Yandi Zhao: School of Mechanical and Precision Instrumental Engineering, Xi'an University of Technology, Xi'an, 710048, China
- Zhiqiang Wu: School of Mechanical Engineering, Shaanxi University of Technology, Hanzhong, 723001, China
2. Zhang YJ, Luo Z, Sun Y, Liu J, Chen Z. From beasts to bytes: Revolutionizing zoological research with artificial intelligence. Zool Res 2023;44:1115-1131. PMID: 37933101. PMCID: PMC10802096. DOI: 10.24272/j.issn.2095-8137.2023.263.
Abstract
Since the late 2010s, Artificial Intelligence (AI), including machine learning boosted by deep learning, has boomed as a vital tool, leveraging computer vision, natural language processing, and speech recognition to revolutionize zoological research. This review provides an overview of the primary tasks, core models, datasets, and applications of AI in zoological research, including animal classification, resource conservation, behavior, development, genetics and evolution, breeding and health, disease models, and paleontology. Additionally, we explore the challenges and future directions of integrating AI into this field. Based on numerous case studies, this review outlines various avenues for incorporating AI into zoological research and underscores its potential to enhance our understanding of the intricate relationships that exist within the animal kingdom. As we build a bridge between the realms of beast and byte, this review serves as a resource for envisioning novel AI applications in zoological research that have not yet been explored.
Affiliation(s)
- Yu-Juan Zhang: Chongqing Key Laboratory of Vector Insects; Chongqing Key Laboratory of Animal Biology; College of Life Science, Chongqing Normal University, Chongqing 401331, China
- Zeyu Luo: Chongqing Key Laboratory of Vector Insects; Chongqing Key Laboratory of Animal Biology; College of Life Science, Chongqing Normal University, Chongqing 401331, China
- Yawen Sun: Chongqing Key Laboratory of Vector Insects; Chongqing Key Laboratory of Animal Biology; College of Life Science, Chongqing Normal University, Chongqing 401331, China
- Junhao Liu: Chongqing Key Laboratory of Vector Insects; Chongqing Key Laboratory of Animal Biology; College of Life Science, Chongqing Normal University, Chongqing 401331, China
- Zongqing Chen: School of Mathematical Sciences; National Center for Applied Mathematics in Chongqing, Chongqing Normal University, Chongqing 401331, China
3. Goudarzi S, Whyte J, Boily M, Towers A, Kilgour RD, Rivaz H. Segmentation of Arm Ultrasound Images in Breast Cancer-Related Lymphedema: A Database and Deep Learning Algorithm. IEEE Trans Biomed Eng 2023;70:2552-2563. PMID: 37028332. DOI: 10.1109/TBME.2023.3253646.
Abstract
OBJECTIVE: Breast cancer treatment often causes the removal of or damage to lymph nodes of the patient's lymphatic drainage system. This side effect is the origin of Breast Cancer-Related Lymphedema (BCRL), a noticeable increase in excess arm volume. Ultrasound imaging is a preferred modality for the diagnosis and progression monitoring of BCRL because of its low cost, safety, and portability. Because the affected and unaffected arms look similar in B-mode ultrasound images, the thicknesses of the skin, subcutaneous fat, and muscle have been shown to be important biomarkers for this task. The segmentation masks are also helpful in monitoring longitudinal changes in the morphology and mechanical properties of tissue layers. METHODS: For the first time, a publicly available ultrasound dataset containing the Radio-Frequency (RF) data of 39 subjects and manual segmentation masks by two experts is provided. Inter- and intra-observer reproducibility studies performed on the segmentation maps show high Dice Score Coefficients (DSC) of 0.94±0.08 and 0.92±0.06, respectively. The Gated Shape Convolutional Neural Network (GSCNN) is modified for precise automatic segmentation of the tissue layers, and its generalization performance is improved by the CutMix augmentation strategy. RESULTS: We obtained an average DSC of 0.87±0.11 on the test set, which confirms the high performance of the method. CONCLUSION: Automatic segmentation can pave the way for convenient and accessible staging of BCRL, and our dataset can facilitate the development and validation of such methods. SIGNIFICANCE: Timely diagnosis and treatment of BCRL are crucial to preventing irreversible damage.
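The Dice Score Coefficient reported in this abstract, for both observer agreement and test performance, is the standard overlap measure; a minimal NumPy sketch follows, assuming binary masks per tissue layer. The layerwise_dice helper and its label values are hypothetical, not from the paper.

```python
import numpy as np

def dice_score(mask_a: np.ndarray, mask_b: np.ndarray, eps: float = 1e-8) -> float:
    """DSC = 2|A ∩ B| / (|A| + |B|) for a pair of binary segmentation masks."""
    a = mask_a.astype(bool)
    b = mask_b.astype(bool)
    intersection = np.logical_and(a, b).sum()
    return 2.0 * intersection / (a.sum() + b.sum() + eps)

def layerwise_dice(pred: np.ndarray, truth: np.ndarray, labels=(1, 2, 3)):
    """Hypothetical helper: per-layer DSC for, e.g., skin=1, fat=2, muscle=3."""
    return {lab: dice_score(pred == lab, truth == lab) for lab in labels}
```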
4. De Rosa L, L'Abbate S, Kusmic C, Faita F. Applications of Deep Learning Algorithms to Ultrasound Imaging Analysis in Preclinical Studies on In Vivo Animals. Life (Basel) 2023;13:1759. PMID: 37629616. PMCID: PMC10455134. DOI: 10.3390/life13081759.
Abstract
BACKGROUND AND AIM: Ultrasound (US) imaging is increasingly preferred over other, more invasive modalities in preclinical studies using animal models. However, the technique has some limitations, mainly related to operator dependence. To overcome some of the current drawbacks, sophisticated data processing models have been proposed, in particular artificial intelligence models based on deep learning (DL) networks. This systematic review provides an overview of the application of DL algorithms to the analysis of US images acquired in in vivo preclinical studies on animal models. METHODS: A literature search was conducted using the Scopus and PubMed databases. Studies published from January 2012 to November 2022 that developed DL models on US images acquired in preclinical/animal experimental scenarios were eligible for inclusion. The review was conducted according to the PRISMA guidelines. RESULTS: Fifty-six studies were included and classified into five groups based on the anatomical district in which the DL models were applied. Sixteen studies focused on the cardiovascular system and fourteen on the abdominal organs. Five studies applied DL networks to images of the musculoskeletal system, and eight investigations involved the brain. Thirteen papers, grouped under a miscellaneous category, proposed heterogeneous applications of DL systems. Our analysis also highlighted that murine models were the animals most commonly used in the in vivo studies applying DL to US imaging. CONCLUSION: DL techniques show great potential for the analysis of US images acquired in preclinical studies using animal models. In this scenario, however, the techniques are still in their early stages, and there is room for improvement in areas such as sample size, data preprocessing, and model interpretability.
Affiliation(s)
- Laura De Rosa: Institute of Clinical Physiology, National Research Council (CNR), 56124 Pisa, Italy; Department of Information Engineering and Computer Science, University of Trento, 38123 Trento, Italy
- Serena L'Abbate: Institute of Life Sciences, Scuola Superiore Sant'Anna, 56124 Pisa, Italy
- Claudia Kusmic: Institute of Clinical Physiology, National Research Council (CNR), 56124 Pisa, Italy
- Francesco Faita: Institute of Clinical Physiology, National Research Council (CNR), 56124 Pisa, Italy
5. Aristizábal O, Qiu Z, Gallego E, Aristizábal M, Mamou J, Wang Y, Ketterling JA, Turnbull DH. Longitudinal in Utero Analysis of Engrailed-1 Knockout Mouse Embryonic Phenotypes Using High-Frequency Ultrasound. Ultrasound Med Biol 2023;49:356-367. PMID: 36283941. PMCID: PMC9712241. DOI: 10.1016/j.ultrasmedbio.2022.09.008.
Abstract
Large-scale international efforts are ongoing to generate and analyze loss-of-function mutations in each of the approximately 20,000 protein-encoding genes, using the "knockout" mouse as a model organism. Because one-third of gene knockouts are expected to result in embryonic lethality, it is important to develop non-invasive in utero imaging methods to detect and monitor mutant phenotypes in mouse embryos. We describe the utility of 3-D high-frequency (40-MHz) ultrasound (HFU) for longitudinal in utero imaging of mouse embryos between embryonic days (E) 11.5 and E14.5, which represent critical stages of brain and organ development. Engrailed-1 knockout (En1-ko) mouse embryos and their normal control littermates were imaged with HFU in 3-D, enabling visualization of morphological phenotypes in the developing brains, limbs and heads of the En1-ko embryos. Recently developed deep learning approaches were used to automatically segment the embryonic brain ventricles and bodies from the 3-D HFU images, allowing quantitative volumetric analyses of the En1-ko brain phenotypes. Taken together, these results show great promise for the application of longitudinal 3-D HFU to the analysis of knockout mouse embryos in utero.
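A minimal sketch of the volumetric quantification step that follows segmentation, assuming a binary 3-D mask and an illustrative isotropic voxel spacing; the deep segmentation models themselves, and the toy mask below, are not from the paper.

```python
import numpy as np

def structure_volume_mm3(mask: np.ndarray, spacing_mm=(0.05, 0.05, 0.05)) -> float:
    """Volume = (foreground voxel count) x (volume of a single voxel)."""
    voxel_volume = float(np.prod(spacing_mm))       # mm^3 per voxel
    return float(mask.astype(bool).sum()) * voxel_volume

# Hypothetical ventricle mask for one imaging session (spacing is illustrative).
mask_e11 = np.zeros((200, 200, 200), dtype=bool)
mask_e11[80:100, 80:100, 80:100] = True             # 20^3 voxels = 1.0 mm^3 here
print(f"E11.5 ventricle volume: {structure_volume_mm3(mask_e11):.3f} mm^3")
```

Repeating this per gestational stage (E11.5 through E14.5) yields the longitudinal volume trajectories used to compare En1-ko embryos with control littermates.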
Affiliation(s)
- Orlando Aristizábal: Skirball Institute of Biomolecular Medicine and Department of Radiology, New York University Grossman School of Medicine, New York, New York, USA
- Ziming Qiu: Department of Electrical and Computer Engineering, New York University Tandon School of Engineering, New York, New York, USA
- Estefania Gallego: Skirball Institute of Biomolecular Medicine and Department of Radiology, New York University Grossman School of Medicine, New York, New York, USA
- Matias Aristizábal: Skirball Institute of Biomolecular Medicine and Department of Radiology, New York University Grossman School of Medicine, New York, New York, USA
- Jonathan Mamou: Department of Radiology, Weill Cornell Medicine, New York, New York, USA
- Yao Wang: Department of Electrical and Computer Engineering, New York University Tandon School of Engineering, New York, New York, USA
- Daniel H Turnbull: Skirball Institute of Biomolecular Medicine and Department of Radiology, New York University Grossman School of Medicine, New York, New York, USA
6. Liu Y, Gargesha M, Scott B, Tchilibou Wane AO, Wilson DL. Deep learning multi-organ segmentation for whole mouse cryo-images including a comparison of 2D and 3D deep networks. Sci Rep 2022;12:15161. PMID: 36071089. PMCID: PMC9452525. DOI: 10.1038/s41598-022-19037-3.
Abstract
Cryo-imaging provides 3D whole-mouse microscopic color anatomy and fluorescence images that enable biotechnology applications (e.g., stem cells and metastatic cancer). In this report, we compared three methods of organ segmentation: 2D U-Net on 2D slices, and 3D U-Net on either whole-mouse volumes or 3D patches. We evaluated the brain, thymus, lung, heart, liver, stomach, spleen, left and right kidney, and bladder. Trained on 63 mice, the 2D-slice approach performed best, with median Dice scores of >0.9 and median Hausdorff distances of <1.2 mm in eightfold cross-validation for all organs except the bladder, which is a problem organ due to variable filling and poor contrast. Results were comparable to those of a second analyst on the same data. Regression analyses were performed to fit learning curves, which showed that the 2D-slice approach can succeed with fewer samples. Review and editing of the 2D-slice segmentation results reduced human operator time from ~2 h to ~25 min, with reduced inter-observer variability. As demonstrations, we used organ segmentation to evaluate size changes in liver disease and to quantify the distribution of therapeutic mesenchymal stem cells in organs. With a 48-GB GPU, we determined that extra GPU RAM improved the performance of 3D deep learning because we could train at higher resolution.
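A minimal sketch of the 2D-slice strategy the report compares against the 3-D networks: apply a 2-D segmenter slice by slice and restack the predictions into a volume. The function and the stand-in model are assumptions; the authors' U-Net configuration is not reproduced.

```python
import torch

@torch.no_grad()
def segment_volume_slicewise(model: torch.nn.Module, volume: torch.Tensor) -> torch.Tensor:
    """volume: (D, C, H, W) stack of 2-D slices -> (D, H, W) integer label map."""
    model.eval()
    labels = []
    for d in range(volume.shape[0]):
        logits = model(volume[d:d + 1])          # (1, num_classes, H, W)
        labels.append(logits.argmax(dim=1)[0])   # per-pixel class for this slice
    return torch.stack(labels, dim=0)            # restack into a 3-D label map

# Usage with a stand-in "model" (a 1x1 conv emitting 11 class maps:
# 10 organs + background, matching the organ list above).
dummy = torch.nn.Conv2d(1, 11, kernel_size=1)
labels_3d = segment_volume_slicewise(dummy, torch.randn(64, 1, 128, 128))  # (64, 128, 128)
```

Because each training example is a single slice rather than a whole volume, this strategy also explains why the 2D approach can succeed with fewer annotated mice.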
Affiliation(s)
- Yiqiao Liu: Department of Biomedical Engineering, Case Western Reserve University, 10900 Euclid Avenue, Cleveland, OH, 44106, USA
- Bryan Scott: BioInVision Inc, Suite E 781 Beta Drive, Cleveland, OH, 44143, USA
- Arthure Olivia Tchilibou Wane: Department of Biomedical Engineering, Case Western Reserve University, 10900 Euclid Avenue, Cleveland, OH, 44106, USA
- David L Wilson: Department of Biomedical Engineering, Case Western Reserve University, 10900 Euclid Avenue, Cleveland, OH, 44106, USA; BioInVision Inc, Suite E 781 Beta Drive, Cleveland, OH, 44143, USA; Department of Radiology, Case Western Reserve University, 10900 Euclid Avenue, Cleveland, OH, 44106, USA
7. Automated segmentation of epidermis in high-frequency ultrasound of pathological skin using a cascade of DeepLab v3+ networks and fuzzy connectedness. Comput Med Imaging Graph 2021;95:102023. PMID: 34883364. DOI: 10.1016/j.compmedimag.2021.102023.
Abstract
This study proposes a novel, fully automated framework for epidermal layer segmentation across different skin diseases, based on 75 MHz high-frequency ultrasound (HFUS) image data. Robust epidermis segmentation is a vital first step in detecting changes in thickness, shape, and intensity, and therefore supports diagnosis and treatment monitoring in inflammatory and neoplastic skin lesions. Our framework links deep learning and fuzzy connectedness for image analysis. It consists of a cascade of two DeepLab v3+ models with a ResNet-50 backbone, followed by a fuzzy connectedness analysis module for fine segmentation. Both deep models are pre-trained on the ImageNet dataset and subjected to transfer learning using our HFUS database of 580 images with atopic dermatitis, psoriasis, and non-melanocytic skin tumors. The first deep model detects the appropriate region of interest, while the second performs the main segmentation. We use the softmax layer of the latter in two ways: as a reservoir of seed points for the fuzzy connectedness analysis and as a direct contribution to its input image. In the experiments, we analyze different configurations of the framework, including the region-of-interest detection, the deep model backbones and training loss functions, and the fuzzy connectedness analysis with its parameter settings. We also use the Dice index and epidermis thickness to compare our results with state-of-the-art approaches. The Dice index of 0.919 yielded by our model over the entire dataset (exceeding 0.93 in inflammatory diseases) demonstrates its superiority over the other methods.
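A minimal sketch of the first of the two softmax uses the abstract names: harvesting high-confidence pixels as seed points for the fuzzy connectedness stage. The confidence threshold and function name are assumed for illustration; the paper's fuzzy connectedness module itself is not reproduced.

```python
import numpy as np

def softmax_seeds(epidermis_prob: np.ndarray, threshold: float = 0.99) -> np.ndarray:
    """epidermis_prob: (H, W) softmax probabilities -> (N, 2) seed coordinates.

    Only pixels the deep model is highly confident about are kept, so the
    fuzzy connectedness stage can grow the fine segmentation from them."""
    ys, xs = np.where(epidermis_prob >= threshold)
    return np.stack([ys, xs], axis=1)
```

The second use, contributing the softmax map directly to the input of the fuzzy connectedness stage, could be realized by stacking it as an extra channel alongside the HFUS image.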