1
Pinzón-Osorio CA, Machado MA, Camozzato JNB, Dos Santos Velho G, Dalto AGC, Rovani MT, de Oliveira FC, Bertolini M. Inter-software reliability and agreement for follicular and luteal morphometric and echotextural ultrasonographic parameters in beef cattle. Anim Reprod Sci 2024; 267:107518. PMID: 38889613. DOI: 10.1016/j.anireprosci.2024.107518.
Abstract
This study compared inter-software and inter-observer reliability and agreement in the assessment of follicular and luteal morphometric and echotexture parameters in crossbred beef females (3/8 Bos taurus indicus and 5/8 Bos taurus taurus). B-mode and color Doppler ultrasonographic ovarian images were obtained at specific time points of estradiol-progesterone-based protocols for timed artificial insemination (TAI). Sonograms were analyzed by two observers using a licensed (IASP1) and an open-access (IASP2) software package. A total of 292 snapshot sonograms were analyzed for morphometric parameters and 504 for echotexture parameters. Inter-software reliability for morphometric parameters was judged moderate to excellent (ICC or CCC = 0.73-0.98), whereas inter-observer reliability was deemed good to excellent (ICC or CCC = 0.82-0.98). A small percentage (up to 10.95%) of measured parameters fell outside the limits of inter-software and inter-observer agreement. For echotexture parameters, inter-software reliability varied widely (ICC or CCC = 0.16-0.95) depending on the size of the regions of interest (ROI), while inter-observer reliability ranged from moderate to excellent (ICC or CCC = 0.71-0.97). The highest inter-software reliability for pixel value and heterogeneity value was observed for the corpus luteum (ICCs = 0.81-0.95; P > 0.05), followed by the peripheral follicular antrum (ICCs = 0.75-0.78; P < 0.05). However, lower reliability was determined for the follicular wall (ICCs = 0.08-0.33; P < 0.0001) and the perifollicular stroma (ICCs = 0.16-0.46; P < 0.05). In conclusion, both software packages showed high reproducibility for morphometric measurements, while echotexture measurements were more difficult to replicate, depending on ROI size. Caution is advised when selecting ROI sizes for echotexture measurements in bovine ovaries.
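The agreement statistic reported above, Lin's concordance correlation coefficient (CCC), can be computed directly from paired measurements. A minimal sketch follows; the paired follicle diameters are made-up illustrative values, not the study's data.

```python
# Lin's concordance correlation coefficient (CCC) between two paired
# measurement series, e.g. the same structure measured in two software
# packages. CCC = 1 means perfect agreement.

def ccc(x, y):
    """Lin's CCC, using population (biased) variances and covariance."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    vx = sum((a - mx) ** 2 for a in x) / n
    vy = sum((b - my) ** 2 for b in y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y)) / n
    return 2 * cov / (vx + vy + (mx - my) ** 2)

# Hypothetical paired follicle diameters (mm) from two software packages
soft_a = [10.2, 11.5, 9.8, 12.1, 10.9]
soft_b = [10.0, 11.8, 9.9, 12.0, 11.1]
print(round(ccc(soft_a, soft_b), 3))
```

Unlike Pearson's correlation, the CCC penalizes both location shifts (the mean-difference term) and scale differences, which is why it is preferred for method-comparison studies like this one.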
Affiliation(s)
- César Augusto Pinzón-Osorio
- Embryology and Reproductive Technology Lab, School of Veterinary Medicine, Federal University of Rio Grande do Sul, Porto Alegre, RS, Brazil
- Julia Nobre Blank Camozzato
- Embryology and Reproductive Technology Lab, School of Veterinary Medicine, Federal University of Rio Grande do Sul, Porto Alegre, RS, Brazil; Research Group "Fisiopatologia e Biotécnicas da Reprodução Animal" (FiBRA), Large Ruminant Sector, Department of Animal Medicine, Federal University of Rio Grande do Sul, Porto Alegre, RS, Brazil
- Gabriella Dos Santos Velho
- Research Group "Fisiopatologia e Biotécnicas da Reprodução Animal" (FiBRA), Large Ruminant Sector, Department of Animal Medicine, Federal University of Rio Grande do Sul, Porto Alegre, RS, Brazil
- André Gustavo Cabrera Dalto
- Research Group "Fisiopatologia e Biotécnicas da Reprodução Animal" (FiBRA), Large Ruminant Sector, Department of Animal Medicine, Federal University of Rio Grande do Sul, Porto Alegre, RS, Brazil
- Monique Tomazele Rovani
- Research Group "Fisiopatologia e Biotécnicas da Reprodução Animal" (FiBRA), Large Ruminant Sector, Department of Animal Medicine, Federal University of Rio Grande do Sul, Porto Alegre, RS, Brazil
- Fernando Caetano de Oliveira
- Embryology and Reproductive Technology Lab, School of Veterinary Medicine, Federal University of Rio Grande do Sul, Porto Alegre, RS, Brazil; Research Group "Fisiopatologia e Biotécnicas da Reprodução Animal" (FiBRA), Large Ruminant Sector, Department of Animal Medicine, Federal University of Rio Grande do Sul, Porto Alegre, RS, Brazil
- Marcelo Bertolini
- Embryology and Reproductive Technology Lab, School of Veterinary Medicine, Federal University of Rio Grande do Sul, Porto Alegre, RS, Brazil
2
Bartolo MA, Taylor-LaPole AM, Gandhi D, Johnson A, Li Y, Slack E, Stevens I, Turner ZG, Weigand JD, Puelz C, Husmeier D, Olufsen MS. Computational framework for the generation of one-dimensional vascular models accounting for uncertainty in networks extracted from medical images. J Physiol 2024. PMID: 39075725. DOI: 10.1113/jp286193.
Abstract
One-dimensional (1D) cardiovascular models offer a non-invasive method to answer medical questions, including predictions of wave reflection, shear stress, functional flow reserve, vascular resistance and compliance. This model type can predict patient-specific outcomes by solving 1D fluid dynamics equations in geometric networks extracted from medical images. However, the inherent uncertainty in in vivo imaging introduces variability in network size and vessel dimensions, affecting haemodynamic predictions. Understanding the influence of variation in image-derived properties is essential to assess the fidelity of model predictions. Numerous programs exist to render three-dimensional surfaces and construct vessel centrelines. Still, there is no exact way to generate vascular trees from the centrelines while accounting for uncertainty in data. This study introduces an innovative framework employing statistical change point analysis to generate labelled trees that encode vessel dimensions and their associated uncertainty from medical images. To test this framework, we explore the impact of uncertainty on 1D haemodynamic predictions in a systemic and a pulmonary arterial network. Simulations explore haemodynamic variations resulting from changes in vessel dimensions and segmentation; the latter is achieved by analysing multiple segmentations of the same images. Results demonstrate the importance of accurately defining vessel radii and lengths when generating high-fidelity patient-specific haemodynamics models.
KEY POINTS:
- This study introduces novel algorithms for generating labelled directed trees from medical images, focusing on accurate junction node placement and radius extraction using change points to provide haemodynamic predictions with uncertainty within expected measurement error.
- Geometric features, such as vessel dimensions (length and radius) and network size, significantly impact pressure and flow predictions in both pulmonary and aortic arterial networks.
- Standardizing networks to a consistent number of vessels is crucial for meaningful comparisons and decreases haemodynamic uncertainty.
- Change points are valuable for understanding structural transitions in vascular data, providing an automated and efficient way to detect shifts in vessel characteristics and ensure reliable extraction of representative vessel radii.
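The change-point idea behind the radius extraction above can be illustrated with a single-change-point least-squares split: choose the split position that minimises the total within-segment squared error. This is a toy sketch of the general technique, not the authors' framework, and the radius values are made up.

```python
# Single change point in a sequence of centreline radius samples: the
# split index minimising the summed within-segment squared error marks
# the transition between two vessel calibres.

def best_split(values):
    """Return index k minimising SSE of segments values[:k] and values[k:]."""
    def sse(seg):
        m = sum(seg) / len(seg)
        return sum((v - m) ** 2 for v in seg)
    return min(range(1, len(values)),
               key=lambda k: sse(values[:k]) + sse(values[k:]))

# Hypothetical radii (mm): a ~4 mm parent vessel narrowing to ~2 mm
radii = [4.1, 3.9, 4.0, 4.2, 2.1, 2.0, 1.9, 2.1]
print(best_split(radii))  # → 4
```

Averaging each detected segment then yields a representative radius per vessel, with the scatter around the segment mean giving a natural uncertainty estimate.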
Affiliation(s)
- Michelle A Bartolo
- Department of Mathematics, North Carolina State University, Raleigh, NC, USA
- Darsh Gandhi
- Department of Mathematics, North Carolina State University, Raleigh, NC, USA
- Department of Mathematics, University of Texas at Arlington, Arlington, TX, USA
- Alexandria Johnson
- Department of Mathematics, North Carolina State University, Raleigh, NC, USA
- Department of Mathematics and Statistics, University of South Florida, Tampa, FL, USA
- Yaqi Li
- Department of Mathematics, North Carolina State University, Raleigh, NC, USA
- North Carolina School of Science and Mathematics, Durham, NC, USA
- Emma Slack
- Department of Mathematics, North Carolina State University, Raleigh, NC, USA
- Department of Mathematics, Colorado State University, Fort Collins, CO, USA
- Isaiah Stevens
- Department of Mathematics, North Carolina State University, Raleigh, NC, USA
- Zachary G Turner
- Department of Mathematics, North Carolina State University, Raleigh, NC, USA
- School of Mathematical and Statistical Sciences, Arizona State University, Tempe, AZ, USA
- Justin D Weigand
- Division of Cardiology, Department of Pediatrics, Baylor College of Medicine, Houston, TX, USA
- Charles Puelz
- Division of Cardiology, Department of Pediatrics, Baylor College of Medicine, Houston, TX, USA
- Dirk Husmeier
- School of Mathematics and Statistics, University of Glasgow, Glasgow, UK
- Mette S Olufsen
- Department of Mathematics, North Carolina State University, Raleigh, NC, USA
3
Han M, Luo X, Xie X, Liao W, Zhang S, Song T, Wang G, Zhang S. DMSPS: Dynamically mixed soft pseudo-label supervision for scribble-supervised medical image segmentation. Med Image Anal 2024; 97:103274. PMID: 39043109. DOI: 10.1016/j.media.2024.103274.
Abstract
The high performance of deep learning for medical image segmentation relies on large-scale pixel-level dense annotations, which impose a substantial burden on medical experts due to the laborious and time-consuming annotation process, particularly for 3D images. To reduce labeling costs while maintaining relatively satisfactory segmentation performance, weakly supervised learning with sparse labels has attracted increasing attention. In this work, we present a scribble-based framework for medical image segmentation, called Dynamically Mixed Soft Pseudo-label Supervision (DMSPS). Concretely, we extend a backbone with an auxiliary decoder to form a dual-branch network that enhances the feature-capture capability of the shared encoder. Considering that most pixels have no labels and that hard pseudo-labels tend to be over-confident, resulting in poor segmentation, we propose to use soft pseudo-labels generated by dynamically mixing the decoders' predictions as auxiliary supervision. To further enhance the model's performance, we adopt a two-stage approach in which the sparse scribbles are expanded based on low-uncertainty predictions from the first-stage model, yielding more annotated pixels to train the second-stage model. Experiments on the ACDC dataset for cardiac structure segmentation, the WORD dataset for 3D abdominal organ segmentation and the BraTS2020 dataset for 3D brain tumor segmentation showed that: (1) compared with the baseline, our method improved the average DSC from 50.46% to 89.51%, from 75.46% to 87.56% and from 52.61% to 76.53% on the three datasets, respectively; and (2) DMSPS achieved better performance than five state-of-the-art scribble-supervised segmentation methods and is generalizable to different segmentation backbones. The code is available online at: https://github.com/HiLab-git/DMSPS.
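The soft pseudo-label described above is a convex combination of the two decoder branches' per-pixel class probabilities. The sketch below shows only that mixing step on toy 3-class probabilities with a fixed weight; in the actual DMSPS framework the mixing weight is drawn dynamically during training.

```python
# Soft pseudo-label from two decoder branches: a per-pixel convex
# combination of their class-probability vectors. Because the result
# stays a proper distribution, it is less over-confident than an argmax
# (hard) pseudo-label.

def mix_soft_labels(p_main, p_aux, alpha=0.6):
    """Convex combination of two probability vectors per pixel."""
    return [
        [alpha * a + (1 - alpha) * b for a, b in zip(pm, pa)]
        for pm, pa in zip(p_main, p_aux)
    ]

# Two pixels, 3 classes: the decoders disagree on the second pixel
p_main = [[0.8, 0.1, 0.1], [0.2, 0.7, 0.1]]
p_aux  = [[0.7, 0.2, 0.1], [0.5, 0.3, 0.2]]
soft = mix_soft_labels(p_main, p_aux)
```

Each mixed row still sums to one, so it can supervise unlabeled pixels with a cross-entropy-style loss while retaining each branch's uncertainty.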
Affiliation(s)
- Meng Han
- School of Mechanical and Electrical Engineering, University of Electronic Science and Technology of China, Chengdu, China
- Xiangde Luo
- School of Mechanical and Electrical Engineering, University of Electronic Science and Technology of China, Chengdu, China; Shanghai Artificial Intelligence Laboratory, Shanghai, China
- Xiangjiang Xie
- School of Mechanical and Electrical Engineering, University of Electronic Science and Technology of China, Chengdu, China
- Wenjun Liao
- Department of Radiation Oncology, Sichuan Cancer Hospital and Institute, Sichuan Cancer Center, Chengdu, China; School of Medicine, University of Electronic Science and Technology of China, Chengdu, China
- Shichuan Zhang
- Department of Radiation Oncology, Sichuan Cancer Hospital and Institute, Sichuan Cancer Center, Chengdu, China
- Tao Song
- SenseTime Research, Shanghai, China
- Guotai Wang
- School of Mechanical and Electrical Engineering, University of Electronic Science and Technology of China, Chengdu, China; Shanghai Artificial Intelligence Laboratory, Shanghai, China
- Shaoting Zhang
- School of Mechanical and Electrical Engineering, University of Electronic Science and Technology of China, Chengdu, China; Shanghai Artificial Intelligence Laboratory, Shanghai, China
4
Liao AH, Wang CH, Wang CY, Liu HL, Chuang HC, Tseng WJ, Weng WC, Shih CP, Tsui PH. Computer-Aided Diagnosis of Duchenne Muscular Dystrophy Based on Texture Pattern Recognition on Ultrasound Images Using Unsupervised Clustering Algorithms and Deep Learning. Ultrasound Med Biol 2024; 50:1058-1068. PMID: 38637169. DOI: 10.1016/j.ultrasmedbio.2024.03.022.
Abstract
OBJECTIVE The feasibility of using deep learning on ultrasound images to predict the ambulatory status of patients with Duchenne muscular dystrophy (DMD) was explored for the first time in a previous study. The present study further used clustering algorithms for the texture reconstruction of ultrasound images in DMD data sets and analyzed the difference in echo intensity between disease stages. METHODS k-means (Kms) and fuzzy c-means (FCM) clustering algorithms were used to reconstruct the DMD data-set textures. Each image was reconstructed using seven texture-feature categories, six of which were used as the primary analysis items. The task of automatically identifying ambulatory function and DMD severity was performed by establishing a machine-learning model. RESULTS The experimental results indicated that the Gaussian naïve Bayes and k-nearest neighbors classification models achieved an accuracy of 86.78% in ambulatory-function classification. The decision-tree model achieved an identification accuracy of 83.80% in severity classification. A deep convolutional neural network was established as the main structure of the deep-learning model to perform automatic auxiliary interpretation of ambulatory function and severity, and data augmentation was used to improve the recognition performance of the trained model. Both the visual geometry group (VGG)-16 and VGG-19 models achieved 98.53% accuracy in ambulatory-function classification. The VGG-19 model achieved 92.64% accuracy in severity classification. CONCLUSION Overall, the Kms and FCM clustering algorithms used in this study to reconstruct the characteristic texture of the gastrocnemius muscle group in DMD were indeed helpful for quantitatively analyzing the deterioration of the gastrocnemius muscle group in patients with DMD at different stages. Subsequently combining machine-learning and deep-learning technologies can automatically and accurately assist in identifying DMD symptoms and in tracking DMD deterioration for long-term observation.
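The Kms texture reconstruction above rests on ordinary k-means clustering of pixel intensities. A toy one-dimensional sketch of Lloyd's algorithm follows, with made-up grey-level values standing in for muscle echoes; it illustrates the basic operation, not the study's implementation.

```python
# One-dimensional k-means (Lloyd's algorithm) over pixel intensities:
# alternate assigning each value to its nearest centroid and moving each
# centroid to the mean of its assigned values.

def kmeans_1d(values, centroids, iters=10):
    """Cluster scalar intensities around the given initial centroids."""
    for _ in range(iters):
        # assignment step: nearest centroid per value
        groups = [[] for _ in centroids]
        for v in values:
            i = min(range(len(centroids)), key=lambda j: abs(v - centroids[j]))
            groups[i].append(v)
        # update step: centroid = mean of its group (kept if group is empty)
        centroids = [sum(g) / len(g) if g else c
                     for g, c in zip(groups, centroids)]
    return centroids

# Hypothetical grey levels: dark muscle tissue vs bright fibrotic echoes
pixels = [18, 22, 20, 200, 210, 190, 25, 205]
print(kmeans_1d(pixels, centroids=[0.0, 255.0]))  # → [21.25, 201.25]
```

Replacing each pixel with its cluster centroid yields the reconstructed texture map; fuzzy c-means differs only in assigning graded memberships instead of hard ones.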
Affiliation(s)
- Ai-Ho Liao
- Graduate Institute of Biomedical Engineering, National Taiwan University of Science and Technology, Taipei, Taiwan; Department of Biomedical Engineering, National Defense Medical Center, Taipei, Taiwan
- Chih-Hung Wang
- Division of Otolaryngology, Taipei Veterans General Hospital, Taoyuan Branch, Taoyuan, Taiwan; Graduate Institute of Medical Sciences, National Defense Medical Center, Taipei, Taiwan; Department of Otolaryngology-Head and Neck Surgery, Tri-Service General Hospital, National Defense Medical Center, Taipei, Taiwan
- Chong-Yu Wang
- Graduate Institute of Biomedical Engineering, National Taiwan University of Science and Technology, Taipei, Taiwan
- Hao-Li Liu
- Department of Electrical Engineering, National Taiwan University, Taipei, Taiwan
- Ho-Chiao Chuang
- Department of Mechanical Engineering, National Taipei University of Technology, Taipei, Taiwan
- Wei-Jye Tseng
- Graduate Institute of Biomedical Engineering, National Taiwan University of Science and Technology, Taipei, Taiwan
- Wen-Chin Weng
- Department of Pediatrics, National Taiwan University Hospital, and College of Medicine, National Taiwan University, Taipei, Taiwan; Department of Pediatric Neurology, National Taiwan University Children's Hospital, Taipei, Taiwan
- Cheng-Ping Shih
- Department of Otolaryngology-Head and Neck Surgery, Tri-Service General Hospital, National Defense Medical Center, Taipei, Taiwan
- Po-Hsiang Tsui
- Department of Medical Imaging and Radiological Sciences, College of Medicine, Chang Gung University, Taoyuan, Taiwan; Institute for Radiological Research, Chang Gung University and Chang Gung Memorial Hospital at Linkou, Taoyuan, Taiwan; Research Center for Radiation Medicine, Chang Gung University, Taoyuan, Taiwan
5
Leventhal S, Gyulassy A, Heimann M, Pascucci V. Exploring Classification of Topological Priors With Machine Learning for Feature Extraction. IEEE Trans Vis Comput Graph 2024; 30:3959-3972. PMID: 37027638. DOI: 10.1109/tvcg.2023.3248632.
Abstract
In many scientific endeavors, increasingly abstract representations of data allow for new interpretive methodologies and conceptualization of phenomena. For example, moving from raw imaged pixels to segmented and reconstructed objects allows researchers new insights and means to direct their studies toward relevant areas. Thus, the development of new and improved methods for segmentation remains an active area of research. With advances in machine learning and neural networks, scientists have been focused on employing deep neural networks such as U-Net to obtain pixel-level segmentations, namely, defining associations between pixels and corresponding/referent objects and gathering those objects afterward. Topological analysis, such as the use of the Morse-Smale complex to encode regions of uniform gradient flow behavior, offers an alternative approach: first, create geometric priors, and then apply machine learning to classify. This approach is empirically motivated since phenomena of interest often appear as subsets of topological priors in many applications. Using topological elements not only reduces the learning space but also introduces the ability to use learnable geometries and connectivity to aid the classification of the segmentation target. In this article, we describe an approach to creating learnable topological elements, explore the application of ML techniques to classification tasks in a number of areas, and demonstrate this approach as a viable alternative to pixel-level classification, with similar accuracy, improved execution time, and requiring marginal training data.
6
Gazula H, Tregidgo HFJ, Billot B, Balbastre Y, Williams-Ramirez J, Herisse R, Deden-Binder LJ, Casamitjana A, Melief EJ, Latimer CS, Kilgore MD, Montine M, Robinson E, Blackburn E, Marshall MS, Connors TR, Oakley DH, Frosch MP, Young SI, Van Leemput K, Dalca AV, Fischl B, MacDonald CL, Keene CD, Hyman BT, Iglesias JE. Machine learning of dissection photographs and surface scanning for quantitative 3D neuropathology. eLife 2024; 12:RP91398. PMID: 38896568. PMCID: PMC11186625. DOI: 10.7554/elife.91398.
Abstract
We present open-source tools for three-dimensional (3D) analysis of photographs of dissected slices of human brains, which are routinely acquired in brain banks but seldom used for quantitative analysis. Our tools can: (1) 3D reconstruct a volume from the photographs and, optionally, a surface scan; and (2) produce a high-resolution 3D segmentation into 11 brain regions per hemisphere (22 in total), independently of the slice thickness. Our tools can be used as a substitute for ex vivo magnetic resonance imaging (MRI), which requires access to an MRI scanner, ex vivo scanning expertise, and considerable financial resources. We tested our tools on synthetic and real data from two NIH Alzheimer's Disease Research Centers. The results show that our methodology yields accurate 3D reconstructions, segmentations, and volumetric measurements that are highly correlated to those from MRI. Our method also detects expected differences between post mortem confirmed Alzheimer's disease cases and controls. The tools are available in our widespread neuroimaging suite 'FreeSurfer' (https://surfer.nmr.mgh.harvard.edu/fswiki/PhotoTools).
Affiliation(s)
- Harshvardhan Gazula
- Martinos Center for Biomedical Imaging, MGH and Harvard Medical School, Charlestown, United States
- Henry FJ Tregidgo
- Centre for Medical Image Computing, University College London, London, United Kingdom
- Benjamin Billot
- Computer Science and Artificial Intelligence Laboratory, MIT, Cambridge, United States
- Yael Balbastre
- Martinos Center for Biomedical Imaging, MGH and Harvard Medical School, Charlestown, United States
- Rogeny Herisse
- Martinos Center for Biomedical Imaging, MGH and Harvard Medical School, Charlestown, United States
- Lucas J Deden-Binder
- Martinos Center for Biomedical Imaging, MGH and Harvard Medical School, Charlestown, United States
- Adria Casamitjana
- Centre for Medical Image Computing, University College London, London, United Kingdom
- Biomedical Imaging Group, Universitat Politècnica de Catalunya, Barcelona, Spain
- Erica J Melief
- BioRepository and Integrated Neuropathology (BRaIN) Laboratory and Precision Neuropathology Core, UW School of Medicine, Seattle, United States
- Caitlin S Latimer
- BioRepository and Integrated Neuropathology (BRaIN) Laboratory and Precision Neuropathology Core, UW School of Medicine, Seattle, United States
- Mitchell D Kilgore
- BioRepository and Integrated Neuropathology (BRaIN) Laboratory and Precision Neuropathology Core, UW School of Medicine, Seattle, United States
- Mark Montine
- BioRepository and Integrated Neuropathology (BRaIN) Laboratory and Precision Neuropathology Core, UW School of Medicine, Seattle, United States
- Eleanor Robinson
- Centre for Medical Image Computing, University College London, London, United Kingdom
- Emily Blackburn
- Centre for Medical Image Computing, University College London, London, United Kingdom
- Michael S Marshall
- Massachusetts Alzheimer Disease Research Center, MGH and Harvard Medical School, Charlestown, United States
- Theresa R Connors
- Massachusetts Alzheimer Disease Research Center, MGH and Harvard Medical School, Charlestown, United States
- Derek H Oakley
- Massachusetts Alzheimer Disease Research Center, MGH and Harvard Medical School, Charlestown, United States
- Matthew P Frosch
- Massachusetts Alzheimer Disease Research Center, MGH and Harvard Medical School, Charlestown, United States
- Sean I Young
- Martinos Center for Biomedical Imaging, MGH and Harvard Medical School, Charlestown, United States
- Koen Van Leemput
- Martinos Center for Biomedical Imaging, MGH and Harvard Medical School, Charlestown, United States
- Neuroscience and Biomedical Engineering, Aalto University, Espoo, Finland
- Adrian V Dalca
- Martinos Center for Biomedical Imaging, MGH and Harvard Medical School, Charlestown, United States
- Computer Science and Artificial Intelligence Laboratory, MIT, Cambridge, United States
- Bruce Fischl
- Martinos Center for Biomedical Imaging, MGH and Harvard Medical School, Charlestown, United States
- C Dirk Keene
- BioRepository and Integrated Neuropathology (BRaIN) Laboratory and Precision Neuropathology Core, UW School of Medicine, Seattle, United States
- Bradley T Hyman
- Massachusetts Alzheimer Disease Research Center, MGH and Harvard Medical School, Charlestown, United States
- Juan E Iglesias
- Martinos Center for Biomedical Imaging, MGH and Harvard Medical School, Charlestown, United States
- Centre for Medical Image Computing, University College London, London, United Kingdom
- Computer Science and Artificial Intelligence Laboratory, MIT, Cambridge, United States
7
dos Santos PV, Scoczynski Ribeiro Martins M, Amorim Nogueira S, Gonçalves C, Maffei Loureiro R, Pacheco Calixto W. Unsupervised model for structure segmentation applied to brain computed tomography. PLoS One 2024; 19:e0304017. PMID: 38870119. PMCID: PMC11175403. DOI: 10.1371/journal.pone.0304017.
Abstract
This article presents an unsupervised method for segmenting brain computed tomography scans. The proposed methodology involves image feature extraction and the application of similarity and continuity constraints to generate segmentation maps of the anatomical head structures. Specifically designed for real-world datasets, the approach applies a spatial continuity scoring function tailored to the desired number of structures. The primary objective is to assist medical experts in diagnosis by identifying regions with specific abnormalities. Results indicate a simplified and accessible solution, reducing computational effort, training time, and financial costs. Moreover, the method shows potential to expedite the interpretation of abnormal scans, thereby impacting clinical practice. The proposed approach might serve as a practical tool for segmenting brain computed tomography scans and make a significant contribution to the analysis of medical images in both research and clinical settings.
Affiliation(s)
- Paulo Victor dos Santos
- Electrical, Mechanical & Computer Engineering School, Federal University of Goias, Goiania, Brazil
- Department of Radiology, Hospital Israelita Albert Einstein, Sao Paulo, Sao Paulo, Brazil
- Technology Research and Development Center (GCITE), Federal Institute of Goias, Goiania, Brazil
- Marcella Scoczynski Ribeiro Martins
- Electrical, Mechanical & Computer Engineering School, Federal University of Goias, Goiania, Brazil
- Federal University of Technology - Parana, Ponta Grossa, Parana, Brazil
- Solange Amorim Nogueira
- Electrical, Mechanical & Computer Engineering School, Federal University of Goias, Goiania, Brazil
- Department of Radiology, Hospital Israelita Albert Einstein, Sao Paulo, Sao Paulo, Brazil
- Rafael Maffei Loureiro
- Department of Radiology, Hospital Israelita Albert Einstein, Sao Paulo, Sao Paulo, Brazil
- Wesley Pacheco Calixto
- Electrical, Mechanical & Computer Engineering School, Federal University of Goias, Goiania, Brazil
- Technology Research and Development Center (GCITE), Federal Institute of Goias, Goiania, Brazil
8
Wu J, Zhang S, Huang Y, Hao Q. Fringe-based depth segmentation via minimum-fringe-period-based singular points extraction. Opt Express 2024; 32:20066-20079. PMID: 38859124. DOI: 10.1364/oe.524008.
Abstract
In the field of machine vision, depth segmentation plays a crucial role in dividing targets into different regions based on abrupt changes in depth. Phase-shifting depth segmentation is a technique that extracts singular points to form segmentation lines by leveraging the phase-shifting invariance of singular points across different wrapped phase maps, which makes it immune to color, texture, and camera exposure. However, current phase-shifting depth segmentation techniques struggle with segmentation precision. To overcome this issue, this paper proposes a singular-points extraction technique that constructs a more comprehensive threshold with the help of the minimum period of the phase map. The proposed technique accurately filters out mean-value points and order singular points and guarantees the integrity of segmentation lines in high-curvature regions. During optimization processing, segmentation precision is further improved by employing a low-cost morphology-based optimization model. Simulation results demonstrate that segmentation accuracy reaches up to 98.58% even under noisy conditions. Experimental results on different objects indicate that the proposed method exhibits good generalization and robustness.
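The wrapped phase maps the method analyses come from phase-shifted fringe intensities. As background, the standard four-step phase-shifting relation is sketched below on a single synthetic pixel; the paper's singular-point thresholding itself is not reproduced here, and the four-step scheme is an illustrative assumption.

```python
# Wrapped phase from four-step phase-shifted fringe intensities
# (shifts of 0, 90, 180, 270 degrees). For I_k = A + B*cos(phi + k*pi/2):
# I4 - I2 = 2B*sin(phi) and I1 - I3 = 2B*cos(phi), so atan2 recovers phi.
import math

def wrapped_phase(i1, i2, i3, i4):
    """Wrapped phase in (-pi, pi] from four phase-shifted intensities."""
    return math.atan2(i4 - i2, i1 - i3)

# Synthetic pixel: background A, modulation B, true phase phi = 0.5 rad
A, B, phi = 100.0, 50.0, 0.5
i = [A + B * math.cos(phi + k * math.pi / 2) for k in range(4)]
print(round(wrapped_phase(*i), 3))  # → 0.5
```

The background term A and modulation B cancel in the differences, which is why phase-based segmentation is insensitive to surface color and exposure, as the abstract notes.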
9
Collazo C, Vargas I, Cara B, Weinheimer CJ, Grabau RP, Goldgof D, Hall L, Wickline SA, Pan H. Synergizing Deep Learning-Enabled Preprocessing and Human-AI Integration for Efficient Automatic Ground Truth Generation. Bioengineering (Basel) 2024; 11:434. PMID: 38790302. PMCID: PMC11117745. DOI: 10.3390/bioengineering11050434.
Abstract
Progress in incorporating deep learning into medical image interpretation has been greatly hindered by the tremendous cost and time associated with generating ground truth for supervised machine learning, alongside concerns about the inconsistent quality of acquired images. Active learning offers a potential solution for expanding dataset ground truth by algorithmically choosing the most informative samples for ground-truth labeling. Still, this effort incurs the cost of human labeling, which needs to be minimized. Furthermore, automatic labeling approaches employing active learning often exhibit overfitting tendencies: they select samples closely aligned with the training-set distribution and exclude out-of-distribution samples, which could potentially improve the model's effectiveness. We propose that the majority of out-of-distribution instances can be attributed to inconsistencies across images. Since the FDA approved the first whole-slide image system for medical diagnosis in 2017, whole-slide images have provided enriched critical information to advance the field of automated histopathology. Here, we exemplify the benefits of a novel deep learning strategy that utilizes high-resolution whole-slide microscopic images. We quantitatively assess and visually highlight the inconsistencies within the whole-slide image dataset employed in this study. Accordingly, we introduce a deep learning-based preprocessing algorithm designed to normalize unknown samples to the training-set distribution, effectively mitigating the overfitting issue. Consequently, our approach significantly increases the amount of automatic region-of-interest ground-truth labeling on high-resolution whole-slide images using active deep learning. We accept 92% of the automatic labels generated for our unlabeled data cohort, expanding the labeled dataset by 845%. Additionally, we demonstrate expert time savings of 96% relative to manual expert ground-truth labeling.
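The "most informative samples" selection the article builds on is commonly implemented as uncertainty sampling: rank unlabeled samples by predictive entropy and route the most uncertain ones to the expert. This is a generic sketch of that step with made-up class probabilities, not the authors' pipeline.

```python
# Uncertainty-based active-learning selection: samples whose predicted
# class distribution has the highest Shannon entropy are the ones the
# current model is least sure about, and thus most informative to label.
import math

def entropy(probs):
    """Shannon entropy (nats) of a class-probability vector."""
    return -sum(p * math.log(p) for p in probs if p > 0)

def select_most_uncertain(predictions, k=1):
    """Indices of the k samples with the highest predictive entropy."""
    ranked = sorted(range(len(predictions)),
                    key=lambda i: entropy(predictions[i]), reverse=True)
    return ranked[:k]

# Hypothetical per-sample class probabilities from the current model
preds = [[0.95, 0.05], [0.55, 0.45], [0.85, 0.15]]
print(select_most_uncertain(preds, k=1))  # → [1]
```

The near-50/50 sample ranks first; confident predictions rank last. The article's contribution is orthogonal: normalizing out-of-distribution samples before this ranking so that selection is not dominated by image inconsistencies.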
Affiliation(s)
- Ian Vargas
- The Heart Institute, College of Medicine, University of South Florida, Tampa, FL 33602, USA
- Brendon Cara
- The Heart Institute, College of Medicine, University of South Florida, Tampa, FL 33602, USA
- Carla J. Weinheimer
- Department of Medicine, Washington University in St. Louis, St. Louis, MO 63110, USA
- Ryan P. Grabau
- The Heart Institute, College of Medicine, University of South Florida, Tampa, FL 33602, USA
- Dmitry Goldgof
- College of Engineering, University of South Florida, Tampa, FL 33620, USA
- Lawrence Hall
- College of Engineering, University of South Florida, Tampa, FL 33620, USA
- Samuel A. Wickline
- The Heart Institute, College of Medicine, University of South Florida, Tampa, FL 33602, USA
- Hua Pan
- Department of Medicine, Washington University in St. Louis, St. Louis, MO 63110, USA
- Department of Pathology & Immunology, Washington University in St. Louis, St. Louis, MO 63110, USA
- Department of Biomedical Engineering, Washington University in St. Louis, St. Louis, MO 63130, USA
|
10
|
Driban M, Yan A, Selvam A, Ong J, Vupparaboina KK, Chhablani J. Artificial intelligence in chorioretinal pathology through fundoscopy: a comprehensive review. Int J Retina Vitreous 2024; 10:36. [PMID: 38654344 PMCID: PMC11036694 DOI: 10.1186/s40942-024-00554-4] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/04/2024] [Accepted: 04/02/2024] [Indexed: 04/25/2024] Open
Abstract
BACKGROUND Applications for artificial intelligence (AI) in ophthalmology are continually evolving. Fundoscopy is one of the oldest ocular imaging techniques but remains a mainstay in posterior segment imaging due to its prevalence, ease of use, and ongoing technological advancement. AI has been leveraged for fundoscopy to accomplish core tasks including segmentation, classification, and prediction. MAIN BODY In this article we provide a review of AI in fundoscopy applied to representative chorioretinal pathologies, including diabetic retinopathy and age-related macular degeneration, among others. We conclude with a discussion of future directions and current limitations. SHORT CONCLUSION As AI evolves, it will become increasingly essential for the modern ophthalmologist to understand its applications and limitations to improve patient outcomes and continue to innovate.
Affiliation(s)
- Matthew Driban
- Department of Ophthalmology, University of Pittsburgh School of Medicine, Pittsburgh, PA, USA
- Audrey Yan
- Department of Medicine, West Virginia School of Osteopathic Medicine, Lewisburg, WV, USA
- Amrish Selvam
- Department of Ophthalmology, University of Pittsburgh School of Medicine, Pittsburgh, PA, USA
- Joshua Ong
- Michigan Medicine, University of Michigan, Ann Arbor, USA
- Jay Chhablani
- Department of Ophthalmology, University of Pittsburgh School of Medicine, Pittsburgh, PA, USA

11
El-Beblawy YM, Bakry AM, Mohamed MEA. Accuracy of formula-based volume and image segmentation-based volume in calculation of preoperative cystic jaw lesions' volume. Oral Radiol 2024; 40:259-268. [PMID: 38112919 DOI: 10.1007/s11282-023-00731-5] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/20/2023] [Accepted: 11/26/2023] [Indexed: 12/21/2023]
Abstract
OBJECTIVE The aim of this study was to assess the accuracy of formula-based volume measurements and 3D volume analysis with different software packages in the calculation of preoperative cystic jaw lesions' volume. The secondary aim was to assess the reliability and accuracy of three imaging software programs for measuring cystic jaw lesions' volume in CBCT images. MATERIALS AND METHODS This study consisted of two parts. In the in vitro part, two dry human mandibles were used to create simulated osteolytic lesions to assess the accuracy of the volumetric analysis and formula-based volume. As a gold standard, the volume of each bone defect was determined by taking an impression using rapid soft silicone (Vinylight) and then quantifying the volume of the replica. Afterward, each tooth socket was scanned using high-resolution CBCT. In the retrospective part, archived CBCT radiographs from the database of the outpatient clinic of the Oral and Maxillofacial Radiology Department, Faculty of Dentistry, Minia University, were used to assess the reliability of the three software packages. The volumetric data set was exported for volume quantification using the three software packages (MIMICS, OnDemand, and InVesalius). Also, the three greatest orthogonal diameters of the lesions were measured, and the volume was estimated using the ellipsoid formula. Dunn's test was used for pair-wise comparisons when Friedman's test was significant. Inter-examiner agreement was assessed using Cronbach's alpha reliability coefficient and the intra-class correlation coefficient. RESULTS In the retrospective part, there was a statistically significant difference between volumetric measurements by equation and by the different software packages (P value < 0.001, effect size = 0.513). The inter-observer reliability of the measurements of the cystic lesions using the different software packages was very good. The highest inter-examiner agreement for volume measurement was found with InVesalius (Cronbach's alpha = 0.992). On the other hand, there was a statistically significant difference between dry mandible volumetric measurements and the gold standard: all software packages showed statistically significantly lower dry mandible volumetric measurements than the gold standard. CONCLUSION Computer-aided assessment of cystic lesion volume using InVesalius, OnDemand, and MIMICS is a readily available, easy-to-use, non-invasive option. It confers an advantage over formula-based volume in that it gives the exact morphology of the lesion, so potential problems can be detected before surgery. Volume analysis with InVesalius was accurate in determining the volume of simulated periapical defects in a human cadaver mandible compared to the true volume. InVesalius also demonstrates that open-source software can be robust yet user-friendly, with the advantage of minimal cost.
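The ellipsoid formula referred to above estimates lesion volume from the three greatest orthogonal diameters as V = (π/6)·d₁·d₂·d₃. A minimal sketch (the function name and example values are illustrative, not from the study):

```python
import math

def ellipsoid_volume(d1_mm, d2_mm, d3_mm):
    """Ellipsoid approximation of lesion volume from the three greatest
    orthogonal diameters: V = (pi/6) * d1 * d2 * d3."""
    return (math.pi / 6.0) * d1_mm * d2_mm * d3_mm

# A lesion measuring 20 x 15 x 10 mm
print(round(ellipsoid_volume(20, 15, 10), 1))  # 1570.8 (mm^3)
```

This approximation assumes a roughly ellipsoidal lesion, which is exactly the limitation the segmentation-based volumes avoid.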
Affiliation(s)
- Yasmein Maher El-Beblawy
- Department of Oral and Maxillofacial Radiology, Faculty of Dentistry, Minia University, Shalaby Street, Minya, Egypt
- Ahmed Mohamed Bakry
- Department of Oral and Maxillofacial Radiology, Faculty of Dentistry, Minia University, Shalaby Street, Minya, Egypt
- Maha Eshaq Amer Mohamed
- Department of Oral and Maxillofacial Radiology, Faculty of Dentistry, Minia University, Shalaby Street, Minya, Egypt

12
Sikkandar MY, Alhashim MM, Alassaf A, AlMohimeed I, Alhussaini K, Aleid A, Almutairi MJ, Alshammari SH, Asiri YN, Sabarunisha Begum S. Unsupervised local center of mass based scoliosis spinal segmentation and Cobb angle measurement. PLoS One 2024; 19:e0300685. [PMID: 38512969 PMCID: PMC10956862 DOI: 10.1371/journal.pone.0300685] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/01/2023] [Accepted: 03/01/2024] [Indexed: 03/23/2024] Open
Abstract
Scoliosis is a medical condition in which a person's spine has an abnormal curvature, and the Cobb angle is a measurement used to evaluate the severity of a spinal curvature. Existing automatic Cobb angle measurement techniques require huge datasets, are time-consuming, and need significant effort, so it is important to develop an unsupervised method for measuring the Cobb angle with good accuracy. In this work, an unsupervised local center of mass (LCM) technique is proposed to segment the spine region, and a novel Cobb angle measurement method is further proposed for accurate measurement. Validation of the proposed method was carried out on 2D X-ray images from the Saudi Arabian population. Segmentation results were compared with the GMM-based hidden Markov random field (GMM-HMRF) segmentation method on sensitivity, specificity, and Dice score. Based on the findings, our proposed segmentation method provides an overall accuracy of 97.3%, whereas GMM-HMRF has an accuracy of 89.19%. The proposed method also has a higher Dice score of 0.54 compared to GMM-HMRF. To further evaluate the effectiveness of the approach for Cobb angle measurement, the results were compared with those of a senior scoliosis surgeon at a multispecialty hospital in Saudi Arabia. The findings indicated that the segmentation of the scoliotic spine was nearly flawless, and the Cobb angle measurements obtained through manual examination by the expert and by the algorithm were nearly identical, with a discrepancy of only ± 3 degrees. Our proposed method can pave the way for accurate spinal segmentation and Cobb angle measurement in scoliosis patients by reducing observer variability.
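The Cobb angle itself is the angle between the superior endplate of the upper end vertebra and the inferior endplate of the lower end vertebra of the curve. As a minimal illustration of that geometry (not the paper's algorithm, whose endplate lines come from the LCM segmentation; function names and coordinates here are assumptions):

```python
import math

def cobb_angle(upper_endplate, lower_endplate):
    """Cobb angle in degrees between two endplate lines, each given as
    two (x, y) points along its edge in image coordinates."""
    def line_angle(p, q):
        return math.atan2(q[1] - p[1], q[0] - p[0])
    diff = abs(line_angle(*upper_endplate) - line_angle(*lower_endplate))
    diff = math.degrees(diff) % 180.0
    return min(diff, 180.0 - diff)  # acute angle between the lines

# Endplates tilted +10 and -15 degrees from horizontal -> a 25-degree curve
up = [(0.0, 0.0), (10.0, math.tan(math.radians(10)) * 10.0)]
low = [(0.0, 0.0), (10.0, math.tan(math.radians(-15)) * 10.0)]
print(round(cobb_angle(up, low), 1))  # 25.0
```

The ±3 degree discrepancy reported above would then be the difference between this angle computed from algorithmic versus expert-drawn endplate lines.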
Affiliation(s)
- Mohamed Yacin Sikkandar
- Department of Medical Equipment Technology, College of Applied Medical Sciences, Majmaah University, Al Majmaah, Saudi Arabia
- Maryam M. Alhashim
- Department of Radiology, College of Medicine, Imam Abdulrahman Bin Faisal University, Dammam, Saudi Arabia
- Ahmad Alassaf
- Department of Medical Equipment Technology, College of Applied Medical Sciences, Majmaah University, Al Majmaah, Saudi Arabia
- Ibrahim AlMohimeed
- Department of Medical Equipment Technology, College of Applied Medical Sciences, Majmaah University, Al Majmaah, Saudi Arabia
- Khalid Alhussaini
- Department of Biomedical Technology, College of Applied Medical Sciences, King Saud University, Riyadh, Saudi Arabia
- Adham Aleid
- Department of Biomedical Technology, College of Applied Medical Sciences, King Saud University, Riyadh, Saudi Arabia
- Murad J. Almutairi
- Department of Medical Equipment Technology, College of Applied Medical Sciences, Majmaah University, Al Majmaah, Saudi Arabia
- Salem H. Alshammari
- Department of Medical Equipment Technology, College of Applied Medical Sciences, Majmaah University, Al Majmaah, Saudi Arabia
- Yasser N. Asiri
- Medical Imaging Services Center, King Fahad Specialist Hospital Dammam, Dammam, Saudi Arabia

13
Panda AK, Verma V, Srivastav A, Badola R, Hussain SA. Digital image processing: A new tool for morphological measurements of freshwater turtles under rehabilitation. PLoS One 2024; 19:e0300253. [PMID: 38484004 PMCID: PMC10939246 DOI: 10.1371/journal.pone.0300253] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/05/2023] [Accepted: 02/23/2024] [Indexed: 03/17/2024] Open
Abstract
Freshwater fauna is facing an uphill task for survival in the Ganga Basin, India, due to a range of factors causing habitat degradation and fragmentation, necessitating conservation interventions. As part of the ongoing efforts to conserve the freshwater fauna of the Basin, we are working on rehabilitating rescued freshwater chelonians. We carry out various interventions to restore rescued individuals to an apparent state of fitness for their release in suitable natural habitats. Morphometric measurements are crucial to managing captive wild animals and assessing their growth and well-being. Measurements are typically made using manual methods, such as vernier calipers, that are prone to observer error and require handling the specimens for extended periods, while digital imaging technology is advancing rapidly. We acquired images of turtles using smartphones, alongside manual morphometric measurements of the straight carapace length and straight carapace width using vernier calipers. The images were subsequently processed using ImageJ, a freeware package, and the results were compared with the manual morphometric measurements. A significant decrease in the time spent carrying out morphometric measurements was observed in our study. The difference in measurement error was, however, not significant; a probable cause may have been the extensive experience of the personnel carrying out the measurements with vernier calipers. Digital image processing can significantly reduce the stress on animals exposed to handling during measurements, thereby improving their welfare. Additionally, it can be used in the field to carry out morphometric measurements of free-ranging individuals, where it is often difficult to capture individuals and challenges are faced in obtaining permission to capture specimens.
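Image-based morphometry of the kind described reduces to measuring a pixel distance between two landmarks and converting it with a known scale, as ImageJ's "Set Scale" and "Measure" commands do. A minimal sketch of that conversion (illustrative only; the function name, landmarks, and scale values are assumptions, not the study's protocol):

```python
import math

def calibrated_distance(p1, p2, scale_mm_per_px):
    """Convert a pixel-space distance between two landmark points into
    millimetres using a known scale, e.g. from a ruler in the image."""
    dist_px = math.dist(p1, p2)
    return dist_px * scale_mm_per_px

# A 50 mm ruler spanning 500 px gives 0.1 mm/px; two carapace landmarks
# lying 1200 px apart then correspond to 120 mm.
scale = 50.0 / 500.0
print(calibrated_distance((100, 200), (1300, 200), scale))  # 120.0
```

The calibration object must lie in the same plane as the measured feature, which is why out-of-plane curvature is a known limitation of photographic carapace measurements.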
Affiliation(s)
- Ashish Kumar Panda
- Ganga Aqualife Conservation and Monitoring Centre, Wildlife Institute of India, Chandrabani, Dehra Dun, Uttarakhand, India
- Academy of Scientific and Innovative Research (AcSIR), Ghaziabad, Uttar Pradesh, India
- Vikas Verma
- Ganga Aqualife Conservation and Monitoring Centre, Wildlife Institute of India, Chandrabani, Dehra Dun, Uttarakhand, India
- Anupam Srivastav
- Ganga Aqualife Conservation and Monitoring Centre, Wildlife Institute of India, Chandrabani, Dehra Dun, Uttarakhand, India
- Ruchi Badola
- Ganga Aqualife Conservation and Monitoring Centre, Wildlife Institute of India, Chandrabani, Dehra Dun, Uttarakhand, India
- Academy of Scientific and Innovative Research (AcSIR), Ghaziabad, Uttar Pradesh, India
- Syed Ainul Hussain
- Ganga Aqualife Conservation and Monitoring Centre, Wildlife Institute of India, Chandrabani, Dehra Dun, Uttarakhand, India

14
Zeng Z, Giap BD, Kahana E, Lustre J, Mahmoud O, Mian SI, Tannen B, Nallasamy N. Evaluation of Methods for Detection and Semantic Segmentation of the Anterior Capsulotomy in Cataract Surgery Video. Clin Ophthalmol 2024; 18:647-657. [PMID: 38476358 PMCID: PMC10929120 DOI: 10.2147/opth.s453073] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/15/2023] [Accepted: 02/20/2024] [Indexed: 03/14/2024] Open
Abstract
Background The capsulorhexis is one of the most important and challenging maneuvers in cataract surgery. Automated analysis of the anterior capsulotomy could aid surgical training through the provision of objective feedback and guidance to trainees. Purpose To develop and evaluate a deep learning-based system for the automated identification and semantic segmentation of the anterior capsulotomy in cataract surgery video. Methods In this study, we established a BigCat-Capsulotomy dataset comprising 1556 video frames extracted from 190 recorded cataract surgery videos for developing and validating the capsulotomy recognition system. The proposed system involves three primary stages: video preprocessing, capsulotomy video frame classification, and capsulotomy segmentation. To thoroughly evaluate its efficacy, we examined the performance of a total of eight deep learning-based classification models and eleven segmentation models, assessing both accuracy and time consumption. Furthermore, we delved into the factors influencing system performance by deploying it across various surgical phases. Results The ResNet-152 model employed in the classification step of the proposed capsulotomy recognition system attained strong performance with an overall Dice coefficient of 92.21%. Similarly, the UNet model with the DenseNet-169 backbone emerged as the most effective segmentation model among those investigated, achieving an overall Dice coefficient of 92.12%. Moreover, the time consumption of the system was low at 103.37 milliseconds per frame, facilitating its application in real-time scenarios. Phase-wise analysis indicated that the Phacoemulsification phase (nuclear disassembly) was the most challenging to segment (Dice coefficient of 86.02%). Conclusion The experimental results showed that the proposed system is highly effective in intraoperative capsulotomy recognition during cataract surgery and demonstrates both high accuracy and real-time capabilities. 
This system holds significant potential for applications in surgical performance analysis, education, and intraoperative guidance systems.
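The Dice coefficient used throughout the evaluation above is a standard overlap metric between a predicted and a reference mask. A minimal NumPy sketch (illustrative only, not the BigCat-Capsulotomy evaluation code):

```python
import numpy as np

def dice_coefficient(pred, target, eps=1e-7):
    """Dice-Sorensen coefficient between two binary masks:
    2|A intersect B| / (|A| + |B|)."""
    pred = np.asarray(pred, dtype=bool)
    target = np.asarray(target, dtype=bool)
    intersection = np.logical_and(pred, target).sum()
    return 2.0 * intersection / (pred.sum() + target.sum() + eps)

pred = np.array([[1, 1, 0], [0, 1, 0]])
target = np.array([[1, 1, 0], [0, 0, 1]])
print(round(float(dice_coefficient(pred, target)), 3))  # 0.667
```

Per-frame Dice values of this kind, averaged over the test set, yield the overall percentages (92.21%, 92.12%) reported above.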
Affiliation(s)
- Zixue Zeng
- School of Public Health, University of Michigan, Ann Arbor, MI, USA
- Binh Duong Giap
- Kellogg Eye Center, Department of Ophthalmology and Visual Sciences, University of Michigan, Ann Arbor, MI, USA
- Ethan Kahana
- Department of Computer Science, University of Michigan, Ann Arbor, MI, USA
- Ossama Mahmoud
- School of Medicine, Wayne State University, Detroit, MI, USA
- Shahzad I Mian
- Kellogg Eye Center, Department of Ophthalmology and Visual Sciences, University of Michigan, Ann Arbor, MI, USA
- Bradford Tannen
- Kellogg Eye Center, Department of Ophthalmology and Visual Sciences, University of Michigan, Ann Arbor, MI, USA
- Nambi Nallasamy
- Kellogg Eye Center, Department of Ophthalmology and Visual Sciences, University of Michigan, Ann Arbor, MI, USA
- Department of Computational Medicine and Bioinformatics, University of Michigan, Ann Arbor, MI, USA

15
Yang Z, Lafata K, Vaios E, Hu Z, Mullikin T, Yin FF, Wang C. Quantifying U-Net uncertainty in multi-parametric MRI-based glioma segmentation by spherical image projection. Med Phys 2024; 51:1931-1943. [PMID: 37696029 PMCID: PMC10925552 DOI: 10.1002/mp.16695] [Citation(s) in RCA: 1] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/08/2022] [Revised: 07/18/2023] [Accepted: 08/08/2023] [Indexed: 09/13/2023] Open
Abstract
BACKGROUND Uncertainty quantification in deep learning is an important research topic. For medical image segmentation, the uncertainty measurements are usually reported as the likelihood that each pixel belongs to the predicted segmentation region. In potential clinical applications, the uncertainty result reflects the algorithm's robustness and supports the confidence and trust of the segmentation result when the ground-truth result is absent. For commonly studied deep learning models, novel methods for quantifying segmentation uncertainty are in demand. PURPOSE To develop a U-Net segmentation uncertainty quantification method based on spherical image projection of multi-parametric MRI (MP-MRI) in glioma segmentation. METHODS The projection of planar MRI data onto a spherical surface is equivalent to a nonlinear image transformation that retains global anatomical information. By incorporating this image transformation process in our proposed spherical projection-based U-Net (SPU-Net) segmentation model design, multiple independent segmentation predictions can be obtained from a single MRI. The final segmentation is the average of all available results, and the variation can be visualized as a pixel-wise uncertainty map. An uncertainty score was introduced to evaluate and compare the performance of uncertainty measurements. The proposed SPU-Net model was implemented on the basis of 369 glioma patients with MP-MRI scans (T1, T1-Ce, T2, and FLAIR). Three SPU-Net models were trained to segment enhancing tumor (ET), tumor core (TC), and whole tumor (WT), respectively. The SPU-Net model was compared with (1) the classic U-Net model with test-time augmentation (TTA) and (2) linear scaling-based U-Net (LSU-Net) segmentation models in terms of both segmentation accuracy (Dice coefficient, sensitivity, specificity, and accuracy) and segmentation uncertainty (uncertainty map and uncertainty score). 
RESULTS The developed SPU-Net model successfully achieved low uncertainty for correct segmentation predictions (e.g., tumor interior or healthy tissue interior) and high uncertainty for incorrect results (e.g., tumor boundaries). This model could allow the identification of missed tumor targets or segmentation errors in U-Net. Quantitatively, the SPU-Net model achieved the highest uncertainty scores for three segmentation targets (ET/TC/WT): 0.826/0.848/0.936, compared to 0.784/0.643/0.872 using the U-Net with TTA and 0.743/0.702/0.876 with the LSU-Net (scaling factor = 2). The SPU-Net also achieved statistically significantly higher Dice coefficients, underscoring the improved segmentation accuracy. CONCLUSION The SPU-Net model offers a powerful tool to quantify glioma segmentation uncertainty while improving segmentation accuracy. The proposed method can be generalized to other medical image-related deep-learning applications for uncertainty evaluation.
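The pixel-wise uncertainty map described above, i.e. the variation across multiple independent predictions of the same scan, can be sketched as follows. This is an illustrative sketch, not the SPU-Net implementation: the function name and the use of the per-pixel standard deviation as the variation measure are assumptions on top of the abstract's description of averaging multiple predictions.

```python
import numpy as np

def uncertainty_map(prob_maps):
    """Given N per-pixel probability maps from independent predictions
    (e.g. one per spherical projection), return the mean segmentation
    and a pixel-wise standard-deviation uncertainty map."""
    stack = np.stack(prob_maps, axis=0)  # shape (N, H, W)
    return stack.mean(axis=0), stack.std(axis=0)

preds = [np.array([[0.90, 0.2], [0.8, 0.1]]),
         np.array([[0.95, 0.4], [0.7, 0.1]]),
         np.array([[0.85, 0.6], [0.9, 0.1]])]
mean_seg, unc = uncertainty_map(preds)
# Uncertainty peaks where the predictions disagree (the top-right pixel).
print(int(unc.argmax()))  # 1 in the flattened map
```

Thresholding `mean_seg` gives the final segmentation, while high values in `unc` flag boundaries or possible missed targets, matching the behavior reported for SPU-Net.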
Affiliation(s)
- Zhenyu Yang
- Department of Radiation Oncology, Duke University, Durham, NC, USA
- Medical Physics Graduate Program, Duke Kunshan University, Kunshan, Jiangsu, China
- Medical Physics Graduate Program, Duke University, Durham, NC, USA
- Kyle Lafata
- Department of Radiation Oncology, Duke University, Durham, NC, USA
- Department of Radiology, Duke University, Durham, NC, USA
- Department of Electrical and Computer Engineering, Duke University, Durham, NC, USA
- Eugene Vaios
- Department of Radiation Oncology, Duke University, Durham, NC, USA
- Zongsheng Hu
- Department of Radiation Physics, The University of Texas MD Anderson Cancer Center, Houston, TX, USA
- The University of Texas MD Anderson Graduate School of Biomedical Science, Houston, TX, USA
- Trey Mullikin
- Department of Radiation Oncology, Duke University, Durham, NC, USA
- Fang-Fang Yin
- Department of Radiation Oncology, Duke University, Durham, NC, USA
- Medical Physics Graduate Program, Duke Kunshan University, Kunshan, Jiangsu, China
- Chunhao Wang
- Department of Radiation Oncology, Duke University, Durham, NC, USA

16
Bera K, Rojas-Gómez RA, Mukherjee P, Snyder CE, Aksamitiene E, Alex A, Spillman DR, Marjanovic M, Shabana A, Johnson R, Hood SR, Boppart SA. Probing delivery of a lipid nanoparticle encapsulated self-amplifying mRNA vaccine using coherent Raman microscopy and multiphoton imaging. Sci Rep 2024; 14:4348. [PMID: 38388635 PMCID: PMC10884293 DOI: 10.1038/s41598-024-54697-3] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/11/2023] [Accepted: 02/15/2024] [Indexed: 02/24/2024] Open
Abstract
The COVID-19 pandemic triggered the resurgence of synthetic RNA vaccine platforms allowing rapid, scalable, low-cost manufacturing, and safe administration of therapeutic vaccines. Self-amplifying mRNA (SAM), which self-replicates upon delivery into the cellular cytoplasm, leads to a strong and sustained immune response. Such mRNAs are encapsulated within lipid nanoparticles (LNPs) that act as a vehicle for delivery to the cell cytoplasm. A better understanding of LNP-mediated SAM uptake and release mechanisms in different types of cells is critical for designing effective vaccines. Here, we investigated the cellular uptake of a SAM-LNP formulation and subsequent intracellular expression of SAM in baby hamster kidney (BHK-21) cells using hyperspectral coherent anti-Stokes Raman scattering (HS-CARS) microscopy and multiphoton-excited fluorescence lifetime imaging microscopy (FLIM). Cell classification pipelines based on HS-CARS and FLIM features were developed to obtain insights on spectral and metabolic changes associated with SAM-LNPs uptake. We observed elevated lipid intensities with the HS-CARS modality in cells treated with LNPs versus PBS-treated cells, and simultaneous fluorescence images revealed SAM expression inside BHK-21 cell nuclei and cytoplasm within 5 h of treatment. In a separate experiment, we observed a strong correlation between the SAM expression and mean fluorescence lifetime of the bound NAD(P)H population. This work demonstrates the ability and significance of multimodal optical imaging techniques to assess the cellular uptake of SAM-LNPs and the subsequent changes occurring in the cellular microenvironment following the vaccine expression.
Affiliation(s)
- Kajari Bera
- GSK Center for Optical Molecular Imaging, University of Illinois Urbana-Champaign, Urbana, IL, USA
- Beckman Institute for Advanced Science and Technology, University of Illinois Urbana-Champaign, Urbana, IL, USA
- Renán A Rojas-Gómez
- GSK Center for Optical Molecular Imaging, University of Illinois Urbana-Champaign, Urbana, IL, USA
- Department of Electrical and Computer Engineering, University of Illinois Urbana-Champaign, Urbana, IL, USA
- Prabuddha Mukherjee
- GSK Center for Optical Molecular Imaging, University of Illinois Urbana-Champaign, Urbana, IL, USA
- Beckman Institute for Advanced Science and Technology, University of Illinois Urbana-Champaign, Urbana, IL, USA
- Corey E Snyder
- GSK Center for Optical Molecular Imaging, University of Illinois Urbana-Champaign, Urbana, IL, USA
- Department of Electrical and Computer Engineering, University of Illinois Urbana-Champaign, Urbana, IL, USA
- Edita Aksamitiene
- GSK Center for Optical Molecular Imaging, University of Illinois Urbana-Champaign, Urbana, IL, USA
- Beckman Institute for Advanced Science and Technology, University of Illinois Urbana-Champaign, Urbana, IL, USA
- Aneesh Alex
- GSK Center for Optical Molecular Imaging, University of Illinois Urbana-Champaign, Urbana, IL, USA
- In Vitro/In Vivo Translation, Research, GlaxoSmithKline, Collegeville, PA, USA
- Darold R Spillman
- GSK Center for Optical Molecular Imaging, University of Illinois Urbana-Champaign, Urbana, IL, USA
- Beckman Institute for Advanced Science and Technology, University of Illinois Urbana-Champaign, Urbana, IL, USA
- Marina Marjanovic
- GSK Center for Optical Molecular Imaging, University of Illinois Urbana-Champaign, Urbana, IL, USA
- Beckman Institute for Advanced Science and Technology, University of Illinois Urbana-Champaign, Urbana, IL, USA
- Department of Bioengineering, University of Illinois Urbana-Champaign, Urbana, IL, USA
- Carle Illinois College of Medicine, University of Illinois Urbana-Champaign, Urbana, IL, USA
- Ahmed Shabana
- GSK Vaccines, Rockville Center for Vaccines Research, Rockville, MD, USA
- Russell Johnson
- GSK Vaccines, Rockville Center for Vaccines Research, Rockville, MD, USA
- Steve R Hood
- GSK Center for Optical Molecular Imaging, University of Illinois Urbana-Champaign, Urbana, IL, USA
- In Vitro/In Vivo Translation, Research, GlaxoSmithKline, Stevenage, UK
- Stephen A Boppart
- GSK Center for Optical Molecular Imaging, University of Illinois Urbana-Champaign, Urbana, IL, USA
- Beckman Institute for Advanced Science and Technology, University of Illinois Urbana-Champaign, Urbana, IL, USA
- Department of Electrical and Computer Engineering, University of Illinois Urbana-Champaign, Urbana, IL, USA
- Department of Bioengineering, University of Illinois Urbana-Champaign, Urbana, IL, USA
- Carle Illinois College of Medicine, University of Illinois Urbana-Champaign, Urbana, IL, USA
- Cancer Center at Illinois, University of Illinois Urbana-Champaign, Urbana, IL, USA

17
de Boer M, Kos TM, Fick T, van Doormaal JAM, Colombo E, Kuijf HJ, Robe PAJT, Regli LP, Bartels LW, van Doormaal TPC. NnU-Net versus mesh growing algorithm as a tool for the robust and timely segmentation of neurosurgical 3D images in contrast-enhanced T1 MRI scans. Acta Neurochir (Wien) 2024; 166:92. [PMID: 38376564 PMCID: PMC10879314 DOI: 10.1007/s00701-024-05973-8] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/15/2023] [Accepted: 01/22/2024] [Indexed: 02/21/2024]
Abstract
PURPOSE This study evaluates the nnU-Net for segmenting brain, skin, tumors, and ventricles in contrast-enhanced T1 (T1CE) images, benchmarking it against an established mesh growing algorithm (MGA). METHODS We used 67 retrospectively collected annotated single-center T1CE brain scans to train models for brain, skin, tumor, and ventricle segmentation. An additional 32 scans from two centers were used to test performance against the MGA. Performance was measured using the Dice-Sørensen coefficient (DSC), intersection over union (IoU), 95th percentile Hausdorff distance (HD95), and average symmetric surface distance (ASSD) metrics, and segmentation time was also compared. RESULTS The nnU-Net models significantly outperformed the MGA (p < 0.0125), with median DSCs of 0.971 [95CI: 0.945-0.979] for brain, 0.997 [95CI: 0.984-0.999] for skin, 0.926 [95CI: 0.508-0.968] for tumor, and 0.910 [95CI: 0.812-0.968] for ventricles, compared to the MGA's median DSCs of 0.936 [95CI: 0.890-0.958], 0.991 [95CI: 0.964-0.996], 0.723 [95CI: 0.000-0.926], and 0.856 [95CI: 0.216-0.916], respectively. nnU-Net performance did not differ significantly between centers except for the skin segmentations. Additionally, the nnU-Net models were faster (mean: 1139 s [95CI: 685.0-1616]) than the MGA (mean: 2851 s [95CI: 1482-6246]). CONCLUSIONS The nnU-Net is a fast, reliable tool for creating automatic deep learning-based segmentation pipelines, reducing the need for extensive manual tuning and iteration. The models achieve this performance despite a modestly sized training set. The ability to create high-quality segmentations in a short timespan can prove invaluable in neurosurgical settings.
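Of the metrics listed in this study, IoU and HD95 can be sketched in a few lines of NumPy. This is an illustrative sketch only (not the study's evaluation code): the HD95 here is a mask-based approximation computed over all foreground pixels, whereas published HD95 implementations typically operate on extracted surface voxels.

```python
import numpy as np

def iou(pred, target):
    """Intersection over union of two binary masks."""
    pred, target = np.asarray(pred, bool), np.asarray(target, bool)
    union = np.logical_or(pred, target).sum()
    return np.logical_and(pred, target).sum() / union if union else 1.0

def hd95(pred, target):
    """95th-percentile symmetric Hausdorff distance, approximated over
    all foreground pixels via brute-force pairwise distances."""
    p, t = np.argwhere(pred), np.argwhere(target)
    d = np.sqrt(((p[:, None, :] - t[None, :, :]) ** 2).sum(-1))
    # Directed nearest-neighbor distances in both directions, pooled.
    return np.percentile(np.concatenate([d.min(1), d.min(0)]), 95)

a = np.zeros((8, 8), bool); a[2:5, 2:5] = True
b = np.zeros((8, 8), bool); b[3:6, 3:6] = True
print(round(iou(a, b), 3))   # 0.286
print(round(hd95(a, b), 3))  # 1.414
```

The brute-force pairwise distance matrix is fine for small masks; distance-transform-based implementations scale better to full 3D volumes.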
Affiliation(s)
- Mathijs de Boer
- Image Sciences Institute, University Medical Center Utrecht, Heidelberglaan 100, 3584 CX, Utrecht, The Netherlands
- Tessa M Kos
- Image Sciences Institute, University Medical Center Utrecht, Heidelberglaan 100, 3584 CX, Utrecht, The Netherlands
- Tim Fick
- Department of Neuro-Oncology, Princess Máxima Center for Pediatric Oncology, Utrecht, The Netherlands
- Elisa Colombo
- Department of Neurosurgery, University Hospital of Zürich, Zurich, Switzerland
- Hugo J Kuijf
- Image Sciences Institute, University Medical Center Utrecht, Heidelberglaan 100, 3584 CX, Utrecht, The Netherlands
- Pierre A J T Robe
- Department of Neurosurgery, University Medical Center Utrecht, Utrecht, The Netherlands
- Luca P Regli
- Department of Neurosurgery, University Hospital of Zürich, Zurich, Switzerland
- Lambertus W Bartels
- Image Sciences Institute, University Medical Center Utrecht, Heidelberglaan 100, 3584 CX, Utrecht, The Netherlands
- Tristan P C van Doormaal
- Department of Neurosurgery, University Medical Center Utrecht, Utrecht, The Netherlands
- Department of Neurosurgery, University Hospital of Zürich, Zurich, Switzerland

18
Wei R, Ganglberger W, Sun H, Hadar P, Gollub R, Pieper S, Billot B, Au R, Eugenio Iglesias J, Cash SS, Kim S, Shin C, Westover MB, Joseph Thomas R. Linking brain structure, cognition, and sleep: insights from clinical data. Sleep 2024; 47:zsad294. [PMID: 37950486 PMCID: PMC10851868 DOI: 10.1093/sleep/zsad294] [Citation(s) in RCA: 3] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/24/2023] [Revised: 10/13/2023] [Indexed: 11/12/2023] Open
Abstract
STUDY OBJECTIVES To use relatively noisy, routinely collected clinical data (brain magnetic resonance imaging (MRI) data, clinical polysomnography (PSG) recordings, and neuropsychological testing) to investigate hypothesis-driven and data-driven relationships between brain physiology, structure, and cognition. METHODS We analyzed data from patients with clinical PSG, brain MRI, and neuropsychological evaluations. SynthSeg, a neural network-based tool, provided high-quality segmentations despite the noise. A priori hypotheses explored associations between brain function (measured by PSG) and brain structure (measured by MRI). Associations with cognitive scores and dementia status were studied. An exploratory data-driven approach investigated age-structure-physiology-cognition links. RESULTS Six hundred and twenty-three patients with sleep PSG and brain MRI data were included in this study, 160 of whom had cognitive evaluations. Three hundred and forty-two participants (55%) were female, and the age interquartile range was 52 to 69 years. Thirty-six individuals were diagnosed with dementia, 71 with mild cognitive impairment, and 326 with major depression. One hundred and fifteen individuals were evaluated for insomnia, and 138 participants had an apnea-hypopnea index equal to or greater than 15. Total PSG delta power correlated positively with frontal lobe/thalamic volumes, and sleep spindle density with thalamic volume. Rapid eye movement (REM) duration and amygdala volume were positively associated with cognition. Patients with dementia showed significant differences in five brain structure volumes. REM duration, spindle, and slow-oscillation features had strong associations with cognition and brain structure volumes. PSG and MRI features in combination predicted chronological age (R2 = 0.67) and cognition (R2 = 0.40). CONCLUSIONS Routine clinical data holds extended value in understanding and even clinically using brain-sleep-cognition relationships.
Affiliation(s)
- Ruoqi Wei
- Division of Pulmonary Critical Care & Sleep Medicine, Department of Medicine, Beth Israel Deaconess Medical Center, Boston, MA, USA
- McCance Center for Brain Health, Massachusetts General Hospital, Boston, MA, USA
- Division of Sleep Medicine, Harvard Medical School, Boston, Massachusetts, USA
- Department of Health Outcomes and Biomedical Informatics, College of Medicine, University of Florida, Gainesville, FL, USA
| | - Wolfgang Ganglberger
- McCance Center for Brain Health, Massachusetts General Hospital, Boston, MA, USA
- Division of Sleep Medicine, Harvard Medical School, Boston, Massachusetts, USA
- Department of Neurology, Beth Israel Deaconess Medical Center, Boston, MA, USA
- Sleep and Health Zurich, University of Zurich, Zurich, Switzerland
| | - Haoqi Sun
- McCance Center for Brain Health, Massachusetts General Hospital, Boston, MA, USA
- Division of Sleep Medicine, Harvard Medical School, Boston, Massachusetts, USA
- Department of Neurology, Beth Israel Deaconess Medical Center, Boston, MA, USA
| | - Peter N Hadar
- Department of Neurology, Massachusetts General Hospital, Boston, MA, USA
| | - Randy L Gollub
- Department of Psychiatry, Massachusetts General Hospital, Boston, MA, USA
- Department of Radiology, Massachusetts General Hospital, Boston, MA, USA
- Athinoula A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital, Boston, MA, USA
| | | | - Benjamin Billot
- Computer Science and Artificial Intelligence Lab, MIT, Boston, MA, USA
| | - Rhoda Au
- Anatomy & Neurobiology, Neurology, Medicine and Epidemiology, Boston University Chobanian & Avedisian School of Medicine and School of Public Health, Boston University, Boston, MA, USA
| | - Juan Eugenio Iglesias
- Department of Radiology, Massachusetts General Hospital, Boston, MA, USA
- Athinoula A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital, Boston, MA, USA
- Isomics, Inc. Cambridge, MA, USA
- Center for Medical Image Computing, University College London, London, UK
| | - Sydney S Cash
- Department of Neurology, Massachusetts General Hospital, Boston, MA, USA
| | - Soriul Kim
- Institute of Human Genomic Study, College of Medicine, Korea University, Seoul, Republic of Korea
| | - Chol Shin
- Institute of Human Genomic Study, College of Medicine, Korea University, Seoul, Republic of Korea
- Biomedical Research Center, Korea University Ansan Hospital, Ansan, Republic of Korea
| | - M Brandon Westover
- McCance Center for Brain Health, Massachusetts General Hospital, Boston, MA, USA
- Division of Sleep Medicine, Harvard Medical School, Boston, Massachusetts, USA
- Department of Neurology, Beth Israel Deaconess Medical Center, Boston, MA, USA
| | - Robert Joseph Thomas
- Division of Pulmonary Critical Care & Sleep Medicine, Department of Medicine, Beth Israel Deaconess Medical Center, Boston, MA, USA
- Division of Sleep Medicine, Harvard Medical School, Boston, Massachusetts, USA
- Athinoula A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital, Boston, MA, USA
| |
|
19
|
Scott I, Connell D, Moulton D, Waters S, Namburete A, Arnab A, Malliaras P. An automated method for tendon image segmentation on ultrasound using grey-level co-occurrence matrix features and hidden Gaussian Markov random fields. Comput Biol Med 2024; 169:107872. [PMID: 38160500 DOI: 10.1016/j.compbiomed.2023.107872] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/10/2023] [Revised: 12/07/2023] [Accepted: 12/17/2023] [Indexed: 01/03/2024]
Abstract
BACKGROUND Despite knowledge of qualitative changes that occur on ultrasound in tendinopathy, there is currently no objective and reliable means to quantify the severity or prognosis of tendinopathy on ultrasound. OBJECTIVE The primary objective of this study is to produce a quantitative and automated means of inferring potential structural changes in tendinopathy by developing and implementing an algorithm that performs a texture-based segmentation of tendon ultrasound (US) images. METHOD A model-based segmentation approach is used that combines Gaussian mixture models, Markov random field theory, and grey-level co-occurrence matrix (GLCM) features. The algorithm is trained and tested on 49 longitudinal B-mode ultrasound images of the Achilles tendon, which are labelled as tendinopathic (24) or healthy (25). Hyperparameters are tuned, using a training set of 25 images, to optimise a decision-tree-based classification of the images from texture class proportions. We segment and classify the remaining test images using the decision tree. RESULTS Our approach successfully detects a difference in the texture profiles of tendinopathic and healthy tendons, with 22/24 of the test images accurately classified based on a simple texture proportion cut-off threshold. Results for the tendinopathic images are also collated to gain insight into the topology of structural changes that occur with tendinopathy. It is evident that distinct textures, which are predominantly present in tendinopathic tendons, appear most commonly near the transverse boundary of the tendon, though there was a large variability among diseased tendons. CONCLUSION The GLCM-based segmentation of tendons under ultrasound resulted in distinct segmentations between healthy and tendinopathic tendons and provides a potential tool to objectively quantify damage in tendinopathy.
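The GLCM features at the core of this pipeline are simple co-occurrence statistics over quantised grey levels. A from-scratch sketch of a GLCM and one Haralick property (this is not the authors' Gaussian-mixture/MRF model; the offset and number of grey bins are illustrative choices):

```python
import numpy as np

def glcm(img, dx=1, dy=0, levels=8):
    """Normalised grey-level co-occurrence matrix for a pixel offset (dx, dy >= 0)."""
    q = np.minimum((img.astype(float) / (img.max() + 1e-9) * levels).astype(int),
                   levels - 1)                      # quantise to `levels` grey bins
    a = q[:q.shape[0] - dy, :q.shape[1] - dx]       # reference pixels
    b = q[dy:, dx:]                                 # neighbours at the offset
    P = np.zeros((levels, levels))
    np.add.at(P, (a.ravel(), b.ravel()), 1)         # count co-occurring grey pairs
    return P / P.sum()

def contrast(P):
    """Haralick contrast: large when neighbouring grey levels differ."""
    i, j = np.indices(P.shape)
    return float((P * (i - j) ** 2).sum())

flat = np.full((16, 16), 100)                        # homogeneous patch
striped = (np.arange(256).reshape(16, 16) % 7) * 40  # textured patch
```

A homogeneous patch yields zero contrast, a textured one a positive value; in practice libraries such as scikit-image provide the same statistics ready-made.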
Affiliation(s)
- Isabelle Scott
- Mathematical Institute, University of Oxford, Oxford, United Kingdom; Orygen, The National Centre of Excellence in Youth Mental Health, University of Melbourne, Parkville, Melbourne, Australia.
| | | | - Derek Moulton
- Mathematical Institute, University of Oxford, Oxford, United Kingdom
| | - Sarah Waters
- Mathematical Institute, University of Oxford, Oxford, United Kingdom
| | - Ana Namburete
- Oxford Machine Learning in Neuroimaging laboratory, OMNI, Department of Computer Science, University of Oxford, Oxford, United Kingdom
| | | | - Peter Malliaras
- Imaging at Olympic Park, Melbourne, Australia; Department of Physiotherapy, Monash University, Melbourne, Australia
| |
|
20
|
Alabdulhafith M, Ba Mahel AS, Samee NA, Mahmoud NF, Talaat R, Muthanna MSA, Nassef TM. Automated wound care by employing a reliable U-Net architecture combined with ResNet feature encoders for monitoring chronic wounds. Front Med (Lausanne) 2024; 11:1310137. [PMID: 38357646 PMCID: PMC10865496 DOI: 10.3389/fmed.2024.1310137] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/16/2023] [Accepted: 01/02/2024] [Indexed: 02/16/2024] Open
Abstract
Quality of life is greatly affected by chronic wounds, which require more intensive care than acute wounds, including scheduled follow-up appointments to track healing. Good wound treatment promotes healing and reduces complications. Wound care requires precise and reliable wound measurement to optimize patient treatment and outcomes according to evidence-based best practices. Images are used to objectively assess wound state by quantifying key healing parameters. Nevertheless, the robust segmentation of wound images is complex because of the high diversity of wound types and imaging conditions. This study proposes and evaluates a novel hybrid model developed for wound segmentation in medical images. The model combines advanced deep learning techniques with traditional image processing methods to improve the accuracy and reliability of wound segmentation. The main objective is to overcome the limitations of existing segmentation methods (UNet) by leveraging the combined advantages of both paradigms. In our investigation, we introduced a hybrid model architecture, wherein a ResNet34 is utilized as the encoder and a UNet is employed as the decoder. The combination of ResNet34's deep representation learning and UNet's efficient feature extraction yields notable benefits. The architectural design successfully integrated high-level and low-level features, enabling the generation of segmentation maps with high precision and accuracy. Applying our model to the actual data, we obtained the following values for the Intersection over Union (IoU), Dice score, and accuracy: 0.973, 0.986, and 0.9736, respectively. According to the achieved results, the proposed method is more precise and accurate than the current state of the art.
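The reported IoU (0.973) and Dice (0.986) are consistent with the identity Dice = 2·IoU/(1 + IoU). A minimal sketch of both metrics for binary masks (toy masks, not the wound data):

```python
import numpy as np

def iou_dice(pred, gt):
    """Intersection-over-Union and Dice score for two binary masks."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    inter = np.logical_and(pred, gt).sum()
    union = np.logical_or(pred, gt).sum()
    return float(inter / union), float(2 * inter / (pred.sum() + gt.sum()))

# Toy 10x10 masks: prediction covers rows 0-5, ground truth rows 2-7
pred = np.zeros((10, 10), bool); pred[:6] = True
gt = np.zeros((10, 10), bool); gt[2:8] = True
iou, dice = iou_dice(pred, gt)   # iou = 0.5, dice = 2/3
```

The identity is a useful sanity check when reading papers: a reported (IoU, Dice) pair that violates it usually means the metrics were computed on different splits.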
Affiliation(s)
- Maali Alabdulhafith
- Department of Information Technology, College of Computer and Information Sciences, Princess Nourah bint Abdulrahman University, Riyadh, Saudi Arabia
| | - Abduljabbar S. Ba Mahel
- School of Life Science, University of Electronic Science and Technology of China, Chengdu, China
| | - Nagwan Abdel Samee
- Department of Information Technology, College of Computer and Information Sciences, Princess Nourah bint Abdulrahman University, Riyadh, Saudi Arabia
| | - Noha F. Mahmoud
- Rehabilitation Sciences Department, Health and Rehabilitation Sciences College, Princess Nourah bint Abdulrahman University, Riyadh, Saudi Arabia
| | - Rawan Talaat
- Biotechnology and Genetics Department, Agriculture Engineering, Ain Shams University, Cairo, Egypt
| | | | - Tamer M. Nassef
- Computer and Software Engineering Department, Engineering College, Misr University for Science and Technology, 6th of October, Egypt
| |
|
21
|
Gazula H, Tregidgo HFJ, Billot B, Balbastre Y, William-Ramirez J, Herisse R, Deden-Binder LJ, Casamitjana A, Melief EJ, Latimer CS, Kilgore MD, Montine M, Robinson E, Blackburn E, Marshall MS, Connors TR, Oakley DH, Frosch MP, Young SI, Van Leemput K, Dalca AV, Fischl B, Mac Donald CL, Keene CD, Hyman BT, Iglesias JE. Machine learning of dissection photographs and surface scanning for quantitative 3D neuropathology. BIORXIV : THE PREPRINT SERVER FOR BIOLOGY 2024:2023.06.08.544050. [PMID: 37333251 PMCID: PMC10274889 DOI: 10.1101/2023.06.08.544050] [Citation(s) in RCA: 1] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/20/2023]
Abstract
We present open-source tools for 3D analysis of photographs of dissected slices of human brains, which are routinely acquired in brain banks but seldom used for quantitative analysis. Our tools can: (i) 3D reconstruct a volume from the photographs and, optionally, a surface scan; and (ii) produce a high-resolution 3D segmentation into 11 brain regions per hemisphere (22 in total), independently of the slice thickness. Our tools can be used as a substitute for ex vivo magnetic resonance imaging (MRI), which requires access to an MRI scanner, ex vivo scanning expertise, and considerable financial resources. We tested our tools on synthetic and real data from two NIH Alzheimer's Disease Research Centers. The results show that our methodology yields accurate 3D reconstructions, segmentations, and volumetric measurements that are highly correlated with those from MRI. Our method also detects expected differences between post mortem confirmed Alzheimer's disease cases and controls. The tools are available in our widely used neuroimaging suite "FreeSurfer" (https://surfer.nmr.mgh.harvard.edu/fswiki/PhotoTools).
|
22
|
Gómez Ó, Mesejo P, Ibáñez Ó, Valsecchi A, Bermejo E, Cerezo A, Pérez J, Alemán I, Kahana T, Damas S, Cordón Ó. Evaluating artificial intelligence for comparative radiography. Int J Legal Med 2024; 138:307-327. [PMID: 37801115 DOI: 10.1007/s00414-023-03080-4] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/22/2022] [Accepted: 08/23/2023] [Indexed: 10/07/2023]
Abstract
INTRODUCTION Comparative radiography is a forensic identification and shortlisting technique based on the comparison of skeletal structures in ante-mortem and post-mortem images. The images (e.g., 2D radiographs or 3D computed tomographies) are manually superimposed and visually compared by a forensic practitioner. It requires a significant amount of time per comparison, limiting its utility in large comparison scenarios. METHODS We propose and validate a novel framework for automating the shortlisting of candidates using artificial intelligence. It is composed of (1) a segmentation method to delimit skeletal structures' silhouettes in radiographs, (2) a superposition method to generate the best simulated "radiographs" from 3D images according to the segmented radiographs, and (3) a decision-making method for shortlisting all candidates ranked according to a similarity metric. MATERIAL The dataset is composed of 180 computed tomographies and 180 radiographs where the frontal sinuses are visible. Frontal sinuses are the skeletal structure analyzed due to their high individualization capability. RESULTS Firstly, we validate two deep learning-based techniques for segmenting the frontal sinuses in radiographs, obtaining high-quality results. Secondly, we study the framework's shortlisting capability using both automatic segmentations and superimpositions. The obtained superimpositions, based only on the superimposition metric, allowed us to filter out 40% of the possible candidates in a completely automatic manner. Thirdly, we perform a reliability study by comparing 180 radiographs against 180 computed tomographies using manual segmentations. The results allowed us to filter out 73% of the possible candidates. Furthermore, the results are robust to inter- and intra-expert-related errors.
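The decision-making step ranks all candidates by a similarity metric and discards the lowest-ranked fraction, which is how "filter out 40% of the possible candidates" is achieved. A schematic sketch (the actual superimposition metric is not reproduced; the candidate names and scores are invented):

```python
def shortlist(scores, keep_fraction=0.6):
    """Rank candidates by similarity (higher = more plausible match) and keep
    only the top fraction, discarding the rest from further comparison."""
    ranked = sorted(scores, key=scores.get, reverse=True)
    keep = max(1, round(len(ranked) * keep_fraction))
    return ranked[:keep]

# Invented similarity scores for ten candidate identities
scores = {"case0": 0.9, "case1": 0.1, "case2": 0.5, "case3": 0.8, "case4": 0.3,
          "case5": 0.7, "case6": 0.2, "case7": 0.6, "case8": 0.4, "case9": 0.05}
kept = shortlist(scores, keep_fraction=0.6)  # filters out 40% of candidates
```

In a forensic setting the keep fraction trades workload against the risk of discarding the true identity, so it would be calibrated on data with known ground truth.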
Affiliation(s)
- Óscar Gómez
- Andalusian Research Institute DaSCI, University of Granada, Granada, Spain.
| | - Pablo Mesejo
- Andalusian Research Institute DaSCI, University of Granada, Granada, Spain
- Department of Computer Science and Artificial Intelligence, University of Granada, Granada, Spain
- Panacea Cooperative Research S. Coop., Ponferrada, Spain
| | - Óscar Ibáñez
- Andalusian Research Institute DaSCI, University of Granada, Granada, Spain
- Panacea Cooperative Research S. Coop., Ponferrada, Spain
- Faculty of Computer Science, CITIC, University of A Coruña, A Coruña, Spain
| | - Andrea Valsecchi
- Andalusian Research Institute DaSCI, University of Granada, Granada, Spain
- Panacea Cooperative Research S. Coop., Ponferrada, Spain
| | - Enrique Bermejo
- Andalusian Research Institute DaSCI, University of Granada, Granada, Spain
- Department of Computer Science and Artificial Intelligence, University of Granada, Granada, Spain
- Panacea Cooperative Research S. Coop., Ponferrada, Spain
| | - Andrea Cerezo
- Department of Legal Medicine, Toxicology and Physical Anthropology, University of Granada, Granada, Spain
| | - José Pérez
- Department of Legal Medicine, Toxicology and Physical Anthropology, University of Granada, Granada, Spain
| | - Inmaculada Alemán
- Department of Legal Medicine, Toxicology and Physical Anthropology, University of Granada, Granada, Spain
| | - Tzipi Kahana
- Faculty of Criminology, The Hebrew University of Jerusalem, Jerusalem, Israel
| | - Sergio Damas
- Andalusian Research Institute DaSCI, University of Granada, Granada, Spain
- Department of Software Engineering, University of Granada, Granada, Spain
| | - Óscar Cordón
- Andalusian Research Institute DaSCI, University of Granada, Granada, Spain
- Department of Computer Science and Artificial Intelligence, University of Granada, Granada, Spain
| |
|
23
|
Guzman M, Geuther B, Sabnis G, Kumar V. Highly Accurate and Precise Determination of Mouse Mass Using Computer Vision. BIORXIV : THE PREPRINT SERVER FOR BIOLOGY 2023:2023.12.30.573718. [PMID: 38318203 PMCID: PMC10843158 DOI: 10.1101/2023.12.30.573718] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 02/07/2024]
Abstract
Changes in body mass are a key indicator of health and disease in humans and model organisms. Animal body mass is routinely monitored in husbandry and preclinical studies. In rodent studies, the current best method requires manually weighing the animal on a balance which has at least two consequences. First, direct handling of the animal induces stress and can have confounding effects on studies. Second, the acquired mass is static and not amenable to continuous assessment, and rapid mass changes can be missed. A noninvasive and continuous method of monitoring animal mass would have utility in multiple areas of biomedical research. Here, we test the feasibility of determining mouse body mass using video data. We combine computer vision methods with statistical modeling to demonstrate the feasibility of our approach. Our methods determine mouse mass with 4.8% error across highly genetically diverse mouse strains, with varied coat colors and mass. This error is low enough to replace manual weighing with image-based assessment in most mouse studies. We conclude that visual determination of rodent mass using video enables noninvasive and continuous monitoring and can improve animal welfare and preclinical studies.
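The 4.8% figure is a relative error between image-predicted and balance-measured mass. A hedged sketch of the general idea, fitting mass from a single visual feature by least squares and scoring with mean absolute percentage error (the area-mass relationship and all numbers are synthetic; the paper's statistical model uses richer video-derived features):

```python
import numpy as np

rng = np.random.default_rng(1)
area = rng.uniform(50, 120, size=200)             # per-video mean mouse area (a.u.)
mass = 0.3 * area + 4 + rng.normal(0, 0.5, 200)   # synthetic "true" mass (g)

# Fit mass ~ area by least squares, then score with mean absolute percentage error
A = np.column_stack([area, np.ones_like(area)])
coef, *_ = np.linalg.lstsq(A, mass, rcond=None)
pred = A @ coef
mape = float(np.mean(np.abs(pred - mass) / mass) * 100)  # analogue of the 4.8% figure
```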
|
24
|
Wang H, Cao P, Yang J, Zaiane O. MCA-UNet: multi-scale cross co-attentional U-Net for automatic medical image segmentation. Health Inf Sci Syst 2023; 11:10. [PMID: 36721640 PMCID: PMC9884736 DOI: 10.1007/s13755-022-00209-4] [Citation(s) in RCA: 4] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/17/2022] [Accepted: 10/01/2022] [Indexed: 01/31/2023] Open
Abstract
Medical image segmentation is a challenging task due to the high variation in shape, size, and position of infections or lesions in medical images. It is necessary to construct multi-scale representations to capture image contents from different scales. However, it is still challenging for U-Net with a simple skip connection to model the global multi-scale context. To overcome this, we propose a dense skip connection with cross co-attention in U-Net to bridge the semantic gaps for accurate automatic medical image segmentation. We name our method MCA-UNet, which enjoys two benefits: (1) it has a strong ability to model the multi-scale features, and (2) it jointly explores the spatial and channel attentions. The experimental results on the COVID-19 and IDRiD datasets suggest that our MCA-UNet produces more precise segmentation performance for the consolidation, ground-glass opacity (GGO), microaneurysms (MA), and hard exudates (EX). The source code of this work will be released via https://github.com/McGregorWwww/MCA-UNet/.
Affiliation(s)
- Haonan Wang
- Computer Science and Engineering, Northeastern University, Shenyang, China
- Key Laboratory of Intelligent Computing in Medical Image of Ministry of Education, Northeastern University, Shenyang, China
| | - Peng Cao
- Computer Science and Engineering, Northeastern University, Shenyang, China
- Key Laboratory of Intelligent Computing in Medical Image of Ministry of Education, Northeastern University, Shenyang, China
| | - Jinzhu Yang
- Computer Science and Engineering, Northeastern University, Shenyang, China
- Key Laboratory of Intelligent Computing in Medical Image of Ministry of Education, Northeastern University, Shenyang, China
| | - Osmar Zaiane
- Amii, University of Alberta, Edmonton, AB Canada
| |
|
25
|
Chen Z, Zhuo W, Wang T, Cheng J, Xue W, Ni D. Semi-Supervised Representation Learning for Segmentation on Medical Volumes and Sequences. IEEE TRANSACTIONS ON MEDICAL IMAGING 2023; 42:3972-3986. [PMID: 37756175 DOI: 10.1109/tmi.2023.3319973] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 09/29/2023]
Abstract
Benefiting from massive labeled samples, deep learning-based segmentation methods have achieved great success for two-dimensional natural images. However, it is still a challenging task to segment high-dimensional medical volumes and sequences, due to the considerable clinical expertise and effort required to make large-scale annotations. Self/semi-supervised learning methods have been shown to improve performance by exploiting unlabeled data. However, they still fall short in mining local semantic discrimination and in exploiting volume/sequence structures. In this work, we propose a semi-supervised representation learning method with two novel modules to enhance the features in the encoder and decoder, respectively. For the encoder, based on the continuity between slices/frames and the common spatial layout of organs across subjects, we propose an asymmetric network with an attention-guided predictor to enable prediction between feature maps of different slices of unlabeled data. For the decoder, based on the semantic consistency between labeled data and unlabeled data, we introduce a novel semantic contrastive learning to regularize the feature maps in the decoder. The two parts are trained jointly with both labeled and unlabeled volumes/sequences in a semi-supervised manner. When evaluated on three benchmark datasets of medical volumes and sequences, our model outperforms existing methods by a large margin of 7.3% DSC on ACDC, 6.5% on Prostate, and 3.2% on CAMUS when only a small amount of labeled data is available. Further, results on the M&M dataset show that the proposed method yields improvement without using any domain adaptation techniques for data from an unknown domain. Extensive evaluations reveal the effectiveness of representation mining and the superior performance of our method. The code is available at https://github.com/CcchenzJ/BootstrapRepresentation.
|
26
|
Rönnau MM, Lepper TW, Amaral LN, Rados PV, Oliveira MM. A CNN-based approach for joint segmentation and quantification of nuclei and NORs in AgNOR-stained images. COMPUTER METHODS AND PROGRAMS IN BIOMEDICINE 2023; 242:107788. [PMID: 37738838 DOI: 10.1016/j.cmpb.2023.107788] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 04/26/2023] [Revised: 08/18/2023] [Accepted: 09/01/2023] [Indexed: 09/24/2023]
Abstract
BACKGROUND AND OBJECTIVE Oral cancer is the sixth most common kind of human cancer. Brush cytology for counting Argyrophilic Nucleolar Organizer Regions (AgNORs) can help early mouth cancer detection, lowering patient mortality. However, the manual counting of AgNORs still in use today is time-consuming, labor-intensive, and error-prone. The goal of our work is to address these shortcomings by proposing a convolutional neural network (CNN) based method to automatically segment individual nuclei and AgNORs in microscope slide images and count the number of AgNORs within each nucleus. METHODS We systematically defined, trained and tested 102 CNNs in the search for a high-performing solution. This included the evaluation of 51 network architectures combining 17 encoders with 3 decoders and 2 loss functions. These CNNs were trained and evaluated on a new AgNOR-stained image dataset of epithelial cells from oral mucosa containing 1,171 images from 48 patients, with ground truth annotated by specialists. The annotations were greatly facilitated by a semi-automatic procedure developed in our project. Overlapping nuclei, which tend to hide AgNORs, thus affecting their true count, were discarded using an automatic solution also developed in our project. Besides the evaluation on the test dataset, the robustness of the best performing model was evaluated against the results produced by a group of human experts on a second dataset. RESULTS The best performing CNN model on the test dataset consisted of a DenseNet-169 + LinkNet with Focal Loss (DenseNet-169 as encoder and LinkNet as decoder). It obtained a Dice score of 0.90 and intersection over union (IoU) of 0.84. The counting of nuclei and AgNORs achieved precision and recall of 0.94 and 0.90 for nuclei, and 0.82 and 0.74 for AgNORs, respectively. 
Our solution achieved a performance similar to that of human experts on a set of 291 images from 6 new patients, obtaining an Intraclass Correlation Coefficient (ICC) of 0.91 for nuclei and 0.81 for AgNORs, with 95% confidence intervals of [0.89, 0.93] and [0.77, 0.84], respectively, and p-values < 0.001, confirming its statistical significance. Our AgNOR-stained image dataset is the most diverse publicly available one in terms of number of patients, and the first for oral cells. CONCLUSIONS CNN-based joint segmentation and quantification of nuclei and NORs in AgNOR-stained images achieves expert-like performance levels, while being orders of magnitude faster than the latter. Our solution demonstrated this by showing strong agreement with the results produced by a group of specialists, highlighting its potential to accelerate diagnostic workflows. Our trained model, code, and dataset are available and can stimulate new research in early oral cancer detection.
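Counting precision and recall of this kind are typically computed by matching detected objects to ground-truth annotations before tallying true/false positives. A simplified sketch, assuming a greedy nearest-centroid matching rule (the paper's exact matching criterion is not stated here; the distance threshold and points are invented):

```python
import numpy as np

def counting_precision_recall(pred_pts, gt_pts, max_dist=10.0):
    """Greedily match detected centroids to unmatched ground-truth centroids
    within `max_dist`; return (precision, recall) of the counts."""
    unmatched = [np.asarray(g, float) for g in gt_pts]
    tp = 0
    for p in map(np.asarray, pred_pts):
        if not unmatched:
            break
        dists = [np.linalg.norm(p - g) for g in unmatched]
        i = int(np.argmin(dists))
        if dists[i] <= max_dist:
            tp += 1
            unmatched.pop(i)          # each ground-truth object matches only once
    fp = len(pred_pts) - tp
    fn = len(gt_pts) - tp
    return tp / (tp + fp), tp / (tp + fn)

# Three detections, two annotated nuclei: only one detection is within range
prec, rec = counting_precision_recall([(0, 0), (5, 5), (100, 100)], [(1, 1), (50, 50)])
```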
Affiliation(s)
- Maikel M Rönnau
- Instituto de Informática, Universidade Federal do Rio Grande do Sul, Av. Gonçalves, 9500, Porto Alegre, 91501-970, RS, Brazil.
| | - Tatiana W Lepper
- Faculdade de Odontologia, Universidade Federal do Rio Grande do Sul, R. Ramiro Barcelos, 2492, Porto Alegre, 90035-003, RS, Brazil.
| | - Luara N Amaral
- Faculdade de Odontologia, Universidade Federal do Rio Grande do Sul, R. Ramiro Barcelos, 2492, Porto Alegre, 90035-003, RS, Brazil.
| | - Pantelis V Rados
- Faculdade de Odontologia, Universidade Federal do Rio Grande do Sul, R. Ramiro Barcelos, 2492, Porto Alegre, 90035-003, RS, Brazil.
| | - Manuel M Oliveira
- Instituto de Informática, Universidade Federal do Rio Grande do Sul, Av. Gonçalves, 9500, Porto Alegre, 91501-970, RS, Brazil.
| |
|
27
|
Wang J, Peng Y, Jing S, Han L, Li T, Luo J. A deep-learning approach for segmentation of liver tumors in magnetic resonance imaging using UNet+. BMC Cancer 2023; 23:1060. [PMID: 37923988 PMCID: PMC10623778 DOI: 10.1186/s12885-023-11432-x] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/28/2023] [Accepted: 09/21/2023] [Indexed: 11/06/2023] Open
Abstract
OBJECTIVE Radiomic and deep learning studies based on magnetic resonance imaging (MRI) of liver tumors are gradually increasing, but manual segmentation of normal hepatic tissue and tumors has limitations. METHODS 105 patients diagnosed with hepatocellular carcinoma were retrospectively studied between Jan 2015 and Dec 2020. The patients were divided into three sets: training (n = 83), validation (n = 11), and internal testing (n = 11). Additionally, 9 cases were included from the Cancer Imaging Archive as the external test set. Using the arterial phase and T2WI sequences, expert radiologists manually delineated all images. Liver tumors and liver segments were then segmented automatically using deep learning: a preliminary liver segmentation was performed with the UNet++ network, and the segmented liver mask was fed back into the UNet++ network to segment liver tumors. The false-positivity rate in the liver tumor segmentation was reduced using a threshold value. To evaluate the segmentation results, we calculated the Dice similarity coefficient (DSC), average false-positivity rate (AFPR), and delineation time. RESULTS The average DSC of the liver in the validation and internal testing sets was 0.91 and 0.92, respectively. In the validation set, manual and automatic delineation took 182.9 and 2.2 s, respectively; on average, manual and automatic delineation took 169.8 and 1.7 s, respectively. The average DSC of liver tumors was 0.612 and 0.687 in the validation and internal testing sets, respectively. The average times for manual and automatic delineation and the AFPR in the internal testing set were 47.4 s, 2.9 s, and 1.4, respectively, and those in the external test set were 29.5 s, 4.2 s, and 1.6, respectively. CONCLUSION UNet++ can automatically segment normal hepatic tissue and liver tumors on MR images, providing a methodological basis for the automated segmentation of liver tumors, improving delineation efficiency, and meeting the requirements of feature extraction for further radiomics and deep learning analysis.
Affiliation(s)
- Jing Wang
- Department of General Medicine, The First Medical Center of Chinese PLA General Hospital, Beijing, 100039, China
| | - Yanyang Peng
- Department of Radiology, First Medical Center of General Hospital of People's Liberation Army, Beijing, China
| | - Shi Jing
- Department of Oncology, Huaihe Hospital, Henan University, Kaifeng, 475000, China
| | - Lujun Han
- Department of Radiology, State Key Laboratory of Oncology in South China, Collaborative Innovation Cancer for Cancer Medicine, Sun Yat-sen University Cancer Center, Guangzhou, 510030, China.
- Translational Medical Center of Huaihe Hospital, Henan University, 115 West Gate Street, Kaifeng, 475000, China.
| | - Tian Li
- School of Basic Medicine, Fourth Military Medical University, Xi'an, 710032, China.
- Translational Medical Center of Huaihe Hospital, Henan University, 115 West Gate Street, Kaifeng, 475000, China.
| | - Junpeng Luo
- Translational Medical Center of Huaihe Hospital, Henan University, 115 West Gate Street, Kaifeng, 475000, China.
- Academy for Advanced Interdisciplinary Studies, Henan University, Zhengzhou, 450046, China.
| |
|
28
|
Chen H, Lv T, Luo Q, Li L, Wang Q, Li Y, Zhou D, Emami E, Schmittbuhl M, van der Stelt P, Huynh N. Reliability and accuracy of a semi-automatic segmentation protocol of the nasal cavity using cone beam computed tomography in patients with sleep apnea. Clin Oral Investig 2023; 27:6813-6821. [PMID: 37796336 DOI: 10.1007/s00784-023-05295-6] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/06/2023] [Accepted: 09/27/2023] [Indexed: 10/06/2023]
Abstract
OBJECTIVES The objectives of this study included using cone beam computed tomography (CBCT) to assess: (1) intra- and inter-observer reliability of the volume measurement of the nasal cavity; (2) the accuracy of the segmentation protocol for evaluation of the nasal cavity. MATERIALS AND METHODS This study used test-retest reliability and accuracy methods within two different population sample groups, from Eastern Asia and North America. Thirty obstructive sleep apnea (OSA) patients were randomly selected from administrative and research oral health data archived at two dental faculties in China and Canada. To assess the reliability of the protocol, two observers performed nasal cavity volume measurement twice with a 10-day interval, using Amira software (v4.1, Visage Imaging Inc., Carlsbad, CA). The accuracy study used a computerized tomography (CT) scan of an OSA patient, who was not included in the study sample, to fabricate an anthropomorphic phantom of the nasal cavity volume with known dimensions (18.9 ml, gold standard). This phantom was scanned using a NewTom 5G (QR systems, Verona, Italy) CBCT scanner. The nasal cavity was segmented based on CBCT images and converted into standard tessellation language (STL) models. The volume of the nasal cavity was measured on the acquired STL models (18.99 ± 0.066 ml). RESULTS The intra-observer and inter-observer intraclass correlation coefficients for the volume measurement of the nasal cavity were 0.980-0.997 and 0.948-0.992, respectively. The nasal cavity volume measurement was overestimated by 1.1%-3.1% compared to the gold standard. CONCLUSIONS The semi-automatic segmentation protocol of the nasal cavity in patients with sleep apnea using cone beam computed tomography is reliable and accurate. CLINICAL RELEVANCE This study provides a reliable and accurate protocol for segmentation of the nasal cavity, which will help clinicians analyze images of the nasoethmoidal region.
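Measuring an enclosed volume from an STL surface model can be done with the divergence theorem, summing signed tetrahedron volumes over the mesh triangles. A generic sketch of that computation (not the Amira workflow used in the study; the demo mesh is a unit right tetrahedron whose volume is 1/6):

```python
import numpy as np

def mesh_volume(vertices, faces):
    """Volume enclosed by a closed, consistently wound triangle mesh,
    via the divergence theorem (sum of signed tetrahedron volumes)."""
    v = np.asarray(vertices, float)
    tri = v[np.asarray(faces)]                    # (n_faces, 3, 3) triangle corners
    signed = np.einsum('ij,ij->i', tri[:, 0],
                       np.cross(tri[:, 1], tri[:, 2])) / 6.0
    return abs(float(signed.sum()))

# Demo: unit right tetrahedron, enclosed volume 1/6
verts = [(0, 0, 0), (1, 0, 0), (0, 1, 0), (0, 0, 1)]
faces = [(0, 2, 1), (0, 1, 3), (0, 3, 2), (1, 2, 3)]
vol = mesh_volume(verts, faces)
```

The result is orientation-independent as long as all faces share a consistent winding, which STL exporters normally guarantee.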
Affiliation(s)
- Hui Chen
- Department of Orthodontics, School and Hospital of Stomatology, Shandong University, Shandong Key Laboratory of Oral Tissue Regeneration, Shandong Engineering Laboratory for Dental Materials and Oral Tissue Regeneration, Shandong Provincial Clinical Research Center for Oral Diseases, Cheeloo College of Medicine, Shandong University, Jinan, 250100, Shandong, China.
- Tao Lv
- Department of Orthodontics, School and Hospital of Stomatology, Shandong University, Shandong Key Laboratory of Oral Tissue Regeneration, Shandong Engineering Laboratory for Dental Materials and Oral Tissue Regeneration, Shandong Provincial Clinical Research Center for Oral Diseases, Cheeloo College of Medicine, Shandong University, Jinan, 250100, Shandong, China.
- Qing Luo
- Hospital of Stomatology, Ningbo, Zhejiang, China
- Lei Li
- Centre for Advanced Jet Engineering Technologies (CaJET), School of Mechanical Engineering, Key Laboratory of High-Efficiency and Clean Mechanical Manufacture at Shandong University, Ministry of Education, National Demonstration Center for Experimental Mechanical Engineering Education, Shandong University, Jinan, China
- Qing Wang
- Department of Orthodontics, Stomatological Hospital, Southern Medical University, Guangzhou, Guangdong, China
- Yanzhong Li
- Department of Otorhinolaryngology, NHC Key Laboratory of Otorhinolaryngology, Qilu Hospital of Shandong University, Jinan, China
- Debo Zhou
- Key Laboratory of Special Functional Aggregated Materials, Ministry of Education, School of Chemistry and Chemical Engineering, Shandong University, Jinan, China
- Elham Emami
- Faculty of Dentistry, McGill University, Montreal, Quebec, Canada
- Paul van der Stelt
- Department of Oral Radiology, Academic Centre for Dentistry Amsterdam, Amsterdam, the Netherlands
- Nelly Huynh
- Faculty of Dental Medicine, Université de Montréal, Montreal, Quebec, Canada
29
Saha PK, Nadeem SA, Comellas AP. A Survey on Artificial Intelligence in Pulmonary Imaging. Wiley Interdiscip Rev Data Min Knowl Discov 2023; 13:e1510. [PMID: 38249785 PMCID: PMC10796150 DOI: 10.1002/widm.1510] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 12/18/2022] [Accepted: 06/21/2023] [Indexed: 01/23/2024]
Abstract
Over the last decade, deep learning (DL) has driven a paradigm shift in computer vision and image recognition, creating widespread opportunities for using artificial intelligence in research as well as in industrial applications. DL has been extensively studied in medical imaging applications, including those related to pulmonary diseases. Chronic obstructive pulmonary disease, asthma, lung cancer, pneumonia, and, more recently, COVID-19 are common lung diseases affecting nearly 7.4% of the world population. Pulmonary imaging has been widely investigated toward improving our understanding of disease etiologies, early diagnosis, and assessment of disease progression and clinical outcomes. DL has been broadly applied to various pulmonary image processing challenges, including classification, recognition, registration, and segmentation. This paper presents a survey of pulmonary diseases, the roles of imaging in translational and clinical pulmonary research, and applications of different DL architectures and methods in pulmonary imaging, with emphasis on DL-based segmentation of major pulmonary anatomies such as lung volumes, lung lobes, pulmonary vessels, and airways, as well as thoracic musculoskeletal anatomies related to pulmonary diseases.
Affiliation(s)
- Punam K Saha
- Departments of Radiology and Electrical and Computer Engineering, University of Iowa, Iowa City, IA, 52242
30
Yao T, Wang C, Wang X, Li X, Jiang Z, Qi P. Enhancing percutaneous coronary intervention with heuristic path planning and deep-learning-based vascular segmentation. Comput Biol Med 2023; 166:107540. [PMID: 37806060 DOI: 10.1016/j.compbiomed.2023.107540] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/22/2023] [Revised: 09/21/2023] [Accepted: 09/28/2023] [Indexed: 10/10/2023]
Abstract
Percutaneous coronary intervention (PCI) is a minimally invasive technique for treating vascular diseases. PCI requires precise, real-time visualization and guidance to ensure surgical safety and efficiency. Existing mainstream guiding methods rely on hemodynamic parameters; however, these are less intuitive than images and pose challenges to the decision-making of cardiologists. This paper proposes a novel PCI guiding assistance system that combines a novel vascular segmentation network with a heuristic intervention path planning algorithm, providing cardiologists with clear, visualized information. A dataset of 1077 DSA images from 288 patients was also collected in clinical practice, and a Likert scale was designed to evaluate system performance in user experiments. Results of the user experiments demonstrate that the system can generate satisfactory and reasonable paths for PCI. Our proposed method outperformed state-of-the-art baselines on three metrics (Jaccard: 0.4091, F1: 0.5626, Accuracy: 0.9583). The proposed system can effectively assist cardiologists in PCI by providing a clear segmentation of vascular structures and optimal real-time intervention paths, demonstrating great potential for robotic PCI autonomy.
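The abstract only summarizes the planner, so as a generic illustration of heuristic path planning (our sketch, not the authors' algorithm), a minimal grid-based A* search with a Manhattan-distance heuristic:

```python
import heapq

def astar(grid, start, goal):
    """Minimal A* on a 2D occupancy grid (0 = free, 1 = blocked),
    4-connected moves, Manhattan-distance heuristic."""
    rows, cols = len(grid), len(grid[0])
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])
    frontier = [(h(start), 0, start, [start])]  # (f, g, node, path)
    seen = set()
    while frontier:
        f, g, node, path = heapq.heappop(frontier)
        if node == goal:
            return path
        if node in seen:
            continue
        seen.add(node)
        r, c = node
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if (0 <= nr < rows and 0 <= nc < cols
                    and grid[nr][nc] == 0 and (nr, nc) not in seen):
                heapq.heappush(frontier,
                               (g + 1 + h((nr, nc)), g + 1, (nr, nc),
                                path + [(nr, nc)]))
    return None  # goal unreachable

# Toy occupancy map standing in for a segmented vessel tree:
grid = [[0, 0, 0],
        [1, 1, 0],
        [0, 0, 0]]
print(astar(grid, (0, 0), (2, 0)))  # detours around the blocked row
```

A real intervention planner would operate on the segmented vasculature and a clinically motivated cost function rather than a uniform grid, but the frontier/heuristic structure is the same.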
Affiliation(s)
- Tianliang Yao
- College of Electronics and Information Engineering, Tongji University, Shanghai, 200092, China.
- Chengjia Wang
- School of Mathematical and Computer Sciences, Heriot-Watt University, Edinburgh, EH14 4AP, United Kingdom; BHF Centre for Cardiovascular Science, University of Edinburgh, Edinburgh, EH16 4TJ, United Kingdom.
- Xinyi Wang
- School of Medicine, Tongji University, Shanghai, 200092, China.
- Xiang Li
- Departments of Cardiology and Nursing, Shanghai Tenth People's Hospital, School of Medicine, Tongji University, Shanghai, 200072, China.
- Zhaolei Jiang
- Department of Cardiothoracic Surgery, Xinhua Hospital Affiliated to Shanghai Jiao Tong University School of Medicine, Shanghai, 200092, China.
- Peng Qi
- College of Electronics and Information Engineering, Tongji University, Shanghai, 200092, China.
31
Nanda P, Kirschner DE. Calibration methods to fit parameters within complex biological models. Front Appl Math Stat 2023; 9:1256443. [PMID: 38222943 PMCID: PMC10785782 DOI: 10.3389/fams.2023.1256443] [Citation(s) in RCA: 1] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Indexed: 01/16/2024]
Abstract
Mathematical and computational models of biological systems are increasingly complex, typically composed of hybrid multi-scale methods such as ordinary differential equations, partial differential equations, agent-based models, and rule-based models. These mechanistic models concurrently simulate detail at the resolutions of whole-host, multi-organ, organ, tissue, cellular, molecular, and genomic dynamics. Because analytical and numerical solutions are generally unavailable, solving complex biological models requires iterative parameter-sampling approaches to establish ranges of model parameters that capture corresponding experimental datasets. However, these models typically comprise large numbers of parameters and therefore many degrees of freedom, so fitting them to multiple experimental datasets over time and space presents significant challenges. In this work we review, test, and advance calibration practices across models and dataset types to compare methodologies for model calibration. Evaluating the calibration process includes weighing the strengths and applicability of each approach as well as standardizing calibration methods. Our work compares the performance of our model-agnostic Calibration Protocol (CaliPro) with approximate Bayesian computation (ABC) to highlight strengths, weaknesses, synergies, and differences among these methods. We also present next-generation updates to CaliPro. We explore several model implementations and suggest a decision tree for selecting calibration approaches to match dataset types and modeling constraints.
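Rejection ABC, the simplest member of the approximate Bayesian computation family compared here, can be sketched on a hypothetical toy model (the exponential-decay model, prior, and tolerance below are our assumptions for illustration, not CaliPro or the paper's setup):

```python
import numpy as np

rng = np.random.default_rng(42)

# Toy mechanistic model: exponential decay y(t) = exp(-k t),
# observed with additive Gaussian noise.
t = np.linspace(0.0, 5.0, 20)
k_true = 0.7
data = np.exp(-k_true * t) + rng.normal(0.0, 0.02, t.size)

def simulate(k):
    return np.exp(-k * t)

def distance(sim, obs):
    """Root-mean-square discrepancy between simulation and data."""
    return np.sqrt(((sim - obs) ** 2).mean())

# Rejection ABC: draw parameters from the prior, keep those whose
# simulated output lies within tolerance of the observed data.
prior_samples = rng.uniform(0.0, 2.0, 5000)
accepted = np.array([k for k in prior_samples
                     if distance(simulate(k), data) < 0.05])
posterior_mean = accepted.mean()
print(accepted.size, round(posterior_mean, 3))  # posterior concentrates near k_true
```

Tightening the tolerance sharpens the approximate posterior at the cost of a lower acceptance rate; CaliPro-style protocols instead iteratively refine the sampled parameter ranges.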
Affiliation(s)
- Pariksheet Nanda
- Department of Microbiology and Immunology, University of Michigan Medical School, Ann Arbor, MI, United States
- Denise E. Kirschner
- Department of Microbiology and Immunology, University of Michigan Medical School, Ann Arbor, MI, United States
32
Chen W, Zhou S, Liu X, Chen Y. Semi-TMS: an efficient regularization-oriented triple-teacher semi-supervised medical image segmentation model. Phys Med Biol 2023; 68:205011. [PMID: 37699409 DOI: 10.1088/1361-6560/acf90f] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/15/2023] [Accepted: 09/12/2023] [Indexed: 09/14/2023]
Abstract
Objective. Although convolutional neural networks (CNNs) and Transformers have performed well in many medical image segmentation tasks, they rely on large amounts of labeled data for training. Because the annotation of medical image data is expensive and time-consuming, it is common to use semi-supervised learning methods, which combine a small amount of labeled data with a large amount of unlabeled data, to improve the performance of medical image segmentation. Approach. This work aims to enhance medical image segmentation using triple-teacher cross-learning semi-supervised segmentation with shape perception and multi-scale consistency regularization. To effectively leverage the information in unlabeled data, we design a multi-scale, shape-perception-based semi-supervised method with triple-teacher cross-learning, called Semi-TMS. The three teacher models engage in cross-learning with each other: Teacher A and Teacher C use a CNN architecture, while Teacher B employs a Transformer model. The cross-learning module consisting of Teacher A and Teacher C captures local and global information, generates pseudo-labels, and performs cross-learning using the prediction results. Multi-scale consistency regularization is applied separately to the CNN and the Transformer to improve accuracy. Furthermore, the low-uncertainty output probabilities from Teacher A or Teacher C are used as input to Teacher B, enhancing the utilization of prior knowledge and the overall segmentation robustness. Main results. Experimental evaluations on two public datasets demonstrate that the proposed method outperforms several existing semi-supervised segmentation models, implicitly capturing shape information and effectively improving the utilization and accuracy of unlabeled data through multi-scale consistency. Significance. With the widespread use of medical imaging in clinical diagnosis, our method is expected to be a potential auxiliary tool, assisting clinicians and medical researchers in their diagnoses.
Affiliation(s)
- Weihong Chen
- College of Computer Science, Chongqing University, Chongqing 400044, People's Republic of China
- Shangbo Zhou
- College of Computer Science, Chongqing University, Chongqing 400044, People's Republic of China
- Xiaojuan Liu
- School of Artificial Intelligence, Chongqing University of Technology, Chongqing 400050, People's Republic of China
- Yijia Chen
- College of Computer Science, Chongqing University, Chongqing 400044, People's Republic of China
33
Mazurowski MA, Dong H, Gu H, Yang J, Konz N, Zhang Y. Segment anything model for medical image analysis: An experimental study. Med Image Anal 2023; 89:102918. [PMID: 37595404 PMCID: PMC10528428 DOI: 10.1016/j.media.2023.102918] [Citation(s) in RCA: 27] [Impact Index Per Article: 27.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/14/2023] [Revised: 07/03/2023] [Accepted: 07/31/2023] [Indexed: 08/20/2023]
Abstract
Training segmentation models for medical images continues to be challenging due to the limited availability of data annotations. Segment Anything Model (SAM) is a foundation model trained on over 1 billion annotations, predominantly for natural images, that is intended to segment user-defined objects of interest in an interactive manner. While the model performance on natural images is impressive, medical image domains pose their own set of challenges. Here, we perform an extensive evaluation of SAM's ability to segment medical images on a collection of 19 medical imaging datasets from various modalities and anatomies. In our experiments, we generated point and box prompts for SAM using a standard method that simulates interactive segmentation. We report the following findings: (1) SAM's performance based on single prompts varies highly depending on the dataset and the task, from IoU=0.1135 for spine MRI to IoU=0.8650 for hip X-ray. (2) Segmentation performance appears to be better for well-circumscribed objects with less ambiguous prompts, such as the segmentation of organs in computed tomography, and poorer in various other scenarios, such as the segmentation of brain tumors. (3) SAM performs notably better with box prompts than with point prompts. (4) SAM outperforms the similar methods RITM, SimpleClick, and FocalClick in almost all single-point prompt settings. (5) When multiple point prompts are provided iteratively, SAM's performance generally improves only slightly, while the other methods improve to a level that surpasses SAM's point-based performance. We also provide several illustrations of SAM's performance on all tested datasets, iterative segmentation, and SAM's behavior given prompt ambiguity. We conclude that SAM shows impressive zero-shot segmentation performance for certain medical imaging datasets, but moderate to poor performance for others. SAM has the potential to make a significant impact in automated medical image segmentation, but appropriate care needs to be applied when using it. Code for evaluating SAM is made publicly available at https://github.com/mazurowski-lab/segment-anything-medical-evaluation.
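The IoU figures quoted above are straightforward to reproduce for any predicted binary mask; a minimal sketch (function names are ours) of IoU and the closely related Dice coefficient:

```python
import numpy as np

def iou(pred, gt):
    """Intersection over Union (Jaccard index) of two binary masks."""
    pred, gt = np.asarray(pred, dtype=bool), np.asarray(gt, dtype=bool)
    union = np.logical_or(pred, gt).sum()
    return 1.0 if union == 0 else float(np.logical_and(pred, gt).sum()) / union

def dice(pred, gt):
    """Dice similarity coefficient: 2|A∩B| / (|A| + |B|)."""
    pred, gt = np.asarray(pred, dtype=bool), np.asarray(gt, dtype=bool)
    total = pred.sum() + gt.sum()
    return 1.0 if total == 0 else 2.0 * np.logical_and(pred, gt).sum() / total

# Toy 2x2 masks that agree on one of two foreground pixels:
pred = np.array([[1, 1], [0, 0]])
gt = np.array([[1, 0], [0, 0]])
print(iou(pred, gt), dice(pred, gt))
```

The empty-mask case is defined as perfect agreement here; evaluation codebases differ on that convention, so it is worth checking before comparing reported numbers.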
Affiliation(s)
- Maciej A Mazurowski
- Department of Radiology, Duke University, Durham, NC, 27708, USA; Department of Electrical and Computer Engineering, Duke University, Durham, NC, 27708, USA; Department of Computer Science, Duke University, Durham, NC, 27708, USA; Department of Biostatistics & Bioinformatics, Duke University, Durham, NC, 27708, USA
- Haoyu Dong
- Department of Electrical and Computer Engineering, Duke University, Durham, NC, 27708, USA.
- Hanxue Gu
- Department of Electrical and Computer Engineering, Duke University, Durham, NC, 27708, USA
- Jichen Yang
- Department of Electrical and Computer Engineering, Duke University, Durham, NC, 27708, USA
- Nicholas Konz
- Department of Electrical and Computer Engineering, Duke University, Durham, NC, 27708, USA
- Yixin Zhang
- Department of Electrical and Computer Engineering, Duke University, Durham, NC, 27708, USA
34
Jing Y, Li C, Du T, Jiang T, Sun H, Yang J, Shi L, Gao M, Grzegorzek M, Li X. A comprehensive survey of intestine histopathological image analysis using machine vision approaches. Comput Biol Med 2023; 165:107388. [PMID: 37696178 DOI: 10.1016/j.compbiomed.2023.107388] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/08/2023] [Revised: 08/06/2023] [Accepted: 08/25/2023] [Indexed: 09/13/2023]
Abstract
Colorectal cancer (CRC) is currently one of the most common and deadly cancers: it is the third most common malignancy and the fourth leading cause of cancer death worldwide, and it ranks as the second most frequent cause of cancer-related deaths in the United States and other developed countries. Because histopathological images contain rich phenotypic information, they play an indispensable role in the diagnosis and treatment of CRC. To improve the objectivity and diagnostic efficiency of intestinal histopathology image analysis, computer-aided diagnosis (CAD) methods based on machine learning (ML) are widely applied. In this investigation, we conduct a comprehensive study of recent ML-based methods for image analysis of intestinal histopathology. First, we discuss commonly used datasets from basic research studies, together with medically relevant knowledge of intestinal histopathology. Second, we introduce traditional ML methods commonly used in intestinal histopathology, as well as deep learning (DL) methods. We then provide a comprehensive review of recent developments in ML methods for segmentation, classification, detection, and recognition, among other tasks, for histopathological images of the intestine. Finally, we analyze the existing methods and discuss their application prospects in this field.
Affiliation(s)
- Yujie Jing
- Microscopic Image and Medical Image Analysis Group, College of Medicine and Biological Information Engineering, Northeastern University, China; Key Laboratory of Intelligent Computing in Medical Image, Ministry of Education, Northeastern University, Shenyang, Liaoning, China
- Chen Li
- Microscopic Image and Medical Image Analysis Group, College of Medicine and Biological Information Engineering, Northeastern University, China; Key Laboratory of Intelligent Computing in Medical Image, Ministry of Education, Northeastern University, Shenyang, Liaoning, China.
- Tianming Du
- Microscopic Image and Medical Image Analysis Group, College of Medicine and Biological Information Engineering, Northeastern University, China; Key Laboratory of Intelligent Computing in Medical Image, Ministry of Education, Northeastern University, Shenyang, Liaoning, China
- Tao Jiang
- School of Intelligent Medicine, Chengdu University of Traditional Chinese Medicine, Chengdu, China; International Joint Institute of Robotics and Intelligent Systems, Chengdu University of Information Technology, Chengdu, China
- Hongzan Sun
- Shengjing Hospital of China Medical University, Shenyang, China
- Jinzhu Yang
- Key Laboratory of Intelligent Computing in Medical Image, Ministry of Education, Northeastern University, Shenyang, Liaoning, China
- Liyu Shi
- Microscopic Image and Medical Image Analysis Group, College of Medicine and Biological Information Engineering, Northeastern University, China; Key Laboratory of Intelligent Computing in Medical Image, Ministry of Education, Northeastern University, Shenyang, Liaoning, China
- Minghe Gao
- Microscopic Image and Medical Image Analysis Group, College of Medicine and Biological Information Engineering, Northeastern University, China; Key Laboratory of Intelligent Computing in Medical Image, Ministry of Education, Northeastern University, Shenyang, Liaoning, China
- Marcin Grzegorzek
- Institute for Medical Informatics, University of Luebeck, Luebeck, Germany; Department of Knowledge Engineering, University of Economics in Katowice, Katowice, Poland
- Xiaoyan Li
- Cancer Hospital of China Medical University, Liaoning Cancer Hospital, Shenyang, China.
35
Dong Y, Wang T, Ma C, Li Z, Chellali R. DE-UFormer: U-shaped dual encoder architectures for brain tumor segmentation. Phys Med Biol 2023; 68:195019. [PMID: 37699403 DOI: 10.1088/1361-6560/acf911] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/24/2023] [Accepted: 09/12/2023] [Indexed: 09/14/2023]
Abstract
Objective. In brain tumor segmentation tasks, either a convolutional neural network (CNN) or a Transformer usually serves as the encoder. The convolution operation of a CNN is good at extracting local information but poor at capturing global representations, whereas the attention mechanism of the Transformer is good at establishing long-range dependencies but lacks the ability to extract high-precision local information. Both high-precision local information and global contextual information are crucial in brain tumor segmentation tasks. The aim of this paper is to propose a brain tumor segmentation model that can simultaneously extract and fuse high-precision local and global contextual information. Approach. We propose a network model, DE-UFormer, with dual encoders that obtain local features and global representations using both a CNN encoder and a Transformer encoder. On this basis, we further propose the nested encoder-aware feature fusion (NEaFF) module for effective deep fusion of the information under each dimension. It establishes long-range dependencies of features under a single encoder via the spatial-attention Transformer, and it relates the features extracted from the two encoders via the cross-encoder attention Transformer. Main results. Segmentation with the proposed algorithm was performed on the BraTS2020 dataset and a private meningioma dataset. The results show that it is significantly better than current state-of-the-art brain tumor segmentation methods. Significance. The method proposed in this paper greatly improves the accuracy of brain tumor segmentation. This advancement helps healthcare professionals perform a more comprehensive analysis and assessment of brain tumors, thereby improving diagnostic accuracy and reliability. This fully automated, highly accurate brain tumor segmentation model is of great significance for critical decisions made by physicians in selecting treatment strategies and in preoperative planning.
Affiliation(s)
- Yan Dong
- College of Electrical Engineering and Control Science, Nanjing Tech University, Nanjing, People's Republic of China
- Ting Wang
- College of Electrical Engineering and Control Science, Nanjing Tech University, Nanjing, People's Republic of China
- Chiyuan Ma
- Jinling Hospital, Affiliated Hospital of Medical School, Nanjing University, Nanjing, People's Republic of China
- Zhenxing Li
- Jinling Hospital, Affiliated Hospital of Medical School, Nanjing University, Nanjing, People's Republic of China
- Ryad Chellali
- College of Electrical Engineering and Control Science, Nanjing Tech University, Nanjing, People's Republic of China
36
Alsahafi YS, Elshora DS, Mohamed ER, Hosny KM. Multilevel Threshold Segmentation of Skin Lesions in Color Images Using Coronavirus Optimization Algorithm. Diagnostics (Basel) 2023; 13:2958. [PMID: 37761325 PMCID: PMC10529071 DOI: 10.3390/diagnostics13182958] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/18/2023] [Revised: 09/06/2023] [Accepted: 09/12/2023] [Indexed: 09/29/2023] Open
Abstract
Skin cancer (SC) is among the most hazardous cancers due to its high mortality rate; therefore, early detection of this disease would be very helpful in the treatment process. Multilevel Thresholding (MLT) is widely used for extracting regions of interest from medical images. This paper therefore utilizes the recent Coronavirus Disease Optimization Algorithm (COVIDOA) to address the MLT problem for SC images, using a hybridization of Otsu, Kapur, and Tsallis as fitness functions. Various SC images are used to validate the performance of the proposed algorithm. To prove its superiority, the proposed algorithm is compared to the following six meta-heuristic algorithms: the Arithmetic Optimization Algorithm (AOA), Sine Cosine Algorithm (SCA), Reptile Search Algorithm (RSA), Flower Pollination Algorithm (FPA), Seagull Optimization Algorithm (SOA), and Artificial Gorilla Troops Optimizer (GTO). The performance of all algorithms is evaluated using a variety of measures, such as Mean Square Error (MSE), Peak Signal-to-Noise Ratio (PSNR), Feature Similarity Index Metric (FSIM), and Normalized Correlation Coefficient (NCC). The experimental results prove that the proposed algorithm surpasses several competing algorithms in terms of the MSE, PSNR, FSIM, and NCC segmentation metrics and successfully solves the segmentation problem.
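For small numbers of thresholds, the Otsu criterion that COVIDOA and the compared meta-heuristics optimize can even be solved by exhaustive search; a brute-force sketch (our illustration of the criterion, not the paper's algorithm):

```python
import numpy as np
from itertools import combinations

def multi_otsu(image, n_thresholds=2, levels=256):
    """Exhaustive multilevel Otsu: pick the threshold set that maximizes
    between-class variance of the grey-level histogram. Meta-heuristics
    replace this brute-force search when the level/threshold counts make
    enumeration too expensive."""
    hist, _ = np.histogram(image, bins=levels, range=(0, levels))
    p = hist / hist.sum()
    grey = np.arange(levels)
    best_score, best_t = -1.0, None
    for t in combinations(range(1, levels), n_thresholds):
        edges = (0,) + t + (levels,)
        # Maximizing sum_c w_c * mu_c^2 is equivalent to maximizing
        # between-class variance, since the grand mean is fixed.
        score = 0.0
        for lo, hi in zip(edges[:-1], edges[1:]):
            w = p[lo:hi].sum()
            if w > 0:
                mu = (p[lo:hi] * grey[lo:hi]).sum() / w
                score += w * mu * mu
        if score > best_score:
            best_score, best_t = score, t
    return best_t

# Synthetic 16-level image with three intensity clusters (2, 8, 13):
img = np.array([2] * 50 + [8] * 30 + [13] * 20)
print(multi_otsu(img, n_thresholds=2, levels=16))  # thresholds separate the clusters
```

The Kapur (entropy) and Tsallis fitness functions used in the paper swap out the scoring loop while keeping the same search problem.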
Affiliation(s)
- Yousef S. Alsahafi
- Department of Information Technology, Khulis College, University of Jeddah, Jeddah 23890, Saudi Arabia;
- Doaa S. Elshora
- Department of Information Technology, Faculty of Computers and Informatics, Zagazig University, Zagazig 44519, Egypt; (D.S.E.); (E.R.M.)
- Ehab R. Mohamed
- Department of Information Technology, Faculty of Computers and Informatics, Zagazig University, Zagazig 44519, Egypt; (D.S.E.); (E.R.M.)
- Khalid M. Hosny
- Department of Information Technology, Faculty of Computers and Informatics, Zagazig University, Zagazig 44519, Egypt; (D.S.E.); (E.R.M.)
37
Ackermann J, Hoch A, Snedeker JG, Zingg PO, Esfandiari H, Fürnstahl P. Automatic 3D Postoperative Evaluation of Complex Orthopaedic Interventions. J Imaging 2023; 9:180. [PMID: 37754944 PMCID: PMC10532700 DOI: 10.3390/jimaging9090180] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/19/2023] [Revised: 08/21/2023] [Accepted: 08/27/2023] [Indexed: 09/28/2023] Open
Abstract
In clinical practice, image-based postoperative evaluation is still performed without state-of-the-art computer methods, as these are not sufficiently automated. In this study we propose a fully automatic 3D postoperative outcome quantification method for the relevant steps of orthopaedic interventions, using Periacetabular Osteotomy of Ganz (PAO) as an example. A typical orthopaedic intervention involves cutting bone, anatomy manipulation and repositioning, and implant placement. Our method includes a segmentation-based deep learning approach for detection and quantification of the cuts. Furthermore, anatomy repositioning was quantified through a multi-step registration method, which entailed a coarse alignment of the pre- and postoperative CT images followed by a fine fragment alignment of the repositioned anatomy. Implant (i.e., screw) position was identified by a 3D Hough transform for line detection combined with fast voxel traversal based on ray tracing. The feasibility of our approach was investigated on 27 interventions and compared against manually performed 3D outcome evaluations. The results show that our method can accurately assess the quality and accuracy of the surgery. Our evaluation of the fragment repositioning showed a cumulative error for the coarse and fine alignment of 2.1 mm. Our evaluation of screw placement accuracy resulted in a distance error of 1.32 mm for the screw head location and an angular deviation of 1.1° for the screw axis. As a next step we will explore generalisation capabilities by applying the method to different interventions.
Affiliation(s)
- Joëlle Ackermann
- Research in Orthopedic Computer Science, Balgrist University Hospital, University of Zurich, 8008 Zurich, Switzerland
- Laboratory for Orthopaedic Biomechanics, ETH Zurich, 8093 Zurich, Switzerland
- Armando Hoch
- Department of Orthopedics, Balgrist University Hospital, University of Zurich, 8008 Zurich, Switzerland
- Jess Gerrit Snedeker
- Laboratory for Orthopaedic Biomechanics, ETH Zurich, 8093 Zurich, Switzerland
- Department of Orthopedics, Balgrist University Hospital, University of Zurich, 8008 Zurich, Switzerland
- Patrick Oliver Zingg
- Department of Orthopedics, Balgrist University Hospital, University of Zurich, 8008 Zurich, Switzerland
- Hooman Esfandiari
- Research in Orthopedic Computer Science, Balgrist University Hospital, University of Zurich, 8008 Zurich, Switzerland
- Philipp Fürnstahl
- Research in Orthopedic Computer Science, Balgrist University Hospital, University of Zurich, 8008 Zurich, Switzerland
38
Yousefi T, Aktaş Ö. New hybrid segmentation algorithm: UNet-GOA. PeerJ Comput Sci 2023; 9:e1499. [PMID: 37705637 PMCID: PMC10496000 DOI: 10.7717/peerj-cs.1499] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/17/2023] [Accepted: 07/04/2023] [Indexed: 09/15/2023]
Abstract
The U-Net architecture is a prominent technique for image segmentation. However, a significant challenge in utilizing this algorithm is the selection of appropriate hyperparameters. In this study, we aimed to address this issue using an evolutionary approach. We conducted experiments on four different geometric datasets (triangle, kite, parallelogram, and square), with 1,000 training samples and 200 test samples. Initially, we performed image segmentation without the evolutionary approach, manually adjusting the U-Net hyperparameters. The average accuracy rates for the geometric images were 0.94463, 0.96289, 0.96962, and 0.93971, respectively. Subsequently, we proposed a hybrid version of the U-Net architecture, incorporating the Grasshopper Optimization Algorithm (GOA) for an evolutionary approach. This method automatically discovered the optimal hyperparameters, resulting in improved image segmentation performance. The average accuracy rates achieved by the proposed method were 0.99418, 0.99673, 0.99143, and 0.99946, respectively, for the geometric images. Comparative analysis revealed that the proposed UNet-GOA approach outperformed the traditional U-Net architecture, yielding higher accuracy rates.
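The general idea, letting an evolutionary loop pick hyperparameters instead of manual tuning, can be sketched with a toy fitness function standing in for "train U-Net and return validation accuracy" (the optimum at lr = 1e-3, dropout = 0.2 and the mutation scales below are invented for illustration; GOA itself uses a different, swarm-based update rule):

```python
import math
import random

# Hypothetical surrogate for validation accuracy as a function of
# (learning rate, dropout); real runs would train the network instead.
def fitness(ind):
    lr, dropout = ind
    return 1.0 - 0.1 * abs(math.log10(lr) + 3.0) - abs(dropout - 0.2)

def evolve(pop_size=20, generations=30, seed=0):
    """Simple elitist evolutionary search over two hyperparameters."""
    rng = random.Random(seed)
    pop = [(10 ** rng.uniform(-5.0, -1.0), rng.uniform(0.0, 0.5))
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]          # keep the best half
        children = []
        for lr, dropout in parents:             # mutate each survivor
            lr = min(1e-1, max(1e-5, lr * 10 ** rng.gauss(0.0, 0.2)))
            dropout = min(0.5, max(0.0, dropout + rng.gauss(0.0, 0.05)))
            children.append((lr, dropout))
        pop = parents + children
    return max(pop, key=fitness)

best_lr, best_dropout = evolve()
print(best_lr, best_dropout)  # converges toward the surrogate's optimum
```

The expensive part in practice is that every fitness evaluation is a full (or truncated) training run, which is why population sizes and generation counts stay small in such studies.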
Affiliation(s)
- Tohid Yousefi
- Computer Engineering, Dokuz Eylül University, Izmir, Buca, Turkey
- Özlem Aktaş
- Computer Engineering, Dokuz Eylül University, Izmir, Buca, Turkey
39
Alonso A, Kirkegaard JB. Fast detection of slender bodies in high density microscopy data. Commun Biol 2023; 6:754. [PMID: 37468539 PMCID: PMC10356847 DOI: 10.1038/s42003-023-05098-1] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/13/2023] [Accepted: 07/05/2023] [Indexed: 07/21/2023] Open
Abstract
Computer-aided analysis of biological microscopy data has seen a massive improvement with the utilization of general-purpose deep learning techniques. Yet, in microscopy studies of multi-organism systems, the problem of collision and overlap remains challenging. This is particularly true for systems composed of slender bodies such as swimming nematodes, swimming spermatozoa, or the beating of eukaryotic or prokaryotic flagella. Here, we develop an end-to-end deep learning approach to extract precise shape trajectories of generally motile and overlapping slender bodies. Our method works in low-resolution settings where feature keypoints are hard to define and detect. Detection is fast, and we demonstrate the ability to track thousands of overlapping organisms simultaneously. While our approach is agnostic to the area of application, we present it in the setting of, and exemplify its usability on, dense experiments with swimming Caenorhabditis elegans. The model is trained purely on synthetic data, utilizing a physics-based model of nematode motility, and we demonstrate its ability to generalize from simulations to experimental videos.
Affiliation(s)
- Albert Alonso
- Niels Bohr Institute & Department of Computer Science, University of Copenhagen, Copenhagen, Denmark
- Julius B Kirkegaard
- Niels Bohr Institute & Department of Computer Science, University of Copenhagen, Copenhagen, Denmark

40
Zeng Y, Zeng P, Shen S, Liang W, Li J, Zhao Z, Zhang K, Shen C. DCTR U-Net: automatic segmentation algorithm for medical images of nasopharyngeal cancer in the context of deep learning. Front Oncol 2023; 13:1190075. [PMID: 37546396 PMCID: PMC10402756 DOI: 10.3389/fonc.2023.1190075] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/20/2023] [Accepted: 05/30/2023] [Indexed: 08/08/2023] Open
Abstract
Nasopharyngeal carcinoma (NPC) is a malignant tumor that occurs in the wall of the nasopharyngeal cavity and is prevalent in Southern China, Southeast Asia, North Africa, and the Middle East. According to studies, NPC is one of the most common malignant tumors in Hainan, China, and it has the highest incidence rate among otorhinolaryngological malignancies. We proposed a new deep learning network model to improve the segmentation accuracy of the target region of nasopharyngeal cancer. Our model is based on the U-Net architecture, to which we add a Dilated Convolution Module, a Transformer Module, and a Residual Module. The new deep learning network model can effectively address the problem of restricted convolutional receptive fields and achieve global and local multi-scale feature fusion. In our experiments, the proposed network was trained and validated using 10-fold cross-validation based on the records of 300 clinical patients. The results of our network were evaluated using the dice similarity coefficient (DSC) and the average symmetric surface distance (ASSD). The DSC and ASSD values are 0.852 and 0.544 mm, respectively. With the effective combination of the Dilated Convolution Module, Transformer Module, and Residual Module, we significantly improved the segmentation performance of the target region of the NPC.
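The DSC reported above is the standard overlap measure for segmentation masks, DSC = 2|A ∩ B| / (|A| + |B|). The study's own evaluation pipeline is not described at this level of detail, so the following is only a minimal reference implementation of the metric itself for flat binary masks:

```python
def dice_coefficient(pred, target):
    """DSC = 2*|A ∩ B| / (|A| + |B|) for flat binary masks (0/1 sequences)."""
    inter = sum(p & t for p, t in zip(pred, target))
    total = sum(pred) + sum(target)
    # Two empty masks are conventionally treated as a perfect match.
    return 1.0 if total == 0 else 2.0 * inter / total

# Example: two 9-pixel masks, each with 4 foreground pixels, overlapping in 3.
pred   = [1, 1, 1, 1, 0, 0, 0, 0, 0]
target = [0, 1, 1, 1, 1, 0, 0, 0, 0]
score = dice_coefficient(pred, target)  # 2*3 / (4+4) = 0.75
```

A DSC of 0.852, as reported, therefore means the predicted and ground-truth tumor regions share roughly 85% of their combined mass.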
Affiliation(s)
- Yan Zeng
- State Key Laboratory of Marine Resource Utilization in South China Sea, School of Information and Communication Engineering, Hainan University, Haikou, China
- Personnel Department, Hainan Medical University, Haikou, China
- PengHui Zeng
- State Key Laboratory of Marine Resource Utilization in South China Sea, School of Information and Communication Engineering, Hainan University, Haikou, China
- ShaoDong Shen
- State Key Laboratory of Marine Resource Utilization in South China Sea, School of Information and Communication Engineering, Hainan University, Haikou, China
- Wei Liang
- State Key Laboratory of Marine Resource Utilization in South China Sea, School of Information and Communication Engineering, Hainan University, Haikou, China
- Jun Li
- State Key Laboratory of Marine Resource Utilization in South China Sea, School of Information and Communication Engineering, Hainan University, Haikou, China
- Zhe Zhao
- State Key Laboratory of Marine Resource Utilization in South China Sea, School of Information and Communication Engineering, Hainan University, Haikou, China
- Kun Zhang
- State Key Laboratory of Marine Resource Utilization in South China Sea, School of Information and Communication Engineering, Hainan University, Haikou, China
- School of Information Science and Technology, Hainan Normal University, Haikou, China
- Chong Shen
- State Key Laboratory of Marine Resource Utilization in South China Sea, School of Information and Communication Engineering, Hainan University, Haikou, China

41
Sword J, Lee JH, Castro MA, Solomon J, Aiosa N, Reza SMS, Chu WT, Johnson JC, Bartos C, Cooper K, Jahrling PB, Johnson RF, Calcagno C, Crozier I, Kuhn JH, Hensley LE, Feuerstein IM, Mani V. Computed Tomography Imaging for Monitoring of Marburg Virus Disease: a Nonhuman Primate Proof-Of-Concept Study. Microbiol Spectr 2023; 11:e0349422. [PMID: 37036346 PMCID: PMC10269526 DOI: 10.1128/spectrum.03494-22] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/31/2022] [Accepted: 02/01/2023] [Indexed: 04/11/2023] Open
Abstract
Marburg virus (MARV) is a highly virulent zoonotic filovirid that causes Marburg virus disease (MVD) in humans. The pathogenesis of MVD remains poorly understood, partially due to the low number of cases that can be studied, the absence of state-of-the-art medical equipment in areas where cases are reported, and limitations on the number of animals that can be safely used in experimental studies under maximum containment animal biosafety level 4 conditions. Medical imaging modalities, such as whole-body computed tomography (CT), may help to describe disease progression in vivo, potentially replacing ethically contentious and logistically challenging serial euthanasia studies. Towards this vision, we performed a pilot study, during which we acquired whole-body CT images of 6 rhesus monkeys before and 7 to 9 days after intramuscular MARV exposure. We identified imaging abnormalities in the liver, spleen, and axillary lymph nodes that corresponded to clinical, virological, and gross pathological hallmarks of MVD in this animal model. Quantitative image analysis indicated hepatomegaly with a significant reduction in organ density (indicating fatty infiltration of the liver), splenomegaly, and edema that corresponded with gross pathological and histopathological findings. Our results indicated that CT imaging could be used to verify and quantify typical MVD pathogenesis versus altered, diminished, or absent disease severity or progression in the presence of candidate medical countermeasures, thus possibly reducing the number of animals needed and eliminating serial euthanasia. IMPORTANCE Marburg virus (MARV) is a highly virulent zoonotic filovirid that causes Marburg virus disease (MVD) in humans. Much is unknown about disease progression and, thus, prevention and treatment options are limited. Medical imaging modalities, such as whole-body computed tomography (CT), have the potential to improve understanding of MVD pathogenesis. 
Our study used CT to identify abnormalities in the liver, spleen, and axillary lymph nodes that corresponded to known clinical signs of MVD in this animal model. Our results indicated that CT imaging and analyses could be used to elucidate pathogenesis and possibly assess the efficacy of candidate treatments.
Affiliation(s)
- Jennifer Sword
- Integrated Research Facility at Fort Detrick, Division of Clinical Research, National Institute of Allergy and Infectious Diseases, National Institutes of Health, Frederick, Maryland, USA
- Ji Hyun Lee
- Integrated Research Facility at Fort Detrick, Division of Clinical Research, National Institute of Allergy and Infectious Diseases, National Institutes of Health, Frederick, Maryland, USA
- Marcelo A. Castro
- Integrated Research Facility at Fort Detrick, Division of Clinical Research, National Institute of Allergy and Infectious Diseases, National Institutes of Health, Frederick, Maryland, USA
- Jeffrey Solomon
- Clinical Monitoring Research Program Directorate, Frederick National Laboratory for Cancer Research, Frederick, Maryland, USA
- Nina Aiosa
- Integrated Research Facility at Fort Detrick, Division of Clinical Research, National Institute of Allergy and Infectious Diseases, National Institutes of Health, Frederick, Maryland, USA
- Syed M. S. Reza
- Center for Infectious Disease Imaging, Radiology and Imaging Sciences, Clinical Center, National Institutes of Health, Bethesda, Maryland, USA
- Winston T. Chu
- Center for Infectious Disease Imaging, Radiology and Imaging Sciences, Clinical Center, National Institutes of Health, Bethesda, Maryland, USA
- Joshua C. Johnson
- Integrated Research Facility at Fort Detrick, Division of Clinical Research, National Institute of Allergy and Infectious Diseases, National Institutes of Health, Frederick, Maryland, USA
- Christopher Bartos
- Integrated Research Facility at Fort Detrick, Division of Clinical Research, National Institute of Allergy and Infectious Diseases, National Institutes of Health, Frederick, Maryland, USA
- Kurt Cooper
- Integrated Research Facility at Fort Detrick, Division of Clinical Research, National Institute of Allergy and Infectious Diseases, National Institutes of Health, Frederick, Maryland, USA
- Peter B. Jahrling
- Integrated Research Facility at Fort Detrick, Division of Clinical Research, National Institute of Allergy and Infectious Diseases, National Institutes of Health, Frederick, Maryland, USA
- Emerging Viral Pathogens Section, National Institute of Allergy and Infectious Diseases, National Institutes of Health, Frederick, Maryland, USA
- Reed F. Johnson
- Integrated Research Facility at Fort Detrick, Division of Clinical Research, National Institute of Allergy and Infectious Diseases, National Institutes of Health, Frederick, Maryland, USA
- Emerging Viral Pathogens Section, National Institute of Allergy and Infectious Diseases, National Institutes of Health, Frederick, Maryland, USA
- Claudia Calcagno
- Integrated Research Facility at Fort Detrick, Division of Clinical Research, National Institute of Allergy and Infectious Diseases, National Institutes of Health, Frederick, Maryland, USA
- Ian Crozier
- Clinical Monitoring Research Program Directorate, Frederick National Laboratory for Cancer Research, Frederick, Maryland, USA
- Jens H. Kuhn
- Integrated Research Facility at Fort Detrick, Division of Clinical Research, National Institute of Allergy and Infectious Diseases, National Institutes of Health, Frederick, Maryland, USA
- Lisa E. Hensley
- Integrated Research Facility at Fort Detrick, Division of Clinical Research, National Institute of Allergy and Infectious Diseases, National Institutes of Health, Frederick, Maryland, USA
- Irwin M. Feuerstein
- Integrated Research Facility at Fort Detrick, Division of Clinical Research, National Institute of Allergy and Infectious Diseases, National Institutes of Health, Frederick, Maryland, USA
- Venkatesh Mani
- Integrated Research Facility at Fort Detrick, Division of Clinical Research, National Institute of Allergy and Infectious Diseases, National Institutes of Health, Frederick, Maryland, USA

42
Cannon PC, Ferguson JM, Pitt EB, Shrand JA, Setia SA, Nimmagadda N, Barth EJ, Kavoussi NL, Galloway RL, Herrell SD, Webster RJ. A Safe Framework for Quantitative In Vivo Human Evaluation of Image Guidance. IEEE OPEN JOURNAL OF ENGINEERING IN MEDICINE AND BIOLOGY 2023; 5:133-139. [PMID: 38487093 PMCID: PMC10939321 DOI: 10.1109/ojemb.2023.3271853] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/30/2022] [Revised: 02/16/2023] [Accepted: 03/27/2023] [Indexed: 03/17/2024] Open
Abstract
Goal: We present a new framework for in vivo image guidance evaluation and provide a case study on robotic partial nephrectomy. Methods: This framework (called the "bystander protocol") involves two surgeons: one who solely performs the therapeutic process without image guidance, and another who solely and periodically collects data to evaluate image guidance. This isolates the evaluation from the therapy, so that in-development image guidance systems can be tested without risk of negatively impacting the standard of care. We provide a case study applying this protocol in clinical cases during robotic partial nephrectomy surgery. Results: The bystander protocol was performed successfully in 6 patient cases. We found the average lesion centroid localization error with our image guidance system (IGS) to be 6.5 mm in vivo, compared to our prior result of 3.0 mm in phantoms. Conclusions: The bystander protocol is a safe, effective method for testing in-development image guidance systems in human subjects.
Affiliation(s)
- Naren Nimmagadda
- Vanderbilt University Medical Center, Nashville, TN 37232, USA
- The Johns Hopkins University School of Medicine, Baltimore, MD 21287, USA

43
Les T, Markiewicz T, Dziekiewicz M, Gallego J, Swiderska-Chadaj Z, Lorent M. Localization of spleen and kidney organs from CT scans based on classification of slices in rotational views. Sci Rep 2023; 13:5709. [PMID: 37029169 PMCID: PMC10082200 DOI: 10.1038/s41598-023-32741-y] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/07/2022] [Accepted: 03/31/2023] [Indexed: 04/09/2023] Open
Abstract
This article presents a novel multiple-organ localization and tracking technique applied to spleen and kidney regions in computed tomography images. The proposed solution is based on a unique approach of classifying regions in different spatial projections (e.g., the side projection) using convolutional neural networks. Our procedure merges classification results from the different projections into a 3D segmentation. The proposed system is able to recognize the contour of the organ with an accuracy of 88-89%, depending on the organ. Research has shown that a single method can be useful for detecting different organs: the kidney and the spleen. Our solution can compete with U-Net-based solutions in terms of hardware requirements, as it has significantly lower demands, and it gives better results on small data sets. Another advantage of our solution is a significantly lower training time on an equally sized data set and greater capability to parallelize calculations. The proposed system enables visualization, localization and tracking of organs and is therefore a valuable tool in medical diagnostic problems.
Affiliation(s)
- Tomasz Les
- University of Technology, Plac Politechniki 1, 00-661, Warsaw, Poland
- Tomasz Markiewicz
- University of Technology, Plac Politechniki 1, 00-661, Warsaw, Poland
- Military Institute of Medicine, Szaserów 128, 04-141, Warsaw, Poland
- Jaime Gallego
- University of Barcelona, Gran Via de les Corts Catalanes, 08007, Barcelona, Spain
- Malgorzata Lorent
- Military Institute of Medicine, Szaserów 128, 04-141, Warsaw, Poland

44
Rashid T, Sultana S, Chakravarty M, Audette MA. Atlas-Based Shared-Boundary Deformable Multi-Surface Models through Multi-Material and Two-Manifold Dual Contouring. INFORMATION 2023. [DOI: 10.3390/info14040220] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 04/07/2023] Open
Abstract
This paper presents a multi-material dual contouring method used to convert a digital 3D voxel-based atlas of basal ganglia to a deformable discrete multi-surface model that supports surgical navigation for an intraoperative MRI-compatible surgical robot, featuring fast intraoperative deformation computation. It is vital that the final surface model maintain shared boundaries where appropriate so that even as the deep-brain model deforms to reflect intraoperative changes encoded in ioMRI, the subthalamic nucleus stays in contact with the substantia nigra, for example, while still providing a significantly sparser representation than the original volumetric atlas consisting of hundreds of millions of voxels. The dual contouring (DC) algorithm is a grid-based process used to generate surface meshes from volumetric data. The DC method enables the insertion of vertices anywhere inside the grid cube, as opposed to the marching cubes (MC) algorithm, which can insert vertices only on the grid edges. This multi-material DC method is then applied to initialize, by duality, a deformable multi-surface simplex model, which can be used for multi-surface atlas-based segmentation. We demonstrate our proposed method on synthetic and deep-brain atlas data, and a comparison of our method’s results with those of traditional DC demonstrates its effectiveness.
Affiliation(s)
- Tanweer Rashid
- Neuroimage Analytics Laboratory, Glenn Biggs Institute for Alzheimer’s and Neurodegenerative Diseases, University of Texas Health Science Center at San Antonio, San Antonio, TX 78229, USA
- Sharmin Sultana
- Information Sciences and Technology, George Mason University, Fairfax, VA 22030, USA
- Mallar Chakravarty
- Brain Imaging Centre, Douglas Research Centre, Montréal, QC H4H 1R3, Canada

45
Bhattarai B, Subedi R, Gaire RR, Vazquez E, Stoyanov D. Histogram of Oriented Gradients meet deep learning: A novel multi-task deep network for 2D surgical image semantic segmentation. Med Image Anal 2023; 85:102747. [PMID: 36702038 PMCID: PMC10626764 DOI: 10.1016/j.media.2023.102747] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/30/2022] [Revised: 12/01/2022] [Accepted: 01/05/2023] [Indexed: 01/15/2023]
Abstract
We present our novel deep multi-task learning method for medical image segmentation. Existing multi-task methods demand ground-truth annotations for both the primary and auxiliary tasks. In contrast, we propose to generate the pseudo-labels of an auxiliary task in an unsupervised manner. To generate the pseudo-labels, we leverage Histograms of Oriented Gradients (HOGs), one of the most widely used and powerful hand-crafted features for detection. Together with the ground-truth semantic segmentation masks for the primary task and pseudo-labels for the auxiliary task, we learn the parameters of the deep network to minimize the losses of the primary task and the auxiliary task jointly. We employed our method on two powerful and widely used semantic segmentation networks, UNet and U2Net, trained in a multi-task setup. To validate our hypothesis, we performed experiments on two different medical image segmentation data sets. From the extensive quantitative and qualitative results, we observe that our method consistently improves performance compared to the counterpart method. Moreover, our method is the winner of the FetReg Endovis Sub-challenge on Semantic Segmentation organised in conjunction with MICCAI 2021. Code and implementation details are available at: https://github.com/thetna/medical_image_segmentation.
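The HOG features used above bin image-gradient orientations, weighted by gradient magnitude, over local cells. Real pipelines use optimized implementations (e.g. `skimage.feature.hog`); the toy version below, which is only a sketch and not the paper's pseudo-label generator, illustrates the core computation for a single cell using central differences:

```python
# Toy HOG cell histogram: unsigned orientations (0-180 degrees), weighted by
# gradient magnitude. Illustrative only; border pixels are skipped for brevity.
import math

def hog_cell(cell, n_bins=9):
    """Orientation histogram for one 2D grayscale cell (list of rows)."""
    h, w = len(cell), len(cell[0])
    hist = [0.0] * n_bins
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            gx = cell[y][x + 1] - cell[y][x - 1]   # central difference in x
            gy = cell[y + 1][x] - cell[y - 1][x]   # central difference in y
            mag = math.hypot(gx, gy)
            ang = math.degrees(math.atan2(gy, gx)) % 180.0
            hist[int(ang / 180.0 * n_bins) % n_bins] += mag
    return hist

# A vertical edge: intensity steps from 0 to 10 along x, so all gradient
# energy falls in the bin containing 0 degrees (horizontal gradient).
cell = [[0, 0, 10, 10]] * 4
h = hog_cell(cell)  # → [40.0, 0, 0, 0, 0, 0, 0, 0, 0]
```

Concatenating such histograms over a grid of cells yields a dense descriptor that can serve as an unsupervised regression target for an auxiliary head.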
Affiliation(s)
- Ronast Subedi
- Nepal Applied Mathematics and Informatics Institute for research (NAAMII), Nepal
- Rebati Raman Gaire
- Nepal Applied Mathematics and Informatics Institute for research (NAAMII), Nepal

46
Rabbani A, Babaei M, Gharib M. Automated segmentation and morphological characterization of placental intervillous space based on a single labeled image. Micron 2023; 169:103448. [PMID: 36965271 DOI: 10.1016/j.micron.2023.103448] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/04/2023] [Revised: 03/19/2023] [Accepted: 03/20/2023] [Indexed: 03/27/2023]
Abstract
In this study, a novel method of data augmentation has been presented for the segmentation of placental histological images when the labeled data are scarce. This method generates new realizations of the placenta intervillous morphology while maintaining the general textures and orientations. As a result, a diversified artificial dataset of images is generated that can be used for training deep learning segmentation models. We have observed that on average the presented method of data augmentation led to a 42% decrease in the binary cross-entropy loss of the validation dataset compared to the common approach in the literature. Additionally, the morphology of the intervillous space is studied under the effect of the proposed image reconstruction technique, and the diversity of the artificially generated population is quantified. We have demonstrated that the proposed method results in a more accurate morphological characterization of the placental intervillous space with an average feature relative error of 6.5%, which is significantly lower than the 11.5% error observed with conventional augmentation techniques. Due to the high resemblance of the generated images to the real ones, applications of the proposed method may not be limited to placental histological images, and it is recommended that other types of tissue be investigated in future studies.
Affiliation(s)
- Arash Rabbani
- School of Computing, University of Leeds, Leeds, UK
- Masoud Babaei
- School of Chemical Engineering and Analytical Science, The University of Manchester, Manchester, UK
- Masoumeh Gharib
- Department of Pathology, Mashhad University of Medical Sciences, Mashhad, Iran

47
Comparative validation of AI and non-AI methods in MRI volumetry to diagnose Parkinsonian syndromes. Sci Rep 2023; 13:3439. [PMID: 36859498 PMCID: PMC10156821 DOI: 10.1038/s41598-023-30381-w] [Citation(s) in RCA: 1] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/15/2022] [Accepted: 02/21/2023] [Indexed: 03/03/2023] Open
Abstract
Automated segmentation and volumetry of brain magnetic resonance imaging (MRI) scans are essential for the diagnosis of Parkinson's disease (PD) and Parkinson's plus syndromes (P-plus). To enhance diagnostic performance, we adopted deep learning (DL) models for brain MRI segmentation and compared their performance with the gold-standard non-DL method. We collected brain MRI scans of healthy controls ([Formula: see text]) and patients with PD ([Formula: see text]), multiple system atrophy ([Formula: see text]), and progressive supranuclear palsy ([Formula: see text]) at Samsung Medical Center from January 2017 to December 2020. Using the gold-standard non-DL model, FreeSurfer (FS), we segmented six brain structures: midbrain, pons, caudate, putamen, pallidum, and third ventricle, and considered them as annotated data for the DL models, the representative convolutional neural network (CNN) and vision transformer (ViT)-based models. Dice scores and the area under the curve (AUC) for differentiating normal, PD, and P-plus cases were calculated to determine the extent to which FS performance can be reproduced as-is, while increasing speed, by the DL approaches. The segmentation times of the CNN and ViT for the six brain structures per patient were 51.26 ± 2.50 and 1101.82 ± 22.31 s, respectively, 14 to 300 times faster than FS (15,735 ± 1.07 s). Dice scores of both DL models were sufficiently high (> 0.85), so their AUCs for disease classification were not inferior to that of FS. For classification of normal vs. P-plus and PD vs. P-plus (except multiple system atrophy - parkinsonian type) based on all brain parts, the DL models and FS showed AUCs above 0.8, demonstrating the clinical value of the DL models in addition to FS. DL significantly reduces the analysis time without compromising the performance of brain segmentation and differential diagnosis. Our findings may contribute to the adoption of DL brain MRI segmentation in clinical settings and advance brain research.
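The AUC used for the disease-classification comparison above can be computed without an ML library via its rank-statistic (Mann-Whitney U) equivalence: the probability that a randomly chosen positive case scores higher than a randomly chosen negative one. This is a generic illustration of the metric, not the study's evaluation code, and the scores below are hypothetical:

```python
def auc(scores_pos, scores_neg):
    """AUC = P(score_pos > score_neg) + 0.5 * P(tie), by pairwise comparison."""
    wins = ties = 0
    for sp in scores_pos:
        for sn in scores_neg:
            if sp > sn:
                wins += 1
            elif sp == sn:
                ties += 1
    return (wins + 0.5 * ties) / (len(scores_pos) * len(scores_neg))

# Hypothetical classifier scores for diseased (positive) vs. normal (negative):
a = auc([0.9, 0.8, 0.6], [0.7, 0.4, 0.3])  # 8 of 9 pairs ranked correctly
```

An AUC above 0.8, as reported for the DL models and FS, means more than 80% of diseased/normal pairs are ranked correctly by the volumetric features.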
48
Applying machine learning methods to enable automatic customisation of knee replacement implants from CT data. Sci Rep 2023; 13:3317. [PMID: 36849812 PMCID: PMC9971034 DOI: 10.1038/s41598-023-30483-5] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/22/2022] [Accepted: 02/23/2023] [Indexed: 03/01/2023] Open
Abstract
The aim of this study was to develop an automated pipeline capable of designing custom total knee replacement implants from CT scans. The developed pipeline firstly utilised a series of machine learning methods including classification, object detection, and image segmentation models, to extract geometrical information from inputted DICOM files. Statistical shape models then used the information to create femur and tibia 3D surface model predictions which were ultimately used by computer aided design scripts to generate customised implant designs. The developed pipeline was trained and tested using CT scan images, along with segmented 3D models, obtained for 98 Korean Asian subjects. The performance of the pipeline was tested computationally by virtually fitting outputted implant designs with 'ground truth' 3D models for each test subject's bones. This demonstrated the pipeline was capable of repeatably producing highly accurate designs, and its performance was not impacted by subject sex, height, age, or knee side. In conclusion, a robust, accurate and automatic, CT-based total knee replacement customisation pipeline was shown to be feasible and could afford significant time and cost advantages over conventional methods. The pipeline framework could also be adapted to enable customisation of other medical implants.
49
Wang L, Hu H, Li Z, Dai W. Splitting ore from X-ray image based on improved robust concave-point algorithm. PeerJ Comput Sci 2023; 9:e1263. [PMID: 37346626 PMCID: PMC10280491 DOI: 10.7717/peerj-cs.1263] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/27/2022] [Accepted: 02/01/2023] [Indexed: 06/23/2023]
Abstract
Image segmentation is a key part of the X-ray-image-based ore separation process, and its result directly affects the accuracy of ore classification. In ore production, conventional segmentation methods struggle to meet the requirements of real-time performance, robustness and accuracy. To solve these problems, this article proposes an ore segmentation method for pseudo-dual-energy X-ray images that is composed of a contour extraction module, a concave point detection module and a concave point matching module. In the contour extraction module, the image is first split into high-energy and low-energy parts, and an adaptive threshold is used to obtain the ore binary image. After filtering and morphological operations, the image contour is obtained from the binary image. The concave point detection module uses vectors to detect concave points on the contour. As the main contribution of this article, the concave point matching module removes the influence of boundary-interference concave points by drawing an auxiliary line and judging the relative position of the auxiliary line and the ore contour. With the matching concave points connected, the ore segmentation is completed. To verify the effectiveness of this method, a comparative experiment was conducted between the proposed method and a conventional segmentation method using X-ray images of antimony ore as data samples. The industrial experiment shows that the proposed intelligent segmentation method can remove the interference of pseudo concave points on the contour, achieve accurate segmentation results, and satisfy the requirements of processing X-ray images of ore.
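Vector-based concave-point detection, as in the detection module above, typically flags contour points where the boundary turns inward: for a counter-clockwise polygon, a negative z-component of the cross product of successive edge vectors marks a concave (reflex) vertex. The sketch below shows only this detection idea; the paper's auxiliary-line matching step is not reproduced:

```python
def concave_points(contour):
    """Return indices of concave vertices of a CCW-ordered simple polygon."""
    n = len(contour)
    out = []
    for i in range(n):
        (x0, y0), (x1, y1), (x2, y2) = (contour[i - 1], contour[i],
                                        contour[(i + 1) % n])
        # z-component of cross product of edge vectors (prev->cur, cur->next)
        cross = (x1 - x0) * (y2 - y1) - (y1 - y0) * (x2 - x1)
        if cross < 0:          # right turn in a CCW polygon => concave vertex
            out.append(i)
    return out

# An arrow-like CCW polygon with one reflex vertex at (1, 1):
poly = [(0, 0), (2, 0), (2, 2), (1, 1), (0, 2)]
idx = concave_points(poly)  # → [3]
```

On a real ore contour, pairs of such points on opposite sides of a touching boundary are then matched and connected to split adjoining ore fragments.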
Affiliation(s)
- Lanhao Wang
- National Engineering Research Center of Coal Preparation and Purification, China University of Mining Technology, Xuzhou, China
- Hongdong Hu
- School of Information and Control Engineering, China University of Mining Technology, Xuzhou, China
- Wei Dai
- School of Information and Control Engineering, China University of Mining Technology, Xuzhou, China

50
Nazarudin AA, Zulkarnain N, Mokri SS, Zaki WMDW, Hussain A, Ahmad MF, Nordin INAM. Performance Analysis of a Novel Hybrid Segmentation Method for Polycystic Ovarian Syndrome Monitoring. Diagnostics (Basel) 2023; 13:750. [PMID: 36832237 PMCID: PMC9954948 DOI: 10.3390/diagnostics13040750] [Citation(s) in RCA: 2] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/16/2023] [Revised: 02/11/2023] [Accepted: 02/13/2023] [Indexed: 02/18/2023] Open
Abstract
Experts have used ultrasound imaging to manually determine follicle count and perform measurements, especially in cases of polycystic ovary syndrome (PCOS). However, due to the laborious and error-prone process of manual diagnosis, researchers have explored and developed medical image processing techniques to help with diagnosing and monitoring PCOS. This study proposes a combination of Otsu's thresholding with the Chan-Vese method to segment and identify follicles in the ovary with reference to ultrasound images marked by a medical practitioner. Otsu's thresholding highlights the pixel intensities of the image and creates a binary mask for use with the Chan-Vese method to define the boundary of the follicles. The acquired results were compared between the classical Chan-Vese method and the proposed method. The performances of the methods were evaluated in terms of accuracy, Dice score, Jaccard index and sensitivity. In overall segmentation evaluation, the proposed method showed superior results compared to the classical Chan-Vese method. Among the calculated evaluation metrics, the sensitivity of the proposed method was superior, with an average of 0.74 ± 0.12. Meanwhile, the average sensitivity for the classical Chan-Vese method was 0.54 ± 0.14, which is 20.03% lower than the sensitivity of the proposed method. Moreover, the proposed method showed significantly improved Dice score (p = 0.011), Jaccard index (p = 0.008) and sensitivity (p = 0.0001). This study showed that the combination of Otsu's thresholding and the Chan-Vese method enhanced the segmentation of ultrasound images.
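Otsu's thresholding, the first stage of the hybrid pipeline above, picks the gray-level threshold that maximizes the between-class variance of the image histogram. A minimal sketch for 8-bit intensities follows (the Chan-Vese refinement stage is omitted, and the toy "image" is an invented bimodal example):

```python
def otsu_threshold(pixels):
    """Return the Otsu threshold t: pixels <= t are background, > t foreground."""
    hist = [0] * 256
    for p in pixels:
        hist[p] += 1
    total = len(pixels)
    total_sum = sum(i * h for i, h in enumerate(hist))
    w_bg = sum_bg = 0
    best_t, best_var = 0, -1.0
    for t in range(256):
        w_bg += hist[t]                  # background weight up to t
        if w_bg == 0 or w_bg == total:
            continue
        sum_bg += t * hist[t]
        mean_bg = sum_bg / w_bg
        mean_fg = (total_sum - sum_bg) / (total - w_bg)
        # Between-class variance (up to a constant factor of 1/total**2).
        var_between = w_bg * (total - w_bg) * (mean_bg - mean_fg) ** 2
        if var_between > best_var:
            best_var, best_t = var_between, t
    return best_t

# A bimodal toy image: dark background around 10-12, bright region around 200.
pixels = [10] * 50 + [12] * 50 + [200] * 20 + [205] * 20
t = otsu_threshold(pixels)
mask = [1 if p > t else 0 for p in pixels]  # binary mask, as fed to Chan-Vese
```

In the proposed pipeline, a mask like this one initializes the Chan-Vese level-set evolution, which then refines the follicle boundaries.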
Affiliation(s)
- Asma’ Amirah Nazarudin
- Department of Electrical, Electronic and Systems Engineering, Faculty of Engineering and Built Environment, Universiti Kebangsaan Malaysia, Bangi 43600, Selangor, Malaysia
- Noraishikin Zulkarnain
- Department of Electrical, Electronic and Systems Engineering, Faculty of Engineering and Built Environment, Universiti Kebangsaan Malaysia, Bangi 43600, Selangor, Malaysia
- Siti Salasiah Mokri
- Department of Electrical, Electronic and Systems Engineering, Faculty of Engineering and Built Environment, Universiti Kebangsaan Malaysia, Bangi 43600, Selangor, Malaysia
- Wan Mimi Diyana Wan Zaki
- Department of Electrical, Electronic and Systems Engineering, Faculty of Engineering and Built Environment, Universiti Kebangsaan Malaysia, Bangi 43600, Selangor, Malaysia
- Aini Hussain
- Department of Electrical, Electronic and Systems Engineering, Faculty of Engineering and Built Environment, Universiti Kebangsaan Malaysia, Bangi 43600, Selangor, Malaysia
- Mohd Faizal Ahmad
- Advanced Reproductive Centre, Department of Obstetrics and Gynaecology, Faculty of Medicine, Kuala Lumpur Campus, Universiti Kebangsaan Malaysia, Cheras 56000, Kuala Lumpur, Malaysia
- Ili Najaa Aimi Mohd Nordin
- Department of Electrical Engineering Technology, Faculty of Engineering Technology, Universiti Tun Hussein Onn Malaysia, Bandar Universiti Pagoh, KM1, Panchor, Pagoh 86400, Johor, Malaysia