201
Ibtehaz N, Rahman MS. MultiResUNet: Rethinking the U-Net architecture for multimodal biomedical image segmentation. Neural Netw 2020; 121:74-87. [DOI: 10.1016/j.neunet.2019.08.025]
202
Remedios SW, Roy S, Bermudez C, Patel MB, Butman JA, Landman BA, Pham DL. Distributed deep learning across multisite datasets for generalized CT hemorrhage segmentation. Med Phys 2020; 47:89-98. [PMID: 31660621] [PMCID: PMC6983946] [DOI: 10.1002/mp.13880]
Abstract
PURPOSE As deep neural networks achieve more success in the wide field of computer vision, greater emphasis is being placed on the generalizations of these models for production deployment. With sufficiently large training datasets, models can typically avoid overfitting their data; however, for medical imaging it is often difficult to obtain enough data from a single site. Sharing data between institutions is also frequently nonviable or prohibited due to security measures and research compliance constraints, enforced to guard protected health information (PHI) and patient anonymity. METHODS In this paper, we implement cyclic weight transfer with independent datasets from multiple geographically disparate sites without compromising PHI. We compare results between single-site learning (SSL) and multisite learning (MSL) models on testing data drawn from each of the training sites as well as two other institutions. RESULTS The MSL model attains an average dice similarity coefficient (DSC) of 0.690 on the holdout institution datasets with a volume correlation of 0.914, respectively corresponding to a 7% and 5% statistically significant improvement over the average of both SSL models, which attained an average DSC of 0.646 and average correlation of 0.871. CONCLUSIONS We show that a neural network can be efficiently trained on data from two physically remote sites without consolidating patient data to a single location. The resulting network improves model generalization and achieves higher average DSCs on external datasets than neural networks trained on data from a single source.
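The Dice similarity coefficient (DSC) used above to score the MSL and SSL models is a standard overlap metric and can be computed directly from binary masks. A minimal pure-Python sketch (the masks and values below are invented for illustration, not drawn from the study):

```python
def dice(mask_a, mask_b):
    """Dice similarity coefficient between two binary masks (flat lists of 0/1)."""
    intersection = sum(a & b for a, b in zip(mask_a, mask_b))
    total = sum(mask_a) + sum(mask_b)
    # Convention: two empty masks agree perfectly.
    return 2.0 * intersection / total if total else 1.0

# Illustrative example: predicted vs. ground-truth "hemorrhage" masks.
pred = [0, 1, 1, 1, 0, 0]
truth = [0, 0, 1, 1, 1, 0]
print(round(dice(pred, truth), 3))  # 2*2/(3+3) = 0.667
```

A DSC of 1.0 means perfect overlap; the 0.690 vs. 0.646 gap reported above is an average of such per-scan scores.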
Affiliation(s)
- Samuel W. Remedios
- Center for Neuroscience and Regenerative Medicine, Henry Jackson Foundation
- Radiology and Imaging Sciences, Clinical Center, National Institutes of Health
- Department of Computer Science, Middle Tennessee State University
- Department of Electrical Engineering, Vanderbilt University
- Snehashis Roy
- Center for Neuroscience and Regenerative Medicine, Henry Jackson Foundation
- Mayur B. Patel
- Departments of Surgery, Neurosurgery, Hearing & Speech Sciences; Center for Health Services Research, Vanderbilt Brain Institute; Critical Illness, Brain Dysfunction, and Survivorship Center, Vanderbilt University Medical Center; VA Tennessee Valley Healthcare System, Department of Veterans Affairs Medical Center
- John A. Butman
- Radiology and Imaging Sciences, Clinical Center, National Institutes of Health
- Bennett A. Landman
- Department of Electrical Engineering, Vanderbilt University
- Department of Biomedical Engineering, Vanderbilt University
- Department of Computer Science, Vanderbilt University
- Dzung L. Pham
- Center for Neuroscience and Regenerative Medicine, Henry Jackson Foundation
- Radiology and Imaging Sciences, Clinical Center, National Institutes of Health
203
Three-Dimensional Visualisation of Skeletal Cavities. Adv Exp Med Biol 2019; 1171:73-83. [PMID: 31823241] [DOI: 10.1007/978-3-030-24281-7_7]
Abstract
Bones contain spaces within them. The extraction and analysis of those cavities are crucial in the study of bone tissue function and can reveal pathologies or past traumatic events. Medical imaging techniques allow a non-invasive visualisation of skeletal cavities, opening a new frontier in medical inspection and diagnosis. Here, we report the application of a new mesh-based approach for the isolation of skeletal cavities of different sizes and geometrical structures. We apply a mesh-based approach to extract (i) the main virtual cavities inside the human skull, (ii) a complete human endocast, (iii) the inner vasculature of the malleus bone and (iv) the medullary cavity of a human femur. The detailed description of the mesh-based isolation method and its pioneering application to four different case studies shows the potential of this approach in medical visualisation.
204
A Phantom Investigation to Quantify Huygens Principle Based Microwave Imaging for Bone Lesion Detection. Electronics 2019. [DOI: 10.3390/electronics8121505]
Abstract
This paper demonstrates the outcomes of a feasibility study of a microwave imaging procedure based on the Huygens principle for bone lesion detection. This study has been performed using a dedicated phantom and validated through measurements in the frequency range of 1–3 GHz using one receiving and one transmitting antenna in free space. Specifically, a multilayered bone phantom, which is comprised of cortical bone and bone marrow layers, was fabricated. The identification of the lesion’s presence in different bone layers was performed on images that were derived after processing through Huygens’ principle, the S21 signals measured inside an anechoic chamber in multi-bistatic fashion. The quantification of the obtained images was carried out by introducing parameters such as the resolution and signal-to-clutter ratio (SCR). The impact of different frequencies and bandwidths (in the 1–3 GHz range) in lesion detection was investigated. The findings showed that the frequency range of 1.5–2.5 GHz offered the best resolution (1.1 cm) and SCR (2.22 on a linear scale). Subtraction between S21 obtained using two slightly displaced transmitting positions was employed to remove the artefacts; the best artefact removal was obtained when the spatial displacement was approximately of the same magnitude as the dimension of the lesion.
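The signal-to-clutter ratio (SCR) used to quantify the reconstructed images is, in essence, a peak ratio. A sketch under the assumption that SCR is the peak reconstructed intensity inside the lesion region divided by the peak intensity elsewhere (the authors' exact definition may differ, and the profile values below are invented):

```python
def signal_to_clutter(image, target_region):
    """Linear-scale SCR: peak intensity inside the assumed lesion region
    divided by the peak intensity outside it (clutter)."""
    target_peak = max(image[i] for i in target_region)
    clutter_peak = max(v for i, v in enumerate(image) if i not in target_region)
    return target_peak / clutter_peak

# Illustrative 1D profile of a reconstructed image (values are made up).
profile = [0.1, 0.2, 0.9, 2.0, 0.4, 0.3]
print(round(signal_to_clutter(profile, {2, 3}), 2))  # 2.0 / 0.4 = 5.0
```

On this scale, the 2.22 reported above means the lesion response stood a bit more than twice as high as the strongest clutter artefact.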
205
Lenchik L, Heacock L, Weaver AA, Boutin RD, Cook TS, Itri J, Filippi CG, Gullapalli RP, Lee J, Zagurovskaya M, Retson T, Godwin K, Nicholson J, Narayana PA. Automated Segmentation of Tissues Using CT and MRI: A Systematic Review. Acad Radiol 2019; 26:1695-1706. [PMID: 31405724] [PMCID: PMC6878163] [DOI: 10.1016/j.acra.2019.07.006]
Abstract
RATIONALE AND OBJECTIVES The automated segmentation of organs and tissues throughout the body using computed tomography and magnetic resonance imaging has been rapidly increasing. Research into many medical conditions has benefited greatly from these approaches by allowing the development of more rapid and reproducible quantitative imaging markers. These markers have been used to help diagnose disease, determine prognosis, select patients for therapy, and follow responses to therapy. Because some of these tools are now transitioning from research environments to clinical practice, it is important for radiologists to become familiar with various methods used for automated segmentation. MATERIALS AND METHODS The Radiology Research Alliance of the Association of University Radiologists convened an Automated Segmentation Task Force to conduct a systematic review of the peer-reviewed literature on this topic. RESULTS The systematic review presented here includes 408 studies and discusses various approaches to automated segmentation using computed tomography and magnetic resonance imaging for neurologic, thoracic, abdominal, musculoskeletal, and breast imaging applications. CONCLUSION These insights should help prepare radiologists to better evaluate automated segmentation tools and apply them not only to research, but eventually to clinical practice.
Affiliation(s)
- Leon Lenchik
- Department of Radiology, Wake Forest School of Medicine, Medical Center Boulevard, Winston-Salem, NC 27157
- Laura Heacock
- Department of Radiology, NYU Langone, New York, New York
- Ashley A Weaver
- Department of Biomedical Engineering, Wake Forest School of Medicine, Winston-Salem, North Carolina
- Robert D Boutin
- Department of Radiology, University of California Davis School of Medicine, Sacramento, California
- Tessa S Cook
- Department of Radiology, University of Pennsylvania, Philadelphia, Pennsylvania
- Jason Itri
- Department of Radiology, Wake Forest School of Medicine, Medical Center Boulevard, Winston-Salem, NC 27157
- Christopher G Filippi
- Department of Radiology, Donald and Barbara Zucker School of Medicine at Hofstra/Northwell, Lenox Hill Hospital, New York, New York
- Rao P Gullapalli
- Department of Radiology, University of Maryland School of Medicine, Baltimore, Maryland
- James Lee
- Department of Radiology, University of Kentucky, Lexington, Kentucky
- Tara Retson
- Department of Radiology, University of California San Diego, San Diego, California
- Kendra Godwin
- Medical Library, Memorial Sloan Kettering Cancer Center, New York, New York
- Joey Nicholson
- NYU Health Sciences Library, NYU School of Medicine, NYU Langone Health, New York, New York
- Ponnada A Narayana
- Department of Diagnostic and Interventional Imaging, McGovern Medical School, University of Texas Health Science Center at Houston, Houston, Texas
206
Hassan A, Ghafoor M, Tariq SA, Zia T, Ahmad W. High Efficiency Video Coding (HEVC)-Based Surgical Telementoring System Using Shallow Convolutional Neural Network. J Digit Imaging 2019; 32:1027-1043. [PMID: 30980262] [PMCID: PMC6841856] [DOI: 10.1007/s10278-019-00206-2]
Abstract
Surgical telementoring systems have attracted considerable interest, especially for remote locations. However, bandwidth constraints have been the primary bottleneck for efficient telementoring systems. This study aims to establish an efficient surgical telementoring system, in which a qualified surgeon (mentor) provides real-time guidance and technical assistance for surgical procedures to the on-spot physician (surgeon). High Efficiency Video Coding (HEVC/H.265)-based video compression has shown promising results for telementoring applications. However, there is a trade-off between the bandwidth resources required for video transmission and the quality of the video received by the remote surgeon. To compress and transmit real-time surgical videos efficiently, a hybrid lossless-lossy approach is proposed in which the surgical incision region is coded in high quality, whereas the background region is coded in low quality based on its distance from the surgical incision region. For surgical incision region extraction, state-of-the-art deep learning (DL) architectures for semantic segmentation can be used. However, the computational complexity of these architectures is high, resulting in long training and inference times. For telementoring systems, encoding time is crucial; therefore, very deep architectures are not suitable for surgical incision extraction. In this study, we propose a shallow convolutional neural network (S-CNN)-based segmentation approach that consists of an encoder network only for surgical region extraction. The segmentation performance of the S-CNN is compared with one of the state-of-the-art image segmentation networks (SegNet), and the results demonstrate the effectiveness of the proposed network. The proposed telementoring system is efficient and explicitly considers the physiological nature of the human visual system, encoding the video to provide good overall visual impact in the location of surgery.
The proposed S-CNN-based segmentation demonstrated a pixel accuracy of 97% and a mean intersection-over-union accuracy of 79%. Similarly, HEVC experiments showed that the proposed surgical-region-based encoding scheme achieved an average bitrate reduction of 88.8% at high-quality settings compared with default full-frame HEVC encoding. The average gain in encoding performance (signal-to-noise) of the proposed algorithm is 11.5 dB in the surgical region. The bitrate saving and visual quality of the proposed optimal bit allocation scheme are compared with a mean-shift-segmentation-based coding scheme for fair comparison. The results show that the proposed scheme maintains high visual quality in the surgical incision region while achieving good bitrate savings. Based on these comparisons and results, the proposed encoding algorithm can be considered an efficient and effective solution for surgical telementoring over low-bandwidth networks.
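Pixel accuracy and mean intersection-over-union (mIoU), the two segmentation scores reported above, are standard metrics. A minimal sketch over flat label lists (the labels below are illustrative, not from the paper):

```python
def pixel_accuracy(pred, truth):
    """Fraction of pixels whose predicted class matches the ground truth."""
    return sum(p == t for p, t in zip(pred, truth)) / len(truth)

def mean_iou(pred, truth, classes):
    """Mean intersection-over-union across classes (flat label lists);
    classes absent from both maps are skipped."""
    ious = []
    for c in classes:
        inter = sum(p == c and t == c for p, t in zip(pred, truth))
        union = sum(p == c or t == c for p, t in zip(pred, truth))
        if union:
            ious.append(inter / union)
    return sum(ious) / len(ious)

# Illustrative labels: 1 = surgical incision region, 0 = background.
pred = [0, 0, 1, 1, 1, 0]
truth = [0, 1, 1, 1, 0, 0]
print(pixel_accuracy(pred, truth))               # 4/6
print(round(mean_iou(pred, truth, [0, 1]), 3))   # 0.5
```

mIoU is the stricter of the two, which is why the 79% mIoU sits well below the 97% pixel accuracy.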
Affiliation(s)
- Ali Hassan
- Department of Computer Science, COMSATS University, Islamabad, Pakistan
- Mubeen Ghafoor
- Department of Computer Science, COMSATS University, Islamabad, Pakistan
- Syed Ali Tariq
- Department of Computer Science, COMSATS University, Islamabad, Pakistan
- Tehseen Zia
- Department of Computer Science, COMSATS University, Islamabad, Pakistan
- Waqas Ahmad
- Department of Information Systems and Technology, Mid Sweden University, Sundsvall, Sweden
207
de Roos A, Tao Q. Predicting Atrial Fibrillation from Automated Measurements of Left Atrial Volume Using Routine Chest CT Examination: Overlooked and Underrecognized Risk Factors. Radiol Cardiothorac Imaging 2019; 1:e190217. [PMID: 33779626] [PMCID: PMC7977714] [DOI: 10.1148/ryct.2019190217]
Affiliation(s)
- Albert de Roos
- Department of Radiology, Leiden University Medical Center, Albinusdreef 2, C2-S, Leiden, South-Holland 2333 ZA, the Netherlands
- Qian Tao
- Department of Radiology, Leiden University Medical Center, Albinusdreef 2, C2-S, Leiden, South-Holland 2333 ZA, the Netherlands
208
Zhuang X. Multivariate Mixture Model for Myocardial Segmentation Combining Multi-Source Images. IEEE Trans Pattern Anal Mach Intell 2019; 41:2933-2946. [PMID: 30207950] [DOI: 10.1109/tpami.2018.2869576]
Abstract
The author proposes a method for simultaneous registration and segmentation of multi-source images, using the multivariate mixture model (MvMM) within a maximum log-likelihood (LL) framework. Specifically, the method is applied to myocardial segmentation combining the complementary information from multi-sequence (MS) cardiac magnetic resonance (CMR) images. To handle image misalignment and incongruent data, the MvMM is formulated with transformations and is further generalized to deal with hetero-coverage multi-modality images (HC-MMIs). The MvMM segmentation is performed in a virtual common space, to which all the images and misaligned slices are simultaneously registered. Furthermore, this common space can be divided into a number of sub-regions, each containing congruent data, so that an HC-MMI can be modeled using a set of conventional MvMMs. Results show that the MvMM obtained significantly better performance than conventional approaches and demonstrated good potential for scar quantification as well as myocardial segmentation. The generalized MvMM also demonstrated better robustness on incongruent data, where some images may not fully cover the region of interest and full coverage can only be reconstructed by combining the images from multiple sources.
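The likelihood machinery behind a multivariate mixture can be illustrated with a per-voxel tissue posterior computed under a naive conditional-independence assumption across image sources. This is only a toy sketch of the mixture idea, not the paper's MvMM (which additionally couples registration and sub-region modeling); all parameter values below are invented:

```python
import math

def gaussian(x, mu, sigma):
    """Univariate normal density."""
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

def label_posterior(intensities, tissue_params, priors):
    """Posterior tissue probabilities for one voxel observed in several image
    sources, assumed conditionally independent given the tissue label.
    intensities: one value per source; tissue_params: per tissue, a list of
    (mu, sigma) per source; priors: per-tissue prior probabilities."""
    likelihoods = []
    for params, prior in zip(tissue_params, priors):
        lk = prior
        for x, (mu, sigma) in zip(intensities, params):
            lk *= gaussian(x, mu, sigma)
        likelihoods.append(lk)
    z = sum(likelihoods)
    return [lk / z for lk in likelihoods]

# Two hypothetical tissues ("myocardium", "blood pool") seen in two sequences.
post = label_posterior(
    intensities=[0.4, 0.8],
    tissue_params=[[(0.4, 0.1), (0.7, 0.1)], [(0.9, 0.1), (0.2, 0.1)]],
    priors=[0.5, 0.5],
)
print([round(p, 3) for p in post])  # first tissue dominates
```

Combining sources multiplies per-source likelihoods, which is how complementary sequences sharpen an otherwise ambiguous voxel label.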
209
210
A Novel Bio-Inspired Method for Early Diagnosis of Breast Cancer through Mammographic Image Analysis. Appl Sci (Basel) 2019. [DOI: 10.3390/app9214492]
Abstract
Breast cancer causes the death of many women. In this work, we test meta-heuristics applied to the segmentation of mammographic images. Traditionally, these algorithms are applied directly to optimization problems; in this study, however, they are oriented to the segmentation of mammograms, using the Dunn index as the optimization function and grey levels to represent each individual. Updating the grey levels during the process maximizes the Dunn index; the higher the index, the better the segmentation. The results showed a lower error rate using these meta-heuristics for segmentation compared to a well-adopted classical approach known as the Otsu method.
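The Dunn index that the meta-heuristics maximize is the ratio of the smallest between-cluster distance to the largest within-cluster diameter, so well-separated, compact grey-level clusters score high. A minimal sketch for 1D grey-level clusters using single-linkage distances (the cluster values are invented):

```python
def dunn_index(clusters):
    """Dunn index: smallest between-cluster distance divided by the largest
    within-cluster diameter (1D grey levels, single-linkage distances)."""
    inter = min(
        abs(x - y)
        for i, ca in enumerate(clusters)
        for cb in clusters[i + 1:]
        for x in ca
        for y in cb
    )
    intra = max(abs(x - y) for c in clusters for x in c for y in c)
    return inter / intra

# Illustrative grey-level clusters from a thresholded histogram.
print(round(dunn_index([[10, 20, 30], [90, 100, 110]]), 2))  # 60/20 = 3.0
```

A segmentation that pushed the two clusters closer together, or spread either one out, would lower this ratio.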
211
Consistent validation of gray-level thresholding image segmentation algorithms based on machine learning classifiers. Stat Pap (Berl) 2019. [DOI: 10.1007/s00362-019-01138-3]
212
A new entropy-based approach for fuzzy c-means clustering and its application to brain MR image segmentation. Soft Comput 2019. [DOI: 10.1007/s00500-018-3594-y]
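Although the entropy-based variant is not detailed here, the core fuzzy c-means membership update it builds on is standard: each intensity gets a graded membership in every cluster rather than a hard label. A sketch for 1D intensities (the points and centers are invented; `m` is the usual fuzzifier):

```python
def fcm_memberships(points, centers, m=2.0):
    """Fuzzy c-means membership update:
    u_ik = 1 / sum_j (d(x_k, c_i) / d(x_k, c_j)) ** (2 / (m - 1))."""
    memberships = []
    for x in points:
        dists = [abs(x - c) for c in centers]
        if 0.0 in dists:
            # A point sitting exactly on a center belongs to it crisply.
            row = [1.0 if d == 0.0 else 0.0 for d in dists]
        else:
            row = [1.0 / sum((di / dj) ** (2.0 / (m - 1.0)) for dj in dists)
                   for di in dists]
        memberships.append(row)
    return memberships

# Two intensity clusters (e.g. two tissue classes); values invented.
u = fcm_memberships(points=[0.1, 0.5, 0.9], centers=[0.1, 0.9])
print([[round(v, 3) for v in row] for row in u])  # midpoint splits 0.5/0.5
```

The midpoint intensity gets equal membership in both clusters, which is exactly the ambiguity that hard k-means cannot express.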
213
Thasneem AH, Sathik MM, Mehaboobathunnisa R. A Fast Segmentation and Efficient Slice Reconstruction Technique for Head CT Images. J Intell Syst 2019. [DOI: 10.1515/jisys-2017-0055]
Abstract
The three-dimensional (3D) reconstruction of medical images usually requires hundreds of two-dimensional (2D) scan images. Segmentation, an obligatory part of reconstruction, needs to be performed for all the slices, consuming enormous storage space and time. To reduce storage space and time, this paper proposes a three-stage procedure, namely, slice selection, segmentation and interpolation. The methodology has the potential to reconstruct the human head in 3D from a minimal set of selected slices. The first stage, slice selection, is based on structural similarity measurement, discarding the most similar slices with no or minimal impact on details. The second stage, segmentation of the selected slices, is performed using our proposed phase-field segmentation method. Validation of our segmentation results is done via comparison with other deformable models, and results show that the proposed method provides fast and accurate segmentation. The third stage, interpolation, is based on modified curvature registration-based interpolation and is applied to re-create the discarded slices. This method is compared to both standard linear interpolation and registration-based interpolation in 100 tomographic data sets. Results show that the modified curvature registration-based interpolation reconstructs missing slices with 96% accuracy and shows an improvement in sensitivity (95.802%) on par with specificity (95.901%).
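Structural similarity between adjacent slices, the basis of the first stage, can be sketched with a single-window simplification of SSIM (real SSIM is computed over local windows; the constants below assume intensities in [0, 1], and the slice values are invented):

```python
def global_ssim(img_a, img_b, c1=1e-4, c2=9e-4):
    """Single-window SSIM between two flat intensity lists: a simplification
    of the usual locally windowed SSIM (luminance * contrast/structure)."""
    n = len(img_a)
    mu_a = sum(img_a) / n
    mu_b = sum(img_b) / n
    var_a = sum((x - mu_a) ** 2 for x in img_a) / n
    var_b = sum((x - mu_b) ** 2 for x in img_b) / n
    cov = sum((x - mu_a) * (y - mu_b) for x, y in zip(img_a, img_b)) / n
    return ((2 * mu_a * mu_b + c1) * (2 * cov + c2)) / (
        (mu_a ** 2 + mu_b ** 2 + c1) * (var_a + var_b + c2))

# Identical slices score 1.0; a dissimilar slice scores lower, so
# near-duplicate neighbours can be discarded before 3D reconstruction.
slice_k = [0.2, 0.4, 0.6, 0.8]
slice_k1 = [0.2, 0.4, 0.6, 0.8]
slice_m = [0.8, 0.6, 0.4, 0.2]
print(round(global_ssim(slice_k, slice_k1), 3))  # 1.0
print(global_ssim(slice_k, slice_m) < 1.0)       # True
```

A slice whose SSIM against its neighbour exceeds some threshold would be dropped and later re-created by the interpolation stage.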
214
Zhang J, Ying Q, Ruan Z. Time response of plasmonic spatial differentiators. Opt Lett 2019; 44:4511-4514. [PMID: 31517918] [DOI: 10.1364/ol.44.004511]
Abstract
We investigate the time response of plasmonic spatial differentiators based on prism coupling configurations. We show that when the incident light is time-modulated, the output field contains, in addition to the spatial differentiation, a contribution from the signal's derivative with respect to time. To reduce this impact, the incident pulse needs a steady time span, and the shortest steady time span is about 100 fs. We further show that the time modulation does not degrade the resolution of the spatial differentiation. We also numerically demonstrate edge-detection image processing by the plasmonic spatial differentiator with a time-modulated signal.
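A spatial differentiator outputs the spatial derivative of the incident field profile, which is why it performs edge detection: the derivative peaks exactly where the intensity jumps. A minimal numerical analogue using a central difference (the intensity profile is invented):

```python
def spatial_derivative(row):
    """First-order central difference along one image row; large magnitudes
    mark intensity edges, the quantity an analogue spatial differentiator
    produces optically."""
    return [(row[i + 1] - row[i - 1]) / 2.0 for i in range(1, len(row) - 1)]

# A step edge in an intensity profile shows up as a peak in the derivative.
profile = [0, 0, 0, 1, 1, 1]
print(spatial_derivative(profile))  # [0.0, 0.5, 0.5, 0.0]
```

Flat regions map to zero output, so only the edges of an image survive, exactly the behaviour the paper exploits for edge detection.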
215
Waninger JJ, Green MD, Cheze Le Rest C, Rosen B, El Naqa I. Integrating radiomics into clinical trial design. Q J Nucl Med Mol Imaging 2019; 63:339-346. [PMID: 31527581] [DOI: 10.23736/s1824-4785.19.03217-5]
Abstract
In radiomics, quantitative features that describe phenotypic tumor characteristics are derived from radiographic images. Because radiomics generates information from routine medical images, it is a powerful way to non-invasively examine the spatial and temporal heterogeneity of disease, and thus has potential to significantly impact clinical trial design, execution, and ultimately patient care. The aim of this review article is to discuss how radiomics may address some of the current challenges in clinical randomized control trials, and the difficulties of integrating robust and repeatable radiomics analysis into trial design. Each step of the radiomics process, including image acquisition and reconstruction, image segmentation, feature extraction, and computational analysis, requires extensive standardization in order to be successfully incorporated into clinical trials and inform clinical decision making. By addressing these challenges, the potential of radiomics may be realized.
Affiliation(s)
- Jessica J Waninger
- Department of Medical Education, University of Michigan School of Medicine, Ann Arbor, MI, USA
- Michigan Center for Translational Pathology, University of Michigan, Ann Arbor, MI, USA
- Michael D Green
- Department of Radiation Oncology, University of Michigan School of Medicine, Ann Arbor, MI, USA
- University of Michigan Rogel Cancer Center, University of Michigan, Ann Arbor, MI, USA
- Benjamin Rosen
- Department of Radiation Oncology, University of Michigan School of Medicine, Ann Arbor, MI, USA
- Issam El Naqa
- Department of Radiation Oncology, University of Michigan School of Medicine, Ann Arbor, MI, USA
216
Martins SB, Bragantini J, Falcão AX, Yasuda CL. An adaptive probabilistic atlas for anomalous brain segmentation in MR images. Med Phys 2019; 46:4940-4950. [DOI: 10.1002/mp.13771]
Affiliation(s)
- Samuel Botter Martins
- Laboratory of Image Data Science (LIDS), Institute of Computing, University of Campinas, Campinas, Brazil
- Jordão Bragantini
- Laboratory of Image Data Science (LIDS), Institute of Computing, University of Campinas, Campinas, Brazil
- Alexandre Xavier Falcão
- Laboratory of Image Data Science (LIDS), Institute of Computing, University of Campinas, Campinas, Brazil
217
Linares OC, Hamann B, Neto JB. Segmenting Cellular Retinal Images by Optimizing Super-pixels, Multi-level Modularity, and Cell Boundary Representation. IEEE Trans Image Process 2019; 29:809-818. [PMID: 31478852] [DOI: 10.1109/tip.2019.2936743]
Abstract
We introduce an interactive method for retina layer segmentation in gray-level and RGB images based on super-pixels, multi-level optimization of modularity, and boundary erosion. Our method produces highly accurate segmentation results and can segment very large images. We have evaluated our method with two datasets of 2D confocal microscopy (CM) images of a mammalian retina. We obtained average Jaccard index values of 0.948 and 0.942, respectively, confirming the high-quality segmentation performance of our method relative to a known ground-truth segmentation. Average processing time was two seconds.
218
Ito T. Effects of different segmentation methods on geometric morphometric data collection from primate skulls. Methods Ecol Evol 2019. [DOI: 10.1111/2041-210x.13274]
Affiliation(s)
- Tsuyoshi Ito
- Department of Evolution and Phylogeny, Primate Research Institute, Kyoto University, Inuyama, Aichi, Japan
219
Zaid M, Bajaj N, Burrows H, Mathew R, Dai A, Wilke CT, Palasi S, Hergenrother R, Chung C, Fuller CD, Phan J, Gunn GB, Morrison WH, Garden AS, Frank SJ, Rosenthal DI, Andersen M, Otun A, Chambers MS, Koay EJ. Creating customized oral stents for head and neck radiotherapy using 3D scanning and printing. Radiat Oncol 2019; 14:148. [PMID: 31426824] [PMCID: PMC6701083] [DOI: 10.1186/s13014-019-1357-2]
Abstract
BACKGROUND To evaluate and establish a digital workflow for the custom design and 3D printing of mouth-opening tongue-depressing (MOTD) stents for patients receiving radiotherapy for head and neck cancer. METHODS We retrospectively identified 3 patients who received radiation therapy (RT) for primary head and neck cancers with MOTD stents. We compared two methods for obtaining digital impressions of patients' teeth. The first method involved segmentation from computed tomography (CT) scans, as previously established by our group, and the second method used 3D scanning of the patients' articulated stone models made during the conventional stent fabrication process. Three independent observers repeated the process of obtaining digital impressions, which provided data to design customized MOTD stents. For each method, we evaluated the time efficiency, the dice similarity coefficient (DSC) for reproducibility, and the accuracy of the 3D printed stents. For the 3D scanning method, we evaluated the registration process using manual and automatic approaches. RESULTS For all patients, the 3D scanning method demonstrated a significant advantage over the CT scanning method in time efficiency, with over 60% reduction in time consumed (p < 0.0001), and in reproducibility, with significantly higher DSC (p < 0.001). The printed stents were tested over the articulated dental stone models, and the trueness of fit and accuracy of dental anatomy were found to be significantly better for MOTD stents made using the 3D scanning method. Automated registration showed higher accuracy, with errors < 0.001 mm, compared to manual registration. CONCLUSIONS We developed an efficient workflow for the custom design and 3D printing of MOTD radiation stents. This workflow represents a considerable improvement over the CT-derived segmentation method. The application of this rapid and efficient digital workflow in radiation oncology practices can expand the use of these toxicity-sparing devices to practices that do not currently have the support to make them.
Affiliation(s)
- Mohamed Zaid
- Department of Radiation Oncology, Division of Radiation Oncology, The University of Texas MD Anderson Cancer Center, 1515 Holcombe Blvd, Unit 0097, Houston, TX, 77030, USA
- Nimit Bajaj
- Department of Radiation Oncology, Division of Radiation Oncology, The University of Texas MD Anderson Cancer Center, 1515 Holcombe Blvd, Unit 0097, Houston, TX, 77030, USA
- Hannah Burrows
- Department of Radiation Oncology, Division of Radiation Oncology, The University of Texas MD Anderson Cancer Center, 1515 Holcombe Blvd, Unit 0097, Houston, TX, 77030, USA
- Ryan Mathew
- Department of Radiation Oncology, Division of Radiation Oncology, The University of Texas MD Anderson Cancer Center, 1515 Holcombe Blvd, Unit 0097, Houston, TX, 77030, USA
- Annie Dai
- Department of Radiation Oncology, Division of Radiation Oncology, The University of Texas MD Anderson Cancer Center, 1515 Holcombe Blvd, Unit 0097, Houston, TX, 77030, USA
- Christopher T Wilke
- Department of Radiation Oncology, University of Minnesota Medical School, 516 Delaware St SE, Minneapolis, MN, 55455, USA
- Stephen Palasi
- Department of Radiation Oncology, Division of Radiation Oncology, The University of Texas MD Anderson Cancer Center, 1515 Holcombe Blvd, Unit 0097, Houston, TX, 77030, USA
- Ryan Hergenrother
- Department of Radiation Oncology, Division of Radiation Oncology, The University of Texas MD Anderson Cancer Center, 1515 Holcombe Blvd, Unit 0097, Houston, TX, 77030, USA
- Caroline Chung
- Department of Radiation Oncology, Division of Radiation Oncology, The University of Texas MD Anderson Cancer Center, 1515 Holcombe Blvd, Unit 0097, Houston, TX, 77030, USA
- Clifton D Fuller
- Department of Radiation Oncology, Division of Radiation Oncology, The University of Texas MD Anderson Cancer Center, 1515 Holcombe Blvd, Unit 0097, Houston, TX, 77030, USA
- Jack Phan
- Department of Radiation Oncology, Division of Radiation Oncology, The University of Texas MD Anderson Cancer Center, 1515 Holcombe Blvd, Unit 0097, Houston, TX, 77030, USA
- G Brandon Gunn
- Department of Radiation Oncology, Division of Radiation Oncology, The University of Texas MD Anderson Cancer Center, 1515 Holcombe Blvd, Unit 0097, Houston, TX, 77030, USA
- William H Morrison
- Department of Radiation Oncology, Division of Radiation Oncology, The University of Texas MD Anderson Cancer Center, 1515 Holcombe Blvd, Unit 0097, Houston, TX, 77030, USA
- Adam S Garden
- Department of Radiation Oncology, Division of Radiation Oncology, The University of Texas MD Anderson Cancer Center, 1515 Holcombe Blvd, Unit 0097, Houston, TX, 77030, USA
- Steven J Frank
- Department of Radiation Oncology, Division of Radiation Oncology, The University of Texas MD Anderson Cancer Center, 1515 Holcombe Blvd, Unit 0097, Houston, TX, 77030, USA
- David I Rosenthal
- Department of Radiation Oncology, Division of Radiation Oncology, The University of Texas MD Anderson Cancer Center, 1515 Holcombe Blvd, Unit 0097, Houston, TX, 77030, USA
- Michael Andersen
- Department of Head and Neck Surgery, Division of Surgery, The University of Texas MD Anderson Cancer Center, 1515 Holcombe Blvd, Houston, TX, 77030, USA
- Adegbenga Otun
- Department of Head and Neck Surgery, Division of Surgery, The University of Texas MD Anderson Cancer Center, 1515 Holcombe Blvd, Houston, TX, 77030, USA
- Mark S Chambers
- Department of Head and Neck Surgery, Division of Surgery, The University of Texas MD Anderson Cancer Center, 1515 Holcombe Blvd, Houston, TX, 77030, USA
- Eugene J Koay
- Department of Radiation Oncology, Division of Radiation Oncology, The University of Texas MD Anderson Cancer Center, 1515 Holcombe Blvd, Unit 0097, Houston, TX, 77030, USA
|
220
MRI quality control for the Italian Neuroimaging Network Initiative: moving towards big data in multiple sclerosis. J Neurol 2019; 266:2848-2858. [PMID: 31422457] [DOI: 10.1007/s00415-019-09509-4]
Abstract
The Italian Neuroimaging Network Initiative (INNI) supports the creation of a repository, where MRI, clinical, and neuropsychological data from multiple sclerosis (MS) patients and healthy controls are collected from Italian Research Centers with internationally recognized expertise in MRI applied to MS. However, multicenter MRI data integration needs standardization and quality control (QC). This study aimed to implement quantitative measures for characterizing the standardization and quality of MRI collected within INNI. MRI scans of 423 MS patients, including 3D T1- and T2-weighted, were obtained from INNI repository (from Centers A, B, C, and D). QC measures were implemented to characterize: (1) head positioning relative to the magnet isocenter; (2) intensity inhomogeneity; (3) relative image contrast between brain tissues; and (4) image artefacts. Centers A and D showed the most accurate subject positioning within the MR scanner (median z-offsets = - 2.6 ± 1.7 cm and - 1.1 ± 2 cm). A low, but significantly different, intensity inhomogeneity on 3D T1-weighted MRI was found between all centers (p < 0.05), except for Centers A and C that showed comparable image bias fields. Center D showed the highest relative contrast between gray and normal appearing white matter (NAWM) on 3D T1-weighed MRI (0.63 ± 0.04), while Center B showed the highest relative contrast between NAWM and MS lesions on FLAIR (0.21 ± 0.06). Image artefacts were mainly due to brain movement (60%) and ghosting (35%). The implemented QC procedure ensured systematic data quality assessment within INNI, thus making available a huge amount of high-quality MRI to better investigate pathophysiological substrates and validate novel MRI biomarkers in MS.
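The relative tissue contrast used as a QC measure above can be sketched in a few lines. This is an illustrative reimplementation, not the INNI pipeline code: the normalized-difference definition, the function name, and the toy intensity samples are our own assumptions.

```python
import numpy as np

def relative_contrast(intensities_a, intensities_b):
    """One common definition of relative contrast between two tissue
    classes: |mean_a - mean_b| / (mean_a + mean_b)."""
    mean_a = float(np.mean(intensities_a))
    mean_b = float(np.mean(intensities_b))
    return abs(mean_a - mean_b) / (mean_a + mean_b)

# Toy voxel samples: gray matter vs. normal-appearing white matter.
gm = np.array([60.0, 62.0, 58.0, 61.0])
nawm = np.array([100.0, 98.0, 102.0, 99.0])
print(round(relative_contrast(gm, nawm), 3))
```

Because the measure is a ratio of means, it is insensitive to global intensity scaling, which makes it usable across scanners and sites.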
221
Fatnassi C, Zaidi H. Fast and accurate pseudo multispectral technique for whole-brain MRI tissue classification. Phys Med Biol 2019; 64:145005. [PMID: 31117058] [DOI: 10.1088/1361-6560/ab239e]
Abstract
Numerous strategies have been proposed to classify brain tissues into gray matter (GM), white matter (WM) and cerebrospinal fluid (CSF). However, many of them fail when classifying specific regions with low contrast between tissues. In this work, we propose an alternative pseudo multispectral classification (PMC) technique using CIE LAB spaces instead of gray scale T1-weighted MPRAGE images, combined with a new preprocessing technique for contrast enhancement and an optimized iterative K-means clustering. To improve the accuracy of the classification process, gray scale images were converted to multispectral CIE LAB data by applying several transformation matrices, thus increasing the amount of information associated with each image voxel. The image contrast was then enhanced by applying a real-time function that separates brain tissue distributions and improves image contrast in certain brain regions. The data were then classified using an optimized iterative and convergent K-means classifier. The performance of the proposed approach was assessed using simulation and in vivo human studies through comparison with three common software packages used for brain MR image segmentation, namely FSL, SPM8 and K-means clustering. In the presence of high SNR, the results showed that the four algorithms achieve a good classification. Conversely, in the presence of low SNR, PMC was shown to outperform the other methods by accurately recovering all tissue volumes. The quantitative assessment of brain tissue classification for simulated studies showed that the PMC algorithm resulted in a mean Jaccard index (JI) of 0.74, compared to 0.75 for FSL, 0.7 for SPM and 0.8 for K-means. The in vivo human studies showed that the PMC algorithm resulted in a mean JI of 0.92, which reflects a good spatial overlap between segmented and actual volumes, compared to 0.84 for FSL, 0.78 for SPM and 0.66 for K-means. The proposed algorithm presents a high potential for improving the accuracy of automatic brain tissue classification and was found to be accurate even in the presence of a high noise level.
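The Jaccard index (JI) used for the quantitative assessment above is a simple overlap ratio between two binary segmentation masks. A minimal sketch (function name and toy masks are illustrative, not the authors' code):

```python
import numpy as np

def jaccard_index(seg_a, seg_b):
    """Jaccard index |A intersect B| / |A union B| for binary masks."""
    a = np.asarray(seg_a, dtype=bool)
    b = np.asarray(seg_b, dtype=bool)
    union = np.logical_or(a, b).sum()
    if union == 0:
        return 1.0  # both masks empty: treat overlap as perfect
    return np.logical_and(a, b).sum() / union

# Toy 1D "masks": 3 voxels agree out of 5 labeled in either mask.
pred = np.array([1, 1, 1, 1, 0, 0])
truth = np.array([0, 1, 1, 1, 1, 0])
print(jaccard_index(pred, truth))  # 3/5 = 0.6
```

A JI of 0.92, as reported for the in vivo studies, means the intersection of the two volumes covers 92% of their union.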
Affiliation(s)
- Chemseddine Fatnassi
- Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, CH-1211 Geneva, Switzerland
222
Indraswari R, Kurita T, Arifin AZ, Suciati N, Astuti ER. Multi-projection deep learning network for segmentation of 3D medical images. Pattern Recognit Lett 2019. [DOI: 10.1016/j.patrec.2019.08.003]
223
Najm M, Kuang H, Federico A, Jogiat U, Goyal M, Hill MD, Demchuk A, Menon BK, Qiu W. Automated brain extraction from head CT and CTA images using convex optimization with shape propagation. Comput Methods Programs Biomed 2019; 176:1-8. [PMID: 31200897] [DOI: 10.1016/j.cmpb.2019.04.030]
Abstract
BACKGROUND AND OBJECTIVE Non-contrast computed tomography (NCCT) and CT angiography (CTA) are the most widely used and accepted imaging modalities in clinical practice for the diagnosis and treatment of acute ischemic stroke (AIS) patients. Brain extraction of CT/CTA images plays an essential role in stroke imaging research. There is no robust automated brain extraction method in the literature that is well established for both NCCT and CTA images. Thus, a validated and automated brain extraction tool for CT imaging would be of great value for both research and clinical practice. METHODS The proposed brain extraction method is based on the contour evolution technique, which extracts brain tissues from acquired NCCT and CTA images in a slice-by-slice fashion. Specifically, the proposed approach makes use of a novel propagation framework, which is initialized by a localized slice with the largest brain section in axial views, followed by a geodesic level-set evolution for automatically extracting the brain section in each slice. In particular, the segmented contour propagated from the previous slice is reused to penalize the defined objective function for contour evolution, enforcing shape continuity between any two adjacent contours. We show that the defined contour evolution function can be solved iteratively by globally optimal convex optimization. RESULTS The proposed brain extraction approach is quantitatively evaluated using 40 NCCT and CTA images acquired from 20 AIS patients and drawn from 4 different vendors, compared to manual segmentations using Dice and Jaccard coefficient metrics. The quantitative results show that the proposed segmentation algorithm is consistently accurate for both NCCT and CTA images using the Dice metric. The proposed method is further validated on 1736 NCCT and CTA images of 1331 AIS patients acquired from three multi-national multi-centric clinical trials. A visual check performed on these data demonstrates a low failure rate of 0.4% for 1331 NCCT images and a zero failure rate for 405 CTA images. CONCLUSIONS Both quantitative and qualitative evaluation suggest that the proposed brain extraction approach for NCCT and CTA images can be used in different clinical imaging settings, thus serving to improve current image analysis in the field of neuroimaging.
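The Dice metric used in the evaluation above is, like the Jaccard coefficient the abstract also names, an overlap ratio between binary masks. A minimal sketch, not the authors' implementation:

```python
import numpy as np

def dice_coefficient(seg_a, seg_b):
    """Dice similarity coefficient 2|A intersect B| / (|A| + |B|)
    between two binary segmentation masks."""
    a = np.asarray(seg_a, dtype=bool)
    b = np.asarray(seg_b, dtype=bool)
    denom = a.sum() + b.sum()
    if denom == 0:
        return 1.0  # both masks empty: treat overlap as perfect
    return 2.0 * np.logical_and(a, b).sum() / denom

# Toy 2D masks: automated vs. manual extraction, 2 of 3 voxels agree.
auto = np.array([[1, 1, 0], [1, 0, 0]])
manual = np.array([[1, 1, 0], [0, 1, 0]])
print(dice_coefficient(auto, manual))  # 2*2 / (3+3) = 0.666...
```

Dice and Jaccard are monotonically related (DSC = 2J / (1 + J)), so either can serve as the agreement score; Dice weights the intersection more heavily.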
Affiliation(s)
- Mohamed Najm
- Department of Clinical Neuroscience, University of Calgary, Calgary, Alberta, Canada
- Hulin Kuang
- Department of Clinical Neuroscience, University of Calgary, Calgary, Alberta, Canada
- Alyssa Federico
- Department of Clinical Neuroscience, University of Calgary, Calgary, Alberta, Canada
- Uzair Jogiat
- Department of Clinical Neuroscience, University of Calgary, Calgary, Alberta, Canada
- Mayank Goyal
- Department of Clinical Neuroscience, University of Calgary, Calgary, Alberta, Canada
- Michael D Hill
- Department of Clinical Neuroscience, University of Calgary, Calgary, Alberta, Canada
- Andrew Demchuk
- Department of Clinical Neuroscience, University of Calgary, Calgary, Alberta, Canada
- Bijoy K Menon
- Department of Clinical Neuroscience, University of Calgary, Calgary, Alberta, Canada
- Wu Qiu
- Department of Clinical Neuroscience, University of Calgary, Calgary, Alberta, Canada.
224
Das R, Keep B, Washington P, Riedel-Kruse IH. Scientific Discovery Games for Biomedical Research. Annu Rev Biomed Data Sci 2019; 2:253-279. [PMID: 34308269] [DOI: 10.1146/annurev-biodatasci-072018-021139]
Abstract
Over the past decade, scientific discovery games (SDGs) have emerged as a viable approach for biomedical research, engaging hundreds of thousands of volunteer players and resulting in numerous scientific publications. After describing the origins of this novel research approach, we review the scientific output of SDGs across molecular modeling, sequence alignment, neuroscience, pathology, cellular biology, genomics, and human cognition. We find compelling results and technical innovations arising in problem-oriented games such as Foldit and Eterna and in data-oriented games such as EyeWire and Project Discovery. We discuss emergent properties of player communities shared across different projects, including the diversity of communities and the extraordinary contributions of some volunteers, such as paper writing. Finally, we highlight connections to artificial intelligence, biological cloud laboratories, new game genres, science education, and open science that may drive the next generation of SDGs.
Affiliation(s)
- Rhiju Das
- Department of Biochemistry and Department of Physics, Stanford University, Stanford, California 94305, USA
- Benjamin Keep
- Department of Learning Sciences, Stanford University, Stanford, California 94305, USA
- Peter Washington
- Department of Bioengineering, Stanford University, Stanford, California 94305, USA
225
Huo Y, Xu Z, Xiong Y, Aboud K, Parvathaneni P, Bao S, Bermudez C, Resnick SM, Cutting LE, Landman BA. 3D whole brain segmentation using spatially localized atlas network tiles. Neuroimage 2019; 194:105-119. [PMID: 30910724] [DOI: 10.1016/j.neuroimage.2019.03.041]
Abstract
Detailed whole brain segmentation is an essential quantitative technique in medical image analysis, which provides a non-invasive way of measuring brain regions from clinically acquired structural magnetic resonance imaging (MRI). Recently, deep convolutional neural networks (CNNs) have been applied to whole brain segmentation. However, restricted by current GPU memory, 2D based methods, downsampling based 3D CNN methods, and patch-based high-resolution 3D CNN methods have been the de facto standard solutions. 3D patch-based high-resolution methods typically yield superior performance among CNN approaches on detailed whole brain segmentation (>100 labels); however, their performance is still commonly inferior to state-of-the-art multi-atlas segmentation (MAS) methods due to the following challenges: (1) a single network is typically used to learn both spatial and contextual information for the patches, and (2) limited manually traced whole brain volumes are available (typically fewer than 50) for training a network. In this work, we propose the spatially localized atlas network tiles (SLANT) method, which distributes multiple independent 3D fully convolutional networks (FCNs) for high-resolution whole brain segmentation. To address the first challenge, multiple spatially distributed networks were used in the SLANT method, in which each network learned contextual information for a fixed spatial location. To address the second challenge, auxiliary labels on 5111 initially unlabeled scans were created by multi-atlas segmentation for training. Since the method integrates multiple traditional medical image processing methods with deep learning, we developed a containerized pipeline to deploy the end-to-end solution. The proposed method achieved superior performance compared with multi-atlas segmentation methods, while reducing the computational time from >30 h to 15 min. The method has been made available in open source (https://github.com/MASILab/SLANTbrainSeg).
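The tiling idea behind SLANT, in which each independent network owns one fixed (possibly overlapping) subvolume, can be illustrated by computing tile origins that cover a volume. This is an illustrative sketch only: the function, the 3x3x3 grid, and the volume/tile sizes below are our own assumptions, not the authors' configuration.

```python
def tile_origins(vol_shape, n_tiles, tile_shape):
    """Evenly spaced origins of n_tiles[d] tiles per axis so that tiles
    of size tile_shape cover the whole volume; tiles overlap whenever
    n * tile_size exceeds the volume extent along an axis."""
    origins_per_axis = []
    for size, n, t in zip(vol_shape, n_tiles, tile_shape):
        if n == 1:
            origins_per_axis.append([0])
        else:
            step = (size - t) / (n - 1)  # spacing between tile origins
            origins_per_axis.append([round(i * step) for i in range(n)])
    return origins_per_axis

# Example: a 3x3x3 grid of 96^3 tiles covering a 172^3 volume.
print(tile_origins((172, 172, 172), (3, 3, 3), (96, 96, 96)))
# [[0, 38, 76], [0, 38, 76], [0, 38, 76]]
```

Each origin triple would then index one subvolume handled by its own network, with overlapping predictions fused (e.g., by label averaging) at inference time.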
Affiliation(s)
- Yuankai Huo
- Department of Electrical Engineering and Computer Science, Vanderbilt University, Nashville, TN, USA.
- Zhoubing Xu
- Department of Electrical Engineering and Computer Science, Vanderbilt University, Nashville, TN, USA
- Yunxi Xiong
- Department of Electrical Engineering and Computer Science, Vanderbilt University, Nashville, TN, USA
- Katherine Aboud
- Department of Special Education, Vanderbilt University, Nashville, TN, USA
- Prasanna Parvathaneni
- Department of Electrical Engineering and Computer Science, Vanderbilt University, Nashville, TN, USA
- Shunxing Bao
- Department of Electrical Engineering and Computer Science, Vanderbilt University, Nashville, TN, USA
- Camilo Bermudez
- Department of Biomedical Engineering, Vanderbilt University, Nashville, TN, USA
- Susan M Resnick
- Laboratory of Behavioral Neuroscience, National Institute on Aging, Baltimore, MD, USA
- Laurie E Cutting
- Department of Special Education, Vanderbilt University, Nashville, TN, USA; Department of Psychology, Vanderbilt University, Nashville, TN, USA; Department of Pediatrics, Vanderbilt University, Nashville, TN, USA; Radiology and Radiological Sciences, Vanderbilt University, Nashville, TN, USA
- Bennett A Landman
- Department of Electrical Engineering and Computer Science, Vanderbilt University, Nashville, TN, USA; Department of Biomedical Engineering, Vanderbilt University, Nashville, TN, USA; Radiology and Radiological Sciences, Vanderbilt University, Nashville, TN, USA; Institute of Imaging Science, Vanderbilt University, Nashville, TN, USA
227
Lin X, Li X. Image Based Brain Segmentation: From Multi-Atlas Fusion to Deep Learning. Curr Med Imaging 2019; 15:443-452. [DOI: 10.2174/1573405614666180817125454]
Abstract
Background:
This review aims to trace the development of algorithms for brain tissue and structure segmentation in MRI images.
Discussion:
Starting from the results of the Grand Challenges on brain tissue and structure segmentation held at Medical Image Computing and Computer-Assisted Intervention (MICCAI), this review analyses the development of the algorithms and discusses the shift from multi-atlas label fusion to deep learning. The intrinsic characteristics of the winners' algorithms in the Grand Challenges from 2012 to 2018 are analyzed and the results are compared carefully.
Conclusion:
Although deep learning has achieved higher rankings in the challenges, it has not yet met expectations in terms of accuracy. More effective and specialized work should be done in the future.
Affiliation(s)
- Xiangbo Lin
- Faculty of Electronic Information and Electrical Engineering, School of Information and Communication Engineering, Dalian University of Technology, Dalian, LiaoNing Province, China
- Xiaoxi Li
- Faculty of Electronic Information and Electrical Engineering, School of Information and Communication Engineering, Dalian University of Technology, Dalian, LiaoNing Province, China
228
Nie K, Al-Hallaq H, Li XA, Benedict SH, Sohn JW, Moran JM, Fan Y, Huang M, Knopp MV, Michalski JM, Monroe J, Obcemea C, Tsien CI, Solberg T, Wu J, Xia P, Xiao Y, El Naqa I. NCTN Assessment on Current Applications of Radiomics in Oncology. Int J Radiat Oncol Biol Phys 2019; 104:302-315. [PMID: 30711529] [DOI: 10.1016/j.ijrobp.2019.01.087]
Abstract
Radiomics is a fast-growing research area based on converting standard-of-care imaging into quantitative mineable data and building subsequent predictive models to personalize treatment. Radiomics has been proposed as a study objective in clinical trial concepts and a potential biomarker for stratifying patients across interventional treatment arms. In recognizing the growing importance of radiomics in oncology, a group of medical physicists and clinicians from NRG Oncology reviewed the current status of the field and identified critical issues, providing a general assessment and early recommendations for incorporation in oncology studies.
Affiliation(s)
- Ke Nie
- Department of Radiation Oncology, Rutgers Cancer Institute of New Jersey, Rutgers-Robert Wood Johnson Medical School, New Brunswick, New Jersey.
- Hania Al-Hallaq
- Department of Radiation and Cellular Oncology, University of Chicago, Chicago, Illinois
- X Allen Li
- Department of Radiation Oncology, Medical College of Wisconsin, Milwaukee, Wisconsin
- Stanley H Benedict
- Department of Radiation Oncology, University of California-Davis, Sacramento, California
- Jason W Sohn
- Department of Radiation Oncology, Allegheny Health Network, Pittsburgh, Pennsylvania
- Jean M Moran
- Department of Radiation Oncology, University of Michigan, Ann Arbor, Michigan
- Yong Fan
- Department of Radiology, Perelman School of Medicine, University of Pennsylvania, Philadelphia, Pennsylvania
- Mi Huang
- Department of Radiation Oncology, Perelman School of Medicine, University of Pennsylvania, Philadelphia, Pennsylvania
- Michael V Knopp
- Division of Imaging Science, Department of Radiology, Ohio State University, Columbus, Ohio
- Jeff M Michalski
- Department of Radiation Oncology, Washington University School of Medicine in St. Louis, St. Louis, Missouri
- James Monroe
- Department of Radiation Oncology, St. Anthony's Cancer Center, St. Louis, Missouri
- Ceferino Obcemea
- Radiation Research Program, National Cancer Institute, Bethesda, Maryland
- Christina I Tsien
- Department of Radiation Oncology, Washington University School of Medicine in St. Louis, St. Louis, Missouri
- Timothy Solberg
- Department of Radiation Oncology, University of California-San Francisco, San Francisco, California
- Jackie Wu
- Department of Radiation Oncology, Duke University, Durham, North Carolina
- Ping Xia
- Department of Radiation Oncology, Cleveland Clinic, Cleveland, Ohio
- Ying Xiao
- Department of Radiation Oncology, Perelman School of Medicine, University of Pennsylvania, Philadelphia, Pennsylvania
- Issam El Naqa
- Department of Radiation and Cellular Oncology, University of Chicago, Chicago, Illinois
229
Lezama-Del Valle P, Krauel L, LaQuaglia MP. Error traps and culture of safety in pediatric surgical oncology. Semin Pediatr Surg 2019; 28:164-171. [PMID: 31171152] [DOI: 10.1053/j.sempedsurg.2019.04.014]
Abstract
This article reviews technical issues to improve surgical safety and avoid surgical errors in pediatric surgical oncology, particularly in the three most common extracranial solid tumors: neuroblastoma, hepatoblastoma and Wilms tumor. The use of adjuvant chemotherapy when indicated, the use of tumor-specific classifications, adequate surgical planning (which may include 3D-printable models), improved surgical instruments and technology, and adherence to surgical guidelines together help to avoid errors, increase safety, and thereby improve surgical outcomes.
Affiliation(s)
- Pablo Lezama-Del Valle
- Surgical Oncology Service, Department of General Surgery, Hospital Infantil de México Federico Gómez, Mexico City, Mexico.
- Lucas Krauel
- Pediatric Surgical Oncology Unit, Department of Pediatric Surgery, Hospital Sant Joan de Déu, University of Barcelona, Barcelona, Spain
- Michael P LaQuaglia
- Pediatric Surgical Service, Department of Surgery, Memorial Sloan-Kettering Cancer Center, New York, NY, USA
230
Automatic Human Brain Tumor Detection in MRI Image Using Template-Based K Means and Improved Fuzzy C Means Clustering Algorithm. Big Data Cogn Comput 2019. [DOI: 10.3390/bdcc3020027]
Abstract
In recent decades, human brain tumor detection has become one of the most challenging issues in medical science. In this paper, we propose a model that includes the template-based K-means and improved fuzzy C-means (TKFCM) algorithm for detecting human brain tumors in a magnetic resonance imaging (MRI) image. In this proposed algorithm, firstly, the template-based K-means algorithm is used to initialize segmentation through the careful selection of a template, based on the gray-level intensity of the image; secondly, the updated membership is determined by the distances from cluster centroids to cluster data points using the fuzzy C-means (FCM) algorithm until it reaches its best result; and finally, the improved FCM clustering algorithm is used for detecting the tumor position by updating the membership function, which is obtained based on different features of the tumor image, including contrast, energy, dissimilarity, homogeneity, entropy, and correlation. Simulation results show that the proposed algorithm achieves better detection of abnormal and normal tissues in the human brain under small detachment of gray-level intensity. In addition, this algorithm detects human brain tumors within a very short time (seconds, compared to minutes for other algorithms).
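The FCM membership update mentioned above has a standard closed form, u_ik = 1 / sum_j (d_ik / d_jk)^(2/(m-1)), where d_ik is the distance from centroid i to data point k and m is the fuzzifier. A minimal sketch of that update only (variable names and toy intensities are our own; this is not the TKFCM implementation):

```python
import numpy as np

def fcm_memberships(data, centroids, m=2.0):
    """Fuzzy C-means membership update for 1D data:
    u[i, k] = 1 / sum_j (d[i, k] / d[j, k]) ** (2 / (m - 1))."""
    # Distances from every centroid to every data point, shape (C, N);
    # a tiny epsilon avoids division by zero when a point sits on a centroid.
    d = np.abs(centroids[:, None] - data[None, :]) + 1e-12
    ratio = (d[:, None, :] / d[None, :, :]) ** (2.0 / (m - 1.0))
    return 1.0 / ratio.sum(axis=1)

# Toy gray-level intensities and three cluster centers (CSF/GM/WM-like).
intensities = np.array([10.0, 45.0, 90.0])
centers = np.array([10.0, 50.0, 95.0])
u = fcm_memberships(intensities, centers)
print(np.round(u, 3))  # columns sum to 1; the nearest center dominates
```

In a full FCM loop, the centroids would then be recomputed as membership-weighted means and the two updates alternated until convergence.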
231
Alimohamadi Gilakjan S, Hasani Bidgoli J, Aghaizadeh Zorofi R, Ahmadian A. Artificially enriching the training dataset of statistical shape models via constrained cage-based deformation. Australas Phys Eng Sci Med 2019; 42:573-584. [PMID: 31087232] [DOI: 10.1007/s13246-019-00759-0]
Abstract
The construction of a powerful statistical shape model (SSM) requires a rich training dataset that includes a large variety of complex anatomical topologies. The lack of real data leaves most SSMs unable to generalize to possible unseen instances. Artificial enrichment of the training data is one of the methods proposed to address this issue. In this paper, we introduce a novel technique called constrained cage-based deformation (CCBD), which can produce unlimited artificial data and promises to enrich variability within the training dataset. The proposed method is a two-step algorithm: in the first step, it moves a few handles together, and in the second step it transfers the displacements of these handles to the base mesh vertices to generate a realistic new instance. The evaluation of the statistical characteristics of CCBD confirms that our proposed technique quantitatively outperforms notable data-generating methods in terms of generalization ability and specificity.
Affiliation(s)
- Samaneh Alimohamadi Gilakjan
- Department of Biomedical Systems & Medical Physics, Tehran University of Medical Sciences, Tehran, Iran; Research Center for Biomedical Technologies and Robotics, Imam Khomeini Hospital Complex, Keshavarz Blvd, Tehran, Iran
- Javad Hasani Bidgoli
- Control & Intelligent Processing, Center of Excellence, School of Electrical and Computer Engineering, College of Engineering, University of Tehran, Tehran, Iran
- Reza Aghaizadeh Zorofi
- Control & Intelligent Processing, Center of Excellence, School of Electrical and Computer Engineering, College of Engineering, University of Tehran, Tehran, Iran
- Alireza Ahmadian
- Department of Biomedical Systems & Medical Physics, Tehran University of Medical Sciences, Tehran, Iran; Research Center for Biomedical Technologies and Robotics, Imam Khomeini Hospital Complex, Keshavarz Blvd, Tehran, Iran
232
Rohini P, Sundar S, Ramakrishnan S. Characterization of Alzheimer conditions in MR images using volumetric and sagittal brainstem texture features. Comput Methods Programs Biomed 2019; 173:147-155. [PMID: 31046989] [DOI: 10.1016/j.cmpb.2019.03.003]
Abstract
BACKGROUND AND OBJECTIVE Brainstem analysis in Magnetic Resonance Images is essential to detect Alzheimer's condition in the preclinical stages. In this work, an attempt has been made to segment the brainstem in sagittal (2D) and volumetric (3D) images and evaluate texture changes to differentiate Alzheimer's disease (AD) stages. METHOD The images obtained from a public access database are spatial normalized, skull stripped and contrast enhanced. Morphological Reconstruction based Fast and Robust Fuzzy 'C' Means technique is used to cluster the brain tissue in preprocessed images into three groups namely cerebrospinal fluid, grey matter and white matter. Brainstem is segmented from the white matter tissue using connected component labelling. Texture features from volumetric and sagittal brainstem slices are extracted and its statistical significance is evaluated. RESULTS Results show that the proposed approach is able to segment the brainstem from all the considered images. Variation in texture is observed to be less than 2% among sagittal brainstem slices. Additionally, midsagittal and volumetric features are correlated, suggesting that midsagittal brainstem structure gives an estimate of brainstem volume. Texture features extracted from midsagittal slice shows significant variation (p < 0.05) and is able to differentiate AD classes. CONCLUSION Midsagittal brainstem texture features are able to capture the changes occurring in the early stages of disease condition. As the distinction of AD in preclinical stage is complex and clinically significant, this approach could be useful for early diagnosis of the disease.
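The abstract does not specify the texture-feature extractor, but Haralick-style features computed from a gray-level co-occurrence matrix (GLCM) are the conventional choice for this kind of analysis. The sketch below illustrates that approach under that assumption; all names are our own, and the loop-based GLCM assumes a non-negative pixel offset.

```python
import numpy as np

def glcm(image, levels, offset=(0, 1)):
    """Normalized gray-level co-occurrence matrix for one non-negative
    pixel offset (default: horizontal neighbor)."""
    di, dj = offset
    p = np.zeros((levels, levels))
    rows, cols = image.shape
    for i in range(rows - di):
        for j in range(cols - dj):
            p[image[i, j], image[i + di, j + dj]] += 1
    return p / p.sum()

def texture_features(p):
    """Common Haralick-style features from a normalized GLCM."""
    idx = np.arange(p.shape[0])
    i, j = np.meshgrid(idx, idx, indexing="ij")
    return {
        "contrast": ((i - j) ** 2 * p).sum(),
        "energy": (p ** 2).sum(),
        "homogeneity": (p / (1.0 + np.abs(i - j))).sum(),
        "entropy": -(p[p > 0] * np.log2(p[p > 0])).sum(),
    }

# Toy 2-level "slice"; real use would quantize MRI intensities first.
img = np.array([[0, 0, 1], [0, 1, 1], [1, 1, 0]], dtype=int)
print(texture_features(glcm(img, levels=2)))
```

Averaging such features over several offsets (and angles) gives rotation-tolerant descriptors that can be compared across slices or subject groups.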
Affiliation(s)
- P Rohini
- Non-Invasive Imaging and Diagnostic Laboratory, Biomedical Engineering Group, Department of Applied Mechanics, Indian Institute of Technology Madras, 600036, India.
- S Sundar
- Department of Mathematics, Indian Institute of Technology Madras, 600036, India.
- S Ramakrishnan
- Non-Invasive Imaging and Diagnostic Laboratory, Biomedical Engineering Group, Department of Applied Mechanics, Indian Institute of Technology Madras, 600036, India.
233
Liu P, El Basha MD, Li Y, Xiao Y, Sanelli PC, Fang R. Deep Evolutionary Networks with Expedited Genetic Algorithms for Medical Image Denoising. Med Image Anal 2019; 54:306-315. [PMID: 30981133] [DOI: 10.1016/j.media.2019.03.004]
Abstract
Deep convolutional neural networks offer state-of-the-art performance for medical image analysis. However, their architectures are manually designed for particular problems. On the one hand, a manual design process requires many trials to tune a large number of hyperparameters and is thus quite time-consuming. On the other hand, the fittest hyperparameters, which adapt to source data properties (e.g., sparsity, noisy features), cannot be quickly identified for target data properties. For instance, the realistic noise in medical images is usually mixed and complicated, and sometimes unknown, which makes it challenging to apply existing methods directly and to create effective denoising neural networks easily. In this paper, we present a Genetic Algorithm (GA)-based network evolution approach that searches for the fittest genes to optimize network structures automatically. We expedite the evolutionary process through an experience-based greedy exploration strategy and transfer learning. Our evolutionary procedure is flexible, allowing current state-of-the-art modules (e.g., residual blocks) to be exploited in the search for promising neural networks. We evaluate our framework on a classic medical image analysis task: denoising. The experimental results on computed tomography perfusion (CTP) image denoising demonstrate the capability of the method to select the fittest genes for building high-performance networks, named EvoNets. Our results outperform state-of-the-art methods consistently at various noise levels.
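The core GA loop the paper builds on (encode candidates as genomes, select the fittest, mutate survivors) can be sketched in a few lines. This is a generic toy, not the authors' EvoNets code: the one-max objective stands in for a (much more expensive) network-training fitness, and all names and parameters are our own.

```python
import random

def evolve(fitness, genome_length, pop_size=8, generations=20,
           mutation_rate=0.2, seed=0):
    """Minimal genetic algorithm: binary genomes, truncation selection,
    per-bit point mutation. `fitness` maps a genome (tuple of 0/1)
    to a score to maximize."""
    rng = random.Random(seed)
    pop = [tuple(rng.randint(0, 1) for _ in range(genome_length))
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]          # keep the fittest half
        children = []
        for p in parents:                        # mutate copies of survivors
            child = tuple(1 - g if rng.random() < mutation_rate else g
                          for g in p)
            children.append(child)
        pop = parents + children                 # elitism: parents survive
    return max(pop, key=fitness)

# Toy objective: maximize the number of 1-bits ("one-max").
best = evolve(fitness=sum, genome_length=10)
print(best, sum(best))
```

In the paper's setting each gene would instead encode an architectural choice (e.g., whether a layer uses a residual block), and the expedited strategies (greedy exploration, transfer learning) reduce how many genomes must be trained to score.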
Affiliation(s)
- Peng Liu
- J. Crayton Pruitt Family Dept. of Biomedical Engineering, University of Florida, 1275 Center Drive, Gainesville, FL 32611 USA
- Mohammad D El Basha
- J. Crayton Pruitt Family Dept. of Biomedical Engineering, University of Florida, 1275 Center Drive, Gainesville, FL 32611 USA
- Yangjunyi Li
- J. Crayton Pruitt Family Dept. of Biomedical Engineering, University of Florida, 1275 Center Drive, Gainesville, FL 32611 USA
- Yao Xiao
- J. Crayton Pruitt Family Dept. of Biomedical Engineering, University of Florida, 1275 Center Drive, Gainesville, FL 32611 USA
- Pina C Sanelli
- Imaging Clinical Effectiveness and Outcomes Research, Department of Radiology, Northwell Health, 300 Community Drive, Manhasset, NY 11030 USA; Donald and Barbara Zucker School of Medicine at Hofstra/Northwell, 500 Hofstra Blvd, Hempstead, NY 11549 USA; Center for Health Innovations and Outcomes Research, Feinstein Institute for Medical Research, 350 Community Dr, Manhasset, NY 11030 USA
- Ruogu Fang
- J. Crayton Pruitt Family Dept. of Biomedical Engineering, University of Florida, 1275 Center Drive, Gainesville, FL 32611 USA.
|
234
|
Hussain D, Han SM. Computer-aided osteoporosis detection from DXA imaging. COMPUTER METHODS AND PROGRAMS IN BIOMEDICINE 2019; 173:87-107. [PMID: 31046999 DOI: 10.1016/j.cmpb.2019.03.011] [Citation(s) in RCA: 14] [Impact Index Per Article: 2.8] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 01/12/2019] [Revised: 03/10/2019] [Accepted: 03/13/2019] [Indexed: 06/09/2023]
Abstract
BACKGROUND AND OBJECTIVE Osteoporosis is a skeletal disease caused by a high rate of bone tissue loss, and it is a major cause of bone fracture. In contemporary society, osteoporosis is more common than cancer and stroke and results in a higher rate of morbidity and mortality in the human population. Osteoporosis can conclusively be diagnosed with dual energy X-ray absorptiometry (DXA). In this study, we propose a computer-aided osteoporosis detection (CAOD) technique that automatically measures bone mineral density (BMD) and generates an osteoporosis report from a DXA scan. METHODS The CAOD model denoises DXA images using a non-local means filter, segments them with a machine-learning pixel-label random forest, and locates regions of interest with high accuracy. The pixel-label random forest classifies each pixel as either bone or soft tissue; contours are then extracted from the binary image to locate regions of interest, and BMD is calculated from the bone and soft-tissue pixels. Mean, standard deviation, and correlation-coefficient statistics were used to evaluate the consistency and accuracy of the BMD measurements. RESULTS During a consistency test of BMD measurements using three consecutive scans from Computerized Imaging Reference Systems' Bona Fide Phantom (CIRS-BFP) for the spine, the CAOD model showed an average standard deviation of 0.0029, while the standard deviation from manual measurements on the same data set by three different individuals was 0.1199. In a correlation study of BMD measurements on real human scan images, the CAOD model versus manual measurement scored a correlation coefficient of R2 = 0.9901, while the CIRS-BFP study scored a correlation coefficient of R2 = 0.9709. CONCLUSIONS The CAOD model increases the precision and accuracy of BMD measurements.
This CAOD method will help clinicians, untrained DXA operators, and researchers (medical scientists, doctors, and bone researchers) use the DXA system with reliable accuracy and overcome workload challenges. It will also improve osteoporosis diagnosis from DXA systems and increase system performance and value.
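The measurement stage described above can be sketched in a few lines: classify pixels into bone versus soft tissue, then compute areal density as bone mineral content divided by bone area. In this illustrative sketch a fixed intensity threshold stands in for the paper's pixel-label random forest, and the units are arbitrary:

```python
import numpy as np

def segment_and_measure(img, bone_thresh=0.6, pixel_area_cm2=0.01):
    """Toy stand-in for the CAOD measurement stage: a fixed intensity
    threshold replaces the pixel-label random forest, and "BMD" is simply
    bone mineral content (summed bone intensity) divided by bone area."""
    bone_mask = img > bone_thresh                  # True = bone, False = soft tissue
    bone_area = bone_mask.sum() * pixel_area_cm2   # projected area, cm^2
    if bone_area == 0:
        return bone_mask, 0.0
    bmc = img[bone_mask].sum() * pixel_area_cm2    # "bone mineral content"
    return bone_mask, float(bmc / bone_area)       # areal "BMD" = BMC / area

# Synthetic DXA-like image: a bright square of "bone" on darker soft tissue.
img = np.full((10, 10), 0.3)
img[3:7, 3:7] = 0.9
mask, bmd = segment_and_measure(img)
```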
Affiliation(s)
- Dildar Hussain
- Department of Biomedical Engineering, College of Electronics and Information, Kyung Hee University 1732, Yongin 17104, Republic of Korea.
- Seung-Moo Han
- Department of Biomedical Engineering, College of Electronics and Information, Kyung Hee University 1732, Yongin 17104, Republic of Korea.
|
235
|
Javan R, Ellenbogen AL, Greek N, Haji-Momenian S. A prototype assembled 3D-printed phantom of the glenohumeral joint for fluoroscopic-guided shoulder arthrography. Skeletal Radiol 2019; 48:791-802. [PMID: 29948036 DOI: 10.1007/s00256-018-2979-4] [Citation(s) in RCA: 9] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 02/11/2018] [Revised: 05/07/2018] [Accepted: 05/14/2018] [Indexed: 02/02/2023]
Abstract
PURPOSE To describe the methodology of constructing a three-dimensional (3D) printed model of the glenohumeral joint, to serve as an interventional phantom for fluoroscopy-guided shoulder arthrography training. MATERIALS AND METHODS The osseous structures, intra-articular space and skin surface of the shoulder were digitally extracted as separate 3D meshes from a normal CT arthrogram of the shoulder, using commercially available software. The osseous structures were 3D-printed in gypsum, a fluoroscopically radiopaque mineral, using binder jet technology. The joint capsule was 3D printed with rubber-like TangoPlus material, using PolyJet technology. The capsule was secured to the humeral head and glenoid to create a sealed intra-articular space. A polyamide mold of the skin was printed using selective laser sintering. The joint was stabilized inside the mold, and the surrounding soft tissues were cast in silicone of varying densities. Fluoroscopically-guided shoulder arthrography was performed using anterior, posterior, and rotator interval approaches. CT arthrographic imaging of the phantom was also performed. RESULTS A life-size phantom of the glenohumeral joint was constructed. The radiopaque osseous structures replicated in-vivo osseous corticomedullary differentiation, with dense cortical bone and less dense medullary cancellous bone. The glenoid labrum was successfully integrated into the printed capsule, and visualized on CT arthrography. The phantom was repeatedly used to perform shoulder arthrography using all three conventional approaches, and simulated the in vivo challenges of needle guidance. CONCLUSIONS 3D printing of a complex capsule, such as the glenohumeral joint, is possible with this technique. Such a model can serve as a valuable training tool.
Affiliation(s)
- Ramin Javan
- Department of Radiology, George Washington University Hospital, 900 23rd St NW, Suite G2092, Washington, DC, 20037, USA.
- Amy L Ellenbogen
- Department of Radiology, George Washington University Hospital, 900 23rd St NW, Suite G2092, Washington, DC, 20037, USA
- Nicholas Greek
- Clinical Learning and Simulation Skills (CLASS) Center, George Washington University School of Medicine, 2300 I (Eye) Street, NW, Ross Hall 405, Washington, DC, USA
- Shawn Haji-Momenian
- Department of Radiology, George Washington University Hospital, 900 23rd St NW, Suite G2092, Washington, DC, 20037, USA
|
236
|
Pitteri M, Genova H, Lengenfelder J, DeLuca J, Ziccardi S, Rossi V, Calabrese M. Social cognition deficits and the role of amygdala in relapsing remitting multiple sclerosis patients without cognitive impairment. Mult Scler Relat Disord 2019; 29:118-123. [DOI: 10.1016/j.msard.2019.01.030] [Citation(s) in RCA: 25] [Impact Index Per Article: 5.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/03/2018] [Revised: 11/20/2018] [Accepted: 01/18/2019] [Indexed: 12/19/2022]
|
237
|
Liu C, Gardner SJ, Wen N, Elshaikh MA, Siddiqui F, Movsas B, Chetty IJ. Automatic Segmentation of the Prostate on CT Images Using Deep Neural Networks (DNN). Int J Radiat Oncol Biol Phys 2019; 104:924-932. [PMID: 30890447 DOI: 10.1016/j.ijrobp.2019.03.017] [Citation(s) in RCA: 49] [Impact Index Per Article: 9.8] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/07/2018] [Revised: 03/05/2019] [Accepted: 03/10/2019] [Indexed: 11/16/2022]
Abstract
PURPOSE Recent advances in deep neural networks (DNNs) have unlocked opportunities for their application for automatic image segmentation. We have evaluated a DNN-based algorithm for automatic segmentation of the prostate gland on a large cohort of patient images. METHODS AND MATERIALS Planning-CT data sets for 1114 patients with prostate cancer were retrospectively selected and divided into 2 groups. Group A contained 1104 data sets, with 1 physician-generated prostate gland contour for each data set. Among these image sets, 771 were used for training, 193 for validation, and 140 for testing. Group B contained 10 data sets, each including prostate contours delineated by 5 independent physicians and a consensus contour generated using the STAPLE method in the CERR software package. All images were resampled to a spatial resolution of 1 × 1 × 1.5 mm. A region (128 × 128 × 64 voxels) containing the prostate was selected to train a DNN. The best-performing model on the validation data sets was used to segment the prostate on all testing images. Results were compared between DNN and physician-generated contours using the Dice similarity coefficient, Hausdorff distances, regional contour distances, and center-of-mass distances. RESULTS The mean Dice similarity coefficients between DNN-based prostate segmentation and physician-generated contours for test data in Group A, Group B, and Group B-consensus were 0.85 ± 0.06 (range, 0.65-0.93), 0.85 ± 0.04 (range, 0.80-0.91), and 0.88 ± 0.03 (range, 0.82-0.92), respectively. The Hausdorff distance was 7.0 ± 3.5 mm, 7.3 ± 2.0 mm, and 6.3 ± 2.0 mm for Group A, Group B, and Group B-consensus, respectively. The mean center-of-mass distances for all 3 data set groups were within 5 mm. CONCLUSIONS A DNN-based algorithm was used to automatically segment the prostate for a large cohort of patients with prostate cancer. 
DNN-based prostate segmentations were compared to the consensus contour for a smaller group of patients; the agreement between DNN segmentations and consensus contour was similar to the agreement reported in a previous study. Clinical use of DNNs is promising, but further investigation is warranted.
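The agreement metrics reported above (Dice similarity coefficient and center-of-mass distance) are standard and straightforward to compute from binary masks; a minimal NumPy sketch, using the paper's 1 × 1 × 1.5 mm resampled voxel spacing:

```python
import numpy as np

def dice(a, b):
    """Dice similarity coefficient between two binary masks."""
    inter = np.logical_and(a, b).sum()
    return 2.0 * inter / (a.sum() + b.sum())

def com_distance(a, b, spacing=(1.0, 1.0, 1.5)):
    """Center-of-mass distance in mm for masks on a 1 x 1 x 1.5 mm grid."""
    ca = np.array(np.nonzero(a)).mean(axis=1) * np.array(spacing)
    cb = np.array(np.nonzero(b)).mean(axis=1) * np.array(spacing)
    return float(np.linalg.norm(ca - cb))

# Two 4x4x4-voxel cubes, shifted by one voxel along the first axis.
a = np.zeros((10, 10, 10), dtype=bool)
b = np.zeros((10, 10, 10), dtype=bool)
a[2:6, 2:6, 2:6] = True
b[3:7, 2:6, 2:6] = True
```

(The Hausdorff and regional contour distances used in the study require surface extraction and are omitted from this sketch.)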
Affiliation(s)
- Chang Liu
- Department of Radiation Oncology, Josephine Ford Cancer Institute, Henry Ford Health System, Detroit, Michigan.
- Stephen J Gardner
- Department of Radiation Oncology, Josephine Ford Cancer Institute, Henry Ford Health System, Detroit, Michigan
- Ning Wen
- Department of Radiation Oncology, Josephine Ford Cancer Institute, Henry Ford Health System, Detroit, Michigan
- Mohamed A Elshaikh
- Department of Radiation Oncology, Josephine Ford Cancer Institute, Henry Ford Health System, Detroit, Michigan
- Farzan Siddiqui
- Department of Radiation Oncology, Josephine Ford Cancer Institute, Henry Ford Health System, Detroit, Michigan
- Benjamin Movsas
- Department of Radiation Oncology, Josephine Ford Cancer Institute, Henry Ford Health System, Detroit, Michigan
- Indrin J Chetty
- Department of Radiation Oncology, Josephine Ford Cancer Institute, Henry Ford Health System, Detroit, Michigan
|
238
|
Gsaxner C, Roth PM, Wallner J, Egger J. Exploit fully automatic low-level segmented PET data for training high-level deep learning algorithms for the corresponding CT data. PLoS One 2019; 14:e0212550. [PMID: 30835746 PMCID: PMC6400332 DOI: 10.1371/journal.pone.0212550] [Citation(s) in RCA: 13] [Impact Index Per Article: 2.6] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/11/2018] [Accepted: 02/05/2019] [Indexed: 11/20/2022] Open
Abstract
In this study, we present an approach for fully automatic urinary bladder segmentation in CT images with artificial neural networks. Automatic medical image analysis has become an invaluable tool in the different treatment stages of diseases. Medical image segmentation in particular plays a vital role, since segmentation is often the initial step in an image analysis pipeline. Since deep neural networks have made a large impact on the field of image processing in the past years, we use two different deep learning architectures to segment the urinary bladder. Both of these architectures are based on pre-trained classification networks that are adapted to perform semantic segmentation. Since deep neural networks require a large amount of training data, specifically images and corresponding ground truth labels, we furthermore propose a method to generate such a suitable training data set from Positron Emission Tomography/Computed Tomography image data. This is done by applying thresholding to the Positron Emission Tomography data to obtain a ground truth and by utilizing data augmentation to enlarge the dataset. In this study, we discuss the influence of data augmentation on the segmentation results, and compare and evaluate the proposed architectures in terms of qualitative and quantitative segmentation performance. The results presented in this study suggest that deep neural networks are a promising approach for segmenting the urinary bladder in CT images.
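The ground-truth generation trick is simple to sketch: threshold the PET uptake volume to obtain a binary bladder label for the co-registered CT, then augment image and label with the same geometric transforms. The threshold value and the particular transforms below are illustrative, not taken from the paper:

```python
import numpy as np

def pet_to_labels(pet, threshold):
    """Threshold high tracer uptake in the PET volume to produce a binary
    bladder label for the co-registered CT (the low-level ground truth)."""
    return (pet >= threshold).astype(np.uint8)

def augment(ct, label):
    """Enlarge the training set with flips/rotations; the identical
    transform is applied to image and label to keep them aligned."""
    pairs = [(ct, label)]
    for k in (1, 2, 3):                            # 90-degree rotations
        pairs.append((np.rot90(ct, k), np.rot90(label, k)))
    pairs.append((np.fliplr(ct), np.fliplr(label)))
    return pairs

pet = np.array([[0.1, 0.9], [0.8, 0.2]])           # toy uptake map
ct = np.array([[10.0, 60.0], [55.0, 12.0]])        # co-registered CT slice
label = pet_to_labels(pet, 0.5)
pairs = augment(ct, label)
```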
Affiliation(s)
- Christina Gsaxner
- Institute for Computer Graphics and Vision, Faculty of Computer Science and Biomedical Engineering, Graz University of Technology, Graz, Austria
- Computer Algorithms for Medicine Laboratory, Graz, Austria
- Department of Oral & Maxillofacial Surgery, Medical University of Graz, Auenbruggerplatz, Styria, Austria
- Peter M. Roth
- Institute for Computer Graphics and Vision, Faculty of Computer Science and Biomedical Engineering, Graz University of Technology, Graz, Austria
- Jürgen Wallner
- Computer Algorithms for Medicine Laboratory, Graz, Austria
- Department of Oral & Maxillofacial Surgery, Medical University of Graz, Auenbruggerplatz, Styria, Austria
- Jan Egger
- Institute for Computer Graphics and Vision, Faculty of Computer Science and Biomedical Engineering, Graz University of Technology, Graz, Austria
- Computer Algorithms for Medicine Laboratory, Graz, Austria
- Department of Oral & Maxillofacial Surgery, Medical University of Graz, Auenbruggerplatz, Styria, Austria
|
239
|
Nie D, Wang L, Adeli E, Lao C, Lin W, Shen D. 3-D Fully Convolutional Networks for Multimodal Isointense Infant Brain Image Segmentation. IEEE TRANSACTIONS ON CYBERNETICS 2019; 49:1123-1136. [PMID: 29994385 PMCID: PMC6230311 DOI: 10.1109/tcyb.2018.2797905] [Citation(s) in RCA: 46] [Impact Index Per Article: 9.2] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/04/2023]
Abstract
Accurate segmentation of infant brain images into different regions of interest is one of the most important fundamental steps in studying early brain development. In the isointense phase (approximately 6-8 months of age), white matter and gray matter exhibit similar levels of intensities in magnetic resonance (MR) images, due to the ongoing myelination and maturation. This results in extremely low tissue contrast and thus makes tissue segmentation very challenging. Existing methods for tissue segmentation in this isointense phase usually employ patch-based sparse labeling on a single modality. To address the challenge, we propose a novel 3-D multimodal fully convolutional network (FCN) architecture for segmentation of isointense phase brain MR images. Specifically, we extend the conventional FCN architectures from 2-D to 3-D, and, rather than directly using FCN, we intuitively integrate coarse (naturally high-resolution) and dense (highly semantic) feature maps to better model tiny tissue regions. In addition, we propose a transformation module to better connect the aggregating layers, and a fusion module to better combine the feature maps. We compare the performance of our approach with several baseline and state-of-the-art methods on two sets of isointense phase brain images. The comparison results show that our proposed 3-D multimodal FCN model outperforms all previous methods by a large margin in terms of segmentation accuracy. In addition, the proposed framework also achieves faster segmentation results compared to all other methods. Our experiments further demonstrate that: 1) carefully integrating coarse and dense feature maps can considerably improve the segmentation performance; 2) batch normalization can speed up the convergence of the networks, especially when hierarchical feature aggregations occur; and 3) integrating multimodal information can further boost the segmentation performance.
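The coarse-plus-dense fusion idea can be sketched without a deep-learning framework: upsample the low-resolution semantic feature map and concatenate it channel-wise with the high-resolution one. Plain nearest-neighbour upsampling and concatenation below stand in for the paper's learned transformation and fusion modules:

```python
import numpy as np

def upsample2x(fmap):
    """Nearest-neighbour 2x upsampling of a (C, H, W) feature map."""
    return fmap.repeat(2, axis=1).repeat(2, axis=2)

def fuse(coarse, dense):
    """Skip-style fusion: upsample the coarse, highly semantic map and
    concatenate it channel-wise with the dense, high-resolution map.
    (The paper inserts learned transformation/fusion modules instead.)"""
    return np.concatenate([upsample2x(coarse), dense], axis=0)

coarse = np.random.rand(8, 4, 4)   # deep layer: many channels, low resolution
dense = np.random.rand(4, 8, 8)    # shallow layer: fewer channels, high resolution
fused = fuse(coarse, dense)
```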
|
240
|
Cirillo MD, Mirdell R, Sjöberg F, Pham TD. Tensor Decomposition for Colour Image Segmentation of Burn Wounds. Sci Rep 2019; 9:3291. [PMID: 30824754 PMCID: PMC6397199 DOI: 10.1038/s41598-019-39782-2] [Citation(s) in RCA: 11] [Impact Index Per Article: 2.2] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/02/2018] [Accepted: 01/28/2019] [Indexed: 11/09/2022] Open
Abstract
Research in burns has been a continuing demand over the past few decades, and important advancements are still needed to facilitate more effective patient stabilization and reduce the mortality rate. Burn wound assessment, which is an important task for surgical management, largely depends on the accuracy of burn area and burn depth estimates. Automated quantification of these burn parameters plays an essential role in reducing the estimate errors conventionally made by clinicians. The task of automated burn area calculation is known as image segmentation. In this paper, a new segmentation method for burn wound images is proposed. The proposed method utilizes tensor decomposition of colour images, from which effective texture features can be extracted for classification. Experimental results showed that the proposed method outperforms other methods not only in segmentation accuracy but also in computational speed.
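One generic way to obtain texture features from a colour-image tensor is to unfold it along each mode and take the leading singular values; this sketch is only a stand-in for the particular decomposition and features used in the paper:

```python
import numpy as np

def unfold(tensor, mode):
    """Mode-n unfolding: move axis `mode` to the front, flatten the rest."""
    return np.moveaxis(tensor, mode, 0).reshape(tensor.shape[mode], -1)

def singular_features(img, mode, k=3):
    """Top-k singular values of a mode-n unfolding, used here as a crude
    texture descriptor (a generic stand-in for the paper's features)."""
    s = np.linalg.svd(unfold(img, mode), compute_uv=False)
    return s[:k]

rng = np.random.default_rng(0)
img = rng.random((8, 8, 3))                     # toy H x W x C colour image
feats = [singular_features(img, m) for m in range(3)]
```

In a full pipeline these per-patch descriptors would feed a classifier that labels each region as burn or healthy skin.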
Affiliation(s)
- Marco D Cirillo
- Department of Biomedical Engineering, Linköping University, Linköping, Sweden.
- Robin Mirdell
- The Burn Centre, Department of Plastic Surgery, Hand Surgery, and Burns, Linköping University, Linköping, Sweden
- Department of Clinical and Experimental Medicine, Linköping University, Linköping, Sweden
- Folke Sjöberg
- The Burn Centre, Department of Plastic Surgery, Hand Surgery, and Burns, Linköping University, Linköping, Sweden
- Department of Clinical and Experimental Medicine, Linköping University, Linköping, Sweden
- Tuan D Pham
- Department of Biomedical Engineering, Linköping University, Linköping, Sweden.
|
241
|
Kozei A, Nikolov N, Haluzynskyi O, Burburska S. Method of Threshold CT Image Segmentation of Skeletal Bones. INNOVATIVE BIOSYSTEMS AND BIOENGINEERING 2019. [DOI: 10.20535/ibb.2019.3.1.154897] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/31/2022] Open
|
242
|
Automated Fractured Bone Segmentation and Labeling from CT Images. J Med Syst 2019; 43:60. [DOI: 10.1007/s10916-019-1176-x] [Citation(s) in RCA: 11] [Impact Index Per Article: 2.2] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/01/2018] [Accepted: 01/21/2019] [Indexed: 10/27/2022]
|
243
|
Zhang Y, Shi F, Cheng J, Wang L, Yap PT, Shen D. Longitudinally Guided Super-Resolution of Neonatal Brain Magnetic Resonance Images. IEEE TRANSACTIONS ON CYBERNETICS 2019; 49:662-674. [PMID: 29994176 PMCID: PMC6043407 DOI: 10.1109/tcyb.2017.2786161] [Citation(s) in RCA: 12] [Impact Index Per Article: 2.4] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/04/2023]
Abstract
Neonatal magnetic resonance (MR) images typically have low spatial resolution and insufficient tissue contrast. Interpolation methods are commonly used to upsample the images for the subsequent analysis. However, the resulting images are often blurry and susceptible to partial volume effects. In this paper, we propose a novel longitudinally guided super-resolution (SR) algorithm for neonatal images. This is motivated by the fact that anatomical structures evolve slowly and smoothly as the brain develops after birth. We propose a strategy involving longitudinal regularization, similar to bilateral filtering, in combination with low-rank and total variation constraints to solve the ill-posed inverse problem associated with image SR. Experimental results on neonatal MR images demonstrate that the proposed algorithm recovers clear structural details and outperforms state-of-the-art methods both qualitatively and quantitatively.
|
244
|
Zhang C, Bruggink R, Baan F, Bronkhorst E, Maal T, He H, Ongkosuwito EM. A new segmentation algorithm for measuring CBCT images of nasal airway: a pilot study. PeerJ 2019; 7:e6246. [PMID: 30713816 PMCID: PMC6354662 DOI: 10.7717/peerj.6246] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.4] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/23/2018] [Accepted: 12/07/2018] [Indexed: 11/21/2022] Open
Abstract
Background Three-dimensional (3D) modeling of the nasal airway space is becoming increasingly important for assessment in breathing disorders. Processing cone beam computed tomography (CBCT) scans of this region is complicated, however, by the intricate anatomy of the sinuses compared to the simpler nasopharynx. A gold standard for these measures also is lacking. Previous work has shown that software programs can vary in accuracy and reproducibility outcomes of these measurements. This study reports the reproducibility and accuracy of an algorithm, airway segmentor (AS), designed for nasal airway space analysis using a 3D printed anthropomorphic nasal airway model. Methods To test reproducibility, two examiners independently used AS to edit and segment 10 nasal airway CBCT scans. The intra- and inter-examiner reproducibility of the nasal airway volume was evaluated using paired t-tests and intraclass correlation coefficients. For accuracy testing, the CBCT data for pairs of nasal cavities were 3D printed to form hollow shell models. The water-equivalent method was used to calculate the inner volume as the gold standard, and the models were then embedded into a dry human skull as a phantom and subjected to CBCT. AS, along with the software programs MIMICS 19.0 and INVIVO 5, was applied to calculate the inner volume of the models from the CBCT scan of the phantom. The accuracy was reported as a percentage of the gold standard. Results The intra-examiner reproducibility was high, and the inter-examiner reproducibility was clinically acceptable. AS and MIMICS presented accurate volume calculations, while INVIVO 5 significantly overestimated the mockup of the nasal airway volume. Conclusion With the aid of a 3D printing technique, the new algorithm AS was found to be a clinically reliable and accurate tool for the segmentation and reconstruction of the nasal airway space.
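The volume measurement itself reduces to counting segmented voxels and multiplying by the voxel volume, with accuracy then reported as a percentage of the water-equivalent gold standard. The 0.4 mm isotropic voxel size below is an assumed example, not a value from the study:

```python
import numpy as np

def airway_volume_ml(mask, voxel_mm=(0.4, 0.4, 0.4)):
    """Volume of a binary segmentation: voxel count x voxel volume.
    The 0.4 mm isotropic voxel size is an assumed example (1 ml = 1000 mm^3)."""
    return float(mask.sum() * np.prod(voxel_mm) / 1000.0)

def accuracy_pct(measured_ml, gold_ml):
    """Accuracy expressed as a percentage of the gold-standard volume,
    mirroring how the study reports agreement with the water-equivalent method."""
    return 100.0 * measured_ml / gold_ml

# A 30x30x30-voxel cube of "airway" inside a 50^3 scan volume.
mask = np.zeros((50, 50, 50), dtype=bool)
mask[10:40, 10:40, 10:40] = True
vol = airway_volume_ml(mask)
```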
Affiliation(s)
- Chen Zhang
- The State Key Laboratory Breeding Base of Basic Science of Stomatology (Hubei-MOST) & Key Laboratory of Oral Biomedicine Ministry of Education, School & Hospital of Stomatology, Wuhan University, Wuhan, China
- Department of Dentistry, Section of Orthodontics and Craniofacial Biology, Radboud University Nijmegen Medical Center, Radboud University Nijmegen, Nijmegen, Netherlands
- Robin Bruggink
- Department of Dentistry, Section of Orthodontics and Craniofacial Biology, Radboud University Nijmegen Medical Center, Radboud University Nijmegen, Nijmegen, Netherlands
- 3DLAB The Netherlands, Radboud University Medical Center, Radboud University Nijmegen, Nijmegen, Netherlands
- Frank Baan
- Department of Dentistry, Section of Orthodontics and Craniofacial Biology, Radboud University Nijmegen Medical Center, Radboud University Nijmegen, Nijmegen, Netherlands
- 3DLAB The Netherlands, Radboud University Medical Center, Radboud University Nijmegen, Nijmegen, Netherlands
- Ewald Bronkhorst
- Department of Dentistry, Section of Preventive and Restorative Dentistry, Radboud University Nijmegen Medical Center, Radboud University Nijmegen, Nijmegen, Netherlands
- Thomas Maal
- 3DLAB The Netherlands, Radboud University Medical Center, Radboud University Nijmegen, Nijmegen, Netherlands
- Department of Oral and Maxillofacial Surgery, Radboud University Nijmegen Medical Center, Radboud University Nijmegen, Nijmegen, Netherlands
- Hong He
- The State Key Laboratory Breeding Base of Basic Science of Stomatology (Hubei-MOST) & Key Laboratory of Oral Biomedicine Ministry of Education, School & Hospital of Stomatology, Wuhan University, Wuhan, China
- Edwin M Ongkosuwito
- Department of Dentistry, Section of Orthodontics and Craniofacial Biology, Radboud University Nijmegen Medical Center, Radboud University Nijmegen, Nijmegen, Netherlands
|
245
|
Improvement of MRI Brain Image Segmentation Using Fuzzy Unsupervised Learning. IRANIAN JOURNAL OF RADIOLOGY 2019. [DOI: 10.5812/iranjradiol.69063] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.2] [Reference Citation Analysis] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 11/16/2022]
|
246
|
Sawyer TW, Rice PFS, Sawyer DM, Koevary JW, Barton JK. Evaluation of segmentation algorithms for optical coherence tomography images of ovarian tissue. J Med Imaging (Bellingham) 2019; 6:014002. [PMID: 30746391 PMCID: PMC6350616 DOI: 10.1117/1.jmi.6.1.014002] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.2] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/28/2018] [Accepted: 12/27/2018] [Indexed: 12/31/2022] Open
Abstract
Ovarian cancer has the lowest survival rate among all gynecologic cancers, predominantly due to late diagnosis. Early detection of ovarian cancer can increase 5-year survival rates from 40% up to 92%, yet no reliable early detection techniques exist. Optical coherence tomography (OCT) is an emerging technique that provides depth-resolved, high-resolution images of biological tissue in real-time and demonstrates great potential for imaging of ovarian tissue. Mouse models are crucial to quantitatively assess the diagnostic potential of OCT for ovarian cancer imaging; however, due to small organ size, the ovaries must first be separated from the image background using the process of segmentation. Manual segmentation is time-intensive, as OCT yields three-dimensional data. Furthermore, speckle noise complicates OCT images, frustrating many processing techniques. While much work has investigated noise-reduction and automated segmentation for retinal OCT imaging, little has considered the application to the ovaries, which exhibit higher variance and inhomogeneity than the retina. To address these challenges, we evaluate a set of algorithms to segment OCT images of mouse ovaries. We examine five preprocessing techniques and seven segmentation algorithms. While all preprocessing methods improve segmentation, Gaussian filtering is most effective, showing an improvement of 32% ± 1.2%. Of the segmentation algorithms, active contours performs best, segmenting with an accuracy of 94.8% ± 1.2% compared with manual segmentation. Even so, further optimization could maximize performance for segmenting OCT images of the ovaries.
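The benefit of Gaussian prefiltering before segmentation can be demonstrated on synthetic data: heavy zero-mean noise (a crude stand-in for OCT speckle) ruins naive thresholding, while the same threshold applied after smoothing recovers the object. This sketch uses plain thresholding rather than the paper's active contours:

```python
import numpy as np

def gaussian_kernel(sigma):
    radius = int(3 * sigma)
    x = np.arange(-radius, radius + 1)
    k = np.exp(-x**2 / (2 * sigma**2))
    return k / k.sum()

def gaussian_blur(img, sigma=1.0):
    """Separable Gaussian filter: convolve rows, then columns."""
    k = gaussian_kernel(sigma)
    pad = len(k) // 2
    padded = np.pad(img, pad, mode="edge")
    rows = np.apply_along_axis(lambda r: np.convolve(r, k, "valid"), 1, padded)
    return np.apply_along_axis(lambda c: np.convolve(c, k, "valid"), 0, rows)

# Synthetic "ovary": bright square on dark background plus heavy noise.
rng = np.random.default_rng(1)
truth = np.zeros((40, 40))
truth[10:30, 10:30] = 1.0
noisy = truth + 0.8 * rng.standard_normal(truth.shape)

raw_seg = noisy > 0.5                                # threshold the raw image
smooth_seg = gaussian_blur(noisy, sigma=2.0) > 0.5   # threshold after smoothing
acc_raw = (raw_seg == truth.astype(bool)).mean()
acc_smooth = (smooth_seg == truth.astype(bool)).mean()
```

On this toy example the smoothed segmentation is substantially more accurate than the raw one, which mirrors the paper's finding that Gaussian filtering improves every segmentation algorithm tested.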
Affiliation(s)
- Travis W. Sawyer
- University of Arizona, College of Optical Sciences, Tucson, Arizona, United States
- Photini F. S. Rice
- University of Arizona, Department of Biomedical Engineering, Tucson, Arizona, United States
- Jennifer W. Koevary
- University of Arizona, Department of Biomedical Engineering, Tucson, Arizona, United States
- Jennifer K. Barton
- University of Arizona, College of Optical Sciences, Tucson, Arizona, United States
- University of Arizona, Department of Biomedical Engineering, Tucson, Arizona, United States
|
247
|
Napel S, Mu W, Jardim‐Perassi BV, Aerts HJWL, Gillies RJ. Quantitative imaging of cancer in the postgenomic era: Radio(geno)mics, deep learning, and habitats. Cancer 2018; 124:4633-4649. [PMID: 30383900 PMCID: PMC6482447 DOI: 10.1002/cncr.31630] [Citation(s) in RCA: 120] [Impact Index Per Article: 20.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/10/2018] [Revised: 07/11/2018] [Accepted: 07/17/2018] [Indexed: 11/07/2022]
Abstract
Although cancer often is referred to as "a disease of the genes," it is indisputable that the (epi)genetic properties of individual cancer cells are highly variable, even within the same tumor. Hence, preexisting resistant clones will emerge and proliferate after therapeutic selection that targets sensitive clones. Herein, the authors propose that quantitative image analytics, known as "radiomics," can be used to quantify and characterize this heterogeneity. Virtually every patient with cancer is imaged radiologically. Radiomics is predicated on the beliefs that these images reflect underlying pathophysiologies, and that they can be converted into mineable data for improved diagnosis, prognosis, prediction, and therapy monitoring. In the last decade, the radiomics of cancer has grown from a few laboratories to a worldwide enterprise. During this growth, radiomics has established a convention, wherein a large set of annotated image features (1-2000 features) are extracted from segmented regions of interest and used to build classifier models to separate individual patients into their appropriate class (eg, indolent vs aggressive disease). An extension of this conventional radiomics is the application of "deep learning," wherein convolutional neural networks can be used to detect the most informative regions and features without human intervention. A further extension of radiomics involves automatically segmenting informative subregions ("habitats") within tumors, which can be linked to underlying tumor pathophysiology. The goal of the radiomics enterprise is to provide informed decision support for the practice of precision oncology.
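The conventional radiomics pipeline the authors describe (annotated features extracted from a segmented region of interest, then fed to a classifier) can be illustrated with a few first-order features; real radiomics toolkits compute hundreds, and the bin count used here for entropy is arbitrary:

```python
import numpy as np

def first_order_features(image, mask, bins=16):
    """A few first-order radiomic features over a segmented ROI.
    Real radiomics toolkits extract hundreds; these are illustrative,
    and the 16-bin histogram for entropy is an arbitrary choice."""
    vals = image[mask]
    hist, _ = np.histogram(vals, bins=bins)
    p = hist[hist > 0] / vals.size
    return {
        "mean": float(vals.mean()),
        "std": float(vals.std()),
        "entropy": float(-(p * np.log2(p)).sum()),
    }

rng = np.random.default_rng(2)
image = rng.normal(100.0, 20.0, size=(32, 32))     # toy intensity image
mask = np.zeros((32, 32), dtype=bool)
mask[8:24, 8:24] = True                            # segmented "tumor" ROI
feats = first_order_features(image, mask)
```

A classifier model would then be trained on such feature vectors to separate patients into classes (e.g., indolent versus aggressive disease).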
Affiliation(s)
- Sandy Napel
- Department of Radiology, Stanford University, Stanford, California
- Wei Mu
- Department of Cancer Physiology, H. Lee Moffitt Cancer Center, Tampa, Florida
- Hugo J. W. L. Aerts
- Dana‐Farber Cancer Institute, Department of Radiology, Brigham and Women’s Hospital, Harvard Medical School, Boston, Massachusetts
- Robert J. Gillies
- Department of Cancer Physiology, H. Lee Moffitt Cancer Center, Tampa, Florida
|
248
|
O'Mara AR, Collins JM, King AE, Vickers JC, Kirkcaldie MTK. Accurate and Unbiased Quantitation of Amyloid-β Fluorescence Images Using ImageSURF. Curr Alzheimer Res 2018; 16:102-108. [PMID: 30543169 DOI: 10.2174/1567205016666181212152622] [Citation(s) in RCA: 6] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/03/2018] [Revised: 11/22/2018] [Accepted: 11/29/2018] [Indexed: 12/25/2022]
Abstract
BACKGROUND Images of amyloid-β pathology characteristic of Alzheimer's disease are difficult to consistently and accurately segment, due to diffuse deposit boundaries and imaging variations. METHODS We evaluated the performance of ImageSURF, our open-source ImageJ plugin, which considers a range of image derivatives to train image classifiers. We compared ImageSURF to standard image thresholding to assess its reproducibility, accuracy and generalizability when used on fluorescence images of amyloid pathology. RESULTS ImageSURF segments amyloid-β images significantly more faithfully, and with significantly greater generalizability, than optimized thresholding. CONCLUSION In addition to its superior performance in capturing human evaluations of pathology images, ImageSURF is able to segment image sets of any size in a consistent and unbiased manner, without requiring additional blinding, and can be retrospectively applied to existing images. The training process yields a classifier file which can be shared as supplemental data, allowing fully open methods and data, and enabling more direct comparisons between different studies.
Affiliation(s)
- Aidan R O'Mara
- Wicking Dementia Research and Education Centre, College of Health and Medicine, University of Tasmania, Hobart, Australia
- Jessica M Collins
- Wicking Dementia Research and Education Centre, College of Health and Medicine, University of Tasmania, Hobart, Australia
- Anna E King
- Wicking Dementia Research and Education Centre, College of Health and Medicine, University of Tasmania, Hobart, Australia
- James C Vickers
- Wicking Dementia Research and Education Centre, College of Health and Medicine, University of Tasmania, Hobart, Australia
- Matthew T K Kirkcaldie
- Wicking Dementia Research and Education Centre, College of Health and Medicine, University of Tasmania, Hobart, Australia
249
Lormand C, Zellmer GF, Németh K, Kilgour G, Mead S, Palmer AS, Sakamoto N, Yurimoto H, Moebis A. Weka Trainable Segmentation Plugin in ImageJ: A Semi-Automatic Tool Applied to Crystal Size Distributions of Microlites in Volcanic Rocks. Microsc Microanal 2018; 24:667-675. [PMID: 30588911] [DOI: 10.1017/s1431927618015428]
Abstract
Crystals within volcanic rocks record geochemical and textural signatures during magmatic evolution before eruption. Clues to this magmatic history can be examined using crystal size distribution (CSD) studies. The analysis of CSDs is a standard petrological tool, but laborious due to manual hand-drawing of crystal margins. The trainable Weka segmentation (TWS) plugin in ImageJ is a promising alternative. It uses machine learning and image segmentation to classify an image. We recorded back-scattered electron (BSE) images of three volcanic samples with different crystallinity (35, 50 and ≥85 vol. %) using scanning electron microscopes (SEM) of variable image resolutions, and then segmented the images using TWS. Crystal measurements obtained from the automatically segmented images are compared with those of the manual segmentation. Samples of up to 50 vol. % crystallinity are successfully segmented using TWS. Segmentation at significantly higher crystallinities fails, as crystal boundaries cannot be distinguished. Accuracy performance tests for the TWS classifiers yield high F-scores (>0.930); hence, TWS is a fast and effective tool for outlining crystals in BSE images of glassy rocks. Finally, reliable CSDs can be derived using a low-cost desktop SEM, paving the way for a wide range of research to take advantage of this new petrological method.
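Once a segmentation such as TWS's is in hand, deriving a crystal size distribution reduces to labeling connected components and converting pixel areas to equivalent diameters. A minimal sketch with SciPy, assuming a binary crystal mask and a known pixel size (the names, toy mask, and pixel size are illustrative, not taken from the paper):

```python
import numpy as np
from scipy import ndimage

def crystal_sizes(binary_mask, pixel_size_um=1.0):
    """Label connected crystals in a binary segmentation and return
    their equivalent circular diameters in micrometres."""
    labeled, n = ndimage.label(binary_mask)
    # Pixel area of each labeled crystal (labels start at 1).
    areas = ndimage.sum(binary_mask, labeled, index=range(1, n + 1))
    # Diameter of a circle with the same area, scaled to microns.
    return np.sqrt(4.0 * np.asarray(areas) / np.pi) * pixel_size_um

# Toy segmented image with two square "crystals" (3x3 and 5x5 px).
mask = np.zeros((20, 20), dtype=int)
mask[2:5, 2:5] = 1
mask[10:15, 10:15] = 1
diams = crystal_sizes(mask, pixel_size_um=2.0)
```

Binning the resulting diameters per unit sample area then gives the size-frequency data that CSD plots are built from.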
Affiliation(s)
- Charline Lormand
- School of Agriculture and Environment, Volcanic Risk Solutions, Massey University, P.O. Box 11222, Palmerston North 4442, New Zealand
- Georg F Zellmer
- School of Agriculture and Environment, Volcanic Risk Solutions, Massey University, P.O. Box 11222, Palmerston North 4442, New Zealand
- Károly Németh
- School of Agriculture and Environment, Volcanic Risk Solutions, Massey University, P.O. Box 11222, Palmerston North 4442, New Zealand
- Geoff Kilgour
- GNS Science, Wairakei Research Centre, P.O. Box 2000, Taupo 3352, New Zealand
- Stuart Mead
- School of Agriculture and Environment, Volcanic Risk Solutions, Massey University, P.O. Box 11222, Palmerston North 4442, New Zealand
- Alan S Palmer
- Department of Soil and Earth Sciences, School of Agriculture and Environment, Massey University, P.O. Box 11222, Palmerston North 4442, New Zealand
- Naoya Sakamoto
- Isotope Imaging Laboratory, Creative Research Institution, Hokkaido University, Sapporo 060-0810, Japan
- Hisayoshi Yurimoto
- Isotope Imaging Laboratory, Creative Research Institution, Hokkaido University, Sapporo 060-0810, Japan
- Anja Moebis
- Department of Soil and Earth Sciences, School of Agriculture and Environment, Massey University, P.O. Box 11222, Palmerston North 4442, New Zealand
250
Vector Field Convolution-Based B-Spline Deformation Model for 3D Segmentation of Cartilage in MRI. Symmetry (Basel) 2018. [DOI: 10.3390/sym10110591]
Abstract
In this paper, a novel 3D vector field convolution (VFC)-based B-spline deformation model is proposed for accurate and robust cartilage segmentation. Firstly, the anisotropic diffusion method is utilized for noise reduction, and the Sinc interpolation method is employed for resampling. Then, to extract the rough cartilage, features derived from