151
Carvalho V, Gonçalves IM, Souza A, Souza MS, Bento D, Ribeiro JE, Lima R, Pinho D. Manual and Automatic Image Analysis Segmentation Methods for Blood Flow Studies in Microchannels. Micromachines 2021; 12:317. [PMID: 33803615] [PMCID: PMC8002955] [DOI: 10.3390/mi12030317]
Abstract
In blood flow studies, image analysis plays an extremely important role in examining the raw data obtained by high-speed video microscopy systems. This work shows different ways to process images that contain various blood phenomena occurring in microfluidic devices and in the microcirculation. For this purpose, the current methods used for tracking red blood cells (RBCs) flowing through a glass capillary, and techniques to measure the cell-free layer thickness in different kinds of microchannels, are presented. Most past blood flow experimental data have been collected and analyzed by means of manual methods, which can be extremely reliable but are highly time-consuming, user-intensive, and repetitive, and whose results can be subject to user-induced errors. For this reason, it is crucial to develop image analysis methods able to obtain the data automatically. Two automatic image analysis methods, one for tracking individual RBCs and one for measuring the well-known cell-free layer phenomenon, are presented and discussed in order to demonstrate their feasibility for accurate data acquisition in such studies. Additionally, a comparative analysis between the manual and automatic methods was performed.
Affiliation(s)
- Violeta Carvalho
- Mechanical Engineering and Resource Sustainability Center (MEtRICs), Mechanical Engineering Department, University of Minho, 4800-058 Guimarães, Portugal
- Inês M. Gonçalves
- Instituto Superior Técnico, Universidade de Lisboa, Av. Rovisco Pais, 1049-001 Lisboa, Portugal
- Andrews Souza
- Centro para a Valorização de Resíduos (CVR), University of Minho, 4800-028 Guimarães, Portugal
- Maria S. Souza
- Center for MicroElectromechanical Systems (CMEMS), University of Minho, 4800-058 Guimarães, Portugal
- David Bento
- Transport Phenomena Research Center (CEFT), Faculdade de Engenharia da Universidade do Porto (FEUP), Rua Dr. Roberto Frias, 4200-465 Porto, Portugal
- Polytechnic Institute of Bragança, ESTiG/IPB, C. Sta. Apolónia, 5300-857 Bragança, Portugal
- João E. Ribeiro
- Polytechnic Institute of Bragança, ESTiG/IPB, C. Sta. Apolónia, 5300-857 Bragança, Portugal
- Centro de Investigação de Montanha (CIMO), Polytechnic Institute of Bragança, 5300-252 Bragança, Portugal
- Rui Lima
- Mechanical Engineering and Resource Sustainability Center (MEtRICs), Mechanical Engineering Department, University of Minho, 4800-058 Guimarães, Portugal
- Transport Phenomena Research Center (CEFT), Faculdade de Engenharia da Universidade do Porto (FEUP), Rua Dr. Roberto Frias, 4200-465 Porto, Portugal
- Diana Pinho
- Mechanical Engineering and Resource Sustainability Center (MEtRICs), Mechanical Engineering Department, University of Minho, 4800-058 Guimarães, Portugal
- Center for MicroElectromechanical Systems (CMEMS), University of Minho, 4800-058 Guimarães, Portugal
- Polytechnic Institute of Bragança, ESTiG/IPB, C. Sta. Apolónia, 5300-857 Bragança, Portugal
152
Ziccardi S, Pitteri M, Genova HM, Calabrese M. Social Cognition in Multiple Sclerosis: A 3-Year Follow-Up MRI and Behavioral Study. Diagnostics (Basel) 2021; 11:484. [PMID: 33803307] [PMCID: PMC8001246] [DOI: 10.3390/diagnostics11030484]
Abstract
Social cognition (SC) has become a topic of widespread interest in the last decade. SC deficits have been described in multiple sclerosis (MS) patients, in association with amygdala lesions, even in those without formal cognitive impairment. In this 3-year follow-up study, we aimed to longitudinally investigate the evolution of SC deficits and amygdala damage in a group of cognitively normal MS patients, and the association between SC and psychological well-being. Three years (T3) after the baseline examination (T0), 26 relapsing-remitting MS (RRMS) patients were retested with a neuropsychological battery and SC tasks (theory of mind, facial emotion recognition, empathy). An SC composite score (SCcomp) was calculated for each patient. Emotional state, fatigue, and quality of life (QoL) were also evaluated. At T3, the RRMS patients underwent the same 3T-MRI protocol as at T0, from which both the volume and the cortical lesion volume (CLV) of the amygdalae were calculated. Compared to T0, at T3 all RRMS patients were still cognitively normal and their globally impaired SC performance remained stable. At T0, SCcomp correlated with amygdala CLV (p = 0.002), while at T3 it was more strongly associated with amygdala volume (p = 0.035) than with amygdala CLV (p = 0.043). The SCcomp change from T0 to T3 correlated with global emotional state (p = 0.043), depression (p = 0.046), anxiety (p = 0.034), fatigue (p = 0.025), and QoL-social functioning (p = 0.033). We showed the longitudinal stability of SC deficits in cognitively normal RRMS patients, mirroring amygdala structural damage and psychological well-being. These results highlight that SC plays a key role in MS.
Affiliation(s)
- Stefano Ziccardi
- Neurology Section, Department of Neurosciences, Biomedicine and Movement Sciences, University of Verona, 37134 Verona, Italy
- Correspondence: (S.Z.); (M.C.)
- Marco Pitteri
- Neurology Section, Department of Neurosciences, Biomedicine and Movement Sciences, University of Verona, 37134 Verona, Italy
- Helen M. Genova
- Kessler Foundation, 120 Eagle Rock Ave, Suite 100, East Hanover, NJ 07936, USA
- Department of Physical Medicine and Rehabilitation, New Jersey Medical School, Rutgers University, Newark, NJ 07101, USA
- Massimiliano Calabrese
- Neurology Section, Department of Neurosciences, Biomedicine and Movement Sciences, University of Verona, 37134 Verona, Italy
- Correspondence: (S.Z.); (M.C.)
153
Halder S, Acharya S, Kou W, Kahrilas PJ, Pandolfino JE, Patankar NA. Mechanics informed fluoroscopy of esophageal transport. Biomech Model Mechanobiol 2021; 20:925-940. [PMID: 33651206] [DOI: 10.1007/s10237-021-01420-0]
Abstract
Fluoroscopy is a radiographic procedure for evaluating esophageal disorders such as achalasia, dysphagia, and gastroesophageal reflux disease. It performs dynamic imaging of the swallowing process and provides anatomical detail and a qualitative idea of how well swallowed fluid is transported through the esophagus. In this work, we present a method called mechanics informed fluoroscopy (FluoroMech) that derives patient-specific quantitative information about esophageal function. FluoroMech uses a convolutional neural network to segment the image sequences generated by fluoroscopy; the segmented images then become the input to a one-dimensional model that predicts the flow rate and pressure distribution of the fluid transported through the esophagus. We have extended this model to identify and estimate potential physiomarkers such as esophageal wall stiffness and active relaxation ahead of the peristaltic wave in the esophageal musculature. FluoroMech requires minimal computational time and hence can potentially be applied clinically in the diagnosis of esophageal disorders.
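The one-dimensional reduction described in this abstract rests on mass conservation, ∂A/∂t + ∂Q/∂x = 0: given a segmented cross-sectional-area field A(x, t), the flow rate follows by integrating −∂A/∂t along the esophagus. The following is only a minimal NumPy sketch of that continuity step, not the authors' implementation; the closed-inlet boundary condition (Q = 0 at x = 0) and all names are our assumptions for illustration.

```python
import numpy as np

def flow_rate(area: np.ndarray, dx: float, dt: float) -> np.ndarray:
    """Flow rate Q(x, t) from a segmented area field A(x, t) via continuity:
    dA/dt + dQ/dx = 0, assuming Q = 0 at x = 0 (hypothetical closed inlet).
    `area` has shape (nx, nt); the result has the same shape."""
    # Time derivative of the area field at every axial station.
    dA_dt = np.gradient(area, dt, axis=1)
    # Integrate -dA/dt along x with the trapezoidal rule (cumulative sum).
    q = -np.cumsum((dA_dt[1:] + dA_dt[:-1]) / 2.0 * dx, axis=0)
    # Prepend the boundary row Q(0, t) = 0.
    return np.vstack([np.zeros((1, area.shape[1])), q])
```

With a time-constant area field the recovered flow rate is identically zero, which is a quick sanity check on the sign convention.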
Affiliation(s)
- Sourav Halder
- Theoretical and Applied Mechanics, Northwestern University, 2145 Sheridan Road, Evanston, IL, 60208, USA
- Shashank Acharya
- Department of Mechanical Engineering, Northwestern University, 2145 Sheridan Road, Evanston, IL, 60208, USA
- Wenjun Kou
- Department of Medicine, Feinberg School of Medicine, Northwestern University, 676 North Saint Clair St., Chicago, IL, 60611, USA
- Peter J Kahrilas
- Department of Medicine, Feinberg School of Medicine, Northwestern University, 676 North Saint Clair St., Chicago, IL, 60611, USA
- John E Pandolfino
- Department of Medicine, Feinberg School of Medicine, Northwestern University, 676 North Saint Clair St., Chicago, IL, 60611, USA
- Neelesh A Patankar
- Theoretical and Applied Mechanics, Northwestern University, 2145 Sheridan Road, Evanston, IL, 60208, USA
- Department of Mechanical Engineering, Northwestern University, 2145 Sheridan Road, Evanston, IL, 60208, USA
154
Lacalle D, Castro-Abril HA, Randelovic T, Domínguez C, Heras J, Mata E, Mata G, Méndez Y, Pascual V, Ochoa I. SpheroidJ: An Open-Source Set of Tools for Spheroid Segmentation. Computer Methods and Programs in Biomedicine 2021; 200:105837. [PMID: 33221056] [DOI: 10.1016/j.cmpb.2020.105837]
Abstract
BACKGROUND AND OBJECTIVES Spheroids are the most widely used 3D models for studying the effects of different micro-environmental characteristics on tumour behaviour and for testing different preclinical and clinical treatments. In order to speed up the study of spheroids, imaging methods that automatically segment and measure spheroids are instrumental, and several approaches for automatic segmentation of spheroid images exist in the literature. However, those methods fail to generalise to a diversity of experimental conditions. The aim of this work is the development of a set of tools for spheroid segmentation that works in a diversity of settings. METHODS We tackled the spheroid segmentation task by first developing a generic segmentation algorithm that can be easily adapted to different scenarios. This generic algorithm was employed to reduce the burden of annotating a dataset of images that, in turn, was used to train several deep learning architectures for semantic segmentation. Both our generic algorithm and the resulting deep learning models were tested on several datasets of spheroid images in which the spheroids were grown under different experimental conditions and the images acquired using different equipment. RESULTS The generic algorithm can be particularised to different scenarios; however, those particular algorithms fail to generalise to other conditions. By contrast, the best deep learning model, built on the HRNet-Seg architecture, generalises properly to a diversity of scenarios. To facilitate the dissemination and use of our algorithms and models, we present SpheroidJ, a set of open-source tools for spheroid segmentation. CONCLUSIONS We have developed an algorithm and trained several models for spheroid segmentation that can be employed with images acquired under different conditions. Thanks to this work, the analysis of spheroids acquired under different conditions will be more reliable and comparable, and the developed tools will help to advance our understanding of tumour behaviour.
Affiliation(s)
- David Lacalle
- Department of Mathematics and Computer Science, University of La Rioja, Spain
- Héctor Alfonso Castro-Abril
- Tissue MicroEnvironment (TME) lab, Institute for Health Research Aragón (IIS Aragón), Zaragoza, Spain; Aragon Institute of Engineering Research (I3A), University of Zaragoza, Zaragoza, Spain; Grupo de modelado y métodos numéricos en Ingeniería, Universidad Nacional de Colombia, Colombia
- Teodora Randelovic
- Tissue MicroEnvironment (TME) lab, Institute for Health Research Aragón (IIS Aragón), Zaragoza, Spain; Aragon Institute of Engineering Research (I3A), University of Zaragoza, Zaragoza, Spain
- César Domínguez
- Department of Mathematics and Computer Science, University of La Rioja, Spain
- Jónathan Heras
- Department of Mathematics and Computer Science, University of La Rioja, Spain
- Eloy Mata
- Department of Mathematics and Computer Science, University of La Rioja, Spain
- Gadea Mata
- Confocal Microscopy Core Unit, Spanish National Cancer Research Centre, Madrid, Spain
- Yolanda Méndez
- Department of Mathematics and Computer Science, University of La Rioja, Spain
- Vico Pascual
- Department of Mathematics and Computer Science, University of La Rioja, Spain
- Ignacio Ochoa
- Tissue MicroEnvironment (TME) lab, Institute for Health Research Aragón (IIS Aragón), Zaragoza, Spain; Aragon Institute of Engineering Research (I3A), University of Zaragoza, Zaragoza, Spain; Biomedical Research Networking Centre in Bioengineering, Biomaterials and Nanomedicine (CIBER-BBN), Instituto de Salud Carlos III, Madrid, Spain
155
El Kader IA, Xu G, Shuai Z, Saminu S. Brain Tumor Detection and Classification by Hybrid CNN-DWA Model Using MR Images. Curr Med Imaging 2021; 17:1248-1255. [PMID: 33655844] [DOI: 10.2174/1573405617666210224113315]
Abstract
OBJECTIVE Medical image processing is an exciting research area. In this paper, we propose a new brain tumor detection and classification model using MR brain images to help doctors detect and classify brain tumors early, with high performance and the best accuracy. MATERIALS We trained and validated our model using five databases: BRATS2012, BRATS2013, BRATS2014, BRATS2015, and ISLES-SISS 2015. METHODS The proposed hybrid model is novel, being used here for the first time; it is based on a hybrid deep convolutional neural network and deep watershed auto-encoder (CNN-DWA) model. The method consists of six phases: first, input of the MR images; second, preprocessing using filtering and morphology operations; third, construction of the matrix that represents the MR brain images; fourth, application of the hybrid CNN-DWA; fifth, brain tumor classification and detection; and sixth, evaluation of model performance using five metrics. RESULTS AND CONCLUSIONS The hybrid CNN-DWA model showed the best results and high performance, with an accuracy of around 98% and a validation loss of 0.1. The hybrid model can clearly classify and detect tumors using MR images; comparing it with other models such as CNN, DNN, and DWA, we found that the proposed model performs better than the above-mentioned models. This better performance helps in developing a computer-aided system for the early detection of brain tumors and helps doctors diagnose patients better.
Affiliation(s)
- Isselmou Abd El Kader
- Hebei University of Technology, State Key Laboratory of Reliability and Intelligence of Electrical Equipment, Department of Biomedical Engineering, Tianjin 300130, China
- Guizhi Xu
- Hebei University of Technology, State Key Laboratory of Reliability and Intelligence of Electrical Equipment, Department of Biomedical Engineering, Tianjin 300130, China
- Zhang Shuai
- Hebei University of Technology, State Key Laboratory of Reliability and Intelligence of Electrical Equipment, Department of Biomedical Engineering, Tianjin 300130, China
- Sani Saminu
- Hebei University of Technology, State Key Laboratory of Reliability and Intelligence of Electrical Equipment, Department of Biomedical Engineering, Tianjin 300130, China
156
Shirokikh B, Shevtsov A, Dalechina A, Krivov E, Kostjuchenko V, Golanov A, Gombolevskiy V, Morozov S, Belyaev M. Accelerating 3D Medical Image Segmentation by Adaptive Small-Scale Target Localization. J Imaging 2021; 7:35. [PMID: 34460634] [PMCID: PMC8321270] [DOI: 10.3390/jimaging7020035]
Abstract
The prevailing approach to three-dimensional (3D) medical image segmentation is to use convolutional networks. Recently, deep learning methods have achieved human-level performance in several important applied problems, such as volumetry for lung-cancer diagnosis or delineation for radiation therapy planning. However, state-of-the-art architectures, such as U-Net and DeepMedic, are computationally heavy and require workstations accelerated with graphics processing units for fast inference, and scarce research has been conducted on enabling fast central processing unit computations for such networks. Our paper fills this gap. We propose a new segmentation method with a human-like technique for segmenting a 3D study: first, we analyze the image at a small scale to identify areas of interest, and then we process only the relevant feature-map patches. Our method not only reduces the inference time from 10 min to 15 s but also preserves state-of-the-art segmentation quality, as we illustrate in a set of experiments with two large datasets.
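The coarse-to-fine idea summarized in this abstract (locate candidate regions at a small scale, then run the expensive computation only on the relevant crop) can be illustrated with a minimal NumPy sketch. This is not the authors' network: plain block-averaging and thresholding stand in for the low-resolution and full-resolution models, and all names are our own.

```python
import numpy as np

def locate_then_segment(volume: np.ndarray, coarse_factor: int = 4, thr: float = 0.5) -> np.ndarray:
    """Two-stage segmentation sketch: coarse localization, then fine processing
    of only the candidate region (thresholding stands in for both networks)."""
    # Coarse pass: block-average downsampling as a stand-in for a low-resolution model.
    d, h, w = (s // coarse_factor for s in volume.shape)
    small = volume[:d * coarse_factor, :h * coarse_factor, :w * coarse_factor]
    small = small.reshape(d, coarse_factor, h, coarse_factor, w, coarse_factor).mean(axis=(1, 3, 5))
    hits = np.argwhere(small > thr)
    mask = np.zeros_like(volume, dtype=bool)
    if hits.size == 0:
        return mask  # nothing of interest: skip the expensive fine pass entirely
    # Bounding box of candidate blocks, mapped back to full resolution.
    lo = hits.min(axis=0) * coarse_factor
    hi = (hits.max(axis=0) + 1) * coarse_factor
    # Fine pass: run the "expensive model" only on the crop.
    crop = volume[lo[0]:hi[0], lo[1]:hi[1], lo[2]:hi[2]]
    mask[lo[0]:hi[0], lo[1]:hi[1], lo[2]:hi[2]] = crop > thr
    return mask
```

The saving comes from the early return and from restricting the fine pass to the bounding box, which is the same mechanism the paper exploits for CPU inference.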
Affiliation(s)
- Boris Shirokikh
- Center for Neurobiology and Brain Restoration, Skolkovo Institute of Science and Technology, 121205 Moscow, Russia
- Alexey Shevtsov
- Center for Neurobiology and Brain Restoration, Skolkovo Institute of Science and Technology, 121205 Moscow, Russia
- Sector of Data Analysis for Neuroscience, Kharkevich Institute for Information Transmission Problems, 127051 Moscow, Russia
- Department of Radio Engineering and Cybernetics, Moscow Institute of Physics and Technology, 141701 Moscow, Russia
- Egor Krivov
- Sector of Data Analysis for Neuroscience, Kharkevich Institute for Information Transmission Problems, 127051 Moscow, Russia
- Department of Radio Engineering and Cybernetics, Moscow Institute of Physics and Technology, 141701 Moscow, Russia
- Andrey Golanov
- Department of Radiosurgery and Radiation, Burdenko Neurosurgery Institute, 125047 Moscow, Russia
- Victor Gombolevskiy
- Medical Research Department, Research and Practical Clinical Center of Diagnostics and Telemedicine Technologies of the Department of Health Care of Moscow, 127051 Moscow, Russia
- Sergey Morozov
- Medical Research Department, Research and Practical Clinical Center of Diagnostics and Telemedicine Technologies of the Department of Health Care of Moscow, 127051 Moscow, Russia
- Mikhail Belyaev
- Center for Neurobiology and Brain Restoration, Skolkovo Institute of Science and Technology, 121205 Moscow, Russia
157
Parallel pathway dense neural network with weighted fusion structure for brain tumor segmentation. Neurocomputing 2021. [DOI: 10.1016/j.neucom.2020.11.005]
158
LaLonde R, Xu Z, Irmakci I, Jain S, Bagci U. Capsules for biomedical image segmentation. Med Image Anal 2021; 68:101889. [PMID: 33246227] [PMCID: PMC7944580] [DOI: 10.1016/j.media.2020.101889]
Abstract
Our work expands the use of capsule networks to the task of object segmentation for the first time in the literature. This is made possible via the introduction of locally-constrained routing and transformation matrix sharing, which reduces the parameter/memory burden and allows for the segmentation of objects at large resolutions. To compensate for the loss of global information in constraining the routing, we propose the concept of "deconvolutional" capsules to create a deep encoder-decoder style network, called SegCaps. We extend the masked reconstruction regularization to the task of segmentation and perform thorough ablation experiments on each component of our method. The proposed convolutional-deconvolutional capsule network, SegCaps, shows state-of-the-art results while using a fraction of the parameters of popular segmentation networks. To validate our proposed method, we perform experiments segmenting pathological lungs from clinical and pre-clinical thoracic computed tomography (CT) scans and segmenting muscle and adipose (fat) tissue from magnetic resonance imaging (MRI) scans of human subjects' thighs. Notably, our experiments in lung segmentation represent the largest-scale study in pathological lung segmentation in the literature, conducted across five extremely challenging datasets containing both clinical and pre-clinical subjects and nearly 2000 CT scans. Our newly developed segmentation platform outperforms other methods across all datasets while utilizing less than 5% of the parameters of the popular U-Net for biomedical image segmentation. Further, we demonstrate capsules' ability to generalize to unseen rotations/reflections on natural images.
Affiliation(s)
- Rodney LaLonde
- Center for Research in Computer Vision (CRCV), University of Central Florida (UCF), Orlando, FL
- Sanjay Jain
- Johns Hopkins University, Baltimore, MD, USA
- Ulas Bagci
- Center for Research in Computer Vision (CRCV), University of Central Florida (UCF), Orlando, FL
159
A novel approach for brain tumor detection by self-organizing map (SOM) using adaptive network based fuzzy inference system (ANFIS) for robotic systems. International Journal of Intelligent Unmanned Systems 2021. [DOI: 10.1108/ijius-08-2020-0038]
Abstract
PURPOSE One of the foremost research disciplines in medical image processing is tumor identification, which is a challenging task using traditional methods; to overcome this, various research studies have been conducted effectively. DESIGN/METHODOLOGY/APPROACH Medical image processing is evolving swiftly, with modern technologies being developed every day. These advanced technologies improve medical fields by diagnosing diseases at more advanced stages and serve to provide proper treatment. FINDINGS A brain tumor is either a mass growth or an abnormal growth of cells in the brain. ORIGINALITY/VALUE Brain tumors can be categorized into two significant varieties: non-cancerous and cancerous. Cancerous (carcinogenic) tumors are termed malignant, and non-carcinogenic tumors are termed benign. If the cells in the tumor are healthy, it is a benign tumor, whereas abnormal or uncontrollable cell growth indicates a malignant one. To find the tumor, magnetic resonance imaging (MRI) is carried out, which is a tiresome and monotonous task for a radiologist. To diagnose brain tumors effectively at the initial stage with improved accuracy, computer-aided robotic research technology is incorporated. There are numerous segmentation procedures that help identify tumor cells from MRI images, and it is necessary to select a proper segmentation mechanism that detects brain tumors effectively and can be aided with robotic systems. This research paper focuses on a self-organizing map (SOM) combined with an adaptive network-based fuzzy inference system (ANFIS). The execution measures are determined employing the confusion matrix: accuracy, sensitivity, and specificity. The results achieved conclusively show that the proposed model presents more reliable outcomes than existing techniques.
160
Liang S, Li C, Gao Z, Shang D, Yu J, Meng X. The Predictive Value of Tumor Volume and Its Change on Short-Term Outcome for Esophageal Squamous Cell Carcinoma Treated With Radiotherapy or Chemoradiotherapy. Front Oncol 2021; 10:586145. [PMID: 33634014] [PMCID: PMC7901880] [DOI: 10.3389/fonc.2020.586145]
Abstract
Objectives To investigate the predictive value of tumor volume and its change on short-term outcome in esophageal squamous cell carcinoma (ESCC) patients who underwent definitive radiotherapy or chemoradiotherapy. Methods and Materials Data were retrospectively collected from 418 ESCC patients who received radiotherapy or chemoradiotherapy at our institution between 2015 and 2019. Short-term outcome was assessed by treatment response evaluation according to RECIST 1.1. The tumor volume change rate (TVCR) was defined as follows: TVCR = {1 - [gross tumor volume (GTV) at shrinking-field irradiation planning]/(GTV at initial treatment planning)} × 100%. The chi-square test was used to compare clinical characteristics between TVCR groups, the difference between the initial GTV (GTVi) and the shrinking GTV (GTVs) was compared using Wilcoxon's signed-rank test, and logistic regression and Spearman correlation analyses were performed. Results GTVs was significantly smaller than GTVi (P < 0.001). In univariate analysis, age, cT-stage, TNM stage, treatment modality, GTVi, and TVCR were associated with short-term outcome (all P < 0.05). In multivariate analysis, gender and TVCR remained statistically significant (P = 0.010 and P < 0.001), and the combined predictive value of gender and TVCR exceeded that of TVCR alone (AUC, 0.876 vs. 0.855). Conclusions TVCR could serve to forecast the short-term outcome of radiotherapy or chemoradiotherapy in ESCC, which is of great significance for guiding individualized treatment.
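The TVCR definition quoted in this abstract is a one-line computation; the following sketch simply restates it in code, with hypothetical function and variable names of our own choosing.

```python
def tvcr(gtv_initial: float, gtv_shrinking: float) -> float:
    """Tumor volume change rate (%), per the definition in the abstract:
    TVCR = (1 - GTV_shrinking / GTV_initial) * 100."""
    if gtv_initial <= 0:
        raise ValueError("initial GTV must be positive")
    return (1.0 - gtv_shrinking / gtv_initial) * 100.0

# A tumor shrinking from 60 cm^3 to 45 cm^3 gives a TVCR of 25%.
print(tvcr(60.0, 45.0))  # 25.0
```

Note that a larger TVCR means greater shrinkage between the initial and shrinking-field plans, which is why the abstract treats it as a response predictor.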
Affiliation(s)
- Shuai Liang
- Department of Radiation Oncology, Shandong Cancer Hospital and Institute, Shandong First Medical University and Shandong Academy of Medical Sciences, Jinan, China
- Chengming Li
- Department of Radiation Oncology, Shandong Cancer Hospital and Institute, Shandong First Medical University and Shandong Academy of Medical Sciences, Jinan, China
- Zhenhua Gao
- Department of Radiation Oncology, Shandong Cancer Hospital and Institute, Shandong First Medical University and Shandong Academy of Medical Sciences, Jinan, China
- Dongping Shang
- Department of Radiation Physics, Shandong Cancer Hospital and Institute, Shandong First Medical University and Shandong Academy of Medical Sciences, Jinan, China
- Jinming Yu
- Department of Radiation Oncology, Shandong Cancer Hospital and Institute, Shandong First Medical University and Shandong Academy of Medical Sciences, Jinan, China
- Xue Meng
- Department of Radiation Oncology, Shandong Cancer Hospital and Institute, Shandong First Medical University and Shandong Academy of Medical Sciences, Jinan, China
161
Xie Y, Zhang J, Lu H, Shen C, Xia Y. SESV: Accurate Medical Image Segmentation by Predicting and Correcting Errors. IEEE Transactions on Medical Imaging 2021; 40:286-296. [PMID: 32956049] [DOI: 10.1109/tmi.2020.3025308]
Abstract
Medical image segmentation is an essential task in computer-aided diagnosis. Despite their prevalence and success, deep convolutional neural networks (DCNNs) still need to be improved to produce segmentation results that are accurate and robust enough for clinical use. In this paper, we propose a novel and generic framework called Segmentation-Emendation-reSegmentation-Verification (SESV) that improves the accuracy of existing DCNNs in medical image segmentation, instead of designing a more accurate segmentation model. Our idea is to predict the segmentation errors produced by an existing model and then correct them. Since predicting segmentation errors is challenging, we design two ways to tolerate mistakes in the error prediction. First, rather than using a predicted segmentation error map to correct the segmentation mask directly, we treat the error map only as a prior that indicates the locations where segmentation errors are prone to occur, and then concatenate the error map with the image and segmentation mask as the input of a re-segmentation network. Second, we introduce a verification network to determine whether to accept or reject the refined mask produced by the re-segmentation network on a region-by-region basis. The experimental results on the CRAG, ISIC, and IDRiD datasets suggest that our SESV framework can improve the accuracy of DeepLabv3+ substantially and achieve advanced performance in the segmentation of gland cells, skin lesions, and retinal microaneurysms. Consistent conclusions can also be drawn when using PSPNet, U-Net, and FPN as the segmentation network. Therefore, our SESV framework is capable of improving the accuracy of different DCNNs on different medical image segmentation tasks.
162
Grossiord E, Risser L, Kanoun S, Aziza R, Chiron H, Ysebaert L, Malgouyres F, Ken S. Semi-automatic segmentation of whole-body images in longitudinal studies. Biomed Phys Eng Express 2021; 7. [DOI: 10.1088/2057-1976/abce16]
Abstract
We propose a semi-automatic segmentation pipeline designed for longitudinal studies of structures with large anatomical variability, where expert interactions are required for relevant segmentations. Our pipeline builds on the regularized fast marching (rFM) segmentation approach of Risser et al (2018): it transports baseline multi-label FM seeds onto follow-up images, selects the relevant ones, and finally performs the rFM approach. It showed more robust and faster results compared to manual clinical segmentation. Our method was evaluated on 3D synthetic images and on patients' whole-body MRI. It allows robust and flexible handling of longitudinal organ deformations while considerably reducing manual interventions.
163
Yang T, Cui X, Bai X, Li L, Gong Y. RA-SIFA: Unsupervised domain adaptation multi-modality cardiac segmentation network combining parallel attention module and residual attention unit. Journal of X-Ray Science and Technology 2021; 29:1065-1078. [PMID: 34719432] [DOI: 10.3233/xst-210966]
Abstract
BACKGROUND Convolutional neural networks have achieved a profound effect on cardiac image segmentation. The diversity of medical imaging equipment brings the challenge of domain shift to cardiac image segmentation. OBJECTIVE In order to address the domain shift in multi-modality cardiac image segmentation, this study investigates and tests an unsupervised domain adaptation network, RA-SIFA, which combines a parallel attention module (PAM) and a residual attention unit (RAU). METHODS First, the PAM is introduced in the generator of RA-SIFA to fuse global information, which reduces the domain shift with respect to image alignment. Second, the shared encoder adopts the RAU, whose residual block is based on a spatial attention module, to alleviate the insensitivity of convolution layers to spatial position; the RAU therefore further reduces the domain shift with respect to feature alignment. By combining image and feature alignment, RA-SIFA realizes unsupervised domain adaptation (UDA) and addresses the domain shift of cardiac image segmentation in a complementary manner. RESULTS The model is evaluated using the MM-WHS2017 dataset. Compared with SIFA, the Dice of our RA-SIFA network is improved by 8.4% and 3.2% on CT and MR images, respectively, while the average symmetric surface distance (ASD) is reduced by 3.4 mm and 0.8 mm on CT and MR images, respectively. CONCLUSION The study results demonstrate that our RA-SIFA network can effectively improve the accuracy of whole-heart segmentation from CT and MR images.
Affiliation(s)
- Tiejun Yang
- Key Laboratory of Grain Information Processing and Control, Henan University of Technology, Ministry of Education, Zhengzhou, China
- School of Artificial Intelligence and Big Data, Henan University of Technology, Zhengzhou, China
- Xiaojuan Cui
- College of Information Science and Engineering, Henan University of Technology, Zhengzhou, China
- Xinhao Bai
- College of Information Science and Engineering, Henan University of Technology, Zhengzhou, China
- Lei Li
- School of Artificial Intelligence and Big Data, Henan University of Technology, Zhengzhou, China
- Yuehong Gong
- College of Information Science and Engineering, Henan University of Technology, Zhengzhou, China
|
164
|
Chen X, Lian C, Wang L, Deng H, Kuang T, Fung S, Gateno J, Yap PT, Xia JJ, Shen D. Anatomy-Regularized Representation Learning for Cross-Modality Medical Image Segmentation. IEEE TRANSACTIONS ON MEDICAL IMAGING 2021; 40:274-285. [PMID: 32956048 PMCID: PMC8120796 DOI: 10.1109/tmi.2020.3025133] [Citation(s) in RCA: 14] [Impact Index Per Article: 4.7] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/15/2023]
Abstract
An increasing number of studies leverage unsupervised cross-modality synthesis to mitigate the limited-label problem in training medical image segmentation models. They typically transfer ground truth annotations from a label-rich imaging modality to a label-lacking imaging modality, under the assumption that different modalities share the same anatomical structure information. However, since these methods commonly use voxel/pixel-wise cycle-consistency to regularize the mappings between modalities, high-level semantic information is not necessarily preserved. In this paper, we propose a novel anatomy-regularized representation learning approach for segmentation-oriented cross-modality image synthesis. It learns a common feature encoding across different modalities to form a shared latent space, where 1) the input and its synthesis present consistent anatomical structure information, and 2) the transformation between two images in one domain is preserved by their syntheses in another domain. We applied our method to the tasks of cross-modality skull segmentation and cardiac substructure segmentation. Experimental results demonstrate the superiority of our method in comparison with state-of-the-art cross-modality medical image segmentation methods.
|
165
|
Massa HA, Johnson JM, McMillan AB. Comparison of deep learning synthesis of synthetic CTs using clinical MRI inputs. Phys Med Biol 2020; 65:23NT03. [PMID: 33120371 DOI: 10.1088/1361-6560/abc5cb] [Citation(s) in RCA: 16] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/28/2023]
Abstract
There has been substantial interest in developing techniques for synthesizing CT-like images from MRI inputs, with important applications in simultaneous PET/MR and radiotherapy planning. Deep learning has recently shown great potential for solving this problem. The goal of this research was to investigate the capability of four common clinical MRI sequences (T1-weighted gradient-echo [T1], T2-weighted fat-suppressed fast spin-echo [T2-FatSat], post-contrast T1-weighted gradient-echo [T1-Post], and fast spin-echo T2-weighted fluid-attenuated inversion recovery [CUBE-FLAIR]) as inputs to a deep CT synthesis pipeline. Data were obtained retrospectively in 92 subjects who had undergone an MRI and a CT scan on the same day. Each patient's MR and CT scans were registered to one another using affine registration. The deep learning model was a convolutional neural network encoder-decoder with skip connections, similar to the U-net architecture, with Inception V3-inspired blocks instead of sequential convolution blocks. After training for 150 epochs with a batch size of 6, the model was evaluated using the structural similarity index (SSIM), peak signal-to-noise ratio (PSNR), mean absolute error (MAE), and Dice coefficient. We found that feasible results were attainable for each image type, and no single image type was superior for all analyses. The MAE (in HU) of the resulting synthesized CT in the whole brain was 51.236 ± 4.504 for CUBE-FLAIR, 45.432 ± 8.517 for T1, 44.558 ± 7.478 for T1-Post, and 45.721 ± 8.7767 for T2, showing not only feasible but also very compelling results on clinical images. Deep learning-based synthesis of CT images from MRI is possible with a wide range of inputs, suggesting that viable images can be created from a wide range of clinical input types.
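The image-quality metrics quoted above (MAE in HU, PSNR) are straightforward to compute. A minimal sketch on made-up toy patches, not the paper's data:

```python
import numpy as np

def mae(pred, target):
    """Mean absolute error, reported above in Hounsfield units (HU)."""
    return float(np.mean(np.abs(pred - target)))

def psnr(pred, target, data_range):
    """Peak signal-to-noise ratio in dB for a given dynamic range."""
    mse = np.mean((pred - target) ** 2)
    return float(10.0 * np.log10(data_range**2 / mse))

# Toy 2x2 "CT" and "synthetic CT" patches (illustrative values only).
ct = np.array([[0.0, 100.0], [200.0, 300.0]])
synth = np.array([[10.0, 110.0], [190.0, 290.0]])
print(mae(synth, ct))                               # 10.0
print(round(psnr(synth, ct, data_range=300.0), 2))  # 29.54
```

In a real evaluation these would be computed over the whole brain volume, typically restricted to a brain mask.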
Affiliation(s)
- Haley A Massa
- Department of Electrical and Computer Engineering, University of Wisconsin, Madison, Wisconsin, United States of America. Department of Radiology, University of Wisconsin, Madison, Wisconsin, United States of America
|
166
|
Engelkes K. Accuracy of bone segmentation and surface generation strategies analyzed by using synthetic CT volumes. J Anat 2020; 238:1456-1471. [PMID: 33325545 DOI: 10.1111/joa.13383] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/13/2020] [Revised: 11/19/2020] [Accepted: 11/25/2020] [Indexed: 11/30/2022] Open
Abstract
Different kinds of bone measurements are commonly derived from computed tomography (CT) volumes to answer a multitude of questions in biology and related fields. The underlying steps of bone segmentation and, optionally, polygon surface generation are crucial to keeping the measurement error small. In this study, the performance of different, easily accessible segmentation techniques (global thresholding, automatic local thresholding, weighted random walk, neural network, and watershed) and surface generation approaches (different algorithms combined with varying degrees of simplification) was analyzed, and recommendations for minimizing inaccuracies were derived. The different approaches were applied to synthetic CT volumes for which the correct segmentation and surface geometry were known. The most accurate segmentations of the synthetic volumes were achieved by setting a case-specific window on the gray-value histogram and subsequently applying automatic local thresholding with an appropriately chosen thresholding method and radius. Surfaces generated by the Amira® module Generate Lego Surface, in combination with careful surface simplification, were the most accurate. Surfaces with sub-voxel accuracy were obtained even for synthetic CT volumes with low contrast-to-noise ratios. Segmentation trials with real CT volumes supported these findings. Very accurate segmentations and surfaces can be derived from CT volumes using readily accessible software packages. The presented results and derived recommendations will help reduce the measurement error in future studies. Furthermore, the demonstrated strategies for assessing segmentation and surface quality can be adopted to quantify the performance of new segmentation approaches in future studies.
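The two thresholding families compared above differ in whether the threshold is a single number or a per-pixel value. A minimal numpy sketch, using a simple neighborhood-mean rule as a stand-in for the automatic local thresholding methods discussed (the actual method-and-radius choices in the study are specific tool implementations, not this rule):

```python
import numpy as np

def global_threshold(img, level):
    """Binarize with a single gray-value threshold for the whole image."""
    return img > level

def local_mean_threshold(img, radius=1, offset=0.0):
    """Compare each pixel against the mean of its (2r+1)^2 neighborhood
    (edge-padded): one simple instance of automatic local thresholding,
    not the paper's exact pipeline."""
    padded = np.pad(img.astype(float), radius, mode="edge")
    acc = np.zeros(img.shape, dtype=float)
    size = 2 * radius + 1
    for dy in range(size):          # sum the shifted copies of the image
        for dx in range(size):
            acc += padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return img > acc / size**2 + offset

img = np.zeros((3, 3))
img[1, 1] = 10.0  # one bright "bone" pixel
print(global_threshold(img, 5).sum())      # 1
print(local_mean_threshold(img, 1).sum())  # 1
```

Local thresholding adapts to intensity gradients across the volume, which is why the study finds it more accurate than a single global level once the histogram window is set appropriately.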
Affiliation(s)
- Karolin Engelkes
- Center of Natural History (CeNak), Universität Hamburg, Hamburg, Germany
|
167
|
Gautier MK, Ginsberg SD. A method for quantification of vesicular compartments within cells using 3D reconstructed confocal z-stacks: Comparison of ImageJ and Imaris to count early endosomes within basal forebrain cholinergic neurons. J Neurosci Methods 2020; 350:109038. [PMID: 33338543 DOI: 10.1016/j.jneumeth.2020.109038] [Citation(s) in RCA: 16] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Key Words] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/26/2020] [Revised: 12/07/2020] [Accepted: 12/10/2020] [Indexed: 01/04/2023]
Abstract
BACKGROUND Phenotypic changes in vesicular compartments are an early pathological hallmark of many peripheral and central diseases. For example, accurate assessment of early endosome pathology is crucial to the study of Down syndrome (DS) and Alzheimer's disease (AD), as well as other neurological disorders with endosomal-lysosomal pathology. NEW METHOD We describe a method for quantification of immunolabeled early endosomes within transmitter-identified basal forebrain cholinergic neurons (BFCNs) using 3-dimensional (3D) reconstructed confocal z-stacks in Imaris software. RESULTS Quantification of 3D reconstructed z-stacks was performed using two different image analysis programs: ImageJ and Imaris. We found that ImageJ consistently overcounted the number of early endosomes present within individual BFCNs, and that ImageJ had greater difficulty than Imaris in separating densely packed early endosomes within defined BFCNs. COMPARISON WITH EXISTING METHODS Previous methods for quantifying endosomal-lysosomal pathology relied on confocal microscopy images taken in a single plane of focus. Since early endosomes are distributed throughout the soma and neuronal processes of BFCNs, critical insight into the abnormal early endosome phenotype may be lost when analyzing only a single image of the perikaryon. Rather than relying on a representative sample, this protocol enables precise, direct quantification of all immunolabeled vesicles within a defined cell of interest. CONCLUSIONS Imaris is an ideal program for accurately counting punctate vesicles in the context of dual-label confocal microscopy. The superior image resolution and detailed algorithms offered by Imaris make precise and rigorous quantification of individual early endosomes dispersed throughout a BFCN in 3D space readily achievable.
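At its core, counting discrete vesicles in a z-stack reduces to labeling 3D connected components in a binarized volume. A minimal pure-numpy sketch (6-connectivity flood fill); real pipelines such as Imaris add intensity thresholding, size filtering, and surface rendering on top of this:

```python
from collections import deque

import numpy as np

def count_puncta(stack, min_voxels=1):
    """Count 3D connected components (face connectivity) in a binary
    z-stack; each component stands in for one immunolabeled vesicle."""
    stack = stack.astype(bool)
    visited = np.zeros(stack.shape, dtype=bool)
    count = 0
    for z, y, x in np.argwhere(stack):
        if visited[z, y, x]:
            continue
        # Breadth-first flood fill over the 6 face-adjacent neighbors.
        queue = deque([(z, y, x)])
        visited[z, y, x] = True
        size = 0
        while queue:
            cz, cy, cx = queue.popleft()
            size += 1
            for dz, dy, dx in ((1, 0, 0), (-1, 0, 0), (0, 1, 0),
                               (0, -1, 0), (0, 0, 1), (0, 0, -1)):
                nz, ny, nx = cz + dz, cy + dy, cx + dx
                if (0 <= nz < stack.shape[0] and 0 <= ny < stack.shape[1]
                        and 0 <= nx < stack.shape[2]
                        and stack[nz, ny, nx] and not visited[nz, ny, nx]):
                    visited[nz, ny, nx] = True
                    queue.append((nz, ny, nx))
        if size >= min_voxels:
            count += 1
    return count

stack = np.zeros((3, 5, 5), dtype=bool)
stack[1, 1, 1] = stack[1, 1, 2] = True  # one two-voxel punctum
stack[2, 4, 4] = True                   # a second, separate punctum
print(count_puncta(stack))  # 2
```

The overcounting difference reported above corresponds to how each program separates touching components, which this simple face-connectivity rule does not attempt.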
Affiliation(s)
- Megan K Gautier
- Center for Dementia Research, Nathan Kline Institute, Orangeburg, NY, USA; Program of Pathobiology and Translational Medicine, Vilcek Institute of Graduate Biomedical Sciences, NYU Grossman School of Medicine, New York, NY, USA; NYU Neuroscience Institute, NYU Grossman School of Medicine, New York, NY, USA
- Stephen D Ginsberg
- Center for Dementia Research, Nathan Kline Institute, Orangeburg, NY, USA; Department of Psychiatry, NYU Grossman School of Medicine, New York, NY, USA; Department of Neuroscience & Physiology, NYU Grossman School of Medicine, New York, NY, USA; NYU Neuroscience Institute, NYU Grossman School of Medicine, New York, NY, USA.
|
168
|
Lee M, Kim J, EY Kim R, Kim HG, Oh SW, Lee MK, Wang SM, Kim NY, Kang DW, Rieu Z, Yong JH, Kim D, Lim HK. Split-Attention U-Net: A Fully Convolutional Network for Robust Multi-Label Segmentation from Brain MRI. Brain Sci 2020; 10:E974. [PMID: 33322640 PMCID: PMC7764312 DOI: 10.3390/brainsci10120974] [Citation(s) in RCA: 25] [Impact Index Per Article: 6.3] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/23/2020] [Revised: 11/30/2020] [Accepted: 12/07/2020] [Indexed: 02/03/2023] Open
Abstract
Multi-label brain segmentation from brain magnetic resonance imaging (MRI) provides valuable structural information for most neurological analyses. However, the complexity of brain segmentation algorithms can delay the delivery of neuroimaging findings. We therefore introduce Split-Attention U-Net (SAU-Net), a convolutional neural network with skip pathways and a split-attention module that segments brain MRI scans. The proposed architecture employs split-attention blocks, skip pathways with pyramid levels, and evolving normalization layers. For efficient training, we performed pre-training and fine-tuning with the original and manually modified FreeSurfer labels, respectively. This learning strategy enables heterogeneous neuroimaging data to be involved in training without the need for many manual annotations. Using nine evaluation datasets, we demonstrated that SAU-Net achieved better segmentation accuracy and reliability than state-of-the-art methods. We believe that SAU-Net has excellent potential due to its robustness to neuroanatomical variability, which would enable almost instantaneous access to accurate neuroimaging biomarkers, and its swift processing runtime compared to the other methods investigated.
Affiliation(s)
- Minho Lee
- Research Institute, NEUROPHET Inc., Seoul 06247, Korea; (M.L.); (R.E.K.); (Z.R.); (J.H.Y.)
- JeeYoung Kim
- Department of Radiology, Eunpyeong St. Mary’s Hospital, College of Medicine, The Catholic University of Korea, Seoul 03312, Korea; (J.K.); (H.G.K.); (S.W.O.)
- Regina EY Kim
- Research Institute, NEUROPHET Inc., Seoul 06247, Korea; (M.L.); (R.E.K.); (Z.R.); (J.H.Y.)
- Institute of Human Genomic Study, College of Medicine, Korea University, Ansan 15355, Korea
- Department of Psychiatry, University of Iowa, Iowa City, IA 52242, USA
- Hyun Gi Kim
- Department of Radiology, Eunpyeong St. Mary’s Hospital, College of Medicine, The Catholic University of Korea, Seoul 03312, Korea; (J.K.); (H.G.K.); (S.W.O.)
- Se Won Oh
- Department of Radiology, Eunpyeong St. Mary’s Hospital, College of Medicine, The Catholic University of Korea, Seoul 03312, Korea; (J.K.); (H.G.K.); (S.W.O.)
- Min Kyoung Lee
- Department of Radiology, Yeouido St. Mary’s Hospital, College of Medicine, The Catholic University of Korea, Seoul 07345, Korea;
- Sheng-Min Wang
- Department of Psychiatry, Yeouido St. Mary’s Hospital, College of Medicine, The Catholic University of Korea, Seoul 07345, Korea; (S.-M.W.); (N.-Y.K.)
- Nak-Young Kim
- Department of Psychiatry, Yeouido St. Mary’s Hospital, College of Medicine, The Catholic University of Korea, Seoul 07345, Korea; (S.-M.W.); (N.-Y.K.)
- Dong Woo Kang
- Department of Psychiatry, Seoul St. Mary’s Hospital, College of Medicine, The Catholic University of Korea, Seoul 06591, Korea;
- ZunHyan Rieu
- Research Institute, NEUROPHET Inc., Seoul 06247, Korea; (M.L.); (R.E.K.); (Z.R.); (J.H.Y.)
- Jung Hyun Yong
- Research Institute, NEUROPHET Inc., Seoul 06247, Korea; (M.L.); (R.E.K.); (Z.R.); (J.H.Y.)
- Donghyeon Kim
- Research Institute, NEUROPHET Inc., Seoul 06247, Korea; (M.L.); (R.E.K.); (Z.R.); (J.H.Y.)
- Hyun Kook Lim
- Department of Psychiatry, Yeouido St. Mary’s Hospital, College of Medicine, The Catholic University of Korea, Seoul 07345, Korea; (S.-M.W.); (N.-Y.K.)
|
169
|
Lin Q, Luo M, Gao R, Li T, Man Z, Cao Y, Wang H. Deep learning based automatic segmentation of metastasis hotspots in thorax bone SPECT images. PLoS One 2020; 15:e0243253. [PMID: 33270746 PMCID: PMC7714246 DOI: 10.1371/journal.pone.0243253] [Citation(s) in RCA: 11] [Impact Index Per Article: 2.8] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/15/2020] [Accepted: 11/17/2020] [Indexed: 11/30/2022] Open
Abstract
SPECT imaging has been identified as an effective medical modality for the diagnosis, treatment, evaluation and prevention of a range of serious diseases and medical conditions. Bone SPECT scans have the potential to provide a more accurate assessment of disease stage and severity. Segmenting hotspots in bone SPECT images plays a crucial role in calculating metrics like tumor uptake and metabolic tumor burden. Deep learning techniques, especially convolutional neural networks, have been widely exploited for reliable segmentation of hotspots or lesions, organs and tissues in traditional structural medical images (i.e., CT and MRI) due to their ability to learn features from images automatically in an optimal way. In order to segment hotspots in bone SPECT images for automatic assessment of metastasis, in this work, we develop several deep learning based segmentation models. Specifically, each original whole-body bone SPECT image is processed to extract the thorax area, followed by image mirror, translation and rotation operations, which augment the original dataset. We then build segmentation models based on two widely used deep networks, U-Net and Mask R-CNN, by fine-tuning their structures. Experimental evaluation conducted on a group of real-world bone SPECT images reveals that the built segmentation models are workable for identifying and segmenting hotspots of metastasis in bone SPECT images, achieving values of 0.9920, 0.7721, 0.6788 and 0.6103 for PA (accuracy), CPA (precision), Rec (recall) and IoU, respectively. Finally, we conclude that deep learning has great potential to identify and segment hotspots in bone SPECT images.
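The augmentation steps named above (mirror, translation, rotation) can be sketched with plain numpy array operations; the shift amount and the choice of 90-degree rotations here are illustrative assumptions, not the paper's exact parameters:

```python
import numpy as np

def augment(img, shift=5):
    """Yield augmented copies of a 2D scan: a horizontal mirror, a
    vertical translation, and the three 90-degree rotations."""
    yield np.fliplr(img)                 # mirror
    yield np.roll(img, shift=shift, axis=0)  # translation (wrap-around)
    for k in (1, 2, 3):
        yield np.rot90(img, k)           # rotations

img = np.arange(64, dtype=float).reshape(8, 8)
copies = list(augment(img))
print(len(copies))  # 5
```

Each input image therefore contributes several training samples, which is the point of augmenting a small SPECT dataset before fine-tuning.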
Affiliation(s)
- Qiang Lin
- School of Mathematics and Computer Science, Northwest Minzu University, Lanzhou, Gansu, China
- Key Laboratory of China’s Ethnic Languages and Information Technology of Ministry of Education, Northwest Minzu University, Lanzhou, Gansu, China
- Key Laboratory of Streaming Computing and Applications, Northwest Minzu University, Lanzhou, Gansu, China
- Mingyang Luo
- Key Laboratory of China’s Ethnic Languages and Information Technology of Ministry of Education, Northwest Minzu University, Lanzhou, Gansu, China
- Key Laboratory of Streaming Computing and Applications, Northwest Minzu University, Lanzhou, Gansu, China
- Ruiting Gao
- Key Laboratory of China’s Ethnic Languages and Information Technology of Ministry of Education, Northwest Minzu University, Lanzhou, Gansu, China
- Key Laboratory of Streaming Computing and Applications, Northwest Minzu University, Lanzhou, Gansu, China
- Tongtong Li
- School of Mathematics and Computer Science, Northwest Minzu University, Lanzhou, Gansu, China
- Key Laboratory of Streaming Computing and Applications, Northwest Minzu University, Lanzhou, Gansu, China
- Zhengxing Man
- School of Mathematics and Computer Science, Northwest Minzu University, Lanzhou, Gansu, China
- Key Laboratory of Streaming Computing and Applications, Northwest Minzu University, Lanzhou, Gansu, China
- Yongchun Cao
- School of Mathematics and Computer Science, Northwest Minzu University, Lanzhou, Gansu, China
- Key Laboratory of China’s Ethnic Languages and Information Technology of Ministry of Education, Northwest Minzu University, Lanzhou, Gansu, China
- Key Laboratory of Streaming Computing and Applications, Northwest Minzu University, Lanzhou, Gansu, China
- Haijun Wang
- Department of Nuclear Medicine, Gansu Provincial Hospital, Lanzhou, Gansu, China
|
170
|
Abstract
Macular edema occurs in a wide variety of ophthalmological diseases, and its diagnosis and treatment are an important part of modern ophthalmology. Owing to continuous development, artificial intelligence (AI) offers many opportunities to improve the management of macular edema. This article provides the readership with an overview of this topic.
|
171
|
Clark AE, Biffi B, Sivera R, Dall'Asta A, Fessey L, Wong TL, Paramasivam G, Dunaway D, Schievano S, Lees CC. Developing and testing an algorithm for automatic segmentation of the fetal face from three-dimensional ultrasound images. ROYAL SOCIETY OPEN SCIENCE 2020; 7:201342. [PMID: 33391808 PMCID: PMC7735327 DOI: 10.1098/rsos.201342] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 08/17/2020] [Accepted: 10/06/2020] [Indexed: 06/12/2023]
Abstract
Fetal craniofacial abnormalities are challenging to detect and diagnose on prenatal ultrasound (US). Image segmentation and computer analysis of three-dimensional US volumes of the fetal face may provide an objective measure to quantify fetal facial features and identify abnormalities. We have developed and tested an atlas-based, partially automated facial segmentation algorithm; however, the volumes require additional manual segmentation (MS), which is time- and labour-intensive and may preclude this method from clinical adoption. These manually refined segmentations can be used as a reference (atlas) by the partially automated segmentation algorithm to improve algorithmic performance, with the aim of eliminating the need for manual refinement and developing a fully automated system. This study assesses the inter- and intra-operator variability of MS and tests an optimized version of our automatic segmentation (AS) algorithm. The manual refinements of 15 fetal faces performed by three operators, and repeated by one operator, were assessed by Dice score, average symmetric surface distance and volume difference. The performance of the partially automatic algorithm with different atlas sizes was evaluated by Dice score and computational time. Assessment of the manual refinements showed low inter- and intra-operator variability, demonstrating their suitability for optimizing the AS algorithm. The algorithm showed improved performance as the atlas size increased, in turn reducing the need for manual refinement.
Affiliation(s)
- A. E. Clark
- Queen Charlotte's and Chelsea Hospital, Imperial Healthcare NHS Trust, London, UK
- Imperial College London, London, UK
- B. Biffi
- Imperial College London, London, UK
- A. Dall'Asta
- Queen Charlotte's and Chelsea Hospital, Imperial Healthcare NHS Trust, London, UK
- Imperial College London, London, UK
- Department of Medicine and Surgery, Obstetrics and Gynaecology Unit, University of Parma, Italy
- T.-L. Wong
- Queen Charlotte's and Chelsea Hospital, Imperial Healthcare NHS Trust, London, UK
- G. Paramasivam
- Queen Charlotte's and Chelsea Hospital, Imperial Healthcare NHS Trust, London, UK
- Imperial College London, London, UK
- D. Dunaway
- University College London GOS Institute of Child Health, London, UK
- Great Ormond Street Hospital for Children, London, UK
- S. Schievano
- University College London GOS Institute of Child Health, London, UK
- Great Ormond Street Hospital for Children, London, UK
- C. C. Lees
- Queen Charlotte's and Chelsea Hospital, Imperial Healthcare NHS Trust, London, UK
- Institute of Reproductive and Developmental Biology, Department of Metabolism, Digestion and Reproduction, Imperial College London, London, UK
|
172
|
Amyar A, Modzelewski R, Li H, Ruan S. Multi-task deep learning based CT imaging analysis for COVID-19 pneumonia: Classification and segmentation. Comput Biol Med 2020. [PMID: 33065387 DOI: 10.1101/2020.04.16.20064709] [Citation(s) in RCA: 20] [Impact Index Per Article: 5.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 05/13/2023]
Abstract
This paper presents an automatic classification and segmentation tool to help screen for COVID-19 pneumonia using chest CT imaging. The segmented lesions can help to assess the severity of pneumonia and follow up the patients. In this work, we propose a new multitask deep learning model to jointly identify COVID-19 patients and segment COVID-19 lesions from chest CT images. Three learning tasks, segmentation, classification and reconstruction, are jointly performed with different datasets. Our motivation is, on the one hand, to leverage useful information contained in multiple related tasks to improve both segmentation and classification performance and, on the other hand, to deal with the problem of small datasets, because each task can have a relatively small dataset. Our architecture is composed of a common encoder for disentangled feature representation across the three tasks, plus two decoders and a multi-layer perceptron for reconstruction, segmentation and classification, respectively. The proposed model is evaluated and compared with other image segmentation techniques using a dataset of 1369 patients, including 449 patients with COVID-19, 425 normal cases, 98 with lung cancer and 397 with other pathologies. The obtained results show very encouraging performance of our method, with a Dice coefficient higher than 0.88 for the segmentation and an area under the ROC curve higher than 97% for the classification.
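The Dice coefficient reported above compares the predicted lesion mask against the reference mask voxel-wise; a minimal sketch on toy masks:

```python
import numpy as np

def dice(pred, gt, eps=1e-7):
    """Dice similarity coefficient 2|A∩B| / (|A|+|B|) for binary masks."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    inter = np.logical_and(pred, gt).sum()
    return (2.0 * inter + eps) / (pred.sum() + gt.sum() + eps)

gt = np.zeros((4, 4), dtype=int)
gt[1:3, 1:3] = 1      # 4-pixel "lesion"
pred = np.zeros_like(gt)
pred[1:3, 1:4] = 1    # 6-pixel prediction covering the whole lesion
print(round(dice(pred, gt), 2))  # 0.8
```

The `eps` term is a common smoothing convention so that two empty masks score 1.0 rather than dividing by zero.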
Affiliation(s)
- Amine Amyar
- General Electric Healthcare, Buc, France; LITIS - EA4108 - Quantif, University of Rouen, Rouen, France.
- Romain Modzelewski
- LITIS - EA4108 - Quantif, University of Rouen, Rouen, France; Nuclear Medicine Department, Henri Becquerel Center, Rouen, France.
- Hua Li
- Department of Bioengineering, University of Illinois at Urbana-Champaign, Urbana, IL, USA.
- Su Ruan
- LITIS - EA4108 - Quantif, University of Rouen, Rouen, France.
|
173
|
Bennai MT, Guessoum Z, Mazouzi S, Cormier S, Mezghiche M. A stochastic multi-agent approach for medical-image segmentation: Application to tumor segmentation in brain MR images. Artif Intell Med 2020; 110:101980. [DOI: 10.1016/j.artmed.2020.101980] [Citation(s) in RCA: 10] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/23/2020] [Revised: 07/04/2020] [Accepted: 10/25/2020] [Indexed: 10/23/2022]
|
174
|
Shirly S, Ramesh K. Review on 2D and 3D MRI Image Segmentation Techniques. Curr Med Imaging 2020; 15:150-160. [PMID: 31975661 DOI: 10.2174/1573405613666171123160609] [Citation(s) in RCA: 10] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/12/2017] [Revised: 10/23/2017] [Accepted: 11/14/2017] [Indexed: 11/22/2022]
Abstract
BACKGROUND Magnetic resonance imaging is widely used for early diagnosis of abnormalities in human organs. Due to technical advancements in digital image processing, automatic computer-aided medical image segmentation has been widely used in medical diagnostics. DISCUSSION Image segmentation is an image processing technique used for extracting image features and for searching and mining medical image records for better and more accurate medical diagnostics. Commonly used segmentation techniques are threshold-based, clustering-based, edge-based, region-based, atlas-based, and artificial neural network based image segmentation. CONCLUSION This survey aims to provide insight into different 2-dimensional and 3-dimensional MRI image segmentation techniques and to facilitate understanding for readers who are new to this field. This comparative study summarizes the benefits and limitations of the various segmentation techniques.
Affiliation(s)
- S Shirly
- Department of Computer Applications, Anna University Regional-Campus, Tirunelveli, Tamil Nadu, India
- K Ramesh
- Department of Computer Applications, Anna University Regional-Campus, Tirunelveli, Tamil Nadu, India
|
175
|
|
176
|
Bennai MT, Mazouzi S, Guessoum Z, Mezghiche M, Cormier S. A Cooperative Approach Based on Local Detection of Similarities and Discontinuities for Brain MR Images Segmentation. J Med Syst 2020; 44:145. [PMID: 32712718 DOI: 10.1007/s10916-020-01610-w] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/15/2020] [Accepted: 07/15/2020] [Indexed: 11/27/2022]
Abstract
This paper introduces a new cooperative multi-agent approach for segmenting brain Magnetic Resonance Images (MRIs). MRIs are manually processed by human radiology experts to identify many diseases and monitor their evolution. However, such a task is time-consuming and depends on expert decisions, which can be affected by many factors. Therefore, various types of research were, and are still, conducted to automate MRI processing, mainly MRI segmentation. The approach presented in this paper, without any parametrization or prior knowledge, uses a set of situated agents interacting locally to segment images in two main phases: the detection of discontinuities and the detection of similarities. An implementation of this approach was tested on phantom brain MR images to assess its results and demonstrate its efficiency. Experimental results show a Dice coefficient of at least 89%, even with increasing levels of noise and intensity non-uniformity.
Affiliation(s)
- Mohamed T Bennai
- LIMOSE Laboratory, Faculty of Sciences, University of M'hamed Bougara of Boumerdes, Avenue de l'indépendance, 35000, Boumerdes, Algeria.
- CReSTIC EA 3804, Université de Reims Champagne Ardenne, Reims, France.
- Smaine Mazouzi
- Department of Computer Science, University 20 Août 1955, Skikda, Algeria
- Zahia Guessoum
- CReSTIC EA 3804, Université de Reims Champagne Ardenne, Reims, France
- Mohamed Mezghiche
- LIMOSE Laboratory, Faculty of Sciences, University of M'hamed Bougara of Boumerdes, Avenue de l'indépendance, 35000, Boumerdes, Algeria
- Stéphane Cormier
- CReSTIC EA 3804, Université de Reims Champagne Ardenne, Reims, France
177
|
Automatic left ventricle segmentation in short-axis MRI using deep convolutional neural networks and central-line guided level set approach. Comput Biol Med 2020; 122:103877. [DOI: 10.1016/j.compbiomed.2020.103877] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/30/2019] [Revised: 06/20/2020] [Accepted: 06/20/2020] [Indexed: 12/29/2022]
|
178
|
Yang G, Lv T, Shen Y, Li S, Yang J, Chen Y, Shu H, Luo L, Coatrieux JL. Vessel Structure Extraction using Constrained Minimal Path Propagation. Artif Intell Med 2020; 105:101846. [PMID: 32505425 DOI: 10.1016/j.artmed.2020.101846] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/24/2018] [Revised: 10/23/2019] [Accepted: 03/20/2020] [Indexed: 11/18/2022]
Abstract
The minimal path method has been widely recognized as an efficient tool for extracting vascular structures in medical imaging. In a previous paper, a method termed minimal path propagation with backtracking (MPP-BT) was derived to deal with curve-like structures such as vessel centerlines. A robust approach termed CMPP (constrained minimal path propagation) is proposed here to extend this work. The proposed method utilizes another minimal path propagation procedure to extract the complete vessel lumen after the centerlines have been found. Moreover, a process named local MPP-BT is applied to handle missing structures caused by so-called closed-loop problems. This approach is fast and unsupervised, with only one roughly set start point required in the whole process to obtain the entire vascular structure. A variety of datasets, including 2D cardiac angiography, 2D retinal images and 3D kidney CT angiography, are used for validation. A quantitative evaluation, together with a comparison to recently reported methods, is performed on retinal images for which a ground truth is available. The proposed method leads to specificity (Sp) and sensitivity (Se) values of 0.9750 and 0.6591, respectively. This evaluation is also extended to 3D synthetic vascular datasets and shows that the specificity (Sp) and sensitivity (Se) values are higher than 0.99. Parameter settings and computation cost are analyzed in this paper.
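The specificity and sensitivity figures quoted above come from a pixel-wise comparison of the extracted vessel mask against a ground truth; a minimal sketch on toy 1D masks:

```python
import numpy as np

def sensitivity_specificity(pred, gt):
    """Se = TP/(TP+FN), Sp = TN/(TN+FP) for binary vessel masks."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    tp = np.sum(pred & gt)
    tn = np.sum(~pred & ~gt)
    fp = np.sum(pred & ~gt)
    fn = np.sum(~pred & gt)
    return tp / (tp + fn), tn / (tn + fp)

gt = np.array([1, 1, 0, 0], dtype=bool)    # true vessel pixels
pred = np.array([1, 0, 1, 0], dtype=bool)  # one hit, one miss, one false alarm
se, sp = sensitivity_specificity(pred, gt)
print(se, sp)  # 0.5 0.5
```

Because vessel pixels are a small minority of a retinal image, specificity is typically high (0.97 above) even when sensitivity is modest, which is worth keeping in mind when reading such numbers.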
Collapse
Affiliation(s)
- Guanyu Yang
- Laboratory of Image Science and Technology, Southeast University, Nanjing, China; Centre de Recherche en Information Biomedicale Sino-Francais (LIA CRIBs), Rennes, France; Key Laboratory of Computer Network and Information Integration, Ministry of Education, Southeast University, Nanjing 210096, China
| | - Tianling Lv
- Laboratory of Image Science and Technology, Southeast University, Nanjing, China; Key Laboratory of Computer Network and Information Integration, Ministry of Education, Southeast University, Nanjing 210096, China
| | - Yunpeng Shen
- Laboratory of Image Science and Technology, Southeast University, Nanjing, China
| | - Shuo Li
- Department of Medical Imaging, Western University, London, ON, Canada; Digital Image Group of London, London, ON, Canada
| | - Jian Yang
- Key Laboratory of Photoelectronic Imaging Technology and System, Ministry of Education, China.
| | - Yang Chen
- Laboratory of Image Science and Technology, Southeast University, Nanjing, China; Centre de Recherche en Information Biomedicale Sino-Francais (LIA CRIBs), Rennes, France; Key Laboratory of Computer Network and Information Integration, Ministry of Education, Southeast University, Nanjing 210096, China.
| | - Huazhong Shu
- Laboratory of Image Science and Technology, Southeast University, Nanjing, China; Centre de Recherche en Information Biomedicale Sino-Francais (LIA CRIBs), Rennes, France; Key Laboratory of Computer Network and Information Integration, Ministry of Education, Southeast University, Nanjing 210096, China
| | - Limin Luo
- Laboratory of Image Science and Technology, Southeast University, Nanjing, China; Centre de Recherche en Information Biomedicale Sino-Francais (LIA CRIBs), Rennes, France; Key Laboratory of Computer Network and Information Integration, Ministry of Education, Southeast University, Nanjing 210096, China
| | - Jean-Louis Coatrieux
- Centre de Recherche en Information Biomedicale Sino-Francais (LIA CRIBs), Rennes, France
| |
Collapse
|
179
|
Detection and Localization of Early-Stage Multiple Brain Tumors Using a Hybrid Technique of Patch-Based Processing, k-means Clustering and Object Counting. Int J Biomed Imaging 2020; 2020:9035096. [PMID: 32494290 PMCID: PMC7199552 DOI: 10.1155/2020/9035096] [Citation(s) in RCA: 16] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/20/2019] [Revised: 08/22/2019] [Accepted: 08/30/2019] [Indexed: 01/18/2023] Open
Abstract
Brain tumors are a major health problem that affects the lives of many people. These tumors are classified as benign or cancerous; the latter can be fatal if not properly diagnosed and treated. Diagnosing brain tumors at the early stages of their development can therefore significantly improve the chances of a patient's full recovery after treatment. In addition to laboratory analyses, clinicians and surgeons extract information from medical images recorded by systems such as magnetic resonance imaging (MRI), X-ray, and computed tomography (CT). The extracted information is used to identify the essential characteristics of brain tumors (location, size, and type) in order to reach an accurate diagnosis and determine the most appropriate treatment protocol. In this paper, we present an automated machine vision technique for the detection and localization of brain tumors in MRI images at their very early stages using a combination of k-means clustering, patch-based image processing, object counting, and tumor evaluation. The technique was tested on twenty real MRI images and was found to be capable of detecting multiple tumors regardless of their intensity-level variations, size, and location, including very small ones. In addition to its use for diagnosis, the technique can be integrated into automated treatment instruments and robotic surgery systems.
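The clustering step of such a pipeline can be illustrated with a bare-bones intensity k-means. This is a hypothetical sketch of the general technique only, not the authors' implementation, which additionally uses patch-based processing, object counting, and tumor evaluation.

```python
import numpy as np

def kmeans_intensity(img, k=3, iters=20):
    """Segment an image by clustering pixel intensities (plain 1D k-means)."""
    pix = img.reshape(-1).astype(float)
    # deterministic init: spread centers across the intensity range
    centers = np.quantile(pix, np.linspace(0.0, 1.0, k))
    for _ in range(iters):
        # assign each pixel to its nearest center
        labels = np.argmin(np.abs(pix[:, None] - centers[None, :]), axis=1)
        for j in range(k):
            if np.any(labels == j):      # leave empty clusters untouched
                centers[j] = pix[labels == j].mean()
    return labels.reshape(img.shape), centers

# synthetic "slice": dark background, mid-gray tissue, a small bright lesion
img = np.zeros((8, 8))
img[2:6, 2:6] = 0.5
img[3:5, 3:5] = 1.0
labels, centers = kmeans_intensity(img, k=3)
```
The brightest cluster would then be passed to connected-component counting to localize candidate tumors.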
Collapse
|
180
|
Onofrey JA, Staib LH, Huang X, Zhang F, Papademetris X, Metaxas D, Rueckert D, Duncan JS. Sparse Data-Driven Learning for Effective and Efficient Biomedical Image Segmentation. Annu Rev Biomed Eng 2020; 22:127-153. [PMID: 32169002 PMCID: PMC9351438 DOI: 10.1146/annurev-bioeng-060418-052147] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/09/2022]
Abstract
Sparsity is a powerful concept to exploit in high-dimensional machine learning, bringing both representational and computational efficiency, and it is well suited to medical image segmentation. We present a selection of techniques that incorporate sparsity, including strategies based on dictionary learning and deep learning, aimed at medical image segmentation and related quantification.
Collapse
Affiliation(s)
- John A Onofrey
- Department of Radiology and Biomedical Imaging, Yale School of Medicine, New Haven, Connecticut 06520, USA;
- Department of Urology, Yale School of Medicine, New Haven, Connecticut 06520, USA
| | - Lawrence H Staib
- Department of Radiology and Biomedical Imaging, Yale School of Medicine, New Haven, Connecticut 06520, USA;
- Department of Biomedical Engineering, Yale University, New Haven, Connecticut 06520, USA;
| | - Xiaojie Huang
- Department of Radiology and Biomedical Imaging, Yale School of Medicine, New Haven, Connecticut 06520, USA;
- Citadel Securities, Chicago, Illinois 60603, USA
| | - Fan Zhang
- Department of Radiology and Biomedical Imaging, Yale School of Medicine, New Haven, Connecticut 06520, USA;
| | - Xenophon Papademetris
- Department of Radiology and Biomedical Imaging, Yale School of Medicine, New Haven, Connecticut 06520, USA;
- Department of Biomedical Engineering, Yale University, New Haven, Connecticut 06520, USA;
| | - Dimitris Metaxas
- Department of Computer Science, Rutgers University, Piscataway, New Jersey 08854, USA
| | - Daniel Rueckert
- Department of Computing, Imperial College London, London SW7 2AZ, United Kingdom
| | - James S Duncan
- Department of Radiology and Biomedical Imaging, Yale School of Medicine, New Haven, Connecticut 06520, USA;
- Department of Biomedical Engineering, Yale University, New Haven, Connecticut 06520, USA;
| |
Collapse
|
181
|
Noguchi S, Nishio M, Yakami M, Nakagomi K, Togashi K. Bone segmentation on whole-body CT using convolutional neural network with novel data augmentation techniques. Comput Biol Med 2020; 121:103767. [DOI: 10.1016/j.compbiomed.2020.103767] [Citation(s) in RCA: 25] [Impact Index Per Article: 6.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/25/2020] [Revised: 04/15/2020] [Accepted: 04/15/2020] [Indexed: 10/24/2022]
|
182
|
A review on segmentation of knee articular cartilage: from conventional methods towards deep learning. Artif Intell Med 2020; 106:101851. [DOI: 10.1016/j.artmed.2020.101851] [Citation(s) in RCA: 20] [Impact Index Per Article: 5.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/24/2019] [Revised: 02/09/2020] [Accepted: 03/29/2020] [Indexed: 12/14/2022]
|
183
|
Carass A, Roy S, Gherman A, Reinhold JC, Jesson A, Arbel T, Maier O, Handels H, Ghafoorian M, Platel B, Birenbaum A, Greenspan H, Pham DL, Crainiceanu CM, Calabresi PA, Prince JL, Roncal WRG, Shinohara RT, Oguz I. Evaluating White Matter Lesion Segmentations with Refined Sørensen-Dice Analysis. Sci Rep 2020; 10:8242. [PMID: 32427874 PMCID: PMC7237671 DOI: 10.1038/s41598-020-64803-w] [Citation(s) in RCA: 56] [Impact Index Per Article: 14.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/03/2019] [Accepted: 04/20/2020] [Indexed: 11/09/2022] Open
Abstract
The Sørensen-Dice index (SDI) is a widely used measure for evaluating medical image segmentation algorithms. It offers a standardized measure of segmentation accuracy that has proven useful. However, it offers diminishing insight when the number of objects is unknown, as in white matter lesion segmentation for multiple sclerosis (MS) patients. We present a refinement for finer-grained parsing of SDI results in situations where the number of objects is unknown, and explore these ideas with two case studies. The first, an inter-rater comparison, shows that smaller lesions cannot be reliably identified. In the second, we fuse multiple MS lesion segmentation algorithms, guided by the insights our analysis provides into each algorithm, to generate a segmentation with improved performance. This work demonstrates the wealth of information that can be gained from refined analysis of medical image segmentations.
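The Sørensen-Dice index itself is straightforward to compute from binary masks; a minimal NumPy sketch (ours, not the paper's refined per-lesion analysis) is:

```python
import numpy as np

def dice_index(a, b):
    """Sørensen-Dice index between two binary segmentation masks."""
    a, b = a.astype(bool), b.astype(bool)
    inter = np.logical_and(a, b).sum()
    denom = a.sum() + b.sum()
    # convention: two empty masks agree perfectly
    return 2.0 * inter / denom if denom else 1.0

# two hypothetical raters marking the same slice
rater1 = np.array([[0, 1, 1], [0, 1, 0]])
rater2 = np.array([[0, 1, 0], [0, 1, 1]])
sdi = dice_index(rater1, rater2)   # 2*2 / (3+3) = 0.667
```
The paper's refinement parses this single global number into per-object contributions, which is where small lesions are shown to be unreliable.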
Collapse
Affiliation(s)
- Aaron Carass
- Department of Electrical and Computer Engineering, The Johns Hopkins University, Baltimore, MD, 21218, USA.
- Department of Computer Science, The Johns Hopkins University, Baltimore, MD, 21218, USA.
| | - Snehashis Roy
- CNRM, The Henry M. Jackson Foundation for the Advancement of Military Medicine, Bethesda, MD, 20817, USA
| | - Adrian Gherman
- Department of Biostatistics, The Johns Hopkins University, Baltimore, MD, 21205, USA
| | - Jacob C Reinhold
- Department of Electrical and Computer Engineering, The Johns Hopkins University, Baltimore, MD, 21218, USA
| | - Andrew Jesson
- Centre For Intelligent Machines, McGill University, Montréal, QC, H3A 0E9, Canada
| | - Tal Arbel
- Centre For Intelligent Machines, McGill University, Montréal, QC, H3A 0E9, Canada
| | - Oskar Maier
- Institute of Medical Informatics, University of Lübeck, 23538, Lübeck, Germany
| | - Heinz Handels
- Institute of Medical Informatics, University of Lübeck, 23538, Lübeck, Germany
| | - Mohsen Ghafoorian
- Institute for Computing and Information Sciences, Radboud University, 6525, HP, Nijmegen, Netherlands
| | - Bram Platel
- Diagnostic Image Analysis Group, Radboud University Medical Center, 6525, GA, Nijmegen, Netherlands
| | - Ariel Birenbaum
- Department of Electrical Engineering, Tel-Aviv University, Tel-Aviv, 69978, Israel
| | - Hayit Greenspan
- Department of Biomedical Engineering, Tel-Aviv University, Tel-Aviv, 69978, Israel
| | - Dzung L Pham
- CNRM, The Henry M. Jackson Foundation for the Advancement of Military Medicine, Bethesda, MD, 20817, USA
| | - Ciprian M Crainiceanu
- Department of Biostatistics, The Johns Hopkins University, Baltimore, MD, 21205, USA
| | - Peter A Calabresi
- Department of Neurology, The Johns Hopkins University School of Medicine, Baltimore, MD, 21287, USA
| | - Jerry L Prince
- Department of Electrical and Computer Engineering, The Johns Hopkins University, Baltimore, MD, 21218, USA
- Department of Computer Science, The Johns Hopkins University, Baltimore, MD, 21218, USA
| | - William R Gray Roncal
- Department of Computer Science, The Johns Hopkins University, Baltimore, MD, 21218, USA
| | - Russell T Shinohara
- Penn Statistics in Imaging and Visualization Center, Department of Biostatistics & Epidemiology, University of Pennsylvania, Philadelphia, PA, 19104, USA
- Center for Biomedical Image Computing and Analytics, Department of Radiology, University of Pennsylvania, Philadelphia, PA, 19104, USA
| | - Ipek Oguz
- Department of Electrical Engineering and Computer Science, Vanderbilt University, Nashville, TN, 37203, USA
| |
Collapse
|
184
|
Yang W, Yu X, Zhang J, Deng X. Plasmonic transmitted optical differentiator based on the subwavelength gold gratings. OPTICS LETTERS 2020; 45:2295-2298. [PMID: 32287217 DOI: 10.1364/ol.390566] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Received: 02/21/2020] [Accepted: 03/11/2020] [Indexed: 06/11/2023]
Abstract
A nanoscale plasmonic optical differentiator based on subwavelength gold gratings is investigated theoretically and experimentally without Fourier transform lenses and prisms. In the vicinity of surface plasmon resonance (SPR), the transfer function of subwavelength gold gratings is derived by optical scattering matrix theory. Simulated by the finite difference time domain (FDTD) method, the wavelengths of optical spatial differentiation performed by subwavelength gold gratings are tuned by the grating period and duty cycle, while the throughput of edge extraction is mainly adjusted by the grating thickness. Without Fourier transformation, the fabricated plasmonic optical differentiator experimentally achieves real-time optical spatial differentiation in transmission and implements SPR enhanced high-throughput edge extraction of a microscale image with a resolution of 10 µm at 650 nm, which has potential applications in areas of optical analog computing, optical imaging, and optical information processing.
Collapse
|
185
|
Zhou J, Peng Z, Song Y, Chang Y, Pei X, Sheng L, Xu XG. A method of using deep learning to predict three-dimensional dose distributions for intensity-modulated radiotherapy of rectal cancer. J Appl Clin Med Phys 2020; 21:26-37. [PMID: 32281254 PMCID: PMC7286006 DOI: 10.1002/acm2.12849] [Citation(s) in RCA: 17] [Impact Index Per Article: 4.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/25/2019] [Revised: 12/08/2019] [Accepted: 02/19/2020] [Indexed: 01/01/2023] Open
Abstract
Purpose: To develop and test a three-dimensional (3D) deep learning model for predicting 3D voxel-wise dose distributions for intensity-modulated radiotherapy (IMRT). Methods: A total of 122 postoperative rectal cancer cases treated by IMRT were considered, of which 100 were randomly selected as the training-validation set and the remainder as the testing set. A 3D deep learning model named 3D U-Res-Net_B was constructed to predict 3D dose distributions. Eight types of 3D matrices derived from CT images, contoured structures, and beam configurations were each fed into an independent input channel, and the 3D matrix of dose distributions was taken as the output to train the model. The obtained model was then used to predict new 3D dose distributions. Prediction accuracy was evaluated in two ways: (a) the Dice similarity coefficients (DSCs) of different isodose volumes, the average dose difference over all voxels within the body, and 3%/5 mm global gamma passing rates for organs at risk (OARs) and the planning target volume (PTV) were used to assess the spatial correspondence between predicted and clinically delivered 3D dose distributions; (b) dosimetric indices (DIs), including the homogeneity index, conformity index, and V50 and V45 for the PTV and OARs, were compared between prediction and clinical truth with paired-samples t tests. The model was also compared with 3D U-Net and with the same architecture without beam-configuration input (named 3D U-Res-Net_O). Results: The 3D U-Res-Net_B model predicted 3D dose distributions accurately. For the 22 testing cases, the average prediction bias ranged from −1.94% to 1.58%, and the overall mean absolute error (MAE) was 3.92 ± 4.16%; there were no statistically significant differences for nearly all DIs. The model had DSC values above 0.9 for most isodose volumes and global 3D gamma passing rates varying from 0.81 to 0.90 for the PTV and OARs, clearly outperforming 3D U-Res-Net_O and slightly outperforming 3D U-Net. Conclusions: This study developed a more general deep learning model by including beam-configuration input and achieved accurate 3D voxel-wise dose prediction for rectal cancer treated by IMRT, potentially enabling easier clinical implementation of more comprehensive automatic planning.
Collapse
Affiliation(s)
- Jieping Zhou
- Department of Radiation Oncology, The First Affiliated Hospital of USTC, Division of Life Sciences and Medicine, University of Science and Technology of China, Hefei, Anhui, China; National Synchrotron Radiation Laboratory, University of Science and Technology of China, Hefei, Anhui, China
| | - Zhao Peng
- Department of Engineering and Applied Physics, School of Physics, University of Science and Technology of China, Hefei, Anhui, China
| | - Yuchen Song
- Department of Engineering and Applied Physics, School of Physics, University of Science and Technology of China, Hefei, Anhui, China
| | - Yankui Chang
- Department of Engineering and Applied Physics, School of Physics, University of Science and Technology of China, Hefei, Anhui, China
| | - Xi Pei
- Department of Engineering and Applied Physics, School of Physics, University of Science and Technology of China, Hefei, Anhui, China; Anhui Wisdom Technology Company Limited, Hefei, Anhui, China
| | - Liusi Sheng
- National Synchrotron Radiation Laboratory, University of Science and Technology of China, Hefei, Anhui, China
| | - X George Xu
- Department of Engineering and Applied Physics, School of Physics, University of Science and Technology of China, Hefei, Anhui, China; Nuclear and Biomedical Engineering, Rensselaer Polytechnic Institute, Troy, NY, USA
| |
Collapse
|
186
|
PRF-RW: a progressive random forest-based random walk approach for interactive semi-automated pulmonary lobes segmentation. INT J MACH LEARN CYB 2020. [DOI: 10.1007/s13042-020-01111-9] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/24/2022]
|
187
|
Skin Lesion Segmentation from Dermoscopic Images Using Convolutional Neural Network. SENSORS 2020; 20:s20061601. [PMID: 32183041 PMCID: PMC7147706 DOI: 10.3390/s20061601] [Citation(s) in RCA: 31] [Impact Index Per Article: 7.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 01/28/2020] [Revised: 02/26/2020] [Accepted: 03/09/2020] [Indexed: 12/23/2022]
Abstract
Clinical treatment of skin lesions depends primarily on timely detection and delineation of lesion boundaries for accurate localization of the cancerous region. The prevalence of skin cancer is high, especially that of melanoma, which is aggressive due to its high metastasis rate; timely diagnosis is therefore critical for treatment before the onset of malignancy. To address this problem, medical imaging is used to analyze and segment lesion boundaries in dermoscopic images. Various methods have been used, ranging from visual inspection to textural analysis of the images, but their accuracy is too low for clinical use given the sensitivity of surgical procedures and drug application. This presents an opportunity to develop an automated model with good accuracy for use in a clinical setting. This paper proposes an automated method for segmenting lesion boundaries that combines two architectures, U-Net and ResNet, collectively called Res-Unet. We also used image inpainting for hair removal, which improved the segmentation results significantly. We trained our model on the ISIC 2017 dataset and validated it on the ISIC 2017 test set as well as the PH2 dataset. Our proposed model attained a Jaccard index of 0.772 on the ISIC 2017 test set and 0.854 on the PH2 dataset, comparable to current state-of-the-art techniques.
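The Jaccard index reported above can be computed directly from binary masks; the following is an illustrative sketch with made-up masks, not the paper's evaluation code:

```python
import numpy as np

def jaccard_index(pred, truth):
    """Jaccard index (intersection over union) of two binary lesion masks."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    inter = np.logical_and(pred, truth).sum()
    union = np.logical_or(pred, truth).sum()
    # convention: two empty masks agree perfectly
    return inter / union if union else 1.0

pred = np.zeros((10, 10), dtype=int)
pred[2:7, 2:7] = 1                      # 5x5 predicted lesion
truth = np.zeros((10, 10), dtype=int)
truth[3:8, 3:8] = 1                     # slightly shifted ground truth
j = jaccard_index(pred, truth)          # overlap 16 / union 34
```
Note that the Jaccard index J and the Dice coefficient D are interconvertible via D = 2J / (1 + J), so either can be reported from the same overlap counts.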
Collapse
|
188
|
Petrović N, Moyà-Alcover G, Varona J, Jaume-I-Capó A. Crowdsourcing human-based computation for medical image analysis: A systematic literature review. Health Informatics J 2020; 26:2446-2469. [PMID: 32141371 DOI: 10.1177/1460458220907435] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/15/2022]
Abstract
Computer-assisted algorithms for the analysis of medical images require human interaction to achieve satisfactory results, and human-based computation and crowdsourcing offer a solution to this problem. We performed a systematic literature review of studies on crowdsourcing human-based computation for medical image analysis, following the guidelines proposed by Kitchenham and Charters, and identified 43 studies relevant to the objective of this research. We determined three primary purposes and problem types that crowdsourcing human-based computation systems can address, and found that users provided five types of information. We compared systems that use pre-evaluation, post-evaluation, and quality-control methods to select and filter user inputs, analyzed the metrics used to evaluate system performance, and identified the most popular crowdsourcing human-based computation platforms with their advantages and disadvantages. Crowdsourcing human-based computation systems can successfully solve medical image analysis problems. However, their application in this research area is still limited, and more studies should be conducted to obtain generalizable results. We provide guidelines for practitioners and researchers based on the results obtained in this research.
Collapse
|
189
|
Lee Y, Veerubhotla K, Jeong MH, Lee CH. Deep Learning in Personalization of Cardiovascular Stents. J Cardiovasc Pharmacol Ther 2020; 25:110-120. [DOI: 10.1177/1074248419878405] [Citation(s) in RCA: 10] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Indexed: 08/30/2023]
Abstract
Deep learning (DL) has demonstrated enormous potential in accomplishing biomedical tasks such as vessel segmentation, brain visualization, and speech recognition. This review article mainly covers recent advances in the principles of DL algorithms, existing DL software, and design strategies for DL models. The latest progress in cardiovascular devices is discussed, especially DL-based cardiovascular stents used for angioplasty, differential and advanced diagnostic means, and treatment outcomes for coronary artery disease (CAD). Also presented are DL-based discovery of new materials and future medical technologies that will facilitate tailored, personalized treatment strategies by identifying and forecasting individual impending risks of cardiovascular disease.
Collapse
Affiliation(s)
- Yugyung Lee
- School of Computing and Engineering, University of Missouri-Kansas City, MO, USA
| | - Krishna Veerubhotla
- Division of Pharmaceutical Sciences, School of Pharmacy, University of Missouri-Kansas City, MO, USA
| | - Myung Ho Jeong
- Department of Cardiovascular Medicine of Chonnam National University, Gwang-Ju, South Korea
| | - Chi H. Lee
- Division of Pharmaceutical Sciences, School of Pharmacy, University of Missouri-Kansas City, MO, USA
| |
Collapse
|
190
|
Rohini P, Sundar S, Ramakrishnan S. Differentiation of early mild cognitive impairment in brainstem MR images using multifractal detrended moving average singularity spectral features. Biomed Signal Process Control 2020. [DOI: 10.1016/j.bspc.2019.101780] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/28/2023]
|
191
|
Dhanachandra N, Chanu YJ, Singh KM. A new hybrid image segmentation approach using clustering and black hole algorithm. Comput Intell 2020. [DOI: 10.1111/coin.12297] [Citation(s) in RCA: 10] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/28/2022]
Affiliation(s)
| | - Y. Jina Chanu
- CSE DepartmentNational Institute of Technology Manipur India
| | | |
Collapse
|
192
|
Garcia-Granero A, Pellino G, Giner F, Frasson M, Fletcher-Sanfeliu D, Romaguera VP, Flor-Lorente B, Gamundi M, Brogi L, Garcia-Calderón D, Gonzalez-Argente FX, Garcia-Granero E. A mathematical 3D-method applied to MRI to evaluate prostatic infiltration in advanced rectal cancer. Tech Coloproctol 2020; 24:605-607. [PMID: 32107687 DOI: 10.1007/s10151-020-02170-4] [Citation(s) in RCA: 6] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 01/10/2020] [Accepted: 02/07/2020] [Indexed: 12/30/2022]
Affiliation(s)
- A Garcia-Granero
- Colorectal Surgery Unit, Hospital Universitario Son Espases, Mallorca, Spain
| | - G Pellino
- Colorectal Surgery Unit, Hospital Vall D'Hebron, Passeig de la Vall d'Hebron 119-129, 08035, Barcelona, Spain; Department of Advanced Medical and Surgical Sciences, Università degli Studi della Campania "Luigi Vanvitelli", Naples, Italy
| | - F Giner
- Department of Pathology, Hospital Universitario y Politécnico la Fe, Valencia, Spain
| | - M Frasson
- Colorectal Surgery Unit, Hospital Universitario y Politécnico la Fe, Valencia, Spain
| | - D Fletcher-Sanfeliu
- Cardiovascular Surgery Department, Hospital Universitario Son Espases, Mallorca, Spain
| | - V P Romaguera
- Colorectal Surgery Unit, Hospital Universitario y Politécnico la Fe, Valencia, Spain
| | - B Flor-Lorente
- Colorectal Surgery Unit, Hospital Universitario y Politécnico la Fe, Valencia, Spain
| | - M Gamundi
- Colorectal Surgery Unit, Hospital Universitario Son Espases, Mallorca, Spain
| | - L Brogi
- 3D-Reconstruction Unit and Simulation Center, Hospital Universitario Son Espases, Mallorca, Spain
| | | | | | - E Garcia-Granero
- Colorectal Surgery Unit, Hospital Universitario y Politécnico la Fe, Valencia, Spain
| |
Collapse
|
193
|
Qin W, Wu Y, Li S, Chen Y, Yang Y, Liu X, Zheng H, Liang D, Hu Z. Automated segmentation of the left ventricle from MR cine imaging based on deep learning architecture. Biomed Phys Eng Express 2020; 6:025009. [DOI: 10.1088/2057-1976/ab7363] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/17/2023]
|
194
|
AlZu’bi S, Shehab M, Al-Ayyoub M, Jararweh Y, Gupta B. Parallel implementation for 3D medical volume fuzzy segmentation. Pattern Recognit Lett 2020. [DOI: 10.1016/j.patrec.2018.07.026] [Citation(s) in RCA: 42] [Impact Index Per Article: 10.5] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/26/2022]
|
195
|
Nitkunanantharajah S, Zahnd G, Olivo M, Navab N, Mohajerani P, Ntziachristos V. Skin Surface Detection in 3D Optoacoustic Mesoscopy Based on Dynamic Programming. IEEE TRANSACTIONS ON MEDICAL IMAGING 2020; 39:458-467. [PMID: 31329549 DOI: 10.1109/tmi.2019.2928393] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/24/2023]
Abstract
Optoacoustic (photoacoustic) mesoscopy offers unique capabilities in skin imaging and resolves skin features associated with detection, diagnosis, and management of disease. A critical first step in the quantitative analysis of clinical optoacoustic images is to identify the skin surface in a rapid, reliable, and automated manner. Nevertheless, most common edge- and surface-detection algorithms cannot reliably detect the skin surface on 3D raster-scan optoacoustic mesoscopy (RSOM) images, due to discontinuities and diffuse interfaces in the image. We present herein a novel dynamic programming approach that extracts the skin boundary as a 2D surface in one single step, as opposed to consecutive extraction of several independent 1D contours. A domain-specific energy function is introduced, taking into account the properties of volumetric optoacoustic mesoscopy images. The accuracy of the proposed method is validated on scans of the volar forearm of 19 volunteers with different skin complexions, for which the skin surface has been traced manually to provide a reference. In addition, the robustness and the limitations of the method are demonstrated on data where the skin boundaries are low-contrast or ill-defined. The automatic skin surface detection method can improve the speed and accuracy in the analysis of quantitative features seen on the RSOM images and accelerate the clinical translation of the technique. Our method can likely be extended to identify other types of surfaces in the RSOM and other imaging modalities.
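The consecutive-1D-contour baseline that this paper improves upon can be sketched as a column-wise dynamic program. The toy version below (ours, with assumed names) extracts one minimal-cost contour per image, whereas the proposed method extracts the full 2D skin surface in a single step with a domain-specific energy:

```python
import numpy as np

def dp_contour(cost, max_jump=1):
    """Minimal-cost contour (one row index per column) by dynamic programming."""
    h, w = cost.shape
    acc = cost.astype(float).copy()          # accumulated path cost
    back = np.zeros((h, w), dtype=int)       # backtracking pointers
    for x in range(1, w):
        for y in range(h):
            # the contour may move at most max_jump rows between columns
            lo, hi = max(0, y - max_jump), min(h, y + max_jump + 1)
            best = lo + int(np.argmin(acc[lo:hi, x - 1]))
            back[y, x] = best
            acc[y, x] = cost[y, x] + acc[best, x - 1]
    y = int(np.argmin(acc[:, -1]))           # cheapest endpoint, then backtrack
    contour = [y]
    for x in range(w - 1, 0, -1):
        y = int(back[y, x])
        contour.append(y)
    return contour[::-1]

# a dark (low-cost) boundary along row 2 of a bright background
cost = np.ones((5, 4))
cost[2, :] = 0.0
contour = dp_contour(cost)   # follows row 2 across all columns
```
In practice the cost image would be derived from the RSOM volume (e.g. inverted intensity), and the jump constraint enforces contour smoothness.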
Collapse
|
196
|
Active learning for accuracy enhancement of semantic segmentation with CNN-corrected label curations: Evaluation on kidney segmentation in abdominal CT. Sci Rep 2020; 10:366. [PMID: 31941938 PMCID: PMC6962335 DOI: 10.1038/s41598-019-57242-9] [Citation(s) in RCA: 16] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/20/2019] [Accepted: 12/24/2019] [Indexed: 01/10/2023] Open
Abstract
Segmentation is fundamental to medical image analysis. Recent advances in fully convolutional networks have enabled automatic segmentation; however, the high labeling effort and the difficulty of acquiring sufficient high-quality training data remain challenges. In this study, a cascaded 3D U-Net with active learning is proposed to increase training efficiency with exceedingly limited data and to reduce labeling effort. Abdominal computed tomography images of 50 kidneys were used for training. In stage I, 20 kidneys with renal cell carcinoma and four substructures were used for training with manually labeled ground truths. In stage II, the 20 kidneys from the previous stage and 20 newly added kidneys were used, with convolutional neural network (CNN)-corrected labeling for the newly added data. Similarly, in stage III, all 50 kidneys were used. The Dice similarity coefficient increased with the completion of each stage and shows superior performance compared with a recent segmentation network based on 3D U-Net. The labeling time for CNN-corrected segmentation was reduced by more than half compared to manual segmentation. Active learning was therefore concluded to be capable of reducing labeling effort through CNN-corrected segmentation and of increasing training efficiency through iterative learning with limited data.
Collapse
|
197
|
Saygili A, Albayrak S. Knee Meniscus Segmentation and Tear Detection from MRI: A Review. Curr Med Imaging 2020; 16:2-15. [DOI: 10.2174/1573405614666181017122109] [Citation(s) in RCA: 8] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/28/2018] [Revised: 09/20/2018] [Accepted: 09/29/2018] [Indexed: 12/22/2022]
Abstract
Background:
Automatic diagnostic systems in medical imaging provide useful information
to support radiologists and other relevant experts. The systems that help radiologists in their
analysis and diagnosis appear to be increasing.
Discussion:
Knee joints are likewise intensively studied structures. This review surveys studies that automatically segment meniscal structures from knee-joint MR images and detect tears. Some studies in the literature perform meniscus segmentation only, while others add classification procedures that detect anomalies on the segmented menisci. The studies were categorized according to the methods they used; those methods and their results were analyzed along with their drawbacks, and the aspects still to be developed were emphasized.
Conclusion:
The work that has been done in this area can effectively support the decisions made by radiology and orthopedics specialists. Furthermore, operations that were previously performed manually on MR images can be completed in a shorter time with the help of computer-aided systems, enabling earlier diagnosis and treatment.
Collapse
Affiliation(s)
- Ahmet Saygili
- Computer Engineering Department, Corlu Faculty of Engineering, Namık Kemal University, Tekirdağ, Turkey
- Songül Albayrak
- Computer Engineering Department, Faculty of Electric and Electronics, Yıldız Technical University, İstanbul, Turkey
Collapse
|
198
|
Jha D, Smedsrud PH, Riegler MA, Halvorsen P, de Lange T, Johansen D, Johansen HD. Kvasir-SEG: A Segmented Polyp Dataset. MULTIMEDIA MODELING 2020. [DOI: 10.1007/978-3-030-37734-2_37] [Citation(s) in RCA: 109] [Impact Index Per Article: 27.3] [Reference Citation Analysis] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 01/10/2023]
|
199
|
Chowdhary CL, Acharjya D. Segmentation and Feature Extraction in Medical Imaging: A Systematic Review. ACTA ACUST UNITED AC 2020. [DOI: 10.1016/j.procs.2020.03.179] [Citation(s) in RCA: 51] [Impact Index Per Article: 12.8] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/28/2022]
|
200
|
Chen S, Wang L, Li G, Wu TH, Diachina S, Tejera B, Kwon JJ, Lin FC, Lee YT, Xu T, Shen D, Ko CC. Machine learning in orthodontics: Introducing a 3D auto-segmentation and auto-landmark finder of CBCT images to assess maxillary constriction in unilateral impacted canine patients. Angle Orthod 2020; 90:77-84. [PMID: 31403836 PMCID: PMC8087054 DOI: 10.2319/012919-59.1] [Citation(s) in RCA: 31] [Impact Index Per Article: 7.8] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/01/2019] [Accepted: 05/01/2019] [Indexed: 12/21/2022] Open
Abstract
OBJECTIVES To (1) introduce a novel machine learning method and (2) assess maxillary structure variation in unilateral canine impaction to advance clinically viable information. MATERIALS AND METHODS A machine learning algorithm based on the Learning-based multi-source IntegratioN frameworK for Segmentation (LINKS) was used with cone-beam computed tomography (CBCT) images to quantify volumetric skeletal maxilla discrepancies in 30 study group (SG) patients with unilaterally impacted maxillary canines and 30 healthy control group (CG) subjects. Fully automatic segmentation was implemented for maxilla isolation, and maxillary volumetric and linear measurements were performed. Analysis of variance was used for statistical evaluation. RESULTS The maxillary structure was successfully auto-segmented, with an average Dice ratio of 0.80 for three-dimensional image segmentations and a minimal mean difference of two voxels on the midsagittal plane for digitized landmarks between the manually identified and the machine learning-based (LINKS) methods. No significant difference in bone volume was found between the impaction ((2.37 ± 0.34) × 10⁴ mm³) and nonimpaction ((2.36 ± 0.35) × 10⁴ mm³) sides of the SG. The SG maxillae had significantly smaller volumes, widths, heights, and depths (P < .05) than those of the CG. CONCLUSIONS The data suggest that palatal expansion could be beneficial for patients with unilateral canine impaction, as underdevelopment of the maxilla often accompanies that condition in the early teen years. Fast and efficient CBCT image segmentation will allow large clinical data sets to be analyzed effectively.
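The volumetric measurements above reduce, once a binary maxilla mask is available, to counting labeled voxels and scaling by the physical voxel volume. A minimal sketch under an assumed isotropic 0.4 mm CBCT voxel spacing (the spacing and the `mask_volume_mm3` helper are illustrative, not taken from the study):

```python
import numpy as np

def mask_volume_mm3(mask: np.ndarray, voxel_spacing_mm=(0.4, 0.4, 0.4)) -> float:
    """Physical volume of a binary segmentation mask, in mm^3."""
    voxel_vol = float(np.prod(voxel_spacing_mm))  # mm^3 per voxel
    return np.count_nonzero(mask) * voxel_vol

# A 10x10x10 block of labeled voxels at 0.4 mm isotropic spacing:
mask = np.zeros((64, 64, 64), dtype=np.uint8)
mask[10:20, 10:20, 10:20] = 1
print(mask_volume_mm3(mask))  # 1000 voxels × 0.064 mm³/voxel → 64.0
```

In practice the spacing would be read from the CBCT image header rather than hard-coded, and anisotropic spacings are handled by the same product of per-axis spacings.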
Collapse
|