1
Nalepa J, Kotowski K, Machura B, Adamski S, Bozek O, Eksner B, Kokoszka B, Pekala T, Radom M, Strzelczak M, Zarudzki L, Krason A, Arcadu F, Tessier J. Deep learning automates bidimensional and volumetric tumor burden measurement from MRI in pre- and post-operative glioblastoma patients. Comput Biol Med 2023; 154:106603. [PMID: 36738710] [DOI: 10.1016/j.compbiomed.2023.106603]
Abstract
Tumor burden assessment by magnetic resonance imaging (MRI) is central to evaluating treatment response in glioblastoma. This assessment is, however, complex to perform and subject to high variability owing to the heterogeneity and complexity of the disease. In this work, we address this issue and propose a deep learning pipeline for fully automated, end-to-end analysis of glioblastoma patients. Our approach first segments the tumor sub-regions, including the enhancing tumor (ET), peritumoral edema (ED), and surgical cavity, and then calculates the volumetric and bidimensional measurements that follow the current Response Assessment in Neuro-Oncology (RANO) criteria. We also introduce a rigorous manual annotation process that the human experts followed to delineate the tumor sub-regions and to capture their segmentation confidences, which are later used while training the deep learning models. In an extensive experimental study over 760 pre-operative and 504 post-operative adult glioma patients, obtained from a public database (acquired at 19 sites in years 2021-2020) and from a clinical treatment trial (47 and 69 sites for pre-/post-operative patients, 2009-2011) and backed by thorough quantitative, qualitative, and statistical analysis, our pipeline accurately segmented pre- and post-operative MRIs in a fraction of the manual delineation time (up to 20 times faster than humans). Volumetric measurements were in strong agreement with the experts (Intraclass Correlation Coefficient, ICC: 0.959, 0.703, and 0.960 for ET, ED, and cavity, respectively). Automated RANO measurements likewise compared favorably with experienced readers (ICC: 0.681 and 0.866), producing consistent and accurate results. Additionally, we showed that RANO measurements are not always sufficient to quantify tumor burden. The high performance of the automated tumor burden measurement highlights the tool's potential to considerably improve and simplify the radiological evaluation of glioblastoma in clinical trials and clinical practice.
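At its core, the bidimensional RANO measurement the pipeline automates is the product of the longest in-plane tumor diameter and the longest diameter perpendicular to it. The sketch below illustrates that geometry on a single 2D binary mask; it is our own illustration of the measurement, not the paper's implementation (which operates on segmented enhancing tumor across slices).

```python
import numpy as np

def bidimensional_product(mask):
    """Approximate RANO-style bidimensional product on a 2D binary mask:
    longest diameter times the spread of the mask in the perpendicular
    direction (brute force over foreground pixels; fine for small masks)."""
    pts = np.argwhere(mask)                       # (row, col) foreground coords
    # Longest diameter: the farthest-apart pair of foreground pixels.
    d = pts[:, None, :] - pts[None, :, :]
    dist2 = (d ** 2).sum(-1)
    i, j = np.unravel_index(dist2.argmax(), dist2.shape)
    major = np.sqrt(dist2[i, j])
    u = (pts[j] - pts[i]) / major                 # unit vector of the major axis
    # Perpendicular diameter approximated as the widest spread of the
    # projections onto the direction orthogonal to the major axis.
    perp = pts @ np.array([-u[1], u[0]])
    minor = perp.max() - perp.min()
    return major * minor

mask = np.zeros((20, 20), dtype=bool)
mask[5:10, 3:12] = True                           # a 5x9 rectangular "lesion"
print(bidimensional_product(mask))                # -> 64.0 for this rectangle
```

For this rectangle the major axis is the corner-to-corner diagonal, so the perpendicular spread is the projected width rather than a clinically drawn perpendicular diameter; the sketch only approximates the bedside rule.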
Affiliation(s)
- Jakub Nalepa
- Graylight Imaging, Gliwice, Poland; Department of Algorithmics and Software, Silesian University of Technology, Gliwice, Poland.
- Oskar Bozek
- Department of Radiodiagnostics and Invasive Radiology, School of Medicine in Katowice, Medical University of Silesia in Katowice, Katowice, Poland
- Bartosz Eksner
- Department of Radiology and Nuclear Medicine, ZSM Chorzów, Chorzów, Poland
- Bartosz Kokoszka
- Department of Radiodiagnostics, Interventional Radiology and Nuclear Medicine, University Clinical Centre, Katowice, Poland
- Tomasz Pekala
- Department of Radiodiagnostics, Interventional Radiology and Nuclear Medicine, University Clinical Centre, Katowice, Poland
- Mateusz Radom
- Department of Radiology and Diagnostic Imaging, Maria Skłodowska-Curie National Research Institute of Oncology, Gliwice Branch, Gliwice, Poland
- Marek Strzelczak
- Department of Radiology and Diagnostic Imaging, Maria Skłodowska-Curie National Research Institute of Oncology, Gliwice Branch, Gliwice, Poland
- Lukasz Zarudzki
- Department of Radiology and Diagnostic Imaging, Maria Skłodowska-Curie National Research Institute of Oncology, Gliwice Branch, Gliwice, Poland
- Agata Krason
- Roche Pharmaceutical Research & Early Development, Early Clinical Development Oncology, Roche Innovation Center Basel, Basel, Switzerland
- Filippo Arcadu
- Roche Pharmaceutical Research & Early Development, Early Clinical Development Informatics, Roche Innovation Center Basel, Basel, Switzerland
- Jean Tessier
- Roche Pharmaceutical Research & Early Development, Early Clinical Development Oncology, Roche Innovation Center Basel, Basel, Switzerland
2
Ghosal P, Chowdhury T, Kumar A, Bhadra AK, Chakraborty J, Nandi D. MhURI: A Supervised Segmentation Approach to Leverage Salient Brain Tissues in Magnetic Resonance Images. Comput Methods Programs Biomed 2021; 200:105841. [PMID: 33221057] [PMCID: PMC9096474] [DOI: 10.1016/j.cmpb.2020.105841]
Abstract
BACKGROUND AND OBJECTIVES: Accurate segmentation of critical tissues from brain MRI is pivotal for characterization and quantitative pattern analysis of the human brain, and thereby for identifying the earliest signs of various neurodegenerative diseases. To date, this is mostly done manually by radiologists, and the overwhelming workload in some densely populated nations may cause exhaustion and interruptions for the doctors, posing a continuing threat to patient safety. A novel fusion method called U-Net inception, based on 3D convolutions and transition layers, is proposed to address this issue. METHODS: A 3D deep learning method called Multi-headed U-Net with Residual Inception (MhURI), accompanied by a morphological gradient channel, is proposed for brain tissue segmentation, with the Residual Inception 2-Residual (RI2R) module as its basic building block. The model exploits morphological pre-processing for structural enhancement of the MR images. A multi-path data-encoding pipeline is introduced on top of the U-Net backbone, which encapsulates initial global features and captures information from each MRI modality. RESULTS: The proposed model achieves encouraging results on several established quality metrics when compared with state-of-the-art methods on two popular publicly available data sets. CONCLUSION: The model is entirely automatic and segments gray matter (GM), white matter (WM), and cerebrospinal fluid (CSF) from brain MRI effectively and with sufficient accuracy; it may therefore be considered a potential computer-aided diagnostic (CAD) tool for radiologists and other medical practitioners in their clinical diagnosis workflow.
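The morphological gradient channel mentioned above is a standard edge-emphasizing transform: the difference between a grayscale dilation and erosion of the image. A minimal NumPy sketch with a 3x3 square structuring element (our illustration of the general operation, not MhURI's exact pre-processing):

```python
import numpy as np

def morph_gradient(img):
    """Morphological gradient (dilation minus erosion) with a 3x3 square
    structuring element, using edge padding at the borders."""
    p = np.pad(img, 1, mode="edge")
    # The nine shifted views cover every pixel's 3x3 neighbourhood.
    shifts = [p[i:i + img.shape[0], j:j + img.shape[1]]
              for i in range(3) for j in range(3)]
    stack = np.stack(shifts)
    return stack.max(axis=0) - stack.min(axis=0)   # dilation - erosion

img = np.zeros((6, 6))
img[2:4, 2:4] = 1.0            # a bright 2x2 square on a dark background
g = morph_gradient(img)
# g is non-zero only on and around the square's border, zero in flat regions.
```

Feeding such a channel alongside the raw modality gives the network an explicit boundary cue, which is the structural-enhancement role the abstract describes.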
Affiliation(s)
- Palash Ghosal
- Department of Computer Science and Engineering, National Institute of Technology Durgapur-713209, West Bengal, India.
- Tamal Chowdhury
- Department of Electronics and Communication Engineering, National Institute of Technology Durgapur-713209, West Bengal, India.
- Amish Kumar
- Department of Computer Science and Engineering, National Institute of Technology Durgapur-713209, West Bengal, India.
- Ashok Kumar Bhadra
- Department of Radiology, KPC Medical College and Hospital, Jadavpur, 700032, West Bengal, India.
- Jayasree Chakraborty
- Department of Hepatopancreatobiliary Service, Memorial Sloan Kettering Cancer Center, New York, NY 10065, USA.
- Debashis Nandi
- Department of Computer Science and Engineering, National Institute of Technology Durgapur-713209, West Bengal, India.
3
Tamoor M, Younas I. Automatic segmentation of medical images using a novel Harris Hawk optimization method and an active contour model. J Xray Sci Technol 2021; 29:721-739. [PMID: 34024808] [DOI: 10.3233/xst-210879]
Abstract
Medical image segmentation is a key step in assisting the diagnosis of several diseases, and the accuracy of a segmentation method matters for further treatment. Different medical imaging modalities present different challenges, such as intensity inhomogeneity, noise, low contrast, and ill-defined boundaries, which make automated segmentation a difficult task. To handle these issues, we propose a new fully automated method for medical image segmentation that combines the advantages of thresholding and an active contour model. In this study, a Harris Hawks optimizer is applied to determine the optimal thresholding value, which is used to obtain the initial contour for segmentation. The obtained contour is further refined by using a spatially varying Gaussian kernel in the active contour model. The proposed method is validated on a standard skin dataset (ISBI 2016), which consists of variable-sized lesions and challenging artifacts, and a standard cardiac magnetic resonance dataset (ACDC, MICCAI 2017) with a wide spectrum of normal hearts, congenital heart diseases, and cardiac dysfunction. Experimental results show that the proposed method effectively segments the region of interest and produces superior segmentation results for the skin dataset (overall Dice score 0.90) and the cardiac dataset (overall Dice score 0.93) compared with other state-of-the-art algorithms.
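The first stage described above is a metaheuristic search for a threshold that best splits the intensity histogram. The sketch below replaces the full Harris Hawks update rules with a much simpler population-based search that contracts around the current best candidate, and uses Otsu's between-class variance as the objective; both simplifications are ours, so this shows only the shape of the thresholding stage, not the paper's optimizer.

```python
import numpy as np

def between_class_variance(img, t):
    """Otsu's objective: large when threshold t splits the intensities
    into two well-separated classes."""
    lo, hi = img[img <= t], img[img > t]
    if lo.size == 0 or hi.size == 0:
        return 0.0
    w0, w1 = lo.size / img.size, hi.size / img.size
    return w0 * w1 * (lo.mean() - hi.mean()) ** 2

def search_threshold(img, n_agents=20, iters=30, seed=0):
    """Simplified stand-in for a Harris Hawks search over thresholds:
    keep a population of candidates, score each, and resample the
    population in a shrinking radius around the best one."""
    rng = np.random.default_rng(seed)
    pop = rng.uniform(img.min(), img.max(), n_agents)
    best = pop[0]
    for it in range(iters):
        scores = np.array([between_class_variance(img, t) for t in pop])
        best = pop[scores.argmax()]
        step = (img.max() - img.min()) * (1 - it / iters)   # shrinking radius
        pop = np.clip(best + rng.uniform(-step, step, n_agents),
                      img.min(), img.max())
    return best

# Two well-separated intensity populations: the recovered threshold
# should land in the gap between them (around 0.5 here).
rng = np.random.default_rng(1)
img = np.concatenate([rng.normal(0.2, 0.03, 500), rng.normal(0.8, 0.03, 500)])
t = search_threshold(img)
```

In the paper, the thresholded result then seeds the active contour, which does the fine boundary refinement.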
Affiliation(s)
- Maria Tamoor
- FAST School of Computing, National University of Computer and Emerging Sciences, Lahore, Pakistan
- Irfan Younas
- FAST School of Computing, National University of Computer and Emerging Sciences, Lahore, Pakistan
4
A Survey on Computer-Aided Diagnosis of Brain Disorders through MRI Based on Machine Learning and Data Mining Methodologies with an Emphasis on Alzheimer Disease Diagnosis and the Contribution of the Multimodal Fusion. Appl Sci (Basel) 2020. [DOI: 10.3390/app10051894]
Abstract
Computer-aided diagnostic (CAD) systems use machine learning methods that provide a synergistic effect between the neuroradiologist and the computer, enabling an efficient and rapid diagnosis of the patient's condition. As part of the early diagnosis of Alzheimer's disease (AD), a major public health problem, the CAD system provides a neuropsychological assessment that helps mitigate its effects. The use of data fusion techniques by CAD systems has proven useful: they allow information about the brain and its tissues from MRI to be merged with that from other modalities. This multimodal fusion refines the quality of brain images by reducing redundancy and randomness, which contributes to improving the clinical reliability of the diagnosis compared with the use of a single modality. The purpose of this article is, first, to determine the main steps of a CAD system for brain magnetic resonance imaging (MRI) and to bring together research related to the diagnosis of brain disorders, with an emphasis on AD; the methods most used in the classification and brain-region segmentation stages are described, highlighting their advantages and disadvantages. Second, on the basis of the problem raised, we propose a solution within the framework of multimodal fusion: based on quantitative measurement parameters, a performance study of multimodal CAD systems is proposed that compares their effectiveness with systems exploiting a single MRI modality. In this context, advances in information fusion techniques in medical imaging are accentuated, highlighting their advantages and disadvantages. The contribution of multimodal fusion and the interest of hybrid models are finally addressed, as well as the main scientific assertions made in the field of brain disease diagnosis.
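One common form of the multimodal fusion the survey discusses is decision-level (late) fusion: each modality produces its own diagnostic score, and the scores are combined. A minimal sketch; the modality names, scores, and weights below are purely illustrative, not taken from the survey.

```python
import numpy as np

# Hypothetical per-modality CAD scores for three subjects: probability of
# AD estimated independently from, e.g., T1 MRI, FDG-PET and CSF biomarkers.
scores = {
    "mri": np.array([0.62, 0.40, 0.81]),
    "pet": np.array([0.70, 0.35, 0.77]),
    "csf": np.array([0.55, 0.48, 0.90]),
}
weights = {"mri": 0.5, "pet": 0.3, "csf": 0.2}   # e.g. set from validation AUC

def late_fusion(scores, weights):
    """Decision-level fusion: a weighted average of per-modality scores,
    one of the simplest ways to combine modalities in a multimodal CAD."""
    total = sum(weights.values())
    return sum(weights[m] * s for m, s in scores.items()) / total

fused = late_fusion(scores, weights)   # one fused AD score per subject
```

Feature-level and hybrid fusion schemes, also covered by the survey, instead merge the modality data or features before a single classifier.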
5
George MM, Kalaivani S. Retrospective correction of intensity inhomogeneity with sparsity constraints in transform-domain: Application to brain MRI. Magn Reson Imaging 2019; 61:207-223. [PMID: 31009687] [DOI: 10.1016/j.mri.2019.04.011]
Abstract
An effective retrospective correction method is introduced in this paper for intensity inhomogeneity, an inherent artifact in MR images. The intensity inhomogeneity problem is formulated as the decomposition of the acquired image into a true image and a bias field, each of which is expected to have a sparse approximation in a suitable transform domain based on its known properties: the piecewise-constant nature of the true image lends itself to a sparse approximation in the framelet domain, while the spatially smooth bias field supports a sparse representation in the Fourier domain. The algorithm attains optimal results by seeking the sparsest solutions for the unknown variables in the search space through L1-norm minimization. The objective function associated with the defined problem is convex and is efficiently solved by the linearized alternating direction method. The method thus estimates the optimal true image and bias field simultaneously in an L1-norm minimization framework by promoting sparsity of the solutions in suitable transform domains. Furthermore, the methodology does not require any preprocessing, predefined specifications, or parametric models that are critically controlled by user-defined parameters. Qualitative and quantitative validation on simulated and real human brain MR images demonstrates the efficacy and superiority of the proposed methodology compared with several distinguished algorithms for intensity inhomogeneity correction.
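Stripped of implementation detail, the decomposition described above can be written as a convex L1 program. The symbols and the additive formulation below are our schematic reading of the abstract, not necessarily the authors' exact notation:

```latex
\min_{u,\;b} \;\; \lVert W u \rVert_1 \;+\; \lambda\, \lVert F b \rVert_1
\qquad \text{subject to} \qquad f = u + b,
```

where $f$ is the acquired image, $u$ the true image, $b$ the bias field, $W$ a framelet transform (in which the piecewise-constant $u$ is sparse), $F$ the Fourier transform (in which the smooth $b$ is sparse), and $\lambda$ a weight balancing the two sparsity terms; the constrained problem is then solved with the linearized alternating direction method.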
Affiliation(s)
- Maryjo M George
- School of Electronics Engineering, Vellore Institute of Technology, Vellore, Tamil Nadu 632014, India.
- S Kalaivani
- School of Electronics Engineering, Vellore Institute of Technology, Vellore, Tamil Nadu 632014, India.
6
Abstract
In brain magnetic resonance (MR) images, image quality is often degraded by noise and outliers, which makes it difficult for doctors to segment and extract brain tissue accurately. In this paper, a modified robust fuzzy c-means (MRFCM) algorithm for brain MR image segmentation is proposed. According to the gray-level information of the pixels in the local neighborhood, the deviation of each adjacent pixel from the neighborhood median is calculated in kernel space, and a normalized adaptive weighted measure of each pixel is obtained. Both impulse noise and Gaussian noise in the image are effectively suppressed, while the detail and edge information of the brain MR image is better preserved. At the same time, the gray histogram replaces single pixels during the clustering process. Compared with state-of-the-art fuzzy clustering algorithms, the proposed algorithm shows stronger noise immunity, better robustness to various kinds of noise, and higher segmentation accuracy.
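The histogram trick mentioned above, clustering gray levels weighted by their counts instead of individual pixels, is what makes such methods fast. A plain fuzzy c-means run on a histogram illustrates it; the robust kernel/median weighting that defines MRFCM itself is not reproduced here.

```python
import numpy as np

def fcm_histogram(levels, counts, c=2, m=2.0, iters=50):
    """Standard fuzzy c-means on a gray-level histogram: each level is one
    sample weighted by its count, so cost is O(levels) per iteration
    instead of O(pixels). Returns the sorted cluster centers."""
    # Deterministic init: centers spread across the intensity range.
    centers = np.linspace(levels.min(), levels.max(), c + 2)[1:-1].astype(float)
    for _ in range(iters):
        d = np.abs(levels[None, :] - centers[:, None]) + 1e-12
        u = d ** (-2 / (m - 1))          # fuzzy memberships, un-normalized
        u /= u.sum(axis=0)               # normalize over clusters
        w = (u ** m) * counts            # histogram-weighted memberships
        centers = (w * levels).sum(axis=1) / w.sum(axis=1)
    return np.sort(centers)

# Bimodal synthetic histogram with mass concentrated near levels 60 and 190.
levels = np.arange(256)
counts = (np.exp(-0.5 * ((levels - 60) / 10) ** 2)
          + np.exp(-0.5 * ((levels - 190) / 12) ** 2))
centers = fcm_histogram(levels, counts)   # centers land near the two modes
```

MRFCM additionally replaces the raw level of each pixel with a median-based, kernel-space robust estimate before this clustering step, which is where its noise immunity comes from.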
7
Ju M, Choi Y, Seo J, Sa J, Lee S, Chung Y, Park D. A Kinect-Based Segmentation of Touching-Pigs for Real-Time Monitoring. Sensors (Basel) 2018; 18:s18061746. [PMID: 29843479] [PMCID: PMC6021839] [DOI: 10.3390/s18061746]
Abstract
Segmenting touching pigs in real time is an important issue for surveillance cameras intended for 24-h tracking of individual pigs, yet methods to do so have not been reported. We focus in particular on the segmentation of touching pigs in a crowded pig room with low-contrast images obtained using a Kinect depth sensor. We reduce the execution time by combining object detection techniques based on a convolutional neural network (CNN) with image processing techniques instead of applying time-consuming operations, such as optimization-based segmentation. We first apply the fastest CNN-based object detection technique (You Only Look Once, YOLO) to solve the separation problem for touching pigs. If the quality of the YOLO output is unsatisfactory, we then try to find a possible boundary line between the touching pigs by analyzing their shape. Our experimental results show that this method separates touching pigs effectively in terms of both accuracy (91.96%) and execution time (real-time execution), even with low-contrast images obtained using a Kinect depth sensor.
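The two-stage scheme above needs a cheap gate deciding when the YOLO output is "unsatisfactory" and the slower shape analysis should take over. One plausible gate, shown below as a hypothetical sketch (the criterion and threshold are our illustration, not the paper's actual quality test), is pairwise box overlap: heavily overlapping detections suggest the detector has not cleanly separated the touching animals.

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union else 0.0

def needs_shape_analysis(boxes, thr=0.3):
    """Hypothetical gate: if any pair of detected pig boxes overlaps beyond
    `thr`, fall back to the boundary-line shape analysis."""
    return any(iou(boxes[i], boxes[j]) > thr
               for i in range(len(boxes)) for j in range(i + 1, len(boxes)))

print(needs_shape_analysis([(0, 0, 10, 10), (1, 1, 11, 11)]))   # True: overlap
print(needs_shape_analysis([(0, 0, 10, 10), (20, 0, 30, 10)]))  # False: disjoint
```

Keeping the gate this cheap preserves the real-time budget: the expensive boundary search runs only on the frames that actually need it.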
Affiliation(s)
- Miso Ju
- Department of Computer Convergence Software, Korea University, Sejong City 30019, Korea.
- Younchang Choi
- Department of Computer Convergence Software, Korea University, Sejong City 30019, Korea.
- Jihyun Seo
- Department of Computer Convergence Software, Korea University, Sejong City 30019, Korea.
- Jaewon Sa
- Department of Computer Convergence Software, Korea University, Sejong City 30019, Korea.
- Sungju Lee
- Department of Computer Convergence Software, Korea University, Sejong City 30019, Korea.
- Yongwha Chung
- Department of Computer Convergence Software, Korea University, Sejong City 30019, Korea.
- Daihee Park
- Department of Computer Convergence Software, Korea University, Sejong City 30019, Korea.