1. Keles A, Ozisik PA, Algin O, Celebi FV, Bendechache M. Decoding pulsatile patterns of cerebrospinal fluid dynamics through enhancing interpretability in machine learning. Sci Rep 2024;14:17854. PMID: 39090141; PMCID: PMC11294568; DOI: 10.1038/s41598-024-67928-4.
Abstract
Analyses of the complex behavior of cerebrospinal fluid (CSF) have become increasingly important in disease diagnosis. Changes in the phase-contrast magnetic resonance imaging (PC-MRI) signal produced by flowing CSF are represented as a set of velocity-encoded images or maps, which can be treated as signal data in the context of medical imaging, enabling the evaluation of pulsatile patterns throughout a cardiac cycle. However, automatic segmentation of the CSF region in a PC-MRI image is challenging, and applying an explainable machine learning (ML) method with pulsatile data as a feature remains unexplored. This paper presents lightweight ML algorithms that perform CSF lumen segmentation in the spine, using sets of velocity-encoded images or maps as features. The dataset comprises 57 PC-MRI slabs acquired on a 3T MRI scanner from control and idiopathic scoliosis participants. The ML models are trained on 2176 time-series images. Because acquisitions cover cardiac periods with different image (frame) counts, the series are interpolated in the preprocessing step to produce features of equal size. Fivefold cross-validation is used to estimate the performance of the ML models. Additionally, the study focuses on enhancing the interpretability of the highest-accuracy eXtreme Gradient Boosting (XGB) model by applying the SHapley Additive exPlanations (SHAP) technique. The XGB algorithm achieved the best results, with average fivefold precision of 0.99, recall of 0.95, and F1 score of 0.97. Using SHAP, we evaluated each pulsatile feature's contribution to the predictions, offering a deeper understanding of the model's behavior in distinguishing CSF lumen pixels. Introducing a novel approach in the field, the developed ML models offer insight into feature extraction and selection from PC-MRI pulsatile data. Moreover, the explained ML model offers novel and valuable insights to domain experts, contributing to an enhanced scholarly understanding of CSF dynamics.
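The SHAP analysis above is grounded in Shapley values from cooperative game theory: a feature's attribution is its average marginal contribution over all feature orderings. A minimal, self-contained sketch (not the paper's code; the feature names and additive toy model are invented for illustration):

```python
from itertools import permutations

# Hypothetical additive model: each "pulsatile feature" adds a fixed amount
# to the prediction. For an additive model, the exact Shapley value of a
# feature equals its own contribution, which makes the result easy to check.
WEIGHTS = {"peak_velocity": 0.5, "flow_area": 0.3, "phase_lag": 0.2}

def model(subset):
    """Value function: prediction using only the features in `subset`."""
    return sum(WEIGHTS[f] for f in subset)

def shapley_values(features):
    """Exact Shapley values by averaging marginal contributions over all
    feature orderings (tractable only for a handful of features)."""
    phi = dict.fromkeys(features, 0.0)
    orderings = list(permutations(features))
    for order in orderings:
        seen = []
        for f in order:
            phi[f] += model(seen + [f]) - model(seen)
            seen.append(f)
    return {f: total / len(orderings) for f, total in phi.items()}

phi = shapley_values(list(WEIGHTS))
```

SHAP libraries compute the same quantity efficiently for tree ensembles such as XGBoost rather than enumerating orderings.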
Affiliation(s)
- Ayse Keles
  - Department of Computer Engineering, Faculty of Engineering and Natural Sciences, Ankara Medipol University, Ankara, Turkey
- Pinar Akdemir Ozisik
  - Department of Neurosurgery, School of Medicine, Ankara Yildirim Beyazit University, Ankara, Turkey
  - Ankara City Hospital, Orthopedics and Neurology Tower, Bilkent, 06800, Ankara, Turkey
- Oktay Algin
  - Interventional MR Clinical R&D Institute, Ankara University, Ankara, Turkey
  - National MR Research Center (UMRAM), Bilkent University, Ankara, Turkey
  - Radiology Department, Medical Faculty, Ankara University, Ankara, Turkey
- Fatih Vehbi Celebi
  - Department of Computer Engineering, Faculty of Engineering and Natural Sciences, Ankara Yildirim Beyazit University, 06010, Ayvalı, Keçiören, Ankara, Turkey
- Malika Bendechache
  - Lero and ADAPT Research Centres, School of Computer Science, University of Galway, Galway, Ireland
2. Napravnik M, Hržić F, Tschauner S, Štajduhar I. Building RadiologyNET: an unsupervised approach to annotating a large-scale multimodal medical database. BioData Min 2024;17:22. PMID: 38997749; PMCID: PMC11245804; DOI: 10.1186/s13040-024-00373-1.
Abstract
BACKGROUND The use of machine learning in medical diagnosis and treatment has grown significantly in recent years with the development of computer-aided diagnosis systems, often based on annotated medical radiology images. However, the lack of large annotated image datasets remains a major obstacle, as the annotation process is time-consuming and costly. This study aims to overcome this challenge by proposing an automated method for annotating a large database of medical radiology images based on their semantic similarity. RESULTS An automated, unsupervised approach is used to create a large annotated dataset of medical radiology images originating from the Clinical Hospital Centre Rijeka, Croatia. The pipeline is built by data-mining three different types of medical data: images, DICOM metadata and narrative diagnoses. The optimal feature extractors are then integrated into a multimodal representation, which is then clustered to create an automated pipeline for labelling a precursor dataset of 1,337,926 medical images into 50 clusters of visually similar images. The quality of the clusters is assessed by examining their homogeneity and mutual information, taking into account the anatomical region and modality representation. CONCLUSIONS The results indicate that fusing the embeddings of all three data sources together provides the best results for the task of unsupervised clustering of large-scale medical data and leads to the most concise clusters. Hence, this work marks the initial step towards building a much larger and more fine-grained annotated dataset of medical radiology images.
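The fusion step described above — embedding each modality, concatenating, and clustering — can be sketched in a few lines. The data, dimensions, and two-cluster k-means below are invented stand-ins, not the RadiologyNET pipeline:

```python
import numpy as np

rng = np.random.default_rng(3)

def normalize(e):
    # z-score each modality so no single source dominates the fused vector
    return (e - e.mean(axis=0)) / e.std(axis=0)

# Two hypothetical modalities (e.g. image features and metadata features)
# for 20 "images": the first 10 from one group, the last 10 from another.
img = np.vstack([rng.normal(0, 1, (10, 4)), rng.normal(5, 1, (10, 4))])
meta = np.vstack([rng.normal(0, 1, (10, 3)), rng.normal(5, 1, (10, 3))])

fused = np.hstack([normalize(img), normalize(meta)])

# Tiny k-means (k=2) on the fused embedding
centers = fused[[0, -1]].copy()
for _ in range(10):
    labels = np.argmin(((fused[:, None] - centers[None]) ** 2).sum(-1), axis=1)
    centers = np.array([fused[labels == k].mean(axis=0) for k in (0, 1)])
```

Z-scoring each modality before concatenation keeps any single source from dominating the fused distance metric.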
Affiliation(s)
- Mateja Napravnik
  - Faculty of Engineering, University of Rijeka, Vukovarska 58, Rijeka, 51000, Croatia
- Franko Hržić
  - Faculty of Engineering, University of Rijeka, Vukovarska 58, Rijeka, 51000, Croatia
  - Center for Artificial Intelligence and Cybersecurity, Radmile Matejcic 2, Rijeka, 51000, Croatia
- Sebastian Tschauner
  - Division of Pediatric Radiology, Department of Radiology, Medical University of Graz, Neue Stiftingtalstraße 6, Graz, 8010, Austria
- Ivan Štajduhar
  - Faculty of Engineering, University of Rijeka, Vukovarska 58, Rijeka, 51000, Croatia
  - Center for Artificial Intelligence and Cybersecurity, Radmile Matejcic 2, Rijeka, 51000, Croatia
3. Alsaleh AM, Albalawi E, Algosaibi A, Albakheet SS, Khan SB. Few-Shot Learning for Medical Image Segmentation Using 3D U-Net and Model-Agnostic Meta-Learning (MAML). Diagnostics (Basel) 2024;14:1213. PMID: 38928629; PMCID: PMC11202447; DOI: 10.3390/diagnostics14121213.
Abstract
Deep learning has attained state-of-the-art results in general image segmentation problems; however, it requires a substantial number of annotated images to achieve the desired outcomes. In the medical field, the availability of annotated images is often limited. To address this challenge, few-shot learning techniques have been successfully adapted to rapidly generalize to new tasks with only a few samples, leveraging prior knowledge. In this paper, we employ a gradient-based method known as Model-Agnostic Meta-Learning (MAML) for medical image segmentation. MAML is a meta-learning algorithm that quickly adapts to new tasks by updating a model's parameters based on a limited set of training samples. Additionally, we use an enhanced 3D U-Net as the foundational network for our models. The enhanced 3D U-Net is a convolutional neural network specifically designed for medical image segmentation. We evaluate our approach on the TotalSegmentator dataset, considering a few annotated images for four tasks: liver, spleen, right kidney, and left kidney. The results demonstrate that our approach facilitates rapid adaptation to new tasks using only a few annotated images. In 10-shot settings, our approach achieved mean Dice coefficients of 93.70%, 85.98%, 81.20%, and 89.58% for liver, spleen, right kidney, and left kidney segmentation, respectively. In five-shot settings, the approach attained mean Dice coefficients of 90.27%, 83.89%, 77.53%, and 87.01% for liver, spleen, right kidney, and left kidney segmentation, respectively. Finally, we assess the effectiveness of our proposed approach on a dataset collected from a local hospital. Employing five-shot settings, we achieve mean Dice coefficients of 90.62%, 79.86%, 79.87%, and 78.21% for liver, spleen, right kidney, and left kidney segmentation, respectively.
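MAML's two-level structure — an inner adaptation step per task and an outer update of the shared initialization — is easiest to see on a toy problem. The sketch below applies first-order MAML (a common simplification that drops second-order terms) to one-parameter regression tasks y = a·x; the tasks, learning rates, and step counts are invented, and the paper's actual setup uses a 3D U-Net:

```python
# First-order MAML on toy one-parameter regression tasks y = a * x.
# Hypothetical stand-in for the paper's 3D U-Net + MAML setup.
XS = [1.0, 2.0, 3.0]
TASKS = [2.0, 4.0]          # per-task true slopes

def grad(theta, a):
    # d/dtheta of the mean squared error between theta*x and a*x
    return sum(2 * x * (theta * x - a * x) for x in XS) / len(XS)

def fomaml(theta, inner_lr=0.05, outer_lr=0.05, steps=50):
    for _ in range(steps):
        meta_grad = 0.0
        for a in TASKS:
            adapted = theta - inner_lr * grad(theta, a)   # inner adaptation
            meta_grad += grad(adapted, a)                 # first-order outer grad
        theta -= outer_lr * meta_grad                     # meta update
    return theta

theta_meta = fomaml(0.0)
```

Meta-training pulls the initialization toward a point from which one gradient step adapts well to every task (here, near the mean slope of 3).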
Affiliation(s)
- Aqilah M. Alsaleh
  - College of Computer Science and Information Technology, King Faisal University, Al Hofuf 400-31982, AlAhsa, Saudi Arabia
  - Department of Information Technology, AlAhsa Health Cluster, Al Hofuf 3158-36421, AlAhsa, Saudi Arabia
- Eid Albalawi
  - College of Computer Science and Information Technology, King Faisal University, Al Hofuf 400-31982, AlAhsa, Saudi Arabia
- Abdulelah Algosaibi
  - College of Computer Science and Information Technology, King Faisal University, Al Hofuf 400-31982, AlAhsa, Saudi Arabia
- Salman S. Albakheet
  - Department of Radiology, King Faisal General Hospital, Al Hofuf 36361, AlAhsa, Saudi Arabia
- Surbhi Bhatia Khan
  - Department of Data Science, School of Science Engineering and Environment, University of Salford, Manchester M5 4WT, UK
  - Department of Electrical and Computer Engineering, Lebanese American University, Byblos P.O. Box 13-5053, Lebanon
4. Haughton DR, Barr EC, Gupta AK, Toohey WC, Kania AM. Investigating Fryette's mechanics in computed tomography scans: an analysis of vertebrae spinal physiology using open-sourced datasets and three-dimensional vertebral orientation. J Osteopath Med 2024:jom-2023-0088. PMID: 38837124; DOI: 10.1515/jom-2023-0088.
Abstract
CONTEXT Fryette's mechanics is taught as a simplistic model of coupled vertebral movement, fundamental in osteopathic practice. This study seeks to better understand the likelihood of Fryette's model by calculating vertebral orientation in computed tomography (CT) scans. Given previous findings of low angular coupled movements during overall spinal motion, static calculations provide a unique perspective on the likelihood of Fryette's mechanics. OBJECTIVES This analysis aims to evaluate the efficacy of Fryette's principles in predicting vertebral positioning in CT scans by comparing three-dimensional (3D) vertebral orientation to the movements described by Fryette. METHODS 3D models of 953 thoracic and lumbar vertebrae were obtained from 82 CT scans in the VerSe'20 open-source dataset. A stepwise algorithm generated three unique symmetry planes for each vertebra, yielding 3D angular orientation with respect to the vertebral level below. A total of 422 vertebrae were omitted from the analysis due to pathologies significant enough to affect their motion, inaccurate symmetry-plane calculations, or the absence of a vertebral level below. The remaining 531 vertebrae were analyzed to compare quantitative coupled positioning against the coupled spinal movements expected under Fryette's mechanics. One-sample proportional z-scoring was applied across all vertebral levels with α = 0.05 and a null hypothesis that Fryette-consistent positioning occurs by chance 50% of the time. Further z-scoring was then performed for each individual level to determine which levels were statistically significant. RESULTS Data from the VerSe'20 dataset revealed that 56.9% of successfully analyzed vertebrae demonstrated positions compatible with Fryette's mechanics (p=0.0014, power=89%). The 302 vertebral levels that did display coupled positioning consisted of Type I (166 vertebrae) and Type II (136 vertebrae) positions compatible with Fryette's mechanics. Levels that demonstrated statistical significance were T5 (p=0, power=99%), T6 (p=0.0023, power=77%), T7 (p=0.041, power=46%), and T10 (p=0.017, power=60%). CONCLUSIONS Our analysis suggests that the static positions of vertebrae in CT scans may align with Fryette's descriptions, although not very often. Notably, vertebral levels T5-T7 and T10 exhibit strong evidence of their static positions aligning with expected movements, warranting further investigation into the Fryette phenomenon at these levels. Future studies should explore the dynamic implications of these findings to enhance our understanding of spinal biomechanics.
Affiliation(s)
- Dillon R Haughton
  - Burrell College of Osteopathic Medicine, Las Cruces, NM, USA
- Emily C Barr
  - Burrell College of Osteopathic Medicine, Las Cruces, NM, USA
- Akhil K Gupta
  - Burrell College of Osteopathic Medicine, Las Cruces, NM, USA
- Walker C Toohey
  - Burrell College of Osteopathic Medicine, Las Cruces, NM, USA
- Adrienne M Kania
  - Department of Clinical Medicine, Burrell College of Osteopathic Medicine, Las Cruces, NM, USA
5. Vahdati S, Khosravi B, Mahmoudi E, Zhang K, Rouzrokh P, Faghani S, Moassefi M, Tahmasebi A, Andriole KP, Chang P, Farahani K, Flores MG, Folio L, Houshmand S, Giger ML, Gichoya JW, Erickson BJ. A Guideline for Open-Source Tools to Make Medical Imaging Data Ready for Artificial Intelligence Applications: A Society of Imaging Informatics in Medicine (SIIM) Survey. J Imaging Inform Med 2024. PMID: 38558368; DOI: 10.1007/s10278-024-01083-0.
Abstract
In recent years, the role of Artificial Intelligence (AI) in medical imaging has become increasingly prominent: in 2023, the majority of AI applications approved by the FDA were in imaging and radiology. The surge in AI model development to tackle clinical challenges underscores the necessity of preparing high-quality medical imaging data. Proper data preparation is crucial, as it fosters the creation of standardized and reproducible AI models while minimizing biases. Data curation transforms raw data into a valuable, organized, and dependable resource and is fundamental to the success of machine learning and analytical projects. Given the plethora of tools available for the different stages of data curation, it is crucial to stay informed about the most relevant tools within specific research areas. In the current work, we propose a descriptive outline of the steps of data curation and furnish, for each stage, compilations of tools collected from a survey of members of the Society of Imaging Informatics in Medicine (SIIM). This collection has the potential to enhance the decision-making process for researchers as they select the most appropriate tool for their specific tasks.
Affiliation(s)
- Sanaz Vahdati
  - Artificial Intelligence Laboratory, Department of Radiology, Mayo Clinic, 200 1st Street, SW, Rochester, MN, 55905, USA
- Bardia Khosravi
  - Artificial Intelligence Laboratory, Department of Radiology, Mayo Clinic, 200 1st Street, SW, Rochester, MN, 55905, USA
- Elham Mahmoudi
  - Artificial Intelligence Laboratory, Department of Radiology, Mayo Clinic, 200 1st Street, SW, Rochester, MN, 55905, USA
- Kuan Zhang
  - Artificial Intelligence Laboratory, Department of Radiology, Mayo Clinic, 200 1st Street, SW, Rochester, MN, 55905, USA
- Pouria Rouzrokh
  - Artificial Intelligence Laboratory, Department of Radiology, Mayo Clinic, 200 1st Street, SW, Rochester, MN, 55905, USA
- Shahriar Faghani
  - Artificial Intelligence Laboratory, Department of Radiology, Mayo Clinic, 200 1st Street, SW, Rochester, MN, 55905, USA
- Mana Moassefi
  - Artificial Intelligence Laboratory, Department of Radiology, Mayo Clinic, 200 1st Street, SW, Rochester, MN, 55905, USA
- Aylin Tahmasebi
  - Department of Radiology, Thomas Jefferson University, Philadelphia, PA, USA
- Katherine P Andriole
  - Department of Radiology, Brigham and Women's Hospital, Harvard Medical School, Boston, MA, USA
- Peter Chang
  - Department of Radiological Sciences, Irvine Medical Center, University of California, Orange, CA, USA
- Keyvan Farahani
  - Center for Biomedical Informatics and Information Technology, National Cancer Institute, Bethesda, MD, USA
- Les Folio
  - Diagnostic Imaging & Interventional Radiology, Moffitt Cancer Center, Tampa, FL, USA
- Sina Houshmand
  - Department of Radiology and Biomedical Imaging, University of California San Francisco, San Francisco, CA, USA
- Maryellen L Giger
  - Department of Radiology, The University of Chicago, Chicago, IL, USA
- Judy W Gichoya
  - Department of Radiology, Emory University School of Medicine, Atlanta, GA, USA
- Bradley J Erickson
  - Artificial Intelligence Laboratory, Department of Radiology, Mayo Clinic, 200 1st Street, SW, Rochester, MN, 55905, USA
6. Basodi S, Raja R, Gazula H, Romero JT, Panta S, Maullin-Sapey T, Nichols TE, Calhoun VD. Decentralized Mixed Effects Modeling in COINSTAC. Neuroinformatics 2024;22:163-175. PMID: 38424371; DOI: 10.1007/s12021-024-09657-7.
Abstract
Performing group analysis on magnetic resonance imaging (MRI) data with linear mixed-effects (LME) models is challenging due to its large dimensionality and inherent multi-level covariance structure. In addition, as large-scale collaborative projects become commonplace in neuroimaging, data must increasingly be stored and analyzed at different locations. In such settings, substantial overhead can occur in terms of data transfer and coordination between participating research groups. In some cases, data cannot be pooled together due to privacy or regulatory concerns. In this work, we propose a decentralized LME model to perform a large-scale analysis of data from different collaborations without data pooling. This method is efficient as it overcomes the hurdles of data sharing and has lower bandwidth and memory requirements for analysis than the centralized modeling approach. We evaluate our model using features extracted from structural magnetic resonance imaging (sMRI) data. Results highlight gray matter reductions in the temporal lobe/insula and medial frontal regions in schizophrenia, consistent with prior studies. Our analysis also demonstrates that decentralized LME models achieve similar performance compared to models trained with all the data in one location. We also implement the decentralized LME approach in COINSTAC, an open-source, decentralized platform for federating neuroimaging analysis, providing an easy-to-use tool for dissemination to the neuroimaging community.
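The decentralized idea — sites exchange aggregate statistics, never raw data — can be illustrated with ordinary least squares, where summing per-site X^T X and X^T y reproduces the pooled fit exactly. This is a simplified stand-in for the paper's mixed-effects model, with simulated data (not COINSTAC code):

```python
import numpy as np

rng = np.random.default_rng(0)

def site_stats(X, y):
    # Each site shares only the aggregates X^T X and X^T y, never raw data.
    return X.T @ X, X.T @ y

# Two simulated "sites" drawn from the same linear model y = X @ [1, 2]
beta_true = np.array([1.0, 2.0])
sites = []
for _ in range(2):
    X = rng.normal(size=(50, 2))
    y = X @ beta_true + rng.normal(scale=0.1, size=50)
    sites.append((X, y))

# Decentralized fit: sum the per-site sufficient statistics, then solve
xtx = sum(site_stats(X, y)[0] for X, y in sites)
xty = sum(site_stats(X, y)[1] for X, y in sites)
beta_decentralized = np.linalg.solve(xtx, xty)

# Pooled fit for comparison
X_all = np.vstack([X for X, _ in sites])
y_all = np.concatenate([y for _, y in sites])
beta_pooled, *_ = np.linalg.lstsq(X_all, y_all, rcond=None)
```

The bandwidth saving is visible even here: each site transmits only a p×p matrix and a length-p vector, regardless of how many scans it holds.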
Affiliation(s)
- Sunitha Basodi
  - Tri-institutional Center for Translational Research in Neuroimaging and Data Science, Georgia State University, Georgia Institute of Technology, Emory University, Atlanta, GA, USA
- Rajikha Raja
  - St. Jude Children's Research Hospital, Memphis, TN, USA
- Harshvardhan Gazula
  - Athinoula A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital and Harvard Medical School, Boston, MA, USA
- Javier Tomas Romero
  - Tri-institutional Center for Translational Research in Neuroimaging and Data Science, Georgia State University, Georgia Institute of Technology, Emory University, Atlanta, GA, USA
- Sandeep Panta
  - Tri-institutional Center for Translational Research in Neuroimaging and Data Science, Georgia State University, Georgia Institute of Technology, Emory University, Atlanta, GA, USA
- Thomas E Nichols
  - Nuffield Department of Population Health, University of Oxford, Oxford, UK
  - Nuffield Department of Clinical Neurosciences, University of Oxford, Oxford, UK
- Vince D Calhoun
  - Tri-institutional Center for Translational Research in Neuroimaging and Data Science, Georgia State University, Georgia Institute of Technology, Emory University, Atlanta, GA, USA
7. Behnam F, Khajouei R, Ahmadian L. The retention duration of digital images in picture archiving and communication systems. Heliyon 2024;10:e27847. PMID: 38524536; PMCID: PMC10958697; DOI: 10.1016/j.heliyon.2024.e27847.
Abstract
Introduction Every year, a large number of medical images such as MRIs, CT scans, and radiographs are produced in hospitals at considerable cost. A picture archiving and communication system (PACS) is an integrated image-management system for maintaining and storing digital images. The objective of this study was to determine the storage duration of digital images in PACS. Methods This was a scoping review in which we searched the PubMed and Embase databases using a combination of terms related to radiography, storage, and duration. The reference lists of included articles were checked to identify other relevant articles. Moreover, we searched Google to retrieve relevant gray literature and other information sources, including guidelines. The selection process was carried out in three stages and reported based on the PRISMA flowchart, and the data were extracted using a data-collection form. Results Based on the database search, 2867 articles were identified, of which 13 were eligible for inclusion. Searching the gray literature identified 7 relevant sources. The results showed that, depending on institutional plans and regulations, countries have different storage policies. In general, retention periods of 6 to 240 months for short-term storage and 0 to 240 months for long-term storage were reported. Conclusion Given financial constraints and storage-space requirements, healthcare organizations can address this issue by drafting guidelines on the appropriate storage duration for medical images. The findings of this study can assist healthcare authorities and healthcare centers in employing PACS to manage and minimize storage space for medical images, thereby reducing storage costs.
Affiliation(s)
- Farzaneh Behnam
  - Medical Informatics Research Center, Institute for Futures Studies in Health, Kerman University of Medical Sciences, Kerman, Iran
- Reza Khajouei
  - Department of Health Information Sciences, Faculty of Management and Medical Information Sciences, Kerman University of Medical Sciences, Kerman, Iran
- Leila Ahmadian
  - Medical Informatics Research Center, Institute for Futures Studies in Health, Kerman University of Medical Sciences, Kerman, Iran
8. Elhadad A, Jamjoom M, Abulkasim H. Reduction of NIFTI files storage and compression to facilitate telemedicine services based on quantization hiding of downsampling approach. Sci Rep 2024;14:5168. PMID: 38431641; PMCID: PMC10908832; DOI: 10.1038/s41598-024-54820-4.
Abstract
Magnetic resonance imaging (MRI) is a medical imaging technique used to create comprehensive images of the tissues and organs of the body. This study presents an advanced approach for storing and compressing Neuroimaging Informatics Technology Initiative (NIfTI) files, a standard format in MRI, designed to enhance telemedicine services by facilitating efficient, high-quality communication between healthcare practitioners and patients. The proposed downsampling approach begins by opening the NIfTI file as volumetric data and splitting it into a series of slice images. The quantization hiding technique is then applied to each pair of consecutive slice images to generate a stego slice of the same size, through three major steps: normalization, microblock generation, and discrete cosine transformation. Finally, the resulting stego slice images are assembled to produce the final NIfTI file as volumetric data. The upsampling process, designed to be completely blind, reverses the downsampling steps to reconstruct the subsequent image slice accurately. The efficacy of the proposed method was evaluated on an MRI dataset using peak signal-to-noise ratio, signal-to-noise ratio, structural similarity index, and entropy as key performance metrics. The results demonstrate that the proposed approach not only significantly reduces file sizes but also maintains high image quality.
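The discrete cosine transformation and quantization steps mentioned above are standard building blocks. A hedged numpy sketch of blockwise DCT quantization follows (the microblock size, quantization step, and random test block are invented; the paper's quantization-hiding scheme embeds data on top of this):

```python
import numpy as np

def dct_matrix(n):
    # Orthonormal DCT-II basis matrix
    k = np.arange(n)[:, None]
    x = np.arange(n)[None, :]
    c = np.sqrt(2.0 / n) * np.cos(np.pi * (x + 0.5) * k / n)
    c[0] /= np.sqrt(2.0)
    return c

def quantize_block(block, q):
    c = dct_matrix(block.shape[0])
    coeffs = c @ block @ c.T             # forward 2D DCT
    coeffs_q = np.round(coeffs / q) * q  # uniform quantization
    return c.T @ coeffs_q @ c            # inverse 2D DCT

rng = np.random.default_rng(1)
slice_block = rng.integers(0, 256, size=(8, 8)).astype(float)
recon = quantize_block(slice_block, q=4.0)
rmse = float(np.sqrt(np.mean((slice_block - recon) ** 2)))
```

With an orthonormal DCT and quantization step q, each coefficient error is at most q/2, so the per-pixel root-mean-square reconstruction error cannot exceed q/2.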
Affiliation(s)
- Ahmed Elhadad
  - Department of Computer Science, Faculty of Computers and Information, South Valley University, Qena, Egypt
- Mona Jamjoom
  - Department of Computer Sciences, College of Computer and Information Sciences, Princess Nourah Bint Abdulrahman University, Riyadh, Saudi Arabia
- Hussein Abulkasim
  - Department of Mathematics and Computer Science, Faculty of Science, New Valley University, El-Kharja, Egypt
  - College of Engineering and Technology, University of Science and Technology of Fujairah, Fujairah, United Arab Emirates
9. Galbusera F, Cina A. Image annotation and curation in radiology: an overview for machine learning practitioners. Eur Radiol Exp 2024;8:11. PMID: 38316659; PMCID: PMC10844188; DOI: 10.1186/s41747-023-00408-y.
Abstract
"Garbage in, garbage out" summarises well the importance of high-quality data in machine learning and artificial intelligence. All data used to train and validate models should indeed be consistent, standardised, traceable, correctly annotated, and de-identified, considering local regulations. This narrative review presents a summary of the techniques that are used to ensure that all these requirements are fulfilled, with special emphasis on radiological imaging and freely available software solutions that can be directly employed by the interested researcher. Topics discussed include key imaging concepts, such as image resolution and pixel depth; file formats for medical image data storage; free software solutions for medical image processing; anonymisation and pseudonymisation to protect patient privacy, including compliance with regulations such as the Regulation (EU) 2016/679 "General Data Protection Regulation" (GDPR) and the 1996 United States Act of Congress "Health Insurance Portability and Accountability Act" (HIPAA); methods to eliminate patient-identifying features within images, like facial structures; free and commercial tools for image annotation; and techniques for data harmonisation and normalisation.
Relevance statement: This review provides an overview of the methods and tools that can be used to ensure high-quality data for machine learning and artificial intelligence applications in radiology.
Key points:
- High-quality datasets are essential for reliable artificial intelligence algorithms in medical imaging.
- Software tools like ImageJ and 3D Slicer aid in processing medical images for AI research.
- Anonymisation techniques protect patient privacy during dataset preparation.
- Machine learning models can accelerate image annotation, enhancing efficiency and accuracy.
- Data curation ensures dataset integrity, compliance, and quality for artificial intelligence development.
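Pseudonymisation, as distinct from plain anonymisation, replaces identifiers with stable surrogates so records from the same patient remain linkable. A minimal stdlib sketch, assuming metadata held in a dict; the tag names, salt handling, and truncation length are illustrative only, not a DICOM- or GDPR-compliant recipe:

```python
import hashlib

# Project-specific secret: anyone holding it can re-link pseudonyms to IDs,
# so in practice it must be stored separately under access control.
SALT = b"project-specific-secret"
DIRECT_IDENTIFIERS = {"PatientName", "PatientID", "PatientBirthDate"}

def pseudonymize(record):
    clean = {}
    for tag, value in record.items():
        if tag == "PatientID":
            # Salted hash: the same patient always maps to the same pseudonym
            digest = hashlib.sha256(SALT + value.encode()).hexdigest()
            clean[tag] = digest[:12]
        elif tag in DIRECT_IDENTIFIERS:
            clean[tag] = ""          # blank out other direct identifiers
        else:
            clean[tag] = value       # keep technical metadata untouched
    return clean

record = {"PatientName": "Doe^Jane", "PatientID": "12345",
          "PatientBirthDate": "19700101", "Modality": "MR"}
anon = pseudonymize(record)
```

Keeping technical tags (modality, acquisition parameters) intact while blanking direct identifiers is what lets the de-identified data remain useful for model training.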
Affiliation(s)
- Fabio Galbusera
  - Spine Center, Schulthess Clinic, Lengghalde 2, Zurich, 8008, Switzerland
- Andrea Cina
  - Spine Center, Schulthess Clinic, Lengghalde 2, Zurich, 8008, Switzerland
  - ETH Zürich, Department of Health Sciences and Technologies, Zurich, Switzerland
10. DeBay DR, Brewer KD. Combined PET/MR: Where Anatomical Imaging Meets Cellular Function. Methods Mol Biol 2024;2729:391-408. PMID: 38006508; DOI: 10.1007/978-1-0716-3499-8_22.
Abstract
Recent technological advances in medical imaging have allowed for both sequential and simultaneous acquisition of magnetic resonance imaging (MRI) and positron emission tomography (PET) data. Simultaneous PET/MRI offers distinct advantages by efficiently capturing functional and metabolic processes with co-localized, high-resolution anatomical images while minimizing time and movement. We will describe some of the technical and logistic requirements for optimizing sequential and simultaneous PET/MRI in the preclinical research setting.
Affiliation(s)
- Drew R DeBay
  - Biomedical Translational Imaging Centre (BIOTIC), Halifax, Canada
  - Department of Physics and Atmospheric Science, Dalhousie University, Halifax, Canada
- Kimberly D Brewer
  - School of Biomedical Engineering, Dalhousie University, Halifax, NS, Canada
  - Department of Physics and Atmospheric Science, Dalhousie University, Halifax, Canada
  - Diagnostic Radiology, Dalhousie University, Halifax, Canada
11. Akkaya UM, Kalkan H. A New Approach for Multimodal Usage of Gene Expression and Its Image Representation for the Detection of Alzheimer's Disease. Biomolecules 2023;13:1563. PMID: 38002245; PMCID: PMC10669658; DOI: 10.3390/biom13111563.
Abstract
Alzheimer's disease (AD) is a complex neurodegenerative disorder, and its multifaceted nature requires innovative approaches that integrate various data modalities to enhance its detection. However, because collecting multimodal data is costly, multimodal datasets suffer from insufficient sample sizes. To mitigate the impact of a limited sample size on classification, we introduce a novel deep learning method (One2MFusion) that combines gene expression data with their corresponding 2D representation as a new modality. The gene vectors are first mapped to a discriminative 2D image for training a convolutional neural network (CNN). In parallel, the gene sequences are used to train a feedforward neural network (FNN); the outputs of the FNN and CNN are merged, and a joint deep network is trained for the binary classification of AD versus normal control (NC) and mild cognitive impairment (MCI) versus NC samples. The fusion of the gene expression data and the gene-derived 2D image increased the area under the curve from 0.86 (obtained using the 2D image alone) to 0.91 for AD vs. NC and from 0.76 (obtained using the 2D image alone) to 0.88 for MCI vs. NC. The results show that representing gene expression data in another discriminative form increases classification accuracy when fused with the base data.
Affiliation(s)
- Habil Kalkan
- Department of Computer Engineering, Gebze Technical University, 41400 Gebze, Turkey;
12
Larobina M. Thirty Years of the DICOM Standard. Tomography 2023; 9:1829-1838. [PMID: 37888737 PMCID: PMC10610864 DOI: 10.3390/tomography9050145] [Citation(s) in RCA: 2] [Impact Index Per Article: 2.0] [Received: 08/12/2023] [Revised: 09/21/2023] [Accepted: 09/27/2023] [Indexed: 10/28/2023]
Abstract
Digital Imaging and Communications in Medicine (DICOM) is an international standard that defines a format for storing medical images and a protocol to enable and facilitate data communication among medical imaging systems. The DICOM standard has been instrumental in transforming the medical imaging world over the last three decades, and its adoption has been a significant experience for manufacturers, healthcare users, and research scientists. In this review, thirty years after the standard's introduction, we discuss the innovations, advantages, and limitations of adopting DICOM, and its possible future directions.
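The file format the DICOM standard defines can be recognized programmatically: per Part 10 of the standard, a file starts with a 128-byte preamble followed by the 4-byte magic string "DICM". A minimal stdlib-only Python check (the function name is our own):

```python
def looks_like_dicom(data: bytes) -> bool:
    """Check the DICOM Part 10 file header: a 128-byte preamble
    followed by the 4-byte magic string 'DICM'."""
    return len(data) >= 132 and data[128:132] == b"DICM"

# A minimal synthetic header: 128 zero bytes, then the magic string.
header = bytes(128) + b"DICM"
print(looks_like_dicom(header))        # True
print(looks_like_dicom(b"not dicom"))  # False
```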
Affiliation(s)
- Michele Larobina
- Istituto di Biostrutture e Bioimmagini, Consiglio Nazionale delle Ricerche (CNR), I-80145 Napoli, Italy
13
Akindele RG, Yu M, Kanda PS, Owoola EO, Aribilola I. Denoising of Nifti (MRI) Images with a Regularized Neighborhood Pixel Similarity Wavelet Algorithm. Sensors (Basel, Switzerland) 2023; 23:7780. [PMID: 37765837 PMCID: PMC10536345 DOI: 10.3390/s23187780] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Received: 06/24/2023] [Revised: 07/26/2023] [Accepted: 09/04/2023] [Indexed: 09/29/2023]
Abstract
The recovery of semantics from corrupted images is a significant challenge in image processing. Noise can obscure features, interfere with accurate analysis, and bias results. To address this issue, the Regularized Neighborhood Pixel Similarity Wavelet algorithm (PixSimWave) was developed for denoising NIfTI-format magnetic resonance (MR) images. The PixSimWave algorithm uses regularized pixel-similarity detection to improve the accuracy of noise reduction, creating patches to analyze pixel intensities and locate matching pixels, together with adaptive neighborhood filtering that estimates noisy pixel values by assigning each pixel a weight based on its similarity. The wavelet transform decomposes the image into scales and orientations, yielding a sparse image representation on which a soft threshold is applied according to similarity to the original pixels. The proposed method was evaluated on simulated and raw T1-weighted MRIs, outperforming other methods with an SSIM of 0.9908 at a low Rician noise level of 3% and 0.9881 at a high noise level of 17%. With added Gaussian noise, the method likewise improved PSNR and SSIM, outperforming other models while preserving edges and textures. In summary, the PixSimWave algorithm is a viable noise-elimination approach that employs both sparse wavelet coefficients and regularized similarity with reduced computation time, improving the accuracy of noise reduction in images.
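The soft-thresholding step this abstract mentions can be illustrated with a generic single-level Haar wavelet shrinkage in NumPy. This is a simplified sketch of wavelet soft thresholding in general, not the PixSimWave algorithm, and the function name and threshold are our own.

```python
import numpy as np

def haar_soft_denoise(signal: np.ndarray, thresh: float) -> np.ndarray:
    """One-level 1-D Haar wavelet shrinkage: transform, soft-threshold
    the detail coefficients, and invert."""
    x = signal[: len(signal) // 2 * 2].astype(float)  # ensure even length
    approx = (x[0::2] + x[1::2]) / np.sqrt(2.0)       # low-pass coefficients
    detail = (x[0::2] - x[1::2]) / np.sqrt(2.0)       # high-pass coefficients
    detail = np.sign(detail) * np.maximum(np.abs(detail) - thresh, 0.0)
    out = np.empty_like(x)
    out[0::2] = (approx + detail) / np.sqrt(2.0)      # inverse Haar transform
    out[1::2] = (approx - detail) / np.sqrt(2.0)
    return out

rng = np.random.default_rng(1)
clean = np.sin(np.linspace(0, 2 * np.pi, 64))
noisy = clean + rng.normal(scale=0.1, size=64)
denoised = haar_soft_denoise(noisy, thresh=0.2)
```

With `thresh=0.0` the transform is perfectly inverted, which is a convenient sanity check on the implementation.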
Affiliation(s)
- Romoke Grace Akindele
- School of Electronics and Information Engineering, Hebei University of Technology, Tianjin 300401, China; (P.S.K.); (E.O.O.)
- Ming Yu
- School of Electronics and Information Engineering, Hebei University of Technology, Tianjin 300401, China; (P.S.K.); (E.O.O.)
- School of Artificial Intelligence, Hebei University of Technology, Tianjin 300401, China
- Paul Shekonya Kanda
- School of Electronics and Information Engineering, Hebei University of Technology, Tianjin 300401, China; (P.S.K.); (E.O.O.)
- Eunice Oluwabunmi Owoola
- School of Electronics and Information Engineering, Hebei University of Technology, Tianjin 300401, China; (P.S.K.); (E.O.O.)
- Ifeoluwapo Aribilola
- Software Research Institute, Technological University of the Shannon, Midlands Midwest, Co. Westmeath, N37 HD68 Athlone, Ireland
14
Tu DY, Lin PC, Chou HH, Shen MR, Hsieh SY. Slice-Fusion: Reducing False Positives in Liver Tumor Detection for Mask R-CNN. IEEE/ACM Transactions on Computational Biology and Bioinformatics 2023; 20:3267-3277. [PMID: 37027274 DOI: 10.1109/tcbb.2023.3265394] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Indexed: 06/19/2023]
Abstract
Automatic liver tumor detection from computed tomography (CT) makes clinical examinations more accurate. However, deep learning-based detection algorithms are characterized by high sensitivity and low precision, which hinders diagnosis because false-positive tumors must first be identified and excluded. These false positives arise because detection models incorrectly identify partial-volume artifacts as lesions, which in turn stems from an inability to learn the perihepatic structure from a global perspective. To overcome this limitation, we propose a novel slice-fusion method that mines the global structural relationship between tissues in the target CT slice and fuses the features of adjacent slices according to the importance of those tissues. Furthermore, we design a new network, called Pinpoint-Net, based on our slice-fusion method and the Mask R-CNN detection model. We evaluated the proposed model on the Liver Tumor Segmentation Challenge (LiTS) dataset and our liver metastases dataset. Experiments demonstrated that our slice-fusion method not only enhances tumor detection by reducing the number of false-positive tumors smaller than 10 mm but also improves segmentation performance. Without bells and whistles, a single Pinpoint-Net showed outstanding performance in liver tumor detection and segmentation on the LiTS test dataset compared with other state-of-the-art models.
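One simple way to give a 2-D detector the adjacent-slice context this abstract describes is to stack each slice with its neighbours as channels. The NumPy sketch below is our own simplification (it omits the paper's importance-weighted fusion) and clamps at the volume boundaries:

```python
import numpy as np

def stack_adjacent_slices(volume: np.ndarray, idx: int) -> np.ndarray:
    """Return slice idx with its two neighbours as an (H, W, 3) array,
    clamping at the first and last slice of the (D, H, W) volume."""
    lo = max(idx - 1, 0)
    hi = min(idx + 1, volume.shape[0] - 1)
    return np.stack([volume[lo], volume[idx], volume[hi]], axis=-1)

# Toy 4-slice volume of 2x2 images; the first slice's lower neighbour is clamped.
vol = np.arange(4 * 2 * 2).reshape(4, 2, 2).astype(float)
ctx = stack_adjacent_slices(vol, 0)
print(ctx.shape)  # (2, 2, 3)
```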
15
Taira M, Mugikura S, Mori N, Hozawa A, Saito T, Nakamura T, Kiyomoto H, Kobayashi T, Ogishima S, Nagami F, Uruno A, Shimizu R, Kobayashi T, Yasuda J, Kure S, Sakurai M, Motoike IN, Kumada K, Nakaya N, Obara T, Oba K, Sekiguchi A, Thyreau B, Mutoh T, Takano Y, Abe M, Maikusa N, Tatewaki Y, Taki Y, Yaegashi N, Tomita H, Kinoshita K, Kuriyama S, Fuse N, Yamamoto M. Tohoku Medical Megabank Brain Magnetic Resonance Imaging Study: Rationale, Design, and Background. JMA J 2023; 6:246-264. [PMID: 37560377 PMCID: PMC10407421 DOI: 10.31662/jmaj.2022-0220] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Received: 12/27/2022] [Accepted: 04/24/2023] [Indexed: 08/11/2023]
Abstract
The Tohoku Medical Megabank Brain Magnetic Resonance Imaging Study (TMM Brain MRI Study) was established to collect multimodal information through neuroimaging and neuropsychological assessments to evaluate the cognitive function and mental health of residents who experienced the Great East Japan Earthquake (GEJE) and associated tsunami. The study also aimed to promote advances in personalized healthcare and medicine related to mental health and cognitive function in the general population. We recruited participants for the first (baseline) survey starting in July 2014, enrolling individuals who were participating in either the TMM Community-Based Cohort Study (TMM CommCohort Study) or the TMM Birth and Three-Generation Cohort Study (TMM BirThree Cohort Study). We collected multiple magnetic resonance imaging (MRI) sequences, including 3D T1-weighted sequences, magnetic resonance angiography (MRA), diffusion tensor imaging (DTI), pseudo-continuous arterial spin labeling (pCASL), and three-dimensional fluid-attenuated inversion recovery (FLAIR) sequences. To assess neuropsychological status, we used both questionnaire- and interview-based rating scales. The former included the Tri-axial Coping Scale, the Impact of Event Scale in Japanese, the Profile of Mood States, and the 15-item Depression, Anxiety, and Stress Scale, whereas the latter included the Japanese version of the Mini-Mental State Examination. A total of 12,164 individuals were recruited for the first (baseline) survey, including those unable to complete all assessments. In parallel, we returned the MRI results to the participants and subsequently shared the MRI data through the TMM Biobank. The second survey (first follow-up), which started in October 2019, is currently underway. In this study, we established a large and comprehensive database that includes robust neuroimaging data as well as psychological and cognitive assessment data. In combination with the genomic and omics data already contained in the TMM Biobank database, these data could provide new insights into the relationships between pathological processes and neuropsychological disorders, including age-related cognitive impairment.
Affiliation(s)
- Makiko Taira
- Tohoku Medical Megabank Organization, Tohoku University, Sendai, Japan
- Graduate School of Medicine, Tohoku University, Sendai, Japan
- Tohoku University Hospital, Tohoku University, Sendai, Japan
- Shunji Mugikura
- Tohoku Medical Megabank Organization, Tohoku University, Sendai, Japan
- Graduate School of Medicine, Tohoku University, Sendai, Japan
- Tohoku University Hospital, Tohoku University, Sendai, Japan
- Naoko Mori
- Tohoku Medical Megabank Organization, Tohoku University, Sendai, Japan
- Graduate School of Medicine, Tohoku University, Sendai, Japan
- Tohoku University Hospital, Tohoku University, Sendai, Japan
- Atsushi Hozawa
- Tohoku Medical Megabank Organization, Tohoku University, Sendai, Japan
- Tomo Saito
- Tohoku Medical Megabank Organization, Tohoku University, Sendai, Japan
- Advanced Research Center for Innovations in Next-Generation Medicine, Tohoku University, Sendai, Japan
- Tomohiro Nakamura
- Tohoku Medical Megabank Organization, Tohoku University, Sendai, Japan
- Hideyasu Kiyomoto
- Tohoku Medical Megabank Organization, Tohoku University, Sendai, Japan
- Tadao Kobayashi
- Tohoku Medical Megabank Organization, Tohoku University, Sendai, Japan
- Soichi Ogishima
- Tohoku Medical Megabank Organization, Tohoku University, Sendai, Japan
- Advanced Research Center for Innovations in Next-Generation Medicine, Tohoku University, Sendai, Japan
- Fuji Nagami
- Tohoku Medical Megabank Organization, Tohoku University, Sendai, Japan
- Graduate School of Medicine, Tohoku University, Sendai, Japan
- Akira Uruno
- Tohoku Medical Megabank Organization, Tohoku University, Sendai, Japan
- Graduate School of Medicine, Tohoku University, Sendai, Japan
- Ritsuko Shimizu
- Tohoku Medical Megabank Organization, Tohoku University, Sendai, Japan
- Graduate School of Medicine, Tohoku University, Sendai, Japan
- Tomoko Kobayashi
- Tohoku Medical Megabank Organization, Tohoku University, Sendai, Japan
- Graduate School of Medicine, Tohoku University, Sendai, Japan
- Tohoku University Hospital, Tohoku University, Sendai, Japan
- Jun Yasuda
- Tohoku Medical Megabank Organization, Tohoku University, Sendai, Japan
- Miyagi Cancer Center, Natori, Japan
- Shigeo Kure
- Tohoku Medical Megabank Organization, Tohoku University, Sendai, Japan
- Graduate School of Medicine, Tohoku University, Sendai, Japan
- Tohoku University Hospital, Tohoku University, Sendai, Japan
- Miyagi Children's Hospital, Sendai, Japan
- Miyuki Sakurai
- Tohoku Medical Megabank Organization, Tohoku University, Sendai, Japan
- Ikuko N Motoike
- Tohoku Medical Megabank Organization, Tohoku University, Sendai, Japan
- Graduate School of Information Sciences, Tohoku University, Sendai, Japan
- Kazuki Kumada
- Tohoku Medical Megabank Organization, Tohoku University, Sendai, Japan
- Naoki Nakaya
- Tohoku Medical Megabank Organization, Tohoku University, Sendai, Japan
- Taku Obara
- Tohoku Medical Megabank Organization, Tohoku University, Sendai, Japan
- Kentaro Oba
- Tohoku Medical Megabank Organization, Tohoku University, Sendai, Japan
- Graduate School of Medicine, Tohoku University, Sendai, Japan
- Institute of Development, Aging and Cancer, Tohoku University, Sendai, Japan
- Atsushi Sekiguchi
- Tohoku Medical Megabank Organization, Tohoku University, Sendai, Japan
- Integrative Brain Imaging Center, National Center of Neurology and Psychiatry, Tokyo, Japan
- Benjamin Thyreau
- Tohoku Medical Megabank Organization, Tohoku University, Sendai, Japan
- Graduate School of Medicine, Tohoku University, Sendai, Japan
- Institute of Development, Aging and Cancer, Tohoku University, Sendai, Japan
- Tatsushi Mutoh
- Graduate School of Medicine, Tohoku University, Sendai, Japan
- Institute of Development, Aging and Cancer, Tohoku University, Sendai, Japan
- Yuji Takano
- Tohoku Medical Megabank Organization, Tohoku University, Sendai, Japan
- Institute of Development, Aging and Cancer, Tohoku University, Sendai, Japan
- University of Human Environments, Matsuyama, Japan
- Mitsunari Abe
- Tohoku Medical Megabank Organization, Tohoku University, Sendai, Japan
- Integrative Brain Imaging Center, National Center of Neurology and Psychiatry, Tokyo, Japan
- Graduate School of Medicine, Fukushima Medical University, Fukushima, Japan
- Norihide Maikusa
- Tohoku Medical Megabank Organization, Tohoku University, Sendai, Japan
- Integrative Brain Imaging Center, National Center of Neurology and Psychiatry, Tokyo, Japan
- Graduate School of Art and Science, University of Tokyo, Tokyo, Japan
- Yasuko Tatewaki
- Graduate School of Medicine, Tohoku University, Sendai, Japan
- Institute of Development, Aging and Cancer, Tohoku University, Sendai, Japan
- Yasuyuki Taki
- Tohoku Medical Megabank Organization, Tohoku University, Sendai, Japan
- Graduate School of Medicine, Tohoku University, Sendai, Japan
- Institute of Development, Aging and Cancer, Tohoku University, Sendai, Japan
- Nobuo Yaegashi
- Tohoku Medical Megabank Organization, Tohoku University, Sendai, Japan
- Graduate School of Medicine, Tohoku University, Sendai, Japan
- Tohoku University Hospital, Tohoku University, Sendai, Japan
- Advanced Research Center for Innovations in Next-Generation Medicine, Tohoku University, Sendai, Japan
- Hiroaki Tomita
- Tohoku Medical Megabank Organization, Tohoku University, Sendai, Japan
- Graduate School of Medicine, Tohoku University, Sendai, Japan
- Tohoku University Hospital, Tohoku University, Sendai, Japan
- Kengo Kinoshita
- Tohoku Medical Megabank Organization, Tohoku University, Sendai, Japan
- Advanced Research Center for Innovations in Next-Generation Medicine, Tohoku University, Sendai, Japan
- Graduate School of Information Sciences, Tohoku University, Sendai, Japan
- Shinichi Kuriyama
- Tohoku Medical Megabank Organization, Tohoku University, Sendai, Japan
- Graduate School of Medicine, Tohoku University, Sendai, Japan
- International Research Institute of Disaster Science, Tohoku University, Sendai, Japan
- The United Centers for Advanced Research and Translational Medicine, Tohoku University, Sendai, Japan
- Nobuo Fuse
- Tohoku Medical Megabank Organization, Tohoku University, Sendai, Japan
- Graduate School of Medicine, Tohoku University, Sendai, Japan
- Advanced Research Center for Innovations in Next-Generation Medicine, Tohoku University, Sendai, Japan
- Masayuki Yamamoto
- Tohoku Medical Megabank Organization, Tohoku University, Sendai, Japan
- Graduate School of Medicine, Tohoku University, Sendai, Japan
- Advanced Research Center for Innovations in Next-Generation Medicine, Tohoku University, Sendai, Japan
16
Faragallah OS, El-Hoseny HM, El-sayed HS. Efficient brain tumor segmentation using OTSU and K-means clustering in homomorphic transform. Biomed Signal Process Control 2023. [DOI: 10.1016/j.bspc.2023.104712] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Indexed: 03/19/2023]
17
Farber GK, Gage S, Kemmer D, White R. Common measures in mental health: a joint initiative by funders and journals. Lancet Psychiatry 2023; 10:465-470. [PMID: 37084745 PMCID: PMC10198931 DOI: 10.1016/s2215-0366(23)00139-6] [Citation(s) in RCA: 8] [Impact Index Per Article: 8.0] [Received: 02/27/2023] [Revised: 03/24/2023] [Accepted: 03/26/2023] [Indexed: 04/23/2023]
Abstract
There is notable heterogeneity in how clinical and phenotypic data are measured by mental health researchers. There is a proliferation of self-report measures (eg, over 280 for depression alone), meaning it is challenging for researchers to compare findings across different studies from different laboratories. To begin to address this issue, a consortium of mental health research funders and journals has launched the Common Measures in Mental Health Science Initiative. The purpose of this endeavour is to identify common measures for mental health conditions that funders and journals can require all researchers to collect, in addition to any other measures they require for their specific study. These measures would not necessarily capture the full range of experiences of a given condition but could be used to link and compare across studies with different designs in different contexts. This Health Policy outlines the rationale, objectives, and potential challenges of this initiative, which aims to enhance the rigour and comparability of mental health research by promoting the adoption of standardised measures.
Affiliation(s)
- Danielle Kemmer
- Graham Boeckh Foundation, Montreal, QC, Canada; International Alliance of Mental Health Research Funders, Montreal, QC, Canada
- Rory White
- Graham Boeckh Foundation, Montreal, QC, Canada; International Alliance of Mental Health Research Funders, Montreal, QC, Canada
18
Yoon MS, Kwon G, Oh J, Ryu J, Lim J, Kang BK, Lee J, Han DK. Effect of Contrast Level and Image Format on a Deep Learning Algorithm for the Detection of Pneumothorax with Chest Radiography. J Digit Imaging 2023; 36:1237-1247. [PMID: 36698035 PMCID: PMC10287877 DOI: 10.1007/s10278-022-00772-y] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Received: 03/30/2022] [Revised: 12/23/2022] [Accepted: 12/29/2022] [Indexed: 01/26/2023]
Abstract
Owing to the black-box nature of deep learning models, it is uncertain how changes in contrast level and image format affect performance. We aimed to investigate the effect of contrast level and image format on the effectiveness of deep learning for diagnosing pneumothorax on chest radiographs. We collected 3316 images (1016 pneumothorax and 2300 normal images); all images were set to the standard contrast level (100%) and stored in the Digital Imaging and Communications in Medicine (DICOM) and Joint Photographic Experts Group (JPEG) formats. Data were randomly separated into 80% training and 20% test sets, and the contrast of images in the test set was changed to 5 levels (50%, 75%, 100%, 125%, and 150%). We trained the model to detect pneumothorax using ResNet-50 with 100%-level images and tested it with the 5-level images in the two formats. When comparing overall performance between contrast levels in the two formats, the area under the receiver-operating characteristic curve (AUC) was significantly different (all p < 0.001) except between 125% and 150% in JPEG format (p = 0.382). When comparing the two formats at the same contrast levels, AUC was significantly different (all p < 0.001) except at 50% and 100% (p = 0.079 and p = 0.082, respectively). The contrast level and format of medical images can influence the performance of a deep learning model. Training with various contrast levels and image formats, together with further image processing, is required to improve and maintain performance.
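A common way to generate test images at contrast levels like those above is linear scaling about the image mean. The paper does not publish its exact transform, so the NumPy sketch below is one plausible definition (our own function name), shown for the study's five levels:

```python
import numpy as np

def adjust_contrast(img: np.ndarray, level: float) -> np.ndarray:
    """Scale contrast about the image mean: level=1.0 leaves the image
    unchanged, 0.5 halves the contrast, 1.5 boosts it by 50%.
    Output is clipped to the 8-bit display range [0, 255]."""
    mean = img.mean()
    return np.clip(mean + level * (img - mean), 0.0, 255.0)

img = np.array([[100.0, 150.0], [50.0, 200.0]])
for level in (0.5, 0.75, 1.0, 1.25, 1.5):  # the study's five test levels
    out = adjust_contrast(img, level)
```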
Affiliation(s)
- Myeong Seong Yoon
- Department of Emergency Medicine, College of Medicine, Hanyang University, 222 Wangsimni-Ro, Seongdong-Gu, Seoul, 04763, Republic of Korea
- Machine Learning Research Center for Medical Data, Hanyang University, 222 Wangsimni-Ro, Seongdong-Gu, Seoul, 04763, Republic of Korea
- Department of Radiological Science, Eulji University, 553 Sanseong-daero, Seongnam-si, Gyeonggi Do, 13135, Republic of Korea
- Gitaek Kwon
- Department of Computer Science, Hanyang University, 222 Wangsimni-Ro, Seongdong-Gu, Seoul, 04763, Republic of Korea
- VUNO, Inc, 479 Gangnam-daero, Seocho-gu, Seoul, 06541, Republic of Korea
- Jaehoon Oh
- Department of Emergency Medicine, College of Medicine, Hanyang University, 222 Wangsimni-Ro, Seongdong-Gu, Seoul, 04763, Republic of Korea
- Machine Learning Research Center for Medical Data, Hanyang University, 222 Wangsimni-Ro, Seongdong-Gu, Seoul, 04763, Republic of Korea
- Jongbin Ryu
- Department of Software and Computer Engineering, Ajou University, 206 World cup-ro, Suwon-si, Gyeonggi Do, 16499, Republic of Korea
- Jongwoo Lim
- Department of Computer Science, Hanyang University, 222 Wangsimni-Ro, Seongdong-Gu, Seoul, 04763, Republic of Korea
- Machine Learning Research Center for Medical Data, Hanyang University, 222 Wangsimni-Ro, Seongdong-Gu, Seoul, 04763, Republic of Korea
- Bo-Kyeong Kang
- Machine Learning Research Center for Medical Data, Hanyang University, 222 Wangsimni-Ro, Seongdong-Gu, Seoul, 04763, Republic of Korea
- Department of Radiology, College of Medicine, Hanyang University, 222 Wangsimni-Ro, Seongdong-Gu, Seoul, 04763, Republic of Korea
- Juncheol Lee
- Department of Emergency Medicine, College of Medicine, Hanyang University, 222 Wangsimni-Ro, Seongdong-Gu, Seoul, 04763, Republic of Korea
- Dong-Kyoon Han
- Department of Radiological Science, Eulji University, 553 Sanseong-daero, Seongnam-si, Gyeonggi Do, 13135, Republic of Korea
19
Fournier G, Maret D, Telmon N, Savall F. An automated landmark method to describe geometric changes in the human mandible during growth. Arch Oral Biol 2023; 149:105663. [PMID: 36893681 DOI: 10.1016/j.archoralbio.2023.105663] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Received: 09/23/2022] [Revised: 01/27/2023] [Accepted: 02/22/2023] [Indexed: 02/26/2023]
Abstract
OBJECTIVE The principal aim of this study was to assess an automatic landmarking approach for human mandibles based on the atlas method. The secondary aim was to identify the areas of greatest variation in the mandibles of middle-aged to older adults. DESIGN Our sample consisted of 160 mandibles from computed tomography scans of 80 men and 80 women aged between 40 and 79 years. Eleven anatomical landmarks were placed manually on the mandibles. The automated landmarking through point cloud alignment and correspondence (ALPACA) method implemented in 3D Slicer was used to automatically place landmarks on all meshes. Euclidean distances, normalized centroid size, and Procrustes ANOVA were calculated for both methods. A pseudo-landmark approach was followed using ALPACA to identify areas of change in our sample. RESULTS The ALPACA method showed significant differences in Euclidean distances for all landmarks compared with the manual method. The mean Euclidean distance was 1.7 mm for the ALPACA method and 0.99 mm for the manual method. Both methods found that sex, age, and size had a significant effect on mandibular shape. The greatest variations were observed in the condyle, ramus, and symphysis regions. CONCLUSION The results obtained using the ALPACA method are acceptable and promising. This approach can automatically place landmarks with an average error of less than 2 mm, which may be sufficient for most anthropometric analyses. In light of our results, however, odontological applications such as occlusal analysis are not recommended.
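The accuracy measure reported above, the mean Euclidean distance between corresponding automatic and manual landmarks, is straightforward to compute; a NumPy sketch with toy 3-D coordinates:

```python
import numpy as np

def mean_landmark_error(auto: np.ndarray, manual: np.ndarray) -> float:
    """Mean Euclidean distance between corresponding 3-D landmarks
    (rows of two (n_landmarks, 3) arrays), e.g. in millimetres."""
    return float(np.linalg.norm(auto - manual, axis=1).mean())

# Two toy landmarks: errors of 1.0 mm and 2.0 mm.
manual = np.array([[0.0, 0.0, 0.0], [10.0, 0.0, 0.0]])
auto = np.array([[1.0, 0.0, 0.0], [10.0, 2.0, 0.0]])
print(mean_landmark_error(auto, manual))  # 1.5
```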
Affiliation(s)
- G Fournier
- Faculté de Chirurgie Dentaire, Université Paul Sabatier, Centre Hospitalier Universitaire, Toulouse, France; Laboratory Centre for Anthropology and Genomics of Toulouse, Université Paul Sabatier, Toulouse, France
- D Maret
- Faculté de Chirurgie Dentaire, Université Paul Sabatier, Centre Hospitalier Universitaire, Toulouse, France; Laboratory Centre for Anthropology and Genomics of Toulouse, Université Paul Sabatier, Toulouse, France
- N Telmon
- Laboratory Centre for Anthropology and Genomics of Toulouse, Université Paul Sabatier, Toulouse, France; Service de Médecine Légale, Hôpital de Rangueil, Toulouse, France
- F Savall
- Laboratory Centre for Anthropology and Genomics of Toulouse, Université Paul Sabatier, Toulouse, France; Service de Médecine Légale, Hôpital de Rangueil, Toulouse, France
20
G S, Appadurai JP, Kavin BP, C K, Lai WC. En-DeNet Based Segmentation and Gradational Modular Network Classification for Liver Cancer Diagnosis. Biomedicines 2023; 11:1309. [PMID: 37238979 DOI: 10.3390/biomedicines11051309] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Received: 02/09/2023] [Revised: 03/23/2023] [Accepted: 04/25/2023] [Indexed: 05/28/2023]
Abstract
Liver cancer ranks as the sixth most prevalent cancer globally. Computed tomography (CT) is a non-invasive imaging modality that provides greater insight into human structures than the traditional X-rays typically used for diagnosis. The final product of a CT scan is often a three-dimensional image constructed from a series of interlaced two-dimensional slices; notably, not all slices deliver useful information for tumor detection. Recently, CT scan images of the liver and its tumors have been segmented using deep learning techniques. The primary goal of this study is to develop a deep learning-based system for automatically segmenting the liver and its tumors from CT scan images, thereby reducing the time and labor required to diagnose liver cancer. At its core, an Encoder-Decoder Network (En-DeNet) uses a deep neural network built on UNet as an encoder and a pre-trained EfficientNet as a decoder. To improve liver segmentation, we developed specialized preprocessing techniques, such as the production of multichannel images, de-noising, contrast enhancement, ensembling, and the union of model predictions. We then proposed the Gradational Modular Network (GraMNet), a unique and efficient deep learning technique. In GraMNet, smaller networks called SubNets are used to construct larger and more robust networks in a variety of alternative configurations. Only one new SubNet module is updated for learning at each level, which helps optimize the network and minimizes the computational resources needed for training. The segmentation and classification performance of this study is compared with the Liver Tumor Segmentation Benchmark (LiTS) and the 3D Image Reconstruction for Comparison of Algorithms Database (3DIRCADb01). By breaking down the components of deep learning, a state-of-the-art level of performance can be attained in the evaluated scenarios. Compared with more conventional deep learning architectures, the GraMNets generated here have low computational complexity: the straightforward GraMNet trains faster, consumes less memory, and processes images more rapidly than the benchmark methods.
Affiliation(s)
- Suganeshwari G
- School of Computer Science and Engineering, Vellore Institute of Technology, Chennai 600127, Tamil Nadu, India
- Jothi Prabha Appadurai
- Computer Science and Engineering Department, Kakatiya Institute of Technology and Science, Warangal 506015, Telangana, India
- Balasubramanian Prabhu Kavin
- Department of Data Science and Business Systems, College of Engineering and Technology, SRM Institute of Science and Technology, SRM Nagar, Chengalpattu District, Kattankulathur 603203, Tamil Nadu, India
- Kavitha C
- Department of Computer Science and Engineering, Sathyabama Institute of Science and Technology, Jeppiaar Nagar, Rajiv Gandhi Salai, Chennai 600119, Tamil Nadu, India
- Wen-Cheng Lai
- Bachelor Program in Industrial Projects, National Yunlin University of Science and Technology, Douliu 640301, Taiwan
- Department of Electronic Engineering, National Yunlin University of Science and Technology, Douliu 640301, Taiwan
21
Iqbal S, Qureshi AN, Li J, Mahmood T. On the Analyses of Medical Images Using Traditional Machine Learning Techniques and Convolutional Neural Networks. Archives of Computational Methods in Engineering 2023; 30:3173-3233. [PMID: 37260910 PMCID: PMC10071480 DOI: 10.1007/s11831-023-09899-9] [Citation(s) in RCA: 9] [Impact Index Per Article: 9.0] [Received: 12/01/2022] [Accepted: 02/19/2023] [Indexed: 06/02/2023]
Abstract
Convolutional neural networks (CNNs) have shown impressive accomplishments in different areas, especially object detection, segmentation, reconstruction (2D and 3D), information retrieval, medical image registration, multilingual translation, local language processing, anomaly detection in video, and speech recognition. A CNN is a special type of neural network with a compelling and effective ability to learn features at several stages during augmentation of the data. Recently, different ideas in deep learning (DL), such as new activation functions, hyperparameter optimization, regularization, momentum, and loss functions, have improved the performance, operation, and execution of CNNs. Different internal architectural innovations and representational styles of CNNs have also significantly improved performance. This survey focuses on the internal taxonomy of deep learning and different models of convolutional neural networks, especially the depth and width of models, as well as CNN components, applications, and the current challenges of deep learning.
Affiliation(s)
- Saeed Iqbal
- Department of Computer Science, Faculty of Information Technology & Computer Science, University of Central Punjab, Lahore, Punjab 54000 Pakistan
- Faculty of Information Technology, Beijing University of Technology, Beijing, 100124 Beijing China
- Adnan N. Qureshi
- Department of Computer Science, Faculty of Information Technology & Computer Science, University of Central Punjab, Lahore, Punjab 54000 Pakistan
- Jianqiang Li
- Faculty of Information Technology, Beijing University of Technology, Beijing, 100124 Beijing China
- Beijing Engineering Research Center for IoT Software and Systems, Beijing University of Technology, Beijing, 100124 Beijing China
- Tariq Mahmood
- Artificial Intelligence and Data Analytics (AIDA) Lab, College of Computer & Information Sciences (CCIS), Prince Sultan University, Riyadh, 11586 Kingdom of Saudi Arabia
22
Automatic placental and fetal volume estimation by a convolutional neural network. Placenta 2023; 134:23-29. [PMID: 36863128 DOI: 10.1016/j.placenta.2023.02.009] [Citation(s) in RCA: 2] [Impact Index Per Article: 2.0] [Received: 11/16/2022] [Revised: 02/21/2023] [Accepted: 02/24/2023] [Indexed: 03/03/2023]
Abstract
INTRODUCTION We aimed to develop an artificial intelligence (AI) deep learning algorithm to efficiently estimate placental and fetal volumes from magnetic resonance (MR) scans. METHODS Manually annotated images from an MRI sequence were used as input to the neural network DenseVNet. We included data from 193 normal pregnancies at gestational weeks 27 and 37. The data were split into 163 scans for training, 10 scans for validation, and 20 scans for testing. The neural network segmentations were compared to the manual annotations (ground truth) using the Dice Score Coefficient (DSC). RESULTS The mean ground truth placental volume at gestational weeks 27 and 37 was 571 cm3 (Standard Deviation (SD) 293 cm3) and 853 cm3 (SD 186 cm3), respectively. Mean fetal volume was 979 cm3 (SD 117 cm3) and 2715 cm3 (SD 360 cm3). The best-fitting neural network model was attained at 22,000 training iterations with mean DSC 0.925 (SD 0.041). The neural network estimated mean placental volume at gestational week 27 to be 870 cm3 (SD 202 cm3) (DSC 0.887 (SD 0.034)) and at gestational week 37 to be 950 cm3 (SD 316 cm3) (DSC 0.896 (SD 0.030)). Mean fetal volumes were 1292 cm3 (SD 191 cm3) and 2712 cm3 (SD 540 cm3), with mean DSC of 0.952 (SD 0.008) and 0.970 (SD 0.040). The time spent on volume estimation was reduced from 60-90 min by manual annotation to less than 10 s by the neural network. CONCLUSION The correctness of neural network volume estimation is comparable to human performance; the efficiency is substantially improved.
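The Dice Score Coefficient used above compares a predicted segmentation against the ground truth as twice the overlap divided by the combined size of both masks. A minimal plain-Python sketch (binary masks as flat lists; the names and toy data are illustrative, not from the paper):

```python
def dice_score(pred, truth):
    """Dice Score Coefficient for two equal-length binary masks:
    DSC = 2 * |P intersect T| / (|P| + |T|)."""
    assert len(pred) == len(truth)
    intersection = sum(p and t for p, t in zip(pred, truth))
    total = sum(pred) + sum(truth)
    return 2.0 * intersection / total if total else 1.0

# Two toy 3x3 masks, flattened; overlap is 2 pixels out of 3 + 3.
pred  = [1, 1, 0, 1, 0, 0, 0, 0, 0]
truth = [1, 1, 1, 0, 0, 0, 0, 0, 0]
print(round(dice_score(pred, truth), 3))  # 2*2/(3+3) -> 0.667
```

A DSC of 1.0 means perfect agreement; the paper's reported values (0.887-0.970) indicate close but imperfect overlap with the radiologist's annotation.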
|
23
|
Lo M, Mariconti E, Nakhaeizadeh S, Morgan RM. Preparing computed tomography images for machine learning in forensic and virtual anthropology. Forensic Sci Int Synerg 2023; 6:100319. [PMID: 36852172 PMCID: PMC9958428 DOI: 10.1016/j.fsisyn.2023.100319] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/22/2022] [Revised: 02/02/2023] [Accepted: 02/06/2023] [Indexed: 02/11/2023]
Affiliation(s)
- Martin Lo
- UCL Department of Security and Crime Science, University College London, 35 Tavistock Square, London, WC1H 9EZ, UK
- UCL Centre for the Forensic Sciences, University College London, 35 Tavistock Square, London, WC1H 9EZ, UK
- Corresponding author
- Enrico Mariconti
- UCL Department of Security and Crime Science, University College London, 35 Tavistock Square, London, WC1H 9EZ, UK
- Sherry Nakhaeizadeh
- UCL Department of Security and Crime Science, University College London, 35 Tavistock Square, London, WC1H 9EZ, UK
- UCL Centre for the Forensic Sciences, University College London, 35 Tavistock Square, London, WC1H 9EZ, UK
- Ruth M. Morgan
- UCL Department of Security and Crime Science, University College London, 35 Tavistock Square, London, WC1H 9EZ, UK
- UCL Centre for the Forensic Sciences, University College London, 35 Tavistock Square, London, WC1H 9EZ, UK
|
24
|
Berger MF, Winter R, Tuca AC, Michelitsch B, Schenkenfelder B, Hartmann R, Giretzlehner M, Reishofer G, Kamolz LP, Lumenta DB. Workflow assessment of an augmented reality application for planning of perforator flaps in plastic reconstructive surgery: Game or game changer? Digit Health 2023; 9:20552076231173554. [PMID: 37179745 PMCID: PMC10170605 DOI: 10.1177/20552076231173554] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/22/2022] [Accepted: 04/14/2023] [Indexed: 05/15/2023] Open
Abstract
Objective In contrast to the rising amount of financial investment in medical-technology research and development worldwide, many of the resulting systems lack usability and clinical readiness. We evaluated an augmented reality (AR) setup under development for preoperative perforator vessel mapping for elective autologous breast reconstruction. Methods In this grant-supported research pilot, we used magnetic resonance angiography data (MR-A) of the trunk to superimpose the scans on the corresponding patients with hands-free AR goggles and identify regions of interest for surgical planning. Perforator location was assessed using MR-A imaging (MR-A projection) and Doppler ultrasound data (3D distance) and confirmed intraoperatively in all cases. We evaluated usability (System Usability Scale, SUS) and data transfer load, and documented personnel hours for software development, correlation of image data, and processing duration to clinical readiness (time from MR-A to AR projection per scan). Results All perforator locations were confirmed intraoperatively, and we found a strong correlation between MR-A projection and 3D distance measurements (Spearman r = 0.894). Overall usability (SUS) was 67 ± 10 (moderate to good). The presented setup for AR projections took 173 min to clinical readiness (availability on the AR device per patient). Conclusion In this pilot, we calculated development investment from project-approved, grant-funded personnel hours and obtained a moderate to good usability outcome, subject to some limitations: assessment was based on one-time testing with no previous training, a time lag of AR visualizations on the body, and difficulties in spatial AR orientation.
The use of AR systems can provide new opportunities for future surgical planning, but has more potential for educational (e.g., patient information) or training purposes of medical under- and postgraduates (spatial recognition of imaging data associated with anatomical structures and operative planning). We expect future usability improvements with refined user interfaces, faster AR hardware and artificial intelligence-enhanced visualization techniques.
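The SUS score reported above (67 ± 10) follows the standard System Usability Scale computation: ten 1-5 Likert items, where odd-numbered (positively worded) items contribute (response − 1), even-numbered items contribute (5 − response), and the sum is multiplied by 2.5 to give a 0-100 score. A small sketch of that standard formula (the responses are made up, not study data):

```python
def sus_score(responses):
    """System Usability Scale: ten Likert responses (1-5).
    Odd-numbered items are positively worded, even-numbered negatively;
    the summed item scores are scaled by 2.5 onto a 0-100 range."""
    assert len(responses) == 10
    total = 0
    for i, r in enumerate(responses, start=1):
        total += (r - 1) if i % 2 == 1 else (5 - r)
    return total * 2.5

print(sus_score([4, 2, 4, 2, 4, 2, 4, 2, 4, 2]))  # 75.0
```

Scores around 68 are conventionally read as average usability, which matches the study's "moderate to good" interpretation of 67.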
Affiliation(s)
- Matthias Fabian Berger
- Research Unit for Digital Surgery, Division of Plastic, Aesthetic and Reconstructive Surgery, Department of Surgery, Medical University of Graz, Graz, Austria
- Raimund Winter
- Research Unit for Digital Surgery, Division of Plastic, Aesthetic and Reconstructive Surgery, Department of Surgery, Medical University of Graz, Graz, Austria
- Alexandru-Cristian Tuca
- Research Unit for Digital Surgery, Division of Plastic, Aesthetic and Reconstructive Surgery, Department of Surgery, Medical University of Graz, Graz, Austria
- Birgit Michelitsch
- Research Unit for Digital Surgery, Division of Plastic, Aesthetic and Reconstructive Surgery, Department of Surgery, Medical University of Graz, Graz, Austria
- Gernot Reishofer
- Radiology Lab, Department of Radiology, Medical University of Graz, Graz, Austria
- Lars-Peter Kamolz
- Research Unit for Digital Surgery, Division of Plastic, Aesthetic and Reconstructive Surgery, Department of Surgery, Medical University of Graz, Graz, Austria
- David Benjamin Lumenta
- Research Unit for Digital Surgery, Division of Plastic, Aesthetic and Reconstructive Surgery, Department of Surgery, Medical University of Graz, Graz, Austria
|
25
|
Ahuja S, Panigrahi BK, Dey N, Taneja A, Gandhi TK. McS-Net: Multi-class Siamese network for severity of COVID-19 infection classification from lung CT scan slices. Appl Soft Comput 2022; 131:109683. [PMID: 36277300 PMCID: PMC9573862 DOI: 10.1016/j.asoc.2022.109683] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/02/2022] [Revised: 08/25/2022] [Accepted: 09/22/2022] [Indexed: 11/29/2022]
Abstract
Worldwide, COVID-19 is a highly infectious and rapidly spreading disease in almost all age groups. Computed Tomography (CT) scans of the lungs are found to be accurate for the timely diagnosis of COVID-19 infection. In the proposed work, a deep learning-based P-shot N-ways Siamese network, along with prototypical nearest neighbor classifiers, is implemented for the classification of COVID-19 infection from lung CT scan slices. For this, a Siamese network with identical sub-networks (weight sharing) is used for image classification with a limited dataset for each class. The feature vectors are obtained from the pre-trained, weight-sharing sub-networks. The performance of the proposed methodology is evaluated on the benchmark MosMed dataset, which comprises category zero (healthy controls) and numerous COVID-19 infection categories. The proposed methodology is evaluated on (a) chest CT scans provided by medical hospitals in Moscow, Russia for 1110 patients, and (b) a case study of low-dose CT scans of 42 patients provided by Avtaran healthcare in India. The deep learning-based Siamese network (15-shot, 5-ways) obtained an accuracy of 98.07%, a sensitivity of 95.66%, a specificity of 98.83%, and an F1-score of 95.10%. The proposed work achieves COVID-19 infection-severity classification with limited scan availability for numerous infection categories.
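The accuracy, sensitivity, specificity, and F1-score figures quoted above all derive from the confusion-matrix counts of a binary classifier. A plain-Python reference for these standard definitions (the counts below are toy values, not the paper's):

```python
def classification_metrics(tp, fp, tn, fn):
    """Standard binary-classification metrics from confusion-matrix counts."""
    accuracy    = (tp + tn) / (tp + fp + tn + fn)
    sensitivity = tp / (tp + fn)   # recall / true-positive rate
    specificity = tn / (tn + fp)   # true-negative rate
    precision   = tp / (tp + fp)
    f1 = 2 * precision * sensitivity / (precision + sensitivity)
    return accuracy, sensitivity, specificity, f1

acc, sens, spec, f1 = classification_metrics(tp=90, fp=5, tn=95, fn=10)
print(round(acc, 3), round(sens, 3), round(spec, 3), round(f1, 3))
# 0.925 0.9 0.95 0.923
```

Note that high accuracy alone can mask poor sensitivity on imbalanced severity classes, which is why the paper reports all four metrics.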
Affiliation(s)
- Sakshi Ahuja
- Electrical Engineering Department, Indian Institute of Technology Delhi, New Delhi, 110016, India
- Bijaya Ketan Panigrahi
- Electrical Engineering Department, Indian Institute of Technology Delhi, New Delhi, 110016, India
- Nilanjan Dey
- Department of Computer Science and Engineering, Techno International New Town, Kolkata, 700156, India
- Arpit Taneja
- Department of Radiology, Avtaran Healthcare LLP, Kurukshetra, 136118, India
- Tapan Kumar Gandhi
- Electrical Engineering Department, Indian Institute of Technology Delhi, New Delhi, 110016, India
|
26
|
Bridge CP, Gorman C, Pieper S, Doyle SW, Lennerz JK, Kalpathy-Cramer J, Clunie DA, Fedorov AY, Herrmann MD. Highdicom: a Python Library for Standardized Encoding of Image Annotations and Machine Learning Model Outputs in Pathology and Radiology. J Digit Imaging 2022; 35:1719-1737. [PMID: 35995898 PMCID: PMC9712874 DOI: 10.1007/s10278-022-00683-y] [Citation(s) in RCA: 7] [Impact Index Per Article: 3.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/28/2021] [Revised: 05/20/2022] [Accepted: 05/26/2022] [Indexed: 10/15/2022] Open
Abstract
Machine learning (ML) is revolutionizing image-based diagnostics in pathology and radiology. ML models have shown promising results in research settings, but the lack of interoperability between ML systems and enterprise medical imaging systems has been a major barrier for clinical integration and evaluation. The DICOM® standard specifies information object definitions (IODs) and services for the representation and communication of digital images and related information, including image-derived annotations and analysis results. However, the complexity of the standard represents an obstacle for its adoption in the ML community and creates a need for software libraries and tools that simplify working with datasets in DICOM format. Here we present the highdicom library, which provides a high-level application programming interface (API) for the Python programming language that abstracts low-level details of the standard and enables encoding and decoding of image-derived information in DICOM format in a few lines of Python code. The highdicom library leverages NumPy arrays for efficient data representation and ties into the extensive Python ecosystem for image processing and machine learning. Simultaneously, by simplifying creation and parsing of DICOM-compliant files, highdicom achieves interoperability with the medical imaging systems that hold the data used to train and run ML models, and ultimately communicate and store model outputs for clinical use. We demonstrate through experiments with slide microscopy and computed tomography imaging, that, by bridging these two ecosystems, highdicom enables developers and researchers to train and evaluate state-of-the-art ML models in pathology and radiology while remaining compliant with the DICOM standard and interoperable with clinical systems at all stages. 
To promote standardization of ML research and streamline the ML model development and deployment process, we made the library available free and open-source at https://github.com/herrmannlab/highdicom .
Affiliation(s)
- Christopher P Bridge
- Martinos Center for Biomedical Imaging, Massachusetts General Hospital, Boston, MA, USA
- MGH & BWH Center for Clinical Data Science, Mass General Brigham, Boston, MA, USA
- Chris Gorman
- Computational Pathology, Department of Pathology, Massachusetts General Hospital, Boston, MA, USA
- Sean W Doyle
- MGH & BWH Center for Clinical Data Science, Mass General Brigham, Boston, MA, USA
- Jochen K Lennerz
- Center for Integrated Diagnostics, Department of Pathology, Massachusetts General Hospital, Boston, MA, USA
- Department of Pathology, Harvard Medical School, Boston, MA, USA
- Jayashree Kalpathy-Cramer
- Martinos Center for Biomedical Imaging, Massachusetts General Hospital, Boston, MA, USA
- MGH & BWH Center for Clinical Data Science, Mass General Brigham, Boston, MA, USA
- Department of Radiology, Harvard Medical School, Boston, MA, USA
- Andriy Y Fedorov
- Department of Radiology, Harvard Medical School, Boston, MA, USA
- Surgical Planning Laboratory, Department of Radiology, Brigham and Women's Hospital, Boston, MA, USA
- Markus D Herrmann
- Computational Pathology, Department of Pathology, Massachusetts General Hospital, Boston, MA, USA
- Department of Pathology, Harvard Medical School, Boston, MA, USA
|
27
|
Munir K, Frezza F, Rizzi A. Deep Learning Hybrid Techniques for Brain Tumor Segmentation. SENSORS (BASEL, SWITZERLAND) 2022; 22:8201. [PMID: 36365900 PMCID: PMC9658353 DOI: 10.3390/s22218201] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 10/05/2022] [Revised: 10/17/2022] [Accepted: 10/19/2022] [Indexed: 06/16/2023]
Abstract
Medical images play an important role in medical diagnosis and treatment. Oncologists analyze images to determine the different characteristics of deadly diseases, plan the therapy, and observe the evolution of the disease. The objective of this paper is to propose a method for the detection of brain tumors. Brain tumors are identified from Magnetic Resonance (MR) images by performing suitable segmentation procedures. The latest technical literature concerning radiographic images of the brain shows that deep learning methods can be implemented to extract specific features of brain tumors, aiding clinical diagnosis. For this reason, most data scientists and AI researchers work on machine learning methods for designing automatic screening procedures. Indeed, an automated method would produce quicker segmentation findings and a robust output with respect to possible differences in data sources, mostly due to different procedures in data recording and storing, resulting in a more consistent identification of brain tumors. To improve the performance of the segmentation procedure, new architectures are proposed and tested in this paper. We propose deep neural networks for the detection of brain tumors, trained on the MRI scans of patients' brains. The proposed architectures are based on convolutional neural networks and inception modules for brain tumor segmentation. A comparison of these proposed architectures with the baseline reference ones shows very interesting results. MI-Unet showed a performance increase over the baseline Unet architecture of 7.5% in dice score, 23.91% in sensitivity, and 7.09% in specificity. Depth-wise separable MI-Unet showed a performance increase of 10.83% in dice score, 2.97% in sensitivity, and 12.72% in specificity compared to the baseline Unet architecture. The hybrid Unet architecture achieved performance improvements of 9.71% in dice score, 3.56% in sensitivity, and 12.6% in specificity, whereas the depth-wise separable hybrid Unet architecture outperformed the baseline architecture by 15.45% in dice score, 20.56% in sensitivity, and 12.22% in specificity.
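The appeal of the depth-wise separable variants mentioned above is their parameter economy: a standard k×k convolution with C_in input and C_out output channels has k·k·C_in·C_out weights, whereas its depth-wise separable counterpart has only k·k·C_in (depth-wise) plus C_in·C_out (point-wise 1×1) weights. A quick check of the savings (the channel sizes are arbitrary examples, not taken from the paper's architectures):

```python
def conv_params(k, c_in, c_out):
    """Weight count of a standard k x k convolution (biases ignored)."""
    return k * k * c_in * c_out

def separable_conv_params(k, c_in, c_out):
    """Depth-wise (k*k*c_in) plus point-wise 1x1 (c_in*c_out) weights."""
    return k * k * c_in + c_in * c_out

std = conv_params(3, 64, 128)            # 9 * 64 * 128 = 73728
sep = separable_conv_params(3, 64, 128)  # 576 + 8192   = 8768
print(std, sep, round(sep / std, 3))     # roughly 12% of the weights
```

This roughly order-of-magnitude reduction is why depth-wise separable blocks are a common drop-in for standard convolutions when model size matters.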
|
28
|
Safri AA, Nassir CMNCM, Iman IN, Mohd Taib NH, Achuthan A, Mustapha M. Diffusion tensor imaging pipeline measures of cerebral white matter integrity: An overview of recent advances and prospects. World J Clin Cases 2022; 10:8450-8462. [PMID: 36157806 PMCID: PMC9453345 DOI: 10.12998/wjcc.v10.i24.8450] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 03/22/2022] [Revised: 06/20/2022] [Accepted: 07/17/2022] [Indexed: 02/05/2023] Open
Abstract
Cerebral small vessel disease (CSVD) is a leading cause of age-related microvascular cognitive decline, resulting in significant morbidity and decreased quality of life. Despite progress on its key pathophysiological bases and general acceptance of key terms for neuroimaging findings observed on magnetic resonance imaging (MRI), key questions about CSVD remain elusive. Improved lesion-symptom relationships and reliable lesion studies, such as white matter tractography using diffusion-based MRI (dMRI), are necessary to improve the assessment of white matter architecture and connectivity in CSVD. Diffusion tensor imaging (DTI) and tractography is an application of dMRI that provides data to non-invasively appraise brain white matter connections via fiber tracking, and enables visualization of individual, patient-specific white matter fiber tracts to reflect the extent of CSVD-associated white matter damage. However, owing to a lack of standardization across the sets of software and image-processing pipelines used in this technique, which are driven mostly by research settings, interpreting the findings remains contentious, especially for informing improved diagnosis and/or prognosis of CSVD in routine clinical use. In this minireview, we highlight advances in DTI pipeline processing and the prospect of DTI metrics as potential imaging biomarkers for CSVD, even for subclinical CSVD in at-risk individuals.
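A central DTI metric produced by such pipelines is fractional anisotropy (FA), computed from the three eigenvalues of the fitted diffusion tensor; reduced FA in white matter tracts is a common marker of the damage discussed above. A plain-Python version of the standard formula (the eigenvalues below are illustrative, not patient data):

```python
import math

def fractional_anisotropy(l1, l2, l3):
    """FA = sqrt(3/2) * sqrt(sum((li - mean)^2)) / sqrt(sum(li^2)).
    Ranges from 0 (isotropic diffusion) to 1 (fully anisotropic)."""
    mean = (l1 + l2 + l3) / 3.0
    num = math.sqrt((l1 - mean) ** 2 + (l2 - mean) ** 2 + (l3 - mean) ** 2)
    den = math.sqrt(l1 ** 2 + l2 ** 2 + l3 ** 2)
    return math.sqrt(1.5) * num / den

print(round(fractional_anisotropy(1.0, 1.0, 1.0), 3))  # 0.0 (isotropic)
print(round(fractional_anisotropy(1.7, 0.3, 0.2), 3))  # strongly anisotropic
```

Differences in tensor fitting and preprocessing between pipelines change these eigenvalues, which is exactly why the minireview argues for standardization before FA can serve as a clinical biomarker.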
Affiliation(s)
- Amanina Ahmad Safri
- Department of Neurosciences, School of Medical Sciences, Universiti Sains Malaysia, Health Campus, Kubang Kerian 16150, Kelantan, Malaysia
- Che Mohd Nasril Che Mohd Nassir
- Department of Neurosciences, School of Medical Sciences, Universiti Sains Malaysia, Health Campus, Kubang Kerian 16150, Kelantan, Malaysia
- Ismail Nurul Iman
- Department of Neurosciences, School of Medical Sciences, Universiti Sains Malaysia, Health Campus, Kubang Kerian 16150, Kelantan, Malaysia
- Nur Hartini Mohd Taib
- Department of Radiology, School of Medical Sciences, Universiti Sains Malaysia, Health Campus, Kubang Kerian 16150, Kelantan, Malaysia
- Anusha Achuthan
- School of Computer Sciences, Universiti Sains Malaysia, 11800 USM, Penang, Malaysia
- Muzaimi Mustapha
- Department of Neurosciences, School of Medical Sciences, Universiti Sains Malaysia, Health Campus, Kubang Kerian 16150, Kelantan, Malaysia
- Department of Neurosciences, Hospital Universiti Sains Malaysia, Kubang Kerian 16150, Kelantan, Malaysia
|
29
|
Gupta A, Sampalli S. "From Kilobytes to Kilodaltons": A Novel Algorithm for Medical Image Encryption based on the Central Dogma of Molecular Biology. ANNUAL INTERNATIONAL CONFERENCE OF THE IEEE ENGINEERING IN MEDICINE AND BIOLOGY SOCIETY. IEEE ENGINEERING IN MEDICINE AND BIOLOGY SOCIETY. ANNUAL INTERNATIONAL CONFERENCE 2022; 2022:4434-4438. [PMID: 36085695 DOI: 10.1109/embc48229.2022.9871499] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/15/2023]
Abstract
With the continued integration of technology in medicine, large amounts of patient data are often vulnerable to cyber-attacks. Medical data must be secured; however, traditional cryptographic algorithms are inapplicable to medical images due to factors such as bulk data capacity, strong correlation among adjacent pixels, and high redundancy. To address the need for new medical image encryption algorithms, a novel approach based on the central dogma of molecular biology is proposed. The resulting algorithm has linear runtime complexity and is resistant to brute force, differential, and statistical attacks. The algorithm advances the state of the art in DNA-based image encryption and surpasses recent approaches to medical image encryption in its defence against cyber-attacks. Clinical Relevance: Secure data transmission and storage is critical for patient privacy. This algorithm increases the security of patient imaging when compared to image encryption algorithms in the literature.
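DNA-based encryption schemes of this family typically begin by transcribing bytes into nucleotide sequences, two bits per base, before applying biology-inspired transformations. A generic, dependency-free illustration of that encoding step (this is one common DNA-coding convention, not the paper's specific algorithm, which permutes and secures such mappings):

```python
# One common 2-bit-to-base rule; real schemes key-dependently permute it.
BITS_TO_BASE = {"00": "A", "01": "C", "10": "G", "11": "T"}
BASE_TO_BITS = {b: k for k, b in BITS_TO_BASE.items()}

def bytes_to_dna(data: bytes) -> str:
    """Transcribe each byte into four DNA bases (2 bits per base)."""
    bits = "".join(f"{byte:08b}" for byte in data)
    return "".join(BITS_TO_BASE[bits[i:i + 2]] for i in range(0, len(bits), 2))

def dna_to_bytes(seq: str) -> bytes:
    """Invert the transcription back to the original bytes."""
    bits = "".join(BASE_TO_BITS[b] for b in seq)
    return bytes(int(bits[i:i + 8], 2) for i in range(0, len(bits), 8))

encoded = bytes_to_dna(b"\x1b")   # 0x1b = 00 01 10 11
print(encoded)                    # ACGT
assert dna_to_bytes(encoded) == b"\x1b"
```

Because each byte maps to exactly four bases, this step alone is linear in the image size, consistent with the linear runtime complexity the paper claims for the full pipeline.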
|
30
|
Easmin R, Nordio G, Giacomel A, Turkheimer F, Williams S, Veronese M. Bitbox: A Cloud-based data sharing solution for medical images. ANNUAL INTERNATIONAL CONFERENCE OF THE IEEE ENGINEERING IN MEDICINE AND BIOLOGY SOCIETY. IEEE ENGINEERING IN MEDICINE AND BIOLOGY SOCIETY. ANNUAL INTERNATIONAL CONFERENCE 2022; 2022:2712-2715. [PMID: 36083944 DOI: 10.1109/embc48229.2022.9871689] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/15/2023]
Abstract
With the modernization and digitisation of the healthcare system, the need for exchanging medical data has become increasingly compelling. Biomedical imaging has been no exception: the gathering of medical imaging acquisitions from multi-site collaborations has enabled data sizes unimaginable until a few years ago. Medical imaging data usually have very large volume and diverse complexity, requiring bespoke transfer systems that protect personal information as well as data integrity. Despite many digital innovations, technical and regulatory bottlenecks still make biomedical imaging data exchange challenging. Here we present Bitbox, a web-based application which provides a reliable yet straightforward service for securely exchanging medical imaging data. With Bitbox, both imaging and non-imaging data of any type can be transferred from any external, independent site into a centralized server. We illustrate the system with the COVID-19 Clinical Neuroscience Study (COVID-CNS), a UK-wide experimental medicine study investigating the neurological and neuropsychiatric effects of COVID-19 infection in hundreds of patients.
|
31
|
nnU-Net Deep Learning Method for Segmenting Parenchyma and Determining Liver Volume From Computed Tomography Images. ANNALS OF SURGERY OPEN 2022; 3. [PMID: 36275876 PMCID: PMC9585534 DOI: 10.1097/as9.0000000000000155] [Citation(s) in RCA: 4] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/26/2022] Open
Abstract
Background Recipient-donor matching in liver transplantation can require precise estimation of liver volume. Currently utilized demographic-based organ volume estimates are imprecise and nonspecific. Manual organ annotation from medical imaging is effective; however, this process is cumbersome, often taking an undesirable length of time to complete. Additionally, manual organ segmentation and volume measurement incur additional direct costs to payers for either a clinician or a trained technician to complete. Deep learning-based automatic image segmentation tools are well positioned to address this clinical need. Objectives To build a deep learning model that can accurately estimate liver volumes and create 3D organ renderings from computed tomography (CT) medical images. Methods We trained a nnU-Net deep learning model to identify liver borders in images of the abdominal cavity. We used 151 publicly available CT scans. For each CT scan, a board-certified radiologist annotated the liver margins (ground truth annotations). We split our image dataset into training, validation, and test sets. We trained our nnU-Net model on these data to identify liver borders in 3D voxels and integrated these to reconstruct a total organ volume estimate. Results The nnU-Net model accurately identified the border of the liver with a mean overlap accuracy of 97.5% compared with ground truth annotations. Our calculated volume estimates achieved a mean percent error of 1.92% ± 1.54% on the test set. Conclusions Precise volume estimation of livers from CT scans is accurate using a nnU-Net deep learning architecture. Appropriately deployed, a nnU-Net algorithm is accurate and quick, making it suitable for incorporation into the pretransplant clinical decision-making workflow.
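Integrating per-voxel predictions into a total organ volume, as described above, reduces to counting segmented voxels and multiplying by the physical volume of one voxel. A minimal sketch (the mask and spacing values are toy examples; function names are illustrative, not from the paper's code):

```python
def volume_cm3(mask, spacing_mm):
    """Organ volume from a binary 3D mask (nested lists) and voxel
    spacing (dx, dy, dz) in millimetres; 1 cm^3 = 1000 mm^3."""
    dx, dy, dz = spacing_mm
    voxels = sum(v for plane in mask for row in plane for v in row)
    return voxels * dx * dy * dz / 1000.0

# 2 x 2 x 2 toy mask with 5 foreground voxels, 1 mm isotropic spacing.
mask = [[[1, 1], [1, 0]], [[1, 1], [0, 0]]]
print(volume_cm3(mask, (1.0, 1.0, 1.0)))  # 5 mm^3 = 0.005 cm^3
```

In practice the spacing comes from the CT header, so segmentation quality (the 97.5% overlap above) is the dominant source of the reported volume error.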
|
32
|
A pediatric wrist trauma X-ray dataset (GRAZPEDWRI-DX) for machine learning. Sci Data 2022; 9:222. [PMID: 35595759 PMCID: PMC9122976 DOI: 10.1038/s41597-022-01328-z] [Citation(s) in RCA: 8] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/17/2021] [Accepted: 04/19/2022] [Indexed: 01/06/2023] Open
Abstract
Digital radiography is widely available and the standard modality in trauma imaging, often enabling diagnosis of pediatric wrist fractures. However, image interpretation requires time-consuming specialized training. Due to astonishing progress in computer vision algorithms, automated fracture detection has become a topic of research interest. This paper presents the GRAZPEDWRI-DX dataset containing annotated pediatric trauma wrist radiographs of 6,091 patients, treated at the Department for Pediatric Surgery of the University Hospital Graz between 2008 and 2018. A total of 10,643 studies (20,327 images) are made available, typically covering posteroanterior and lateral projections. The dataset is annotated with 74,459 image tags and features 67,771 labeled objects. We de-identified all radiographs and converted the DICOM pixel data to 16-bit grayscale PNG images. The filenames and the accompanying text files provide basic patient information (age, sex). Several pediatric radiologists annotated dataset images by placing lines, bounding boxes, or polygons to mark pathologies like fractures or periosteal reactions. They also tagged general image characteristics. This dataset is publicly available to encourage computer vision research. Measurement(s): wrist fracture • pronator quadratus sign • AO classification • soft tissue swelling • metal implant • osteopenia • plaster cast • bone lesion • subperiosteal bone formation. Technology Type(s): bone radiography.
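Converting DICOM pixel data to 16-bit grayscale PNG, as done for this dataset, involves mapping the stored intensity range onto the full 0-65535 range before writing the image. A dependency-free sketch of that rescaling step (the min-max window choice here is illustrative; the dataset's exact conversion parameters are not specified in the abstract):

```python
def rescale_to_uint16(pixels):
    """Linearly map raw detector values onto the full 16-bit range."""
    lo, hi = min(pixels), max(pixels)
    if hi == lo:
        return [0 for _ in pixels]
    return [round((p - lo) * 65535 / (hi - lo)) for p in pixels]

# Toy 12-bit radiograph values (0-4095 range) stretched to 16 bits.
raw = [0, 1024, 2048, 4095]
print(rescale_to_uint16(raw))  # [0, 16388, 32776, 65535]
```

Keeping 16 bits (rather than the 8 bits typical of consumer PNG viewers) preserves the radiograph's full dynamic range for downstream model training.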
|
33
|
Wagenpfeil S, Mc Kevitt P, Cheddad A, Hemmje M. Explainable Multimedia Feature Fusion for Medical Applications. J Imaging 2022; 8:jimaging8040104. [PMID: 35448231 PMCID: PMC9032787 DOI: 10.3390/jimaging8040104] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/15/2022] [Revised: 04/04/2022] [Accepted: 04/05/2022] [Indexed: 02/04/2023] Open
Abstract
Due to the exponential growth of medical information in the form of, e.g., text, images, Electrocardiograms (ECGs), X-rays, and multimedia, the management of a patient’s data has become a huge challenge. In particular, the extraction of features from various different formats and their representation in a homogeneous way are areas of interest in medical applications. Multimedia Information Retrieval (MMIR) frameworks, like the Generic Multimedia Analysis Framework (GMAF), can contribute to solving this problem, when adapted to special requirements and modalities of medical applications. In this paper, we demonstrate how typical multimedia processing techniques can be extended and adapted to medical applications and how these applications benefit from employing a Multimedia Feature Graph (MMFG) and specialized, efficient indexing structures in the form of Graph Codes. These Graph Codes are transformed to feature relevant Graph Codes by employing a modified Term Frequency Inverse Document Frequency (TFIDF) algorithm, which further supports value ranges and Boolean operations required in the medical context. On this basis, various metrics for the calculation of similarity, recommendations, and automated inferencing and reasoning can be applied supporting the field of diagnostics. Finally, the presentation of these new facilities in the form of explainability is introduced and demonstrated. Thus, in this paper, we show how Graph Codes contribute new querying options for diagnosis and how Explainable Graph Codes can help to readily understand medical multimedia formats.
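At the core of the feature-relevance step described above is classical TF-IDF weighting, which the authors modify to support value ranges and Boolean operations. The unmodified baseline, for reference (a standard formulation, not the paper's Graph Code variant):

```python
import math

def tfidf(term, doc, corpus):
    """tf(t, d) * idf(t): term frequency within a document times the log
    inverse document frequency over the corpus (documents as token lists)."""
    tf = doc.count(term) / len(doc)
    docs_with_term = sum(1 for d in corpus if term in d)
    idf = math.log(len(corpus) / docs_with_term)
    return tf * idf

corpus = [["ct", "scan", "liver"], ["mri", "scan"], ["mri", "brain"]]
print(round(tfidf("liver", corpus[0], corpus), 3))  # rare term: higher weight
print(round(tfidf("scan", corpus[0], corpus), 3))   # common term: lower weight
```

Terms that appear in every document get an IDF of zero, which is the property the paper exploits to down-weight features shared by all multimedia items.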
Affiliation(s)
- Stefan Wagenpfeil
- Faculty of Mathematics and Computer Science, University of Hagen, Universitätsstrasse 1, 58097 Hagen, Germany
- Corresponding author
- Paul Mc Kevitt
- Academy for International Science & Research (AISR), Derry BT48 7JL, UK
- Abbas Cheddad
- Blekinge Institute of Technology, 371 79 Karlskrona, Sweden
- Matthias Hemmje
- Faculty of Mathematics and Computer Science, University of Hagen, Universitätsstrasse 1, 58097 Hagen, Germany
|
34
|
Albishri AA, Shah SJH, Kang SS, Lee Y. AM-UNet: automated mini 3D end-to-end U-net based network for brain claustrum segmentation. MULTIMEDIA TOOLS AND APPLICATIONS 2022; 81:36171-36194. [PMID: 35035265 PMCID: PMC8742670 DOI: 10.1007/s11042-021-11568-7] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 12/17/2020] [Revised: 09/08/2021] [Accepted: 09/20/2021] [Indexed: 06/14/2023]
Abstract
Recent advances in deep learning (DL) have provided promising solutions to medical image segmentation. Among existing segmentation approaches, U-Net-based methods have been used widely. However, very few U-Net-based studies have been conducted on automatic segmentation of the human brain claustrum (CL). CL segmentation is challenging due to its thin, sheet-like structure, the heterogeneity of its image modalities and formats, imperfect labels, and data imbalance. We propose an automatic, optimized U-Net-based 3D segmentation model, called AM-UNet, designed as an end-to-end process combining pre- and post-processing techniques with a U-Net model for CL segmentation. It is a lightweight and scalable solution which has achieved state-of-the-art accuracy for automatic CL segmentation on 3D magnetic resonance images (MRI). On the T1/T2 combined MRI CL dataset, AM-UNet has obtained excellent results, including Dice, Intersection over Union (IoU), and Intraclass Correlation Coefficient (ICC) scores of 82%, 70%, and 90%, respectively. We have conducted a comparative evaluation of AM-UNet against pre-existing segmentation models on the MRI CL dataset. As a result, medical experts confirmed the superiority of the proposed AM-UNet model for automatic CL segmentation. The source code and model of the AM-UNet project are publicly available on GitHub: https://github.com/AhmedAlbishri/AM-UNET.
Affiliation(s)
- Ahmed Awad Albishri
- School of Computing and Engineering, University of Missouri-Kansas City, Kansas City, MO 64110, USA
- College of Computing and Informatics, Saudi Electronic University, Riyadh, Saudi Arabia
- Syed Jawad Hussain Shah
- School of Computing and Engineering, University of Missouri-Kansas City, Kansas City, MO 64110, USA
- Seung Suk Kang
- Department of Psychiatry Biomedical Sciences, School of Medicine, University of Missouri-Kansas City, Kansas City, MO 64110, USA
- Yugyung Lee
- School of Computing and Engineering, University of Missouri-Kansas City, Kansas City, MO 64110, USA
|
35
|
A General Preprocessing Pipeline for Deep Learning on Radiology Images: A COVID-19 Case Study. Progress in Artificial Intelligence 2022. [DOI: 10.1007/978-3-031-16474-3_20]
36
Aljabri M, AlAmir M, AlGhamdi M, Abdel-Mottaleb M, Collado-Mesa F. Towards a better understanding of annotation tools for medical imaging: a survey. Multimedia Tools and Applications 2022; 81:25877-25911. [PMID: 35350630; PMCID: PMC8948453; DOI: 10.1007/s11042-022-12100-1]
Abstract
Medical imaging refers to several technologies used to view the human body in order to diagnose, monitor, or treat medical conditions. Efficiently and correctly interpreting the images generated by these technologies, which include radiography, ultrasound, and magnetic resonance imaging, requires significant expertise. Deep learning and machine learning techniques provide different solutions for medical image interpretation, including those associated with detection and diagnosis. Despite the huge success of deep learning algorithms in image analysis, training algorithms to reach human-level performance in these tasks depends on the availability of large amounts of high-quality training data, including high-quality annotations to serve as ground truth. Different annotation tools have been developed to assist with the annotation process. In this survey, we present the currently available annotation tools for medical imaging, including descriptions of their graphical user interfaces (GUIs) and supporting instruments. The main contribution of this study is an extensive review of popular annotation tools and a demonstration of their successful use in annotating medical imaging datasets, to guide researchers in this area.
Affiliation(s)
- Manar Aljabri: Department of Computer Science, Umm Al-Qura University, Mecca, Saudi Arabia
- Manal AlAmir: Department of Computer Science, Umm Al-Qura University, Mecca, Saudi Arabia
- Manal AlGhamdi: Department of Computer Science, Umm Al-Qura University, Mecca, Saudi Arabia
- Fernando Collado-Mesa: Department of Radiology, University of Miami Miller School of Medicine, Florida, FL, USA

37
Gaonkar B, Cook K, Yoo B, Salehi B, Macyszyn L. Imaging Biomarker Development for Lower Back Pain Using Machine Learning: How Image Analysis Can Help Back Pain. Methods Mol Biol 2022; 2393:623-640. [PMID: 34837203; DOI: 10.1007/978-1-0716-1803-5_33]
Abstract
State-of-the-art diagnosis of radiculopathy relies on "highly subjective" radiologist interpretation of magnetic resonance imaging of the lower back. The treatment of lumbar radiculopathy and associated lower back pain currently lacks coherence due to the absence of reliable, objective diagnostic biomarkers. Using emerging machine learning techniques, the subjectivity of interpretation may be replaced by the objectivity of automated analysis. However, training computer vision methods requires a curated database of imaging data containing anatomical delineations vetted by a team of human experts. In this chapter, we outline our efforts to develop such a database of curated imaging data alongside the required delineations. We detail the processes involved in data acquisition and subsequent annotation, explain how the resulting database can be utilized to develop a machine learning-based objective imaging biomarker, and describe how we validate our machine learning-based anatomy delineation algorithms. Ultimately, we hope to allow validated machine learning models to generate objective biomarkers from imaging data for clinical use in diagnosing lumbar radiculopathy and guiding associated treatment plans.
Affiliation(s)
- Bilwaj Gaonkar: Department of Neurosurgery, University of California, Los Angeles, Los Angeles, CA, USA
- Kirstin Cook: Department of Neurosurgery, University of California, Los Angeles, Los Angeles, CA, USA
- Bryan Yoo: Department of Radiological Sciences, University of California, Los Angeles, Los Angeles, CA, USA
- Banafsheh Salehi: Department of Radiological Sciences, University of California, Los Angeles, Los Angeles, CA, USA
- Luke Macyszyn: Department of Neurosurgery, University of California, Los Angeles, Los Angeles, CA, USA

38
Ashraf GM, Chatzichronis S, Alexiou A, Kyriakopoulos N, Alghamdi BSA, Tayeb HO, Alghamdi JS, Khan W, Jalal MB, Atta HM. BrainFD: Measuring the Intracranial Brain Volume With Fractal Dimension. Front Aging Neurosci 2021; 13:765185. [PMID: 34899274; PMCID: PMC8662626; DOI: 10.3389/fnagi.2021.765185]
Abstract
Few methods and tools are available for quantitative measurement of brain volume, and they mainly target brain volume loss. However, several factors, such as clinical conditions, the time of day, the type of MRI machine, brain volume artifacts, pseudoatrophy, and variations among protocols, produce extreme variations that can lead to misdiagnosis of brain atrophy. Since white matter loss is a characteristic lesion of neurodegeneration, the main objective of this study was to create a computational tool for high-precision measurement of structural brain changes using the fractal dimension (FD). Validation of the BrainFD software is based on T1-weighted MRI images from the Open Access Series of Imaging Studies (OASIS)-3 brain database, in which each participant has multiple MRI scan sessions. The software is written in Python and Java; its core functionality is FD calculation using the box-counting algorithm, applied to the same brain regions across different subjects with high accuracy and resolution. It offers the ability to compare brain regions across subjects and across multiple sessions, creating different imaging profiles based on participants' Clinical Dementia Rating (CDR) scores. Two experiments were executed. The first was a cross-sectional study in which the data were separated into two CDR classes. In the second experiment, a model was trained on multiple heterogeneous data, and the FD calculation was evaluated for each participant of the OASIS-3 database across multiple sessions. The results suggest that FD variation efficiently describes the structural complexity of the brain and the related cognitive decline. Additionally, the FD efficiently discriminates the two classes, achieving 100% accuracy; this classification outperforms currently existing methods in terms of accuracy and dataset size. Therefore, FD calculation for identifying intracranial brain volume loss could be applied as a potential low-cost personalized imaging biomarker. Furthermore, the possibility of measuring different brain areas and subregions could give physicians and radiologists robust evidence of even the slightest variations in imaging data obtained from repeated measurements.
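The box-counting algorithm mentioned above estimates the fractal dimension as the slope of log N(s) versus log(1/s), where N(s) is the number of boxes of side s needed to cover the structure. A minimal sketch under the assumption that the structure is given as a set of 2-D pixel coordinates (function name and toy data are illustrative, not from BrainFD, which operates on 3-D MRI volumes):

```python
import math

def box_counting_dimension(points, sizes):
    """Estimate fractal dimension: the slope of log N(s) vs log(1/s),
    where N(s) counts the s-by-s grid boxes containing at least one point."""
    xs, ys = [], []
    for s in sizes:
        boxes = {(x // s, y // s) for x, y in points}  # occupied boxes at scale s
        xs.append(math.log(1.0 / s))
        ys.append(math.log(len(boxes)))
    # Ordinary least-squares slope of ys against xs.
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    return sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)

# Sanity check: a filled 64x64 square is a 2-D object, so the estimate is ~2.
square = {(x, y) for x in range(64) for y in range(64)}
fd = box_counting_dimension(square, sizes=[1, 2, 4, 8, 16, 32])  # ~2.0
```

For genuinely fractal structures (e.g., cortical folding patterns) the estimate falls strictly between the topological and embedding dimensions, which is what makes FD sensitive to structural complexity.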
Affiliation(s)
- Ghulam Md Ashraf: Pre-Clinical Research Unit, King Fahd Medical Research Center, King Abdulaziz University, Jeddah, Saudi Arabia; Department of Medical Laboratory Technology, Faculty of Applied Medical Sciences, King Abdulaziz University, Jeddah, Saudi Arabia
- Stylianos Chatzichronis: Department of Informatics and Telecommunications, National and Kapodistrian University of Athens, Athens, Greece; Department of Science and Engineering, Novel Global Community Educational Foundation, Hebersham, NSW, Australia
- Athanasios Alexiou: Department of Science and Engineering, Novel Global Community Educational Foundation, Hebersham, NSW, Australia; AFNP Med Austria, Vienna, Austria
- Badrah Saeed Ali Alghamdi: Pre-Clinical Research Unit, King Fahd Medical Research Center, King Abdulaziz University, Jeddah, Saudi Arabia; Department of Physiology, Faculty of Medicine, King Abdulaziz University, Jeddah, Saudi Arabia; The Neuroscience Research Unit, Faculty of Medicine, King Abdulaziz University, Jeddah, Saudi Arabia
- Haythum Osama Tayeb: The Neuroscience Research Unit, Faculty of Medicine, King Abdulaziz University, Jeddah, Saudi Arabia; Division of Neurology, Department of Internal Medicine, King Abdulaziz University, Jeddah, Saudi Arabia
- Jamaan Salem Alghamdi: Department of Diagnostic Radiology, Faculty of Applied Medical Sciences, King Abdulaziz University, Jeddah, Saudi Arabia
- Waseem Khan: Department of Radiology, King Abdulaziz University Hospital, King Abdulaziz University, Jeddah, Saudi Arabia
- Manal Ben Jalal: Department of Radiology, King Abdulaziz University Hospital, King Abdulaziz University, Jeddah, Saudi Arabia
- Hazem Mahmoud Atta: Department of Clinical Biochemistry, Faculty of Medicine, King Abdulaziz University, Rabigh, Saudi Arabia

39
Gao RZ, Wen R, Wen DY, Huang J, Qin H, Li X, Wang XR, He Y, Yang H. Radiomics Analysis Based on Ultrasound Images to Distinguish the Tumor Stage and Pathological Grade of Bladder Cancer. J Ultrasound Med 2021; 40:2685-2697. [PMID: 33615528; DOI: 10.1002/jum.15659]
Abstract
OBJECTIVES To identify the clinical value of ultrasound radiomic features in the preoperative prediction of tumor stage and pathological grade of bladder cancer (BLCA) patients. METHODS We retrospectively collected patients who had been diagnosed with BLCA by pathology. Ultrasound-based radiomic features were extracted from manually segmented regions of interest. Participants were randomly assigned to a training cohort and a validation cohort at a ratio of 7:3. Radiomic features were Z-score normalized and submitted to dimensional reduction analysis (including Spearman's correlation coefficient analysis, the random forest algorithm, and statistical testing) for core feature selection. Classifiers for tumor stage and pathological grade prediction were then constructed. Prediction performance was estimated by the area under the curve (AUC) of the receiver operating characteristic curve and was verified by the validation cohort. RESULTS A total of 5936 radiomic features were extracted from each of the ultrasound images obtained from 157 patients. The BLCA tumor stage and pathological grade prediction models were developed based on 30 and 35 features, respectively. Both models showed good predictive ability. For the tumor stage prediction model, the AUC was 0.94 in the training cohort and 0.84 in the validation cohort. For the pathological grade model, the AUCs obtained were 0.84 in the training cohort and 0.75 in the validation cohort. CONCLUSIONS The ultrasound-based radiomics models performed well in the preoperative tumor staging and pathological grading of BLCA. These findings should be applied clinically to optimize treatment and to assess prognoses for BLCA.
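Two of the steps described above, Z-score normalization of radiomic features and AUC estimation, can be sketched in plain Python. This is an illustrative sketch only; the feature values and classifier scores below are invented, and the AUC is computed via the Mann-Whitney pairwise formulation rather than by tracing the ROC curve:

```python
import math

def z_score(values):
    """Z-score normalize one feature: zero mean, unit (population) std."""
    n = len(values)
    mean = sum(values) / n
    std = math.sqrt(sum((v - mean) ** 2 for v in values) / n)
    return [(v - mean) / std for v in values]

def auc(pos_scores, neg_scores):
    """ROC AUC as the probability that a positive case is scored above a
    negative case (Mann-Whitney formulation; ties count one half)."""
    wins = sum(
        1.0 if p > q else 0.5 if p == q else 0.0
        for p in pos_scores for q in neg_scores
    )
    return wins / (len(pos_scores) * len(neg_scores))

# Illustrative classifier scores for a tumor-stage prediction task.
pos = [0.9, 0.8, 0.4]  # scores of truly high-stage cases
neg = [0.5, 0.3, 0.2]  # scores of truly low-stage cases
# 8 of the 9 positive/negative pairs are ranked correctly -> AUC = 8/9
```

The pairwise definition makes clear why AUC is threshold-free: it depends only on the ranking of cases, not on any particular decision cutoff.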
Affiliation(s)
- Rui-Zhi Gao: Department of Medical Ultrasound, The First Affiliated Hospital of Guangxi Medical University, Nanning, China
- Rong Wen: Department of Medical Ultrasound, The First Affiliated Hospital of Guangxi Medical University, Nanning, China
- Dong-Yue Wen: Department of Medical Ultrasound, The First Affiliated Hospital of Guangxi Medical University, Nanning, China
- Jing Huang: Department of Medical Ultrasound, The First Affiliated Hospital of Guangxi Medical University, Nanning, China
- Hui Qin: Department of Medical Ultrasound, The First Affiliated Hospital of Guangxi Medical University, Nanning, China
- Xin Li: GE Healthcare, Shanghai, China
- Yun He: Department of Medical Ultrasound, The First Affiliated Hospital of Guangxi Medical University, Nanning, China
- Hong Yang: Department of Medical Ultrasound, The First Affiliated Hospital of Guangxi Medical University, Nanning, China

40
Towards an Architecture of a Multi-purpose, User-Extendable Reference Human Brain Atlas. Neuroinformatics 2021; 20:405-426. [PMID: 34825350; PMCID: PMC9546954; DOI: 10.1007/s12021-021-09555-2]
Abstract
Human brain atlas development is predominantly research-oriented, and the use of atlases in clinical practice is limited. Here I introduce a new definition of a reference human brain atlas that serves education, research, and clinical applications, and is extendable by its users. Subsequently, an architecture for a multi-purpose, user-extendable reference human brain atlas is proposed and its implementation discussed. The human brain atlas is defined as a vehicle to gather, present, use, share, and discover knowledge about the human brain, with highly organized content, tools enabling a wide range of applications, a massive and heterogeneous knowledge database, and means for content and knowledge growth driven by its users. The proposed architecture determines the major components of the atlas, their mutual relationships, and their functional roles. It contains four functional units: core cerebral models; a knowledge database; research and clinical data input and conversion; and a toolkit (supporting processing, content extension, atlas individualization, navigation, exploration, and display), all united by a user interface. Each unit is described in terms of its function, component modules and sub-modules, data handling, and implementation aspects. This novel architecture supports brain knowledge gathering, presentation, use, sharing, and discovery, and is broadly applicable: in student- and educator-oriented neuroeducation for knowledge presentation and communication; in research for knowledge acquisition, aggregation, and discovery; and in clinical decision-making support for prevention, diagnosis, treatment, monitoring, and prediction. It establishes a backbone for designing and developing new multi-purpose, user-extendable brain atlas platforms and could serve as a potential standard across labs, hospitals, and medical schools.
41
Aiello M, Esposito G, Pagliari G, Borrelli P, Brancato V, Salvatore M. How does DICOM support big data management? Investigating its use in medical imaging community. Insights Imaging 2021; 12:164. [PMID: 34748101; PMCID: PMC8574146; DOI: 10.1186/s13244-021-01081-8]
Abstract
The diagnostic imaging field is experiencing considerable growth, accompanied by the production of massive amounts of data. A lack of standardization and privacy concerns are considered the main barriers to capitalizing on this big data. This work aims to verify whether the advanced features of the DICOM standard, beyond imaging data storage, are effectively used in research practice. We analyze this issue by investigating publicly shared medical imaging databases and assessing how fully the most common medical imaging software tools support DICOM. To this end, 100 public databases and ten medical imaging software tools were selected and examined using a systematic approach: the DICOM fields related to privacy, segmentation, and reporting were assessed in the selected databases, and the software tools were evaluated for their ability to read and write the same DICOM fields. Our analysis shows that fewer than a third of the databases examined use the DICOM format to record meaningful image-management information, and that the vast majority of the software tools do not allow the management, reading, and writing of some or all of these DICOM fields. Surprisingly, of the chest computed tomography datasets shared to address the COVID-19 emergency, only two out of 12 were released in DICOM format. Our work shows that DICOM can potentially fully support big data management; however, further efforts are still needed from the scientific and technological community to promote the use of the existing standard, encouraging data sharing and interoperability for concrete development of big data analytics.
Affiliation(s)
- Marco Aiello: IRCCS SDN, Via Emanuele Gianturco 113, 80143, Naples, Italy

42
Spanakis EG, Sfakianakis S, Bonomi S, Ciccotelli C, Magalini S, Sakkalis V. Emerging and Established Trends to Support Secure Health Information Exchange. Front Digit Health 2021; 3:636082. [PMID: 34713107; PMCID: PMC8521812; DOI: 10.3389/fdgth.2021.636082]
Abstract
This work provides information, guidelines, established practices and standards, and an extensive evaluation of new and promising technologies for implementing a secure information-sharing platform for health-related data. We focus strictly on technical aspects, and specifically on the sharing of health information, studying innovative techniques for secure information sharing within the health-care domain; we describe our solution and methodically evaluate the integration of blockchain within our implementation. To do so, we analyze health information sharing within the concept of the PANACEA project, which facilitates the design, implementation, and deployment of the relevant platform. The research presented in this paper provides evidence and argumentation toward advanced and novel implementation strategies for a state-of-the-art information-sharing environment; a description of high-level requirements for transferring data between different health-care organizations or across borders; technologies to support secure interconnectivity and trust between the information technology (IT) systems participating in a data-sharing "community"; standards, guidelines, and interoperability specifications for implementing a common understanding and integration in the sharing of clinical information; and the use of cloud computing and, prospectively, more advanced technologies such as blockchain. The technologies described and the possible implementation approaches are presented in the design of an innovative, secure information-sharing platform for the health-care domain.
Affiliation(s)
- Emmanouil G Spanakis: Computational Biomedicine Laboratory, Institute of Computer Science, Foundation for Research and Technology-Hellas, Heraklion, Greece
- Stelios Sfakianakis: Computational Biomedicine Laboratory, Institute of Computer Science, Foundation for Research and Technology-Hellas, Heraklion, Greece
- Silvia Bonomi: Department of Computer, Control, and Management Engineering Antonio Ruberti, Università degli Studi di Roma La Sapienza, Rome, Italy
- Claudio Ciccotelli: Department of Computer, Control, and Management Engineering Antonio Ruberti, Università degli Studi di Roma La Sapienza, Rome, Italy
- Sabina Magalini: Emergency and Trauma Surgery Unit, Fondazione Policlinico Universitario Agostino Gemelli, Rome, Italy
- Vangelis Sakkalis: Computational Biomedicine Laboratory, Institute of Computer Science, Foundation for Research and Technology-Hellas, Heraklion, Greece

43
Saratxaga CL, Moya I, Picón A, Acosta M, Moreno-Fernandez-de-Leceta A, Garrote E, Bereciartua-Perez A. MRI Deep Learning-Based Solution for Alzheimer's Disease Prediction. J Pers Med 2021; 11:902. [PMID: 34575679; PMCID: PMC8466762; DOI: 10.3390/jpm11090902]
Abstract
BACKGROUND Alzheimer's disease is a degenerative dementing disorder that starts with mild memory impairment and progresses to a total loss of mental and physical faculties. The sooner the diagnosis is made, the better for the patient, as preventive actions and treatment can be started early. Although tests such as the Mini-Mental State Examination are usually used for early identification, diagnosis relies on magnetic resonance imaging (MRI) brain analysis. METHODS Public initiatives such as the OASIS (Open Access Series of Imaging Studies) collection provide neuroimaging datasets openly available for research purposes. In this work, a new method based on deep learning and image processing techniques for MRI-based Alzheimer's diagnosis is proposed and compared with previous works in the literature. RESULTS Our method achieves a balanced accuracy (BAC) of up to 0.93 for image-based automated diagnosis of the disease, and a BAC of 0.88 for establishing the disease stage (healthy tissue, very mild stage, and severe stage). CONCLUSIONS The results obtained surpass the state-of-the-art proposals using the OASIS collection, demonstrating that deep learning-based strategies are an effective tool for building a robust solution for MRI-based Alzheimer's-assisted diagnosis.
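Balanced accuracy (BAC), reported above, is the mean of the per-class recalls, which keeps a majority class (here, healthy scans) from dominating the score. A minimal sketch with invented toy labels (not data from the paper):

```python
def balanced_accuracy(y_true, y_pred):
    """Mean per-class recall; robust to class imbalance, unlike plain accuracy."""
    classes = sorted(set(y_true))
    recalls = []
    for c in classes:
        idx = [i for i, t in enumerate(y_true) if t == c]  # samples of class c
        recalls.append(sum(1 for i in idx if y_pred[i] == c) / len(idx))
    return sum(recalls) / len(recalls)

# Imbalanced toy labels: 4 "healthy" scans, 2 "severe" scans.
y_true = ["h", "h", "h", "h", "s", "s"]
y_pred = ["h", "h", "h", "s", "s", "h"]
# recall(h) = 3/4, recall(s) = 1/2 -> BAC = 0.625
```

Plain accuracy on the same toy labels would be 4/6 ≈ 0.667, slightly flattering the classifier because the majority class is easier; BAC removes that bias.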
Affiliation(s)
- Cristina L. Saratxaga: TECNALIA, Basque Research and Technology Alliance (BRTA), Parque Tecnológico de Bizkaia, C/Geldo, Edificio 700, 48160 Derio, Spain
- Iratxe Moya: Instituto Ibermática de Innovación, Unidad de Inteligencia Artificial, Avenida de los Huetos, Edificio Azucarera, 01010 Vitoria, Spain
- Artzai Picón: TECNALIA, Basque Research and Technology Alliance (BRTA), Parque Tecnológico de Bizkaia, C/Geldo, Edificio 700, 48160 Derio, Spain
- Marina Acosta: Instituto Ibermática de Innovación, Unidad de Inteligencia Artificial, Avenida de los Huetos, Edificio Azucarera, 01010 Vitoria, Spain
- Aitor Moreno-Fernandez-de-Leceta: Instituto Ibermática de Innovación, Unidad de Inteligencia Artificial, Avenida de los Huetos, Edificio Azucarera, 01010 Vitoria, Spain
- Estibaliz Garrote: TECNALIA, Basque Research and Technology Alliance (BRTA), Parque Tecnológico de Bizkaia, C/Geldo, Edificio 700, 48160 Derio, Spain; Department of Cell Biology and Histology, Faculty of Medicine and Dentistry, University of the Basque Country, 48940 Leioa, Spain
- Arantza Bereciartua-Perez: TECNALIA, Basque Research and Technology Alliance (BRTA), Parque Tecnológico de Bizkaia, C/Geldo, Edificio 700, 48160 Derio, Spain

44
Chen WF, Ou HY, Pan CT, Liao CC, Huang W, Lin HY, Cheng YF, Wei CP. Recognition Rate Advancement and Data Error Improvement of Pathology Cutting with H-DenseUNet for Hepatocellular Carcinoma Image. Diagnostics (Basel) 2021; 11:1599. [PMID: 34573941; PMCID: PMC8470617; DOI: 10.3390/diagnostics11091599]
Abstract
Because previous studies have rarely investigated recognition-rate discrepancies and pathology data error across different databases, the purpose of this study is to investigate the improvement in recognition rate achieved by deep learning-based liver lesion segmentation when hospital data are incorporated. The recognition model used in this study is H-DenseUNet, applied to segmentation of the liver and lesions; a mixed 2D/3D Hybrid-DenseUNet is used to reduce recognition time and system memory requirements. Differences in recognition results were determined by comparing training on the standard LiTS competition dataset with training on the same set after mixing in data from an additional 30 patients. Comparing the actual pathology data with the pathology data derived from analysis of the recognized images imported from Kaohsiung Chang Gung Memorial Hospital yielded an average error of 9.6%; after mixing the LiTS database with hospital data for training, the average error rate of the recognition output was 1%. For segmentation, the Dice coefficient was 0.52 after training for 50 epochs on the standard LiTS database alone, and increased to 0.61 after adding the 30 hospital cases to the training. Using 3D Slicer and ITK-SNAP software, a 3D image of the lesion and liver segmentation can be produced. It is hoped that this method will stimulate further research beyond public standard databases, including studies of the applicability of hospital data, and will improve the generality of such databases.
Affiliation(s)
- Wen-Fan Chen: Institute of Medical Science and Technology, National Sun Yat-sen University, Kaohsiung 80424, Taiwan
- Hsin-You Ou: Liver Transplantation Program and Departments of Diagnostic Radiology and Surgery, Kaohsiung Chang Gung Memorial Hospital, Chang Gung University College of Medicine, Kaohsiung 833401, Taiwan
- Cheng-Tang Pan: Department of Mechanical and Electro-Mechanical Engineering, National Sun Yat-sen University, Kaohsiung 80424, Taiwan
- Chien-Chang Liao: Liver Transplantation Program and Departments of Diagnostic Radiology and Surgery, Kaohsiung Chang Gung Memorial Hospital, Chang Gung University College of Medicine, Kaohsiung 833401, Taiwan
- Wen Huang: Department of Mechanical and Electro-Mechanical Engineering, National Sun Yat-sen University, Kaohsiung 80424, Taiwan
- Han-Yu Lin: Department of Mechanical and Electro-Mechanical Engineering, National Sun Yat-sen University, Kaohsiung 80424, Taiwan
- Yu-Fan Cheng (corresponding author): Liver Transplantation Program and Departments of Diagnostic Radiology and Surgery, Kaohsiung Chang Gung Memorial Hospital, Chang Gung University College of Medicine, Kaohsiung 833401, Taiwan
- Chia-Po Wei (corresponding author): Department of Electrical Engineering, National Sun Yat-sen University, Kaohsiung 80424, Taiwan

45
Pérez-García F, Sparks R, Ourselin S. TorchIO: A Python library for efficient loading, preprocessing, augmentation and patch-based sampling of medical images in deep learning. Computer Methods and Programs in Biomedicine 2021; 208:106236. [PMID: 34311413; PMCID: PMC8542803; DOI: 10.1016/j.cmpb.2021.106236]
Abstract
BACKGROUND AND OBJECTIVE Processing of medical images such as MRI or CT presents different challenges compared to the RGB images typically used in computer vision: these include a lack of labels for large datasets, high computational costs, and the need for metadata to describe the physical properties of voxels. Data augmentation is used to artificially increase the size of the training datasets, and training with image subvolumes or patches decreases the need for computational power. Spatial metadata needs to be carefully taken into account to ensure correct alignment and orientation of volumes. METHODS We present TorchIO, an open-source Python library for efficient loading, preprocessing, augmentation, and patch-based sampling of medical images for deep learning. TorchIO follows the style of PyTorch and integrates standard medical image processing libraries to efficiently process images during the training of neural networks. TorchIO transforms can be easily composed, reproduced, traced, and extended. Most transforms can be inverted, making the library suitable for test-time augmentation and estimation of aleatoric uncertainty in the context of segmentation. We provide multiple generic preprocessing and augmentation operations as well as simulation of MRI-specific artifacts. RESULTS Source code, comprehensive tutorials, and extensive documentation for TorchIO can be found at http://torchio.rtfd.io/. The package can be installed from the Python Package Index (PyPI) by running pip install torchio. It includes a command-line interface that allows users to apply transforms to image files without using Python. Additionally, we provide a graphical user interface within a TorchIO extension in 3D Slicer to visualize the effects of transforms. CONCLUSION TorchIO was developed to help researchers standardize medical image processing pipelines and allow them to focus on their deep learning experiments. It encourages good open-science practices, as it supports experiment reproducibility and is version-controlled so that the software can be cited precisely. Owing to its modularity, the library is compatible with other frameworks for deep learning with medical images.
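The composable, invertible transform design described above can be illustrated schematically. This is a conceptual plain-Python sketch, not the actual TorchIO API: the `Compose`, `Scale`, and `Shift` classes here are invented for illustration, and an "image" is just a flat list of intensities. The key idea is that a pipeline's inverse applies each transform's inverse in reverse order.

```python
class Compose:
    """Apply transforms in order; invert by applying inverses in reverse order."""
    def __init__(self, transforms):
        self.transforms = transforms

    def __call__(self, image):
        for t in self.transforms:
            image = t.apply(image)
        return image

    def inverse(self, image):
        for t in reversed(self.transforms):
            image = t.invert(image)
        return image

class Scale:
    """Multiply intensities by a factor; inverse divides by it."""
    def __init__(self, factor):
        self.factor = factor
    def apply(self, image):
        return [v * self.factor for v in image]
    def invert(self, image):
        return [v / self.factor for v in image]

class Shift:
    """Add an offset to intensities; inverse subtracts it."""
    def __init__(self, offset):
        self.offset = offset
    def apply(self, image):
        return [v + self.offset for v in image]
    def invert(self, image):
        return [v - self.offset for v in image]

# Round-tripping an "image" recovers the original values, which is what
# makes test-time augmentation and uncertainty estimation practical.
pipeline = Compose([Scale(2.0), Shift(10.0)])
augmented = pipeline([1.0, 2.0, 3.0])   # [12.0, 14.0, 16.0]
restored = pipeline.inverse(augmented)  # [1.0, 2.0, 3.0]
```

In a segmentation setting, the same inverse is applied to the model's prediction on the augmented input, so predictions from many random augmentations can be mapped back into a common space and aggregated.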
Affiliation(s)
- Fernando Pérez-García: Department of Medical Physics and Biomedical Engineering, University College London, UK; Wellcome / EPSRC Centre for Interventional and Surgical Sciences (WEISS), University College London, UK; School of Biomedical Engineering & Imaging Sciences (BMEIS), King's College London, UK
- Rachel Sparks: School of Biomedical Engineering & Imaging Sciences (BMEIS), King's College London, UK
- Sébastien Ourselin: School of Biomedical Engineering & Imaging Sciences (BMEIS), King's College London, UK
|
46
|
Pérez-García F, Sparks R, Ourselin S. TorchIO: A Python library for efficient loading, preprocessing, augmentation and patch-based sampling of medical images in deep learning. COMPUTER METHODS AND PROGRAMS IN BIOMEDICINE 2021; 208:106236. [PMID: 34311413 PMCID: PMC8542803 DOI: 10.1016/j.cmpb.2021.106236] [Citation(s) in RCA: 140] [Impact Index Per Article: 46.7] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 12/02/2020] [Accepted: 06/09/2021] [Indexed: 05/15/2023]
Abstract
BACKGROUND AND OBJECTIVE Processing of medical images such as MRI or CT presents different challenges compared to the RGB images typically used in computer vision: a lack of labels for large datasets, high computational costs, and the need for metadata to describe the physical properties of voxels. Data augmentation is used to artificially increase the size of training datasets, and training with image subvolumes or patches decreases the need for computational power. Spatial metadata must be taken into account carefully to ensure correct alignment and orientation of volumes. METHODS We present TorchIO, an open-source Python library for efficient loading, preprocessing, augmentation and patch-based sampling of medical images for deep learning. TorchIO follows the style of PyTorch and integrates standard medical image processing libraries to efficiently process images during the training of neural networks. TorchIO transforms can be easily composed, reproduced, traced and extended. Most transforms can be inverted, making the library suitable for test-time augmentation and estimation of aleatoric uncertainty in the context of segmentation. We provide multiple generic preprocessing and augmentation operations as well as simulation of MRI-specific artifacts. RESULTS Source code, comprehensive tutorials and extensive documentation for TorchIO can be found at http://torchio.rtfd.io/. The package can be installed from the Python Package Index (PyPI) by running pip install torchio. It includes a command-line interface that allows users to apply transforms to image files without using Python. Additionally, we provide a graphical user interface within a TorchIO extension in 3D Slicer to visualize the effects of transforms. CONCLUSION TorchIO was developed to help researchers standardize medical image processing pipelines and allow them to focus on their deep learning experiments. It encourages good open-science practices, as it supports experiment reproducibility and is version-controlled so that the software can be cited precisely. Due to its modularity, the library is compatible with other frameworks for deep learning with medical images.
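The abstract's core idea of training on image subvolumes rather than whole scans can be sketched in a few lines. The following is a minimal NumPy illustration of random patch sampling, not TorchIO's actual API (its samplers additionally handle label maps, voxel spacing and patch queues):

```python
import numpy as np

def sample_patches(volume, patch_size, n_patches, rng=None):
    """Randomly sample subvolumes (patches) from a 3D image array."""
    rng = np.random.default_rng(rng)
    patches = []
    for _ in range(n_patches):
        # Pick a valid corner so the whole patch fits inside the volume
        i, j, k = (rng.integers(0, s - p + 1)
                   for s, p in zip(volume.shape, patch_size))
        pi, pj, pk = patch_size
        patches.append(volume[i:i + pi, j:j + pj, k:k + pk])
    return np.stack(patches)

# A synthetic 64³ "scan" stands in for a loaded MRI volume
volume = np.arange(64 ** 3, dtype=np.float32).reshape(64, 64, 64)
patches = sample_patches(volume, (16, 16, 16), n_patches=4, rng=0)
print(patches.shape)  # (4, 16, 16, 16)
```

Training on such 16³ patches instead of the full 64³ volume is what keeps the memory footprint of 3D networks manageable.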
Affiliation(s)
- Fernando Pérez-García
- Department of Medical Physics and Biomedical Engineering, University College London, UK; Wellcome / EPSRC Centre for Interventional and Surgical Sciences (WEISS), University College London, UK; School of Biomedical Engineering & Imaging Sciences (BMEIS), King's College London, UK.
- Rachel Sparks
- School of Biomedical Engineering & Imaging Sciences (BMEIS), King's College London, UK
- Sébastien Ourselin
- School of Biomedical Engineering & Imaging Sciences (BMEIS), King's College London, UK
|
47
|
Classification of Alzheimer’s Disease Patients Using Texture Analysis and Machine Learning. Appl Syst Innov 2021. [DOI: 10.3390/asi4030049] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.0] [Indexed: 01/17/2023]
Abstract
Alzheimer’s disease (AD) has been studied extensively to understand the nature of this complex disease and to address the many research gaps concerning prognosis and diagnosis. Several studies based on structural and textural characteristics have already been conducted to aid in identifying AD patients. In this work, an image processing methodology was used to extract textural information and classify patients into two groups: AD and Cognitively Normal (CN). The Gray Level Co-occurrence Matrix (GLCM) was employed since it is a strong foundation for texture classification, and various textural parameters derived from the GLCM helped characterize a Magnetic Resonance Imaging (MRI) region of interest (ROI). Several commonly used image classification algorithms were evaluated. MATLAB was used to derive 20 GLCM-based features from the MRI dataset, of which 8 were determined by the data analysis to be significant. Ensemble (90.2%), Decision Tree (88.5%), and Support Vector Machine (SVM) (87.2%) were the best performing classifiers. Classification accuracy decreased as the distance (d) between pixels in the GLCM increased; the best result was obtained for a GLCM with d = 1 and direction (d, d, −d), combined with age and structural data.
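The GLCM construction the abstract relies on is simple to state: for a chosen pixel offset, count how often each pair of gray levels co-occurs, then derive texture statistics from the normalized counts. A minimal NumPy sketch for a single horizontal offset of distance d, with the contrast feature (the study used MATLAB and 20 features; this only illustrates the mechanics):

```python
import numpy as np

def glcm(image, d=1, levels=4):
    """Co-occurrence matrix for the horizontal offset (0, d):
    m[i, j] is the probability that level i has level j d pixels to its right."""
    m = np.zeros((levels, levels))
    for i in range(levels):
        for j in range(levels):
            m[i, j] = np.sum((image[:, :-d] == i) & (image[:, d:] == j))
    return m / m.sum()  # normalize counts to joint probabilities

def contrast(m):
    """GLCM contrast: sum over (i, j) of (i - j)^2 * p(i, j)."""
    i, j = np.indices(m.shape)
    return float(np.sum((i - j) ** 2 * m))

# Tiny 4-level test image
img = np.array([[0, 0, 1, 1],
                [0, 0, 1, 1],
                [0, 2, 2, 2],
                [2, 2, 3, 3]])
m = glcm(img, d=1)
print(round(contrast(m), 3))  # 0.583
```

Larger d pairs pixels that are farther apart and hence less correlated, which is consistent with the reported drop in classification accuracy as d grows.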
|
48
|
Diagnosis of Dementia Using a Generative Deep Convolution Neural Network. Arab J Sci Eng 2021. [DOI: 10.1007/s13369-021-05982-0] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Indexed: 10/20/2022]
|
49
|
Turk F, Luy M, Barisci N. Renal Segmentation Using an Improved U-Net3D Model. J Med Imaging Health Inform 2021. [DOI: 10.1166/jmihi.2021.3773] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Indexed: 11/23/2022]
Abstract
Hundreds of thousands of people worldwide are diagnosed with kidney cancer each year, and the disease is more common in developed societies. Approximately 30% of patients with kidney cancer are diagnosed at the metastatic stage. Segmentation is an important step in computer-aided treatment planning for kidney diseases, and given its clinical importance, studies focused on accurate segmentation deserve particular attention. This paper presents an improved version of the existing U-Net3D model, intended to assist physicians with kidney segmentation. The improved U-Net3D model outperformed the U-Net, U-Net+ResNet, and U-Net++ models, achieving 97.89% segmentation accuracy.
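The abstract reports segmentation performance as a single percentage; in segmentation work such agreement between a predicted mask and the ground truth is most often quantified with an overlap score such as the Dice coefficient. A minimal sketch of that metric (illustrative only, not the authors' evaluation code):

```python
import numpy as np

def dice_coefficient(pred, target, eps=1e-7):
    """Dice overlap between two binary masks: 2|A∩B| / (|A| + |B|)."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    # eps guards against division by zero when both masks are empty
    return (2.0 * intersection + eps) / (pred.sum() + target.sum() + eps)

# Toy 2D masks standing in for a kidney segmentation and its ground truth
pred = np.array([[1, 1, 0],
                 [0, 1, 0]])
target = np.array([[1, 0, 0],
                   [0, 1, 1]])
print(round(dice_coefficient(pred, target), 3))  # 0.667
```

A Dice score of 1.0 means perfect overlap; comparing models on the same metric is what makes claims like "better than U-Net++" checkable.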
Affiliation(s)
- Fuat Turk
- Department of Computer Engineering, Kirikkale University, Kirikkale 71450, Turkey
- Murat Luy
- Department of Electrical & Electronics Engineering, Kirikkale University, Kirikkale 71450, Turkey
- Necaattin Barisci
- Department of Computer Engineering, Faculty of Technology, Gazi University 06560, Turkey
|
50
|
Beers A, Brown J, Chang K, Hoebel K, Patel J, Ly KI, Tolaney SM, Brastianos P, Rosen B, Gerstner ER, Kalpathy-Cramer J. DeepNeuro: an open-source deep learning toolbox for neuroimaging. Neuroinformatics 2021; 19:127-140. [PMID: 32578020 DOI: 10.1007/s12021-020-09477-5] [Citation(s) in RCA: 16] [Impact Index Per Article: 5.3] [Indexed: 01/16/2023]
Abstract
Translating deep learning research from theory into clinical practice poses unique challenges, particularly in neuroimaging. In this paper, we present DeepNeuro, a Python-based deep learning framework that puts deep neural networks for neuroimaging into practical use with a minimum of implementation friction. We show how this framework can be used to design deep learning pipelines that load and preprocess data, design and train various neural network architectures, and evaluate and visualize the results of trained networks on evaluation data. We present a way of reproducibly packaging the data pre- and postprocessing functions common in the neuroimaging community, which facilitates consistent performance of networks across users, institutions, and scanners. We show how deep learning pipelines created with DeepNeuro can be concisely packaged into shareable Docker and Singularity containers with user-friendly command-line interfaces.
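Packaging a pipeline behind a command-line interface, as the abstract describes for its containers, generally follows one pattern: parse a few paths and options, then run the pipeline. A sketch of that pattern using Python's standard argparse module, with hypothetical flag names (this is not DeepNeuro's actual CLI):

```python
import argparse

def build_parser():
    # Hypothetical flags illustrating a packaged-pipeline entry point
    parser = argparse.ArgumentParser(
        description="Run a packaged neuroimaging segmentation pipeline")
    parser.add_argument("input", help="path to the input volume (e.g. NIfTI)")
    parser.add_argument("output", help="path for the predicted segmentation")
    parser.add_argument("--model", default="default",
                        help="name of the trained model to load")
    return parser

# Parsing a sample invocation, as a container entry point would
args = build_parser().parse_args(["scan.nii.gz", "seg.nii.gz", "--model", "glioma"])
print(args.model)  # glioma
```

Keeping the interface this small is what lets the same container run identically across users, institutions, and scanners.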
Affiliation(s)
- Andrew Beers
- Athinoula A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital, Charlestown, MA, USA
- James Brown
- Athinoula A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital, Charlestown, MA, USA
- Ken Chang
- Athinoula A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital, Charlestown, MA, USA
- Katharina Hoebel
- Athinoula A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital, Charlestown, MA, USA
- Jay Patel
- Athinoula A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital, Charlestown, MA, USA
- K Ina Ly
- Athinoula A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital, Charlestown, MA, USA; Division of Neuro-Oncology, Massachusetts General Hospital, Harvard Medical School, Boston, MA, USA
- Sara M Tolaney
- Department of Medical Oncology, Dana-Farber Cancer Institute, Boston, MA, USA
- Priscilla Brastianos
- Division of Neuro-Oncology, Massachusetts General Hospital, Harvard Medical School, Boston, MA, USA
- Bruce Rosen
- Athinoula A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital, Charlestown, MA, USA
- Elizabeth R Gerstner
- Athinoula A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital, Charlestown, MA, USA; Division of Neuro-Oncology, Massachusetts General Hospital, Harvard Medical School, Boston, MA, USA
- Jayashree Kalpathy-Cramer
- Athinoula A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital, Charlestown, MA, USA
|