1. Fukuda N, Konda S, Umehara J, Hirashima M. Efficient musculoskeletal annotation using free-form deformation. Sci Rep 2024;14:16077. PMID: 38992241; PMCID: PMC11239816; DOI: 10.1038/s41598-024-67125-3.
Abstract
Traditionally, constructing training datasets for automatic muscle segmentation from medical images involved skilled operators, leading to high labor costs and limited scalability. To address this issue, we developed a tool that enables efficient annotation by non-experts and assessed its effectiveness for training an automatic segmentation network. Our system allows users to deform a template three-dimensional (3D) anatomical model to fit a target magnetic resonance (MR) image using free-form deformation with independent control points for the axial, sagittal, and coronal directions. This method simplifies the annotation process by allowing non-experts to intuitively adjust the model, enabling simultaneous annotation of all muscles in the template. We evaluated the quality of the tool-assisted segmentation performed by non-experts, which achieved a Dice coefficient greater than 0.75 compared to expert segmentation, without significant errors such as mislabeling adjacent muscles or omitting musculature. An automatic segmentation network trained with datasets created using this tool demonstrated performance comparable to or superior to that of networks trained with expert-generated datasets. This innovative tool significantly reduces the time and labor costs associated with dataset creation for automatic muscle segmentation, potentially revolutionizing medical image annotation and accelerating the development of deep learning-based segmentation networks in various clinical applications.
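The free-form deformation idea above, with independent control points per direction, can be sketched minimally as a per-axis piecewise-linear displacement of template coordinates. This is a hedged illustration only: the control-point positions and displacements are hypothetical, and the actual tool deforms a full 3D anatomical model rather than isolated points.

```python
import numpy as np

def deform_axis(coords, ctrl_pos, ctrl_disp):
    """Displace 1D coordinates via linearly interpolated control-point offsets."""
    return coords + np.interp(coords, ctrl_pos, ctrl_disp)

# Template surface points (z, y, x) and independent control points per axis.
points = np.array([[10.0, 20.0, 30.0], [40.0, 50.0, 60.0]])
ctrl = np.linspace(0, 100, 5)                  # control-point positions (hypothetical)
disp = {0: np.array([0, 2, 0, -1, 0.0]),       # axial displacements
        1: np.zeros(5),                        # sagittal: unchanged
        2: np.array([0, 0, 5, 0, 0.0])}        # coronal displacements

warped = points.copy()
for ax, d in disp.items():
    warped[:, ax] = deform_axis(points[:, ax], ctrl, d)
print(warped)
```

Because each axis has its own control points, an annotator can stretch the template along one anatomical direction without disturbing the other two.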
Affiliation(s)
- Norio Fukuda
- Center for Information and Neural Networks (CiNet), Advanced ICT Research Institute, National Institute of Information and Communications Technology (NICT), 1-4 Yamadaoka, Suita, Osaka, 565-0871, Japan
- Shoji Konda
- Center for Information and Neural Networks (CiNet), Advanced ICT Research Institute, National Institute of Information and Communications Technology (NICT), 1-4 Yamadaoka, Suita, Osaka, 565-0871, Japan
- Department of Health and Sport Sciences, Graduate School of Medicine, Osaka University, 1-17 Machikaneyama-Cho, Toyonaka, Osaka, 560-0043, Japan
- Jun Umehara
- Center for Information and Neural Networks (CiNet), Advanced ICT Research Institute, National Institute of Information and Communications Technology (NICT), 1-4 Yamadaoka, Suita, Osaka, 565-0871, Japan
- Faculty of Rehabilitation, Kansai Medical University, 18-89 Uyama-Higashi, Hirakata, Osaka, 573-1136, Japan
- Masaya Hirashima
- Center for Information and Neural Networks (CiNet), Advanced ICT Research Institute, National Institute of Information and Communications Technology (NICT), 1-4 Yamadaoka, Suita, Osaka, 565-0871, Japan.
- Graduate School of Frontier Biosciences, Osaka University, 1-3 Yamadaoka, Suita, Osaka, 565-0871, Japan.
2. Wang J, Yin L. CNN-based glioma detection in MRI: A deep learning approach. Technol Health Care 2024:THC240158. PMID: 39031408; DOI: 10.3233/thc-240158.
Abstract
BACKGROUND More than a million people are affected by brain tumors each year; high-grade gliomas (HGGs) and low-grade gliomas (LGGs) present serious diagnostic and treatment hurdles, resulting in shortened life expectancies. Glioma segmentation remains a significant difficulty in clinical settings, despite improvements in Magnetic Resonance Imaging (MRI) and diagnostic tools. Convolutional neural networks (CNNs) have seen recent advancements that offer promise for increasing segmentation accuracy, addressing the pressing need for improved diagnostic and therapeutic approaches. OBJECTIVE The study intended to develop an automated glioma segmentation algorithm using a CNN to accurately identify tumor components in MRI images. The goal was to match the accuracy of experienced radiologists and commercial tools, thereby improving diagnostic precision and quantification. METHODS 285 MRI scans of HGGs and LGGs were analyzed in the study. T1-weighted sequences acquired before and after contrast agent administration were used for segmentation, along with T2-weighted sequences (with and without Fluid-Attenuated Inversion Recovery [FLAIR]). Segmentation performance was assessed with a U-Net network, renowned for its efficacy in medical image segmentation. Dice coefficients were computed for the tumor core with contrast enhancement, the entire tumor, and the tumor nucleus without contrast enhancement. RESULTS The U-Net network produced Dice values of 0.7331 for the tumor core with contrast enhancement, 0.8624 for the total tumor, and 0.7267 for the tumor nucleus without contrast enhancement. The results align with previous studies, demonstrating segmentation accuracy on par with professional radiologists and commercially available segmentation tools. CONCLUSION The study developed a CNN-based automated segmentation system for gliomas, achieving high accuracy in recognizing glioma components in MRI images. The results confirm the ability of CNNs to enhance the accuracy of brain tumor diagnoses, suggesting a promising avenue for future research in medical imaging and diagnostics. This advancement is expected to improve diagnostic processes for clinicians and patients by providing more precise and quantitative results.
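The Dice values reported in this and several other entries measure volume overlap between a predicted mask and a reference mask (1.0 = perfect agreement). A minimal sketch of the metric, using toy arrays rather than study data:

```python
import numpy as np

def dice(pred, ref):
    """Dice similarity coefficient between two binary masks."""
    pred, ref = np.asarray(pred, bool), np.asarray(ref, bool)
    denom = pred.sum() + ref.sum()
    return 2.0 * np.logical_and(pred, ref).sum() / denom if denom else 1.0

pred = np.array([[0, 1, 1], [0, 1, 0]])
ref  = np.array([[0, 1, 0], [1, 1, 0]])
print(dice(pred, ref))  # 2*|intersection| / (|pred| + |ref|) = 2*2 / (3+3)
```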
3. Chen W, Lim LJR, Lim RQR, Yi Z, Huang J, He J, Yang G, Liu B. Artificial intelligence powered advancements in upper extremity joint MRI: A review. Heliyon 2024;10:e28731. PMID: 38596104; PMCID: PMC11002577; DOI: 10.1016/j.heliyon.2024.e28731.
Abstract
Magnetic resonance imaging (MRI) is an indispensable medical imaging examination technique in musculoskeletal medicine. Modern MRI techniques achieve superior high-quality multiplanar imaging of soft tissue and skeletal pathologies without the harmful effects of ionizing radiation. Some current limitations of MRI include long acquisition times, artifacts, and noise. In addition, it is often challenging to distinguish abutting or closely applied soft tissue structures with similar signal characteristics. In the past decade, Artificial Intelligence (AI) has been widely employed in musculoskeletal MRI to help reduce image acquisition time and improve image quality. Apart from being able to reduce medical costs, AI can assist clinicians in diagnosing diseases more accurately. This will effectively help formulate appropriate treatment plans and ultimately improve patient care. This review article summarizes the current research on and applications of AI in musculoskeletal MRI, particularly the advancement of deep learning (DL) in identifying the structure and lesions of upper extremity joints in MRI images.
Affiliation(s)
- Wei Chen
- Department of Hand Surgery, Beijing Jishuitan Hospital, Capital Medical University, Beijing, China
- Lincoln Jian Rong Lim
- Department of Medical Imaging, Western Health, Footscray Hospital, Victoria, Australia
- Department of Surgery, The University of Melbourne, Victoria, Australia
- Rebecca Qian Ru Lim
- Department of Hand & Reconstructive Microsurgery, Singapore General Hospital, Singapore
- Zhe Yi
- Department of Hand Surgery, Beijing Jishuitan Hospital, Capital Medical University, Beijing, China
- Jiaxing Huang
- Institute of Automation, Chinese Academy of Sciences, Beijing, China
- School of Artificial Intelligence, University of Chinese Academy of Sciences, Beijing, China
- Jia He
- Institute of Automation, Chinese Academy of Sciences, Beijing, China
- School of Artificial Intelligence, University of Chinese Academy of Sciences, Beijing, China
- Ge Yang
- Institute of Automation, Chinese Academy of Sciences, Beijing, China
- School of Artificial Intelligence, University of Chinese Academy of Sciences, Beijing, China
- Bo Liu
- Department of Hand Surgery, Beijing Jishuitan Hospital, Capital Medical University, Beijing, China
4. Conze PH, Andrade-Miranda G, Le Meur Y, Cornec-Le Gall E, Rousseau F. Dual-task kidney MR segmentation with transformers in autosomal-dominant polycystic kidney disease. Comput Med Imaging Graph 2024;113:102349. PMID: 38330635; DOI: 10.1016/j.compmedimag.2024.102349.
Abstract
Autosomal-dominant polycystic kidney disease is a prevalent genetic disorder characterized by the development of renal cysts, leading to kidney enlargement and renal failure. Accurate measurement of total kidney volume through polycystic kidney segmentation is crucial to assess disease severity, predict progression, and evaluate treatment effects. Traditional manual segmentation suffers from intra- and inter-expert variability, prompting the exploration of automated approaches. In recent years, convolutional neural networks have been employed for polycystic kidney segmentation from magnetic resonance images. However, the use of Transformer-based models, which have shown remarkable performance in a wide range of computer vision and medical image analysis tasks, remains unexplored in this area. With their self-attention mechanism, Transformers excel at capturing global context information, which is crucial for accurate organ delineation. In this paper, we evaluate and compare various convolution-based, Transformer-based, and hybrid convolutional/Transformer networks for polycystic kidney segmentation. Additionally, we propose a dual-task learning scheme, in which a common feature extractor is followed by per-kidney decoders, towards better generalizability and efficiency. We extensively evaluate various architectures and learning schemes on a heterogeneous magnetic resonance imaging dataset collected from 112 patients with polycystic kidney disease. Our results highlight the effectiveness of Transformer-based models for polycystic kidney segmentation and the relevance of exploiting dual-task learning to improve segmentation accuracy and mitigate data scarcity issues. A promising ability to accurately delineate polycystic kidneys is shown especially in the presence of heterogeneous cyst distributions and adjacent cyst-containing organs. This work contributes to the advancement of reliable delineation methods in nephrology, paving the way for a broad spectrum of clinical applications.
Affiliation(s)
- Pierre-Henri Conze
- IMT Atlantique, LaTIM UMR 1101, Technopôle Brest-Iroise, 29238 Brest, France; LaTIM UMR 1101, Inserm, IBRBS, 22 rue Camille Desmoulins, 29200 Brest, France.
- Yannick Le Meur
- Department of Nephrology, University Hospital of Brest, bd Tanguy Prigent, 29200 Brest, France; LBAI UMR 1227, Inserm, 9 rue Félix le Dantec, 29200 Brest, France
- Emilie Cornec-Le Gall
- Department of Nephrology, University Hospital of Brest, bd Tanguy Prigent, 29200 Brest, France; UMR 1078, Inserm, IBRBS, 22 rue Camille Desmoulins, 29238 Brest, France
- François Rousseau
- IMT Atlantique, LaTIM UMR 1101, Technopôle Brest-Iroise, 29238 Brest, France; LaTIM UMR 1101, Inserm, IBRBS, 22 rue Camille Desmoulins, 29200 Brest, France
5. Sadikine A, Badic B, Tasu JP, Noblet V, Ballet P, Visvikis D, Conze PH. Improving abdominal image segmentation with overcomplete shape priors. Comput Med Imaging Graph 2024;113:102356. PMID: 38340573; DOI: 10.1016/j.compmedimag.2024.102356.
Abstract
The extraction of abdominal structures using deep learning has recently attracted widespread interest in medical image analysis. Automatic abdominal organ and vessel segmentation is highly desirable to guide clinicians in computer-assisted diagnosis, therapy, or surgical planning. Despite a good ability to extract large organs, the capacity of U-Net-inspired architectures to automatically delineate smaller structures remains a major issue, especially given the increase in receptive field size as we go deeper into the network. To deal with various abdominal structure sizes while exploiting efficient geometric constraints, we present a novel approach that integrates shape priors from a semi-overcomplete convolutional auto-encoder (S-OCAE) embedding into deep segmentation networks. Compared to standard convolutional auto-encoders (CAE), it exploits an over-complete branch that projects data onto higher dimensions to better characterize anatomical structures with a small spatial extent. Experiments on abdominal organ and vessel delineation performed on various publicly available datasets highlight the effectiveness of our method compared to state-of-the-art approaches, including U-Net trained without and with shape priors from a traditional CAE. Exploiting a semi-overcomplete convolutional auto-encoder embedding as shape priors improves the ability of deep segmentation models to provide realistic and accurate abdominal structure contours.
Affiliation(s)
- Amine Sadikine
- LaTIM UMR 1101, Inserm, Brest, 29200, France; University of Western Brittany, Brest, 29200, France
- Bogdan Badic
- LaTIM UMR 1101, Inserm, Brest, 29200, France; University Hospital of Brest, Brest, 29200, France
- Jean-Pierre Tasu
- LaTIM UMR 1101, Inserm, Brest, 29200, France; University Hospital of Poitiers, Poitiers, 86000, France
- Pascal Ballet
- LaTIM UMR 1101, Inserm, Brest, 29200, France; University of Western Brittany, Brest, 29200, France
- Pierre-Henri Conze
- LaTIM UMR 1101, Inserm, Brest, 29200, France; IMT Atlantique, Brest, 29200, France.
6. Yang S, Liang Y, Wu S, Sun P, Chen Z. SADSNet: A robust 3D synchronous segmentation network for liver and liver tumors based on spatial attention mechanism and deep supervision. Journal of X-Ray Science and Technology 2024;32:707-723. PMID: 38552134; DOI: 10.3233/xst-230312.
Abstract
Highlights
- Introduce a data augmentation strategy to expand the different morphological data required during the training and learning phase, improving the algorithm's ability to learn features from CT images with complex and diverse tumor morphology.
- Design attention mechanisms for the encoding and decoding paths to extract fine pixel-level features, improve feature extraction capabilities, and achieve efficient spatial-channel feature fusion.
- Use a deep supervision layer to correct and decode the final image data, providing high accuracy.
- The effectiveness of this method has been validated on the LITS, 3DIRCADb, and SLIVER datasets.
BACKGROUND Accurately extracting liver and liver tumors from medical images is an important step in lesion localization and diagnosis, surgical planning, and postoperative monitoring. However, the limited number of radiation therapists and the great number of images make this work time-consuming. OBJECTIVE This study designs a spatial attention deep supervised network (SADSNet) for simultaneous automatic segmentation of the liver and tumors. METHOD First, self-designed spatial attention modules are introduced at each layer of the encoder and decoder to extract image features at different scales and resolutions, helping the model better capture liver tumors and fine structures. The designed spatial attention module is implemented through two gate signals related to the liver and tumors, as well as by varying the convolutional kernel sizes. Second, deep supervision is added behind three layers of the decoder to assist the backbone network in feature learning and improve gradient propagation, enhancing robustness. RESULTS The method was tested on the LITS, 3DIRCADb, and SLIVER datasets. For the liver, it obtained Dice similarity coefficients of 97.03%, 96.11%, and 97.40%; surface Dice of 81.98%, 82.53%, and 86.29%; 95% Hausdorff distances of 8.96 mm, 8.26 mm, and 3.79 mm; and average surface distances of 1.54 mm, 1.19 mm, and 0.81 mm. It also achieved precise tumor segmentation, with Dice scores of 87.81% and 87.50%, surface Dice of 89.63% and 84.26%, 95% Hausdorff distances of 12.96 mm and 16.55 mm, and average surface distances of 1.11 mm and 3.04 mm on LITS and 3DIRCADb, respectively. CONCLUSION The experimental results show that the proposed method is effective and superior to some other methods, and can therefore provide technical support for liver and liver tumor segmentation in clinical practice.
Affiliation(s)
- Sijing Yang
- School of Life and Environmental Science, Guilin University of Electronic Technology, Guilin, China
- Yongbo Liang
- School of Life and Environmental Science, Guilin University of Electronic Technology, Guilin, China
- Shang Wu
- School of Life and Environmental Science, Guilin University of Electronic Technology, Guilin, China
- Peng Sun
- School of Electronic Engineering and Automation, Guilin University of Electronic Technology, Guilin, China
- Zhencheng Chen
- School of Life and Environmental Science, Guilin University of Electronic Technology, Guilin, China
- School of Electronic Engineering and Automation, Guilin University of Electronic Technology, Guilin, China
- Guangxi Colleges and Universities Key Laboratory of Biomedical Sensors and Intelligent Instruments, Guilin, China
- Guangxi Engineering Technology Research Center of Human Physiological Information Noninvasive Detection, Guilin, China
7. Decaux N, Conze PH, Ropars J, He X, Sheehan FT, Pons C, Salem DB, Brochard S, Rousseau F. Semi-automatic muscle segmentation in MR images using deep registration-based label propagation. Pattern Recognition 2023;140:109529. PMID: 37383565; PMCID: PMC10299801; DOI: 10.1016/j.patcog.2023.109529.
Abstract
Fully automated approaches based on convolutional neural networks have shown promising performance on muscle segmentation from magnetic resonance (MR) images, but still rely on an extensive amount of training data to achieve valuable results. Muscle segmentation for pediatric and rare disease cohorts is therefore still often done manually. Producing dense delineations over 3D volumes remains a time-consuming and tedious task, with significant redundancy between successive slices. In this work, we propose a segmentation method relying on registration-based label propagation, which provides 3D muscle delineations from a limited number of annotated 2D slices. Based on an unsupervised deep registration scheme, our approach ensures the preservation of anatomical structures by penalizing deformation compositions that do not produce consistent segmentation from one annotated slice to another. Evaluation is performed on MR data from lower leg and shoulder joints. Results demonstrate that the proposed few-shot multi-label segmentation model outperforms state-of-the-art techniques.
Affiliation(s)
- Nathan Decaux
- LaTIM UMR 1101, Inserm, Brest, France
- IMT Atlantique, Brest, France
- Juliette Ropars
- LaTIM UMR 1101, Inserm, Brest, France
- University Hospital of Brest, Brest, France
- Christelle Pons
- LaTIM UMR 1101, Inserm, Brest, France
- University Hospital of Brest, Brest, France
- Fondation ILDYS, Brest, France
- Douraied Ben Salem
- LaTIM UMR 1101, Inserm, Brest, France
- University Hospital of Brest, Brest, France
- Sylvain Brochard
- LaTIM UMR 1101, Inserm, Brest, France
- University Hospital of Brest, Brest, France
8. Sui G, Zhang Z, Liu S, Chen S, Liu X. Pulmonary nodules segmentation based on domain adaptation. Phys Med Biol 2023;68:155015. PMID: 37406634; DOI: 10.1088/1361-6560/ace498.
Abstract
With the development of deep learning, methods based on transfer learning have advanced medical image segmentation. However, domain shift and the complex background information of medical images limit further improvement of segmentation accuracy. Domain adaptation can compensate for sample shortage by learning important information from a similar source dataset. Therefore, a segmentation method based on adversarial domain adaptation with background masks (ADAB) is proposed in this paper. Firstly, two ADAB networks are built for source and target data segmentation, respectively. Next, to extract the foreground features that serve as input to the discriminators, background masks are generated with a region-growing algorithm. Then, so that the parameters of the target network can be updated without conflict between the discriminator's drive to distinguish the domains and the adversarial objective of reducing domain shift, a gradient reversal layer is embedded in the ADAB model for the target data. Finally, an enhanced boundary loss is derived to make the target network sensitive to the edges of the regions to be segmented. The performance of the proposed method is evaluated on the segmentation of pulmonary nodules in computed tomography images. Experimental results show that the proposed approach holds promise for medical image processing.
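The gradient reversal layer mentioned above is the identity on the forward pass and flips the sign of the incoming gradient (scaled by a factor lambda) on the backward pass, so the feature extractor is trained to confuse the domain discriminator. A framework-free sketch of that idea only, not the ADAB implementation:

```python
# Minimal gradient reversal layer (GRL) sketch: forward is identity,
# backward negates and scales the gradient. The class name and lam value
# are illustrative, not taken from the paper.
class GradientReversal:
    def __init__(self, lam=1.0):
        self.lam = lam

    def forward(self, x):
        return x  # features pass through unchanged

    def backward(self, grad_output):
        return [-self.lam * g for g in grad_output]  # reversed gradient

grl = GradientReversal(lam=0.5)
feats = [0.2, -1.3, 4.0]
assert grl.forward(feats) == feats
print(grl.backward([1.0, -2.0, 0.5]))  # [-0.5, 1.0, -0.25]
```

In an autograd framework the same effect is usually obtained with a custom backward function, letting a single optimizer step push the feature extractor toward domain-invariant features while the discriminator still trains normally.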
Affiliation(s)
- Guozheng Sui
- College of Automation and Electronic Engineering, Qingdao University of Science and Technology, People's Republic of China
- Zaixian Zhang
- Radiology Department, The Affiliated Hospital of Qingdao University, People's Republic of China
- Shunli Liu
- Radiology Department, The Affiliated Hospital of Qingdao University, People's Republic of China
- Shuang Chen
- College of Automation and Electronic Engineering, Qingdao University of Science and Technology, People's Republic of China
- Xuefeng Liu
- College of Automation and Electronic Engineering, Qingdao University of Science and Technology, People's Republic of China
9. Messaoudi H, Belaid A, Ben Salem D, Conze PH. Cross-dimensional transfer learning in medical image segmentation with deep learning. Med Image Anal 2023;88:102868. PMID: 37384952; DOI: 10.1016/j.media.2023.102868.
Abstract
Over the last decade, convolutional neural networks have emerged and advanced the state-of-the-art in various image analysis and computer vision applications. The performance of 2D image classification networks is constantly improving, as they are trained on databases of millions of natural images. Conversely, in the field of medical image analysis, progress has also been remarkable but is held back by the relative lack of annotated data and by the inherent constraints of the acquisition process. These limitations are even more pronounced given the volumetric nature of medical imaging data. In this paper, we introduce an efficient way to transfer the performance of a 2D classification network trained on natural images to 2D and 3D uni- and multi-modal medical image segmentation applications. To this end, we designed novel architectures based on two key principles: weight transfer, by embedding a 2D pre-trained encoder into a higher-dimensional U-Net, and dimensional transfer, by expanding a 2D segmentation network into a higher-dimensional one. The proposed networks were tested on benchmarks comprising different modalities: MR, CT, and ultrasound images. Our 2D network ranked first on the CAMUS challenge dedicated to echocardiographic data segmentation and surpassed the state-of-the-art. Regarding 2D/3D MR and CT abdominal images from the CHAOS challenge, our approach largely outperformed the other 2D-based methods described in the challenge paper on Dice, RAVD, ASSD, and MSSD scores and ranked third on the online evaluation platform. Our 3D network applied to the BraTS 2022 competition also achieved promising results, reaching an average Dice score of 91.69% (91.22%) for the whole tumor, 83.23% (84.77%) for the tumor core, and 81.75% (83.88%) for enhancing tumor using the approach based on weight (dimensional) transfer. Experimental and qualitative results illustrate the effectiveness of our methods for multi-dimensional medical image segmentation.
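One common form of 2D-to-3D weight transfer replicates each pre-trained 2D convolution kernel along the new depth axis and rescales it so a depth-constant input produces the same response as in 2D. This is a hedged sketch of that general "kernel inflation" idea; the paper's exact transfer scheme is not reproduced here.

```python
import numpy as np

def inflate_2d_kernel(w2d, depth):
    """Inflate (out_ch, in_ch, kh, kw) -> (out_ch, in_ch, depth, kh, kw).

    Replicates the 2D kernel along the depth axis and divides by the depth,
    so that summing the 3D kernel over depth recovers the original 2D kernel.
    """
    w3d = np.repeat(w2d[:, :, None, :, :], depth, axis=2)
    return w3d / depth

w2d = np.random.rand(8, 3, 3, 3)   # toy pre-trained 2D weights
w3d = inflate_2d_kernel(w2d, depth=3)
print(w3d.shape)  # (8, 3, 3, 3, 3)
```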
Affiliation(s)
- Hicham Messaoudi
- Laboratory of Medical Informatics (LIMED), Faculty of Technology, University of Bejaia, 06000 Bejaia, Algeria.
- Ahror Belaid
- Laboratory of Medical Informatics (LIMED), Faculty of Exact Sciences, University of Bejaia, 06000 Bejaia, Algeria; Data Science & Applications Research Unit - CERIST, 06000, Bejaia, Algeria
- Douraied Ben Salem
- Laboratory of Medical Information Processing (LaTIM) UMR 1101, Inserm, 29200, Brest, France; Neuroradiology Department, University Hospital of Brest, 29200, Brest, France
- Pierre-Henri Conze
- Laboratory of Medical Information Processing (LaTIM) UMR 1101, Inserm, 29200, Brest, France; IMT Atlantique, 29200, Brest, France
10. Bonaldi L, Pretto A, Pirri C, Uccheddu F, Fontanella CG, Stecco C. Deep Learning-Based Medical Images Segmentation of Musculoskeletal Anatomical Structures: A Survey of Bottlenecks and Strategies. Bioengineering (Basel) 2023;10(2):137. PMID: 36829631; PMCID: PMC9952222; DOI: 10.3390/bioengineering10020137.
Abstract
By leveraging the recent development of artificial intelligence algorithms, several medical sectors have benefited from using automatic segmentation tools from bioimaging to segment anatomical structures. Segmentation of the musculoskeletal system is key for studying alterations in anatomical tissue and supporting medical interventions. The clinical use of such tools requires an understanding of the proper method for interpreting data and evaluating their performance. The current systematic review aims to present the common bottlenecks for musculoskeletal structure analysis (e.g., small sample size, data inhomogeneity) and the related strategies utilized by different authors. A search was performed using the PubMed database with the following keywords: deep learning, musculoskeletal system, segmentation. A total of 140 articles published up until February 2022 were obtained and analyzed according to the PRISMA framework in terms of anatomical structures, bioimaging techniques, pre/post-processing operations, training/validation/testing subset creation, network architecture, loss functions, performance indicators, and so on. Several common trends emerged from this survey; however, the different methods need to be compared and discussed based on each specific case study (anatomical region, medical imaging acquisition setting, study population, etc.). These findings can be used to guide clinicians (as end users) to better understand the potential benefits and limitations of these tools.
Affiliation(s)
- Lorenza Bonaldi
- Department of Civil, Environmental and Architectural Engineering, University of Padova, Via F. Marzolo 9, 35131 Padova, Italy
- Andrea Pretto
- Department of Industrial Engineering, University of Padova, Via Venezia 1, 35121 Padova, Italy
- Carmelo Pirri
- Department of Neuroscience, University of Padova, Via A. Gabelli 65, 35121 Padova, Italy
- Francesca Uccheddu
- Department of Industrial Engineering, University of Padova, Via Venezia 1, 35121 Padova, Italy
- Centre for Mechanics of Biological Materials (CMBM), University of Padova, Via F. Marzolo 9, 35131 Padova, Italy
- Chiara Giulia Fontanella
- Department of Industrial Engineering, University of Padova, Via Venezia 1, 35121 Padova, Italy
- Centre for Mechanics of Biological Materials (CMBM), University of Padova, Via F. Marzolo 9, 35131 Padova, Italy
- Correspondence: ; Tel.: +39-049-8276754
- Carla Stecco
- Department of Neuroscience, University of Padova, Via A. Gabelli 65, 35121 Padova, Italy
- Centre for Mechanics of Biological Materials (CMBM), University of Padova, Via F. Marzolo 9, 35131 Padova, Italy
11. Boutillon A, Borotikar B, Burdin V, Conze PH. Multi-structure bone segmentation in pediatric MR images with combined regularization from shape priors and adversarial network. Artif Intell Med 2022;132:102364. DOI: 10.1016/j.artmed.2022.102364.
12. Yao J, Chepelev L, Nisha Y, Sathiadoss P, Rybicki FJ, Sheikh AM. Evaluation of a deep learning method for the automated detection of supraspinatus tears on MRI. Skeletal Radiol 2022;51:1765-1775. PMID: 35190850; DOI: 10.1007/s00256-022-04008-6.
Abstract
OBJECTIVE To evaluate whether deep learning is a feasible approach for automated detection of supraspinatus tears on MRI. MATERIALS AND METHODS A total of 200 shoulder MRI studies performed between 2015 and 2019 were retrospectively obtained from our institutional database using balanced random sampling of studies containing a full-thickness tear, partial-thickness tear, or intact supraspinatus tendon. A 3-stage pipeline was developed, comprising a slice selection network based on a pre-trained residual neural network (ResNet); a segmentation network based on an encoder-decoder network (U-Net); and a custom multi-input convolutional neural network (CNN) classifier. Binary reference labels were created following review of radiologist reports and images by a radiology fellow, with consensus validation by two musculoskeletal radiologists. Twenty percent of the data was reserved as a holdout test set, with the remaining 80% used for training and optimization under a fivefold cross-validation strategy. Classification and segmentation accuracy were evaluated using area under the receiver operating characteristic curve (AUROC) and the Dice similarity coefficient, respectively. Baseline characteristics of correctly versus incorrectly classified cases were compared using independent-sample t-tests and chi-squared tests. RESULTS Test sensitivity and specificity of the classifier at the optimal Youden's index were both 85.0% (95% CI: 62.1-96.8%). AUROC was 0.943 (95% CI: 0.820-0.991). Dice segmentation accuracy was 0.814 (95% CI: 0.805-0.826). There was no significant difference in AUROC between 1.5 T and 3.0 T studies. Sub-analysis showed superior sensitivity for the full-thickness (100%) versus partial-thickness (72.5%) subgroup. DATA CONCLUSION Deep learning is a feasible approach to detect supraspinatus tears on MRI.
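The operating point "at the optimal Youden's index" maximizes J = sensitivity + specificity − 1 over candidate thresholds. A small sketch with toy scores and labels (not study data):

```python
def youden_threshold(scores, labels):
    """Return (threshold, J) maximizing Youden's J = sens + spec - 1."""
    best_j, best_t = -1.0, None
    for t in sorted(set(scores)):           # candidate thresholds
        tp = sum(s >= t and y for s, y in zip(scores, labels))
        fn = sum(s < t and y for s, y in zip(scores, labels))
        tn = sum(s < t and not y for s, y in zip(scores, labels))
        fp = sum(s >= t and not y for s, y in zip(scores, labels))
        sens = tp / (tp + fn) if tp + fn else 0.0
        spec = tn / (tn + fp) if tn + fp else 0.0
        if sens + spec - 1 > best_j:
            best_j, best_t = sens + spec - 1, t
    return best_t, best_j

scores = [0.1, 0.4, 0.35, 0.8, 0.7, 0.2]   # toy classifier outputs
labels = [0, 0, 1, 1, 1, 0]                # toy ground truth (1 = tear)
print(youden_threshold(scores, labels))
```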
Collapse
Affiliation(s)
- Jason Yao
- Department of Radiology, University of Ottawa Faculty of Medicine, 501 Smyth Road, Box 232, Ottawa, ON, K1H 8L6, Canada.
- Leonid Chepelev
- Department of Radiology, University of Ottawa Faculty of Medicine, 501 Smyth Road, Box 232, Ottawa, ON, K1H 8L6, Canada
- Yashmin Nisha
- Department of Radiology, University of Ottawa Faculty of Medicine, 501 Smyth Road, Box 232, Ottawa, ON, K1H 8L6, Canada
- Paul Sathiadoss
- Department of Radiology, University of Ottawa Faculty of Medicine, 501 Smyth Road, Box 232, Ottawa, ON, K1H 8L6, Canada
- Frank J Rybicki
- Department of Radiology, University of Cincinnati College of Medicine, 234 Goodman Street, Box 670761, Cincinnati, OH, 45267-0761, USA
- Adnan M Sheikh
- Department of Radiology, The University of British Columbia Faculty of Medicine, 2775 Laurel Street, Vancouver, BC, V5Z 1M9, Canada
Collapse
|
13
|
Boutillon A, Conze PH, Pons C, Burdin V, Borotikar B. Generalizable multi-task, multi-domain deep segmentation of sparse pediatric imaging datasets via multi-scale contrastive regularization and multi-joint anatomical priors. Med Image Anal 2022; 81:102556. [DOI: 10.1016/j.media.2022.102556] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/07/2022] [Revised: 06/13/2022] [Accepted: 07/22/2022] [Indexed: 11/28/2022]
|
14
|
Godoy IRB, Silva RP, Rodrigues TC, Skaf AY, de Castro Pochini A, Yamada AF. Automatic MRI segmentation of pectoralis major muscle using deep learning. Sci Rep 2022; 12:5300. [PMID: 35351924 PMCID: PMC8964724 DOI: 10.1038/s41598-022-09280-z] [Citation(s) in RCA: 7] [Impact Index Per Article: 3.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/20/2021] [Accepted: 03/21/2022] [Indexed: 11/30/2022] Open
Abstract
To develop and validate a deep convolutional neural network (CNN) method capable of selecting the slice with the greatest pectoralis major cross-sectional area (PMM-CSA) and automatically segmenting the pectoralis major muscle (PMM) on axial magnetic resonance imaging (MRI). We hypothesized that a CNN technique could accurately perform both tasks compared with manual reference standards. Our method is based on two steps: (A) a segmentation model and (B) PMM-CSA selection. In step A, we manually segmented the PMM on 134 axial T1-weighted PM MRIs. The segmentation model was trained from scratch (MONAI/PyTorch SegResNet, mini-batch size 4, 1000 epochs, dropout 0.20, Adam optimizer, learning rate 0.0005, cosine annealing, softmax). A mean Dice score determined segmentation performance on 8 internal axial T1-weighted PM MRIs. In step B, we used the OpenCV (version 4.5.1, https://opencv.org) framework to calculate the PMM-CSA of the model predictions and the ground truth. We then selected the top 3 slices with the largest cross-sectional area and compared them with the ground truth; if one of the selected slices was among the ground-truth top 3, we considered the selection a success. Top-3 accuracy evaluated this method on 8 internal axial T1-weighted PM MRI test cases. The segmentation model (step A) produced an accurate pectoralis muscle segmentation with a mean Dice score of 0.94 ± 0.01. Step B showed top-3 accuracy > 98% for selecting an appropriate axial image with the greatest PMM-CSA. Our results show overall accurate selection of the PMM-CSA and automated PM muscle segmentation using a combination of deep CNN algorithms.
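The top-3 selection criterion described in step B can be sketched directly from binary segmentation masks: compute a per-slice area, keep the three largest, and count a success when the predicted and ground-truth top-3 sets intersect. A hypothetical NumPy sketch (the toy 6-slice volume is invented for illustration):

```python
import numpy as np

def top3_slices(masks, pixel_area_mm2=1.0):
    """Indices of the 3 axial slices with the largest segmented cross-sectional area."""
    areas = masks.reshape(masks.shape[0], -1).sum(axis=1) * pixel_area_mm2
    return set(np.argsort(areas)[-3:].tolist())

def top3_success(pred_masks, gt_masks):
    """Success if any predicted top-3 slice also appears in the ground-truth top 3."""
    return bool(top3_slices(pred_masks) & top3_slices(gt_masks))

# Toy 6-slice stack of 4x4 binary masks with known per-slice areas
gt = np.zeros((6, 4, 4), dtype=int)
for i, n in enumerate([1, 2, 8, 10, 9, 3]):  # pixels set per slice
    gt[i].flat[:n] = 1
ok = top3_success(gt.copy(), gt)  # a perfect prediction trivially succeeds
```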
Collapse
Affiliation(s)
- Ivan Rodrigues Barros Godoy
- Department of Radiology, Hospital Do Coração (HCor) and Teleimagem, São Paulo, SP, Brazil; Department of Diagnostic Imaging, Universidade Federal de São Paulo - UNIFESP, Rua Napoleão de Barros, 800, São Paulo, SP, 04024-002, Brazil.
- Abdalla Youssef Skaf
- Department of Radiology, Hospital Do Coração (HCor) and Teleimagem, São Paulo, SP, Brazil; ALTA Diagnostic Center (DASA Group), São Paulo, Brazil
- Alberto de Castro Pochini
- Department of Orthopedics and Traumatology, Universidade Federal de São Paulo (UNIFESP), São Paulo, SP, Brazil
- André Fukunishi Yamada
- Department of Radiology, Hospital Do Coração (HCor) and Teleimagem, São Paulo, SP, Brazil; Department of Diagnostic Imaging, Universidade Federal de São Paulo - UNIFESP, Rua Napoleão de Barros, 800, São Paulo, SP, 04024-002, Brazil; ALTA Diagnostic Center (DASA Group), São Paulo, Brazil
Collapse
|
15
|
Ge Y, Zhang Q, Sun Y, Shen Y, Wang X. Grayscale medical image segmentation method based on 2D&3D object detection with deep learning. BMC Med Imaging 2022; 22:33. [PMID: 35220942 PMCID: PMC8883636 DOI: 10.1186/s12880-022-00760-2] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/26/2021] [Accepted: 02/22/2022] [Indexed: 12/22/2022] Open
Abstract
Background Grayscale medical image segmentation is a key step in clinical computer-aided diagnosis. Model-driven and data-driven image segmentation methods are widely used for their low computational complexity and accurate feature extraction, respectively. However, model-driven methods such as thresholding often suffer from wrong segmentations and noisy regions, because different grayscale images have distinct intensity distributions, so pre-processing is always required. Data-driven deep learning methods such as encoder-decoder networks, meanwhile, tend to have complex architectures that require large amounts of training data. Methods Combining thresholding and deep learning, this paper presents a novel method using 2D and 3D object detection technologies. First, regions of interest containing the object to be segmented are determined with a fine-tuned 2D object detection network. Then, pixels in the cropped images are converted into a point cloud according to their positions and grayscale values. Finally, a 3D object detection network is applied to obtain bounding boxes enclosing the target points; the bottoms and tops of these boxes represent the thresholding values for segmentation. After projection back to the 2D images, these target points compose the segmented object. Results Three groups of grayscale medical images were used to evaluate the proposed segmentation method. We obtained IoU (DSC) scores of 0.92 (0.96), 0.88 (0.94), and 0.94 (0.94) for segmentation accuracy on the different datasets, respectively. Compared with five state-of-the-art models that perform well clinically, our method achieves higher scores and better performance. Conclusions These segmentation results demonstrate that the proposed method based on 2D and 3D object detection with deep learning is workable and promising for the segmentation of grayscale medical images.
Collapse
|
16
|
Calivà F, Namiri NK, Dubreuil M, Pedoia V, Ozhinsky E, Majumdar S. Studying osteoarthritis with artificial intelligence applied to magnetic resonance imaging. Nat Rev Rheumatol 2022; 18:112-121. [PMID: 34848883 DOI: 10.1038/s41584-021-00719-7] [Citation(s) in RCA: 18] [Impact Index Per Article: 9.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Accepted: 11/03/2021] [Indexed: 02/08/2023]
Abstract
The 3D nature and soft-tissue contrast of MRI make it an invaluable tool for osteoarthritis research, by facilitating the elucidation of disease pathogenesis and progression. The recent increase in the use of MRI has been stimulated by major advances that are due to considerable investment in research, particularly related to artificial intelligence (AI). These AI-related advances are revolutionizing the use of MRI in clinical research by augmenting activities ranging from image acquisition to post-processing. Automation is key to reducing the long acquisition times of MRI, conducting large-scale longitudinal studies and quantitatively defining morphometric and other important clinical features of both soft and hard tissues in various anatomical joints. Deep learning methods have been used recently for multiple applications in the musculoskeletal field to improve understanding of osteoarthritis. Compared with labour-intensive human efforts, AI-based methods have advantages and potential in all stages of imaging, as well as post-processing steps, including aiding diagnosis and prognosis. However, AI-based methods also have limitations, including the arguably limited interpretability of AI models. Given that the AI community is highly invested in uncovering uncertainties associated with model predictions and improving their interpretability, we envision future clinical translation and a progressive increase in the use of AI algorithms to support clinicians in optimizing patient care.
Collapse
Affiliation(s)
- Francesco Calivà
- Department of Radiology and Biomedical Imaging and Center for Intelligent Imaging, University of California, San Francisco, San Francisco, CA, USA
- Nikan K Namiri
- Department of Radiology and Biomedical Imaging and Center for Intelligent Imaging, University of California, San Francisco, San Francisco, CA, USA
- Maureen Dubreuil
- Section of Rheumatology, Department of Medicine, Boston University School of Medicine, Boston, MA, USA
- Valentina Pedoia
- Department of Radiology and Biomedical Imaging and Center for Intelligent Imaging, University of California, San Francisco, San Francisco, CA, USA
- Eugene Ozhinsky
- Department of Radiology and Biomedical Imaging and Center for Intelligent Imaging, University of California, San Francisco, San Francisco, CA, USA
- Sharmila Majumdar
- Department of Radiology and Biomedical Imaging and Center for Intelligent Imaging, University of California, San Francisco, San Francisco, CA, USA.
Collapse
|
17
|
Zhang J, Gu L, Han G, Liu X. AttR2U-Net: A Fully Automated Model for MRI Nasopharyngeal Carcinoma Segmentation Based on Spatial Attention and Residual Recurrent Convolution. Front Oncol 2022; 11:816672. [PMID: 35155206 PMCID: PMC8832031 DOI: 10.3389/fonc.2021.816672] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/17/2021] [Accepted: 12/17/2021] [Indexed: 11/13/2022] Open
Abstract
Radiotherapy is an essential method for treating nasopharyngeal carcinoma (NPC), and segmentation of the NPC is a crucial process affecting treatment. However, manual segmentation of NPC is inefficient, and segmentation results can vary considerably between doctors. To improve the efficiency and consistency of NPC segmentation, we propose a novel AttR2U-Net model that automatically and accurately segments nasopharyngeal carcinoma from MR images. The model is based on the classic U-Net and incorporates mechanisms such as spatial attention, residual connections, recurrent convolution, and normalization to improve segmentation performance. It features recurrent convolution and residual connections in each layer to improve its ability to extract details. Moreover, spatial attention is fused into the network via skip connections to pinpoint cancer areas more accurately. Our model achieves a DSC of 0.816 on the NPC segmentation task and obtains the best performance compared with six other state-of-the-art image segmentation models.
Collapse
Affiliation(s)
- Jiajing Zhang
- School of Biomedical Engineering, Sun Yat-sen University, Shenzhen, China
- Lin Gu
- RIKEN Center for Advanced Intelligence Project (AIP), Tokyo, Japan
- Guanghui Han
- School of Biomedical Engineering, Sun Yat-sen University, Shenzhen, China
- School of Information Engineering, North China University of Water Resources and Electric Power, Zhengzhou, China
- *Correspondence: Xiujian Liu; Guanghui Han
- Xiujian Liu
- School of Biomedical Engineering, Sun Yat-sen University, Shenzhen, China
- *Correspondence: Xiujian Liu; Guanghui Han
Collapse
|
18
|
Chen S, Zhong X, Dorn S, Ravikumar N, Tao Q, Huang X, Lell M, Kachelriess M, Maier A. Improving Generalization Capability of Multiorgan Segmentation Models Using Dual-Energy CT. IEEE TRANSACTIONS ON RADIATION AND PLASMA MEDICAL SCIENCES 2022. [DOI: 10.1109/trpms.2021.3055199] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/06/2022]
|
19
|
Fritz B, Fritz J. Artificial intelligence for MRI diagnosis of joints: a scoping review of the current state-of-the-art of deep learning-based approaches. Skeletal Radiol 2022; 51:315-329. [PMID: 34467424 PMCID: PMC8692303 DOI: 10.1007/s00256-021-03830-8] [Citation(s) in RCA: 22] [Impact Index Per Article: 11.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 03/29/2021] [Revised: 05/17/2021] [Accepted: 05/23/2021] [Indexed: 02/02/2023]
Abstract
Deep learning-based MRI diagnosis of internal joint derangement is an emerging field of artificial intelligence, which offers many exciting possibilities for musculoskeletal radiology. A variety of investigational deep learning algorithms have been developed to detect anterior cruciate ligament tears, meniscus tears, and rotator cuff disorders. Additional deep learning-based MRI algorithms have been investigated to detect Achilles tendon tears, recurrence prediction of musculoskeletal neoplasms, and complex segmentation of nerves, bones, and muscles. Proof-of-concept studies suggest that deep learning algorithms may achieve similar diagnostic performances when compared to human readers in meta-analyses; however, musculoskeletal radiologists outperformed most deep learning algorithms in studies including a direct comparison. Earlier investigations and developments of deep learning algorithms focused on the binary classification of the presence or absence of an abnormality, whereas more advanced deep learning algorithms start to include features for characterization and severity grading. While many studies have focused on comparing deep learning algorithms against human readers, there is a paucity of data on the performance differences of radiologists interpreting musculoskeletal MRI studies without and with artificial intelligence support. Similarly, studies demonstrating the generalizability and clinical applicability of deep learning algorithms using realistic clinical settings with workflow-integrated deep learning algorithms are sparse. Contingent upon future studies showing the clinical utility of deep learning algorithms, artificial intelligence may eventually translate into clinical practice to assist detection and characterization of various conditions on musculoskeletal MRI exams.
Collapse
Affiliation(s)
- Benjamin Fritz
- Department of Radiology, Balgrist University Hospital, Forchstrasse 340, CH-8008 Zurich, Switzerland; Faculty of Medicine, University of Zurich, Zurich, Switzerland
- Jan Fritz
- New York University Grossman School of Medicine, New York University, New York, NY 10016, USA
Collapse
|
20
|
Cheng R, Crouzier M, Hug F, Tucker K, Juneau P, McCreedy E, Gandler W, McAuliffe MJ, Sheehan FT. Automatic quadriceps and patellae segmentation of MRI with cascaded U2-Net and SASSNet deep learning model. Med Phys 2022; 49:443-460. [PMID: 34755359 PMCID: PMC8758556 DOI: 10.1002/mp.15335] [Citation(s) in RCA: 5] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/11/2021] [Revised: 10/18/2021] [Accepted: 10/20/2021] [Indexed: 01/03/2023] Open
Abstract
PURPOSE Automatic muscle segmentation is critical for advancing our understanding of human physiology, biomechanics, and musculoskeletal pathologies, as it allows timely exploration of large multi-dimensional image sets. Segmentation models are rarely developed or validated for pediatric populations. As such, autosegmentation is not available to explore how muscle architecture changes during development and how disease or pathology affects the developing musculoskeletal system. Thus, we aimed to develop and validate an end-to-end, fully automated, deep learning model for accurate segmentation of the rectus femoris and vastus lateralis, medialis, and intermedialis using a pediatric database. METHODS We developed a two-stage cascaded deep learning model in a coarse-to-fine manner. In the first stage, U2-Net roughly detects the muscle subcompartment region. In the second stage, the shape-aware 3D semantic segmentation method SASSNet refines the cropped target regions to generate finer and more accurate segmentation masks. We utilized multifeature image maps in both stages to stabilize performance and validated their use with an ablation study. The second-stage SASSNet was independently run and evaluated at three cropped-region resolutions: the original image resolution, and images downsampled 2× and 4× (high, mid, and low). The relationship between image resolution and segmentation accuracy was explored. In addition, the patella was included as a comparator to past work. We evaluated segmentation accuracy using leave-one-out testing on a database of 3D MR images (0.43 × 0.43 × 2 mm) from 40 pediatric participants (age 15.3 ± 1.9 years, 55.8 ± 11.8 kg, 164.2 ± 7.9 cm, 38 F/2 M).
RESULTS The mid-resolution second stage produced the best results for the vastus medialis, rectus femoris, and patella (Dice similarity coefficient = 95.0%, 95.1%, 93.7%), whereas the low-resolution second stage produced the best results for the vastus lateralis and vastus intermedialis (DSC = 94.5% and 93.7%). Comparing the low- to mid-resolution cases, the vastus intermedialis, vastus medialis, rectus femoris, and patella showed significant differences (p = 0.0015, p = 0.0101, p < 0.0001, p = 0.0003), whereas the vastus lateralis did not (p = 0.2177). The high-resolution stage 2 had significantly lower accuracy (by 1.0 to 4.4 Dice percentage points) than both the mid- and low-resolution routines (p ranged from < 0.001 to 0.04); the one exception was the rectus femoris, where there was no difference between the low- and high-resolution cases. The ablation study demonstrated that multifeature input is more reliable than a single feature. CONCLUSIONS Our successful implementation of this two-stage segmentation pipeline provides a critical tool for expanding pediatric muscle physiology and clinical research. With a relatively small and variable dataset, our fully automatic segmentation technique produced accuracies that matched or exceeded the current state of the art. The two-stage design avoids memory issues and excessive run times by using a first stage focused on cropping out unnecessary data. The excellent Dice similarity coefficients improve upon previous template-based automatic and semiautomatic methodologies targeting the leg musculature. More importantly, on a naturally variable dataset (size, shape, etc.), the proposed model demonstrates slightly improved accuracy compared with previous neural network methods.
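The coarse-to-fine idea above, where stage 1 only localizes the region that stage 2 refines, can be sketched as cropping the volume to the bounding box of a coarse mask plus a margin; the function below is an illustrative stand-in, not the authors' pipeline:

```python
import numpy as np

def crop_to_roi(volume, coarse_mask, margin=2):
    """Stage-1 style localization: crop a volume to the bounding box of a coarse
    mask plus a safety margin, so a finer stage-2 model sees only the target region."""
    idx = np.argwhere(coarse_mask)
    lo = np.maximum(idx.min(axis=0) - margin, 0)
    hi = np.minimum(idx.max(axis=0) + 1 + margin, volume.shape)
    roi_slices = tuple(slice(int(l), int(h)) for l, h in zip(lo, hi))
    return volume[roi_slices], roi_slices

# Toy 10x10x10 volume with a 2x2x2 coarse detection in the middle
vol = np.zeros((10, 10, 10))
mask = np.zeros_like(vol, dtype=bool)
mask[4:6, 4:6, 4:6] = True
roi, roi_slices = crop_to_roi(vol, mask)  # 2 voxels of margin on every side
```

Downsampling the cropped region (the mid/low resolutions compared above) then trades boundary detail against memory and run time.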
Collapse
Affiliation(s)
- Ruida Cheng
- Scientific Application Services (SAS), Office of Scientific Computing Services (OSCS), Office of Intramural Research, Center of Information Technology, NIH, Bethesda, MD, USA
- Marion Crouzier
- University of Nantes, Movement, Interactions, Performance, MIP, EA 4334, F-44000 Nantes, France; The University of Queensland, School of Biomedical Sciences, Brisbane
- François Hug
- Institut Universitaire de France (IUF), Paris, France; Université Côte d’Azur, LAMHESS, Nice, France
- Kylie Tucker
- The University of Queensland, School of Biomedical Sciences, Brisbane
- Paul Juneau
- NIH Library, Office of Research Services, National Institutes of Health, Bethesda, MD, USA
- Evan McCreedy
- Scientific Application Services (SAS), Office of Scientific Computing Services (OSCS), Office of Intramural Research, Center of Information Technology, NIH, Bethesda, MD, USA
- William Gandler
- Scientific Application Services (SAS), Office of Scientific Computing Services (OSCS), Office of Intramural Research, Center of Information Technology, NIH, Bethesda, MD, USA
- Matthew J. McAuliffe
- Scientific Application Services (SAS), Office of Scientific Computing Services (OSCS), Office of Intramural Research, Center of Information Technology, NIH, Bethesda, MD, USA
- Frances T. Sheehan
- Rehabilitation Medicine Department, National Institutes of Health Clinical Center, Bethesda, MD, USA
Collapse
|
21
|
Werthel JD, Boux de Casson F, Walch G, Gaudin P, Moroder P, Sanchez-Sotelo J, Chaoui J, Burdin V. Three-dimensional muscle loss assessment: a novel computed tomography-based quantitative method to evaluate rotator cuff muscle fatty infiltration. J Shoulder Elbow Surg 2022; 31:165-174. [PMID: 34478865 DOI: 10.1016/j.jse.2021.07.029] [Citation(s) in RCA: 7] [Impact Index Per Article: 3.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 05/27/2021] [Revised: 07/15/2021] [Accepted: 07/26/2021] [Indexed: 02/01/2023]
Abstract
BACKGROUND Rotator cuff fatty infiltration (FI) is one of the most important parameters to predict the outcome of certain shoulder conditions. The primary objective of this study was to define a new computed tomography (CT)-based quantitative 3-dimensional (3D) measure of muscle loss (3DML) based on the rationale of the 2-dimensional (2D) qualitative Goutallier score. The secondary objective of this study was to compare this new measurement method to traditional 2D qualitative assessment of FI according to Goutallier et al and to a 3D quantitative measurement of fatty infiltration (3DFI). MATERIALS AND METHODS 102 CT scans from healthy shoulders (46) and shoulders with cuff tear arthropathy (21), irreparable rotator cuff tears (18), and primary osteoarthritis (17) were analyzed by 3 experienced shoulder surgeons for subjective grading of fatty infiltration according to Goutallier, and their rotator cuff muscles were manually segmented. Quantitative 3D measurements of fatty infiltration (3DFI) were completed. The volume of muscle fibers without intramuscular fat was then calculated for each rotator cuff muscle and normalized to the patient's scapular volume to account for the effect of body size (NVfibers). 3D muscle mass (3DMM) was calculated by dividing the NVfibers value of a given muscle by the mean expected volume in healthy shoulders. 3D muscle loss (3DML) was defined as 1 - (3DMM). The correlation between Goutallier grading, 3DFI, and 3DML was compared using a Spearman rank correlation. RESULTS Interobserver reliability for the traditional 2D Goutallier grading was moderate for the infraspinatus (ISP, 0.42) and fair for the supraspinatus (SSP, 0.38), subscapularis (SSC, 0.27) and teres minor (TM, 0.27). 2D Goutallier grading was found to be significantly and highly correlated with 3DFI (SSP, 0.79; ISP, 0.83; SSC, 0.69; TM, 0.45) and 3DML (SSP, 0.87; ISP, 0.85; SSC, 0.69; TM, 0.46) for all 4 rotator cuff muscles (P < .0001). 
This correlation was significantly higher for 3DML than for 3DFI for the SSP only (P = .01). The mean values of 3DFI and 3DML were 0.9% and 5.3% for Goutallier 0, 2.9% and 25.6% for Goutallier 1, 11.4% and 49.5% for Goutallier 2, 20.7% and 59.7% for Goutallier 3, and 29.3% and 70.2% for Goutallier 4, respectively. CONCLUSION The Goutallier score has long helped surgeons by grading fatty infiltration on 2D CT slices. However, this grading is associated with suboptimal interobserver agreement. The new measures we propose provide a more consistent assessment that correlates well with Goutallier's principles. Because 3DML measurements incorporate both atrophy and fatty infiltration, they could become a very reliable index for assessing shoulder muscle function. Future algorithms capable of automatically calculating the 3DML of the cuff could aid the decision process for cuff repair and the choice of anatomic or reverse shoulder arthroplasty.
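The 3DML index defined above is a pair of ratios: fiber volume normalized by scapular volume, then compared against the mean normalized volume expected in healthy shoulders. A minimal sketch with invented volumes (cm³), not values from the study:

```python
def muscle_loss_index(fiber_volume, scapula_volume, mean_healthy_nv):
    """3DML = 1 - 3DMM, where 3DMM is the body-size-normalized fiber volume
    divided by the mean normalized volume expected in healthy shoulders."""
    nv_fibers = fiber_volume / scapula_volume  # normalize for body size
    three_d_mm = nv_fibers / mean_healthy_nv
    return 1.0 - three_d_mm

# Invented volumes: the muscle retains half its expected normalized volume
loss = muscle_loss_index(fiber_volume=30.0, scapula_volume=150.0, mean_healthy_nv=0.4)
```

Because the fiber volume excludes intramuscular fat, this single index captures both atrophy and fatty infiltration, which is why it can exceed 3DFI at the same Goutallier grade.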
Collapse
Affiliation(s)
- Jean-David Werthel
- Hôpital Ambroise Paré, Boulogne-Billancourt, France; IMT Atlantique, LaTIM INSERM U1101, Brest, France.
- Gilles Walch
- Centre Orthopédique Santy, Lyon, France; Ramsay Générale de Santé, Hôpital Privé Jean Mermoz, Lyon, France
Collapse
|
22
|
Deep learning-based quantification of temporalis muscle has prognostic value in patients with glioblastoma. Br J Cancer 2021; 126:196-203. [PMID: 34848854 PMCID: PMC8770629 DOI: 10.1038/s41416-021-01590-9] [Citation(s) in RCA: 12] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/07/2021] [Revised: 08/25/2021] [Accepted: 10/06/2021] [Indexed: 01/19/2023] Open
Abstract
Background Glioblastoma is the commonest malignant brain tumour. Sarcopenia is associated with worse cancer survival, but manually quantifying muscle on imaging is time-consuming. We present a deep learning-based system for quantification of the temporalis muscle, a surrogate for skeletal muscle mass, and assess its prognostic value in glioblastoma. Methods A neural network for temporalis segmentation was trained with 366 MRI head images from 132 patients from 4 different glioblastoma data sets and used to quantify muscle cross-sectional area (CSA). The association between temporalis CSA and survival was determined in 96 glioblastoma patients from internal and external data sets. Results The model achieved high segmentation accuracy (Dice coefficient 0.893). Median age was 55 and 58 years, and 75.6% and 64.7% of patients were male, in the in-house and TCGA-GBM data sets, respectively. CSA was an independently significant predictor of survival in both the in-house and TCGA-GBM data sets (HR 0.464, 95% CI 0.218–0.988, p = 0.046; HR 0.466, 95% CI 0.235–0.925, p = 0.029, respectively). Conclusions Temporalis CSA is a prognostic marker in patients with glioblastoma, rapidly and accurately assessable with deep learning. We are the first to show that a head/neck muscle-derived sarcopenia metric generated using deep learning is associated with oncological outcomes, and among the first to show that deep learning-based muscle quantification has prognostic value in cancer.
Collapse
|
23
|
Azimbagirad M, Dardenne G, Salem DB, Remy-Neris O, Burdin V. Towards the definition of a patient-specific rehabilitation program for TKA: A new MRI-based approach for the easy volumetric analysis of thigh muscles. ANNUAL INTERNATIONAL CONFERENCE OF THE IEEE ENGINEERING IN MEDICINE AND BIOLOGY SOCIETY. IEEE ENGINEERING IN MEDICINE AND BIOLOGY SOCIETY. ANNUAL INTERNATIONAL CONFERENCE 2021; 2021:3141-3144. [PMID: 34891907 DOI: 10.1109/embc46164.2021.9630726] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/14/2023]
Abstract
After Total Knee Arthroplasty (TKA), a generic post-operative rehabilitation programme is commonly performed. However, this programme is not adapted to every patient and could be improved by specifically reinforcing the weaker thigh muscles. To do this, muscle volume estimation coupled with force evaluation is required, so that rehabilitation can be adapted into a patient-specific exercise plan. In this paper, we present an MRI protocol allowing acquisition of the whole thigh, as well as a semi-automated pipeline to segment the two main groups of thigh muscles, i.e., the quadriceps femoris and the hamstrings. The pipeline is based on a few manually labelled cross-sections and a 3D-spline interpolation using directed graphs of corresponding points. The seven muscles of ten thighs (70 muscles in total) were segmented and reconstructed in 3D. To assess this pipeline, three types of metrics (volumetric similarity, surface distance, and classical measures) were employed; inter-muscle overlap was calculated as an additional metric. The results showed a mean Dice of 99.6% (±0.1), a Hausdorff distance of 4.9 mm (±1.8), and an absolute volume difference of 2.97 cm3 (±1.94) in comparison with the manual ground truth. The average overlap was 2.05% (±0.54). Clinical Relevance: The proposed segmentation method is fast, accurate, and can be integrated into the clinical workflow of TKA.
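The Hausdorff distance reported among the surface-distance metrics is the largest nearest-neighbour distance between the two segmented surfaces, taken symmetrically. A small brute-force NumPy sketch with toy point clouds (real pipelines would use an optimized implementation):

```python
import numpy as np

def hausdorff(a, b):
    """Symmetric Hausdorff distance between two point sets of shape (N, 3) and (M, 3)."""
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)  # pairwise distances
    # worst-case nearest-neighbour distance, in both directions
    return float(max(d.min(axis=1).max(), d.min(axis=0).max()))

# Toy surface point clouds in mm; the second b point lies 3 mm from its nearest a point
a = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]])
b = np.array([[0.0, 0.0, 0.0], [1.0, 3.0, 0.0]])
hd = hausdorff(a, b)
```

Unlike Dice, which averages over the whole volume, this metric is sensitive to a single outlying boundary error, which is why both are reported together.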
Collapse
|
24
|
Werthel JD, Boux de Casson F, Burdin V, Athwal GS, Favard L, Chaoui J, Walch G. CT-based volumetric assessment of rotator cuff muscle in shoulder arthroplasty preoperative planning. Bone Jt Open 2021; 2:552-561. [PMID: 34315280 PMCID: PMC8329519 DOI: 10.1302/2633-1462.27.bjo-2021-0081.r1] [Citation(s) in RCA: 6] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Indexed: 11/05/2022] Open
Abstract
Aims The aim of this study was to describe a quantitative 3D CT method to measure rotator cuff muscle volume, atrophy, and balance in healthy controls and in three pathological shoulder cohorts. Methods In all, 102 CT scans were included in the analysis: 46 healthy, 21 cuff tear arthropathy (CTA), 18 irreparable rotator cuff tear (IRCT), and 17 primary osteoarthritis (OA). The four rotator cuff muscles were manually segmented and their volume, including intramuscular fat, was calculated. The normalized volume (NV) of each muscle was calculated by dividing muscle volume by the patient’s scapular bone volume. Muscle volume and percentage of muscle atrophy were compared between muscles and between cohorts. Results Rotator cuff muscle volume was significantly decreased in patients with OA, CTA, and IRCT compared to healthy patients (p < 0.0001). Atrophy was comparable for all muscles between CTA, IRCT, and OA patients, except for the supraspinatus, which was significantly more atrophied in CTA and IRCT (p = 0.002). In healthy shoulders, the anterior cuff represented 45% of the entire cuff, while the posterior cuff represented 40%. A similar partition between anterior and posterior cuff was also found in both CTA and IRCT patients. However, in OA patients, the relative volumes of the anterior (42%) and posterior cuff (45%) were similar. Conclusion This study shows that rotator cuff muscle volume is significantly decreased in patients with OA, CTA, or IRCT compared to healthy patients, but that only minimal differences can be observed between the different pathological groups. This suggests that the influence of rotator cuff muscle volume and atrophy (including intramuscular fat) as an independent factor of outcome may be overestimated. Cite this article: Bone Jt Open 2021;2(7):552–561.
Collapse
Affiliation(s)
- Jean-David Werthel
- Hôpital Ambroise Paré, Boulogne-Billancourt, France; Laboratory of Medical Information Processing, Brest, France
- Valérie Burdin
- Laboratory of Medical Information Processing, Brest, France
- George S Athwal
- Roth McFarlane Hand and Upper Limb Center, Schulich School of Medicine and Dentistry, Western University, London, Ontario, Canada
- Jean Chaoui
- Wright Medical, Montbonnot, France; Tornier, Montbonnot, France; Imascap, Plouzané, France; Stryker, Kalamazoo, Michigan, USA
- Gilles Walch
- Centre Orthopédique Santy, Lyon, France; Ramsay Générale de Santé, Hôpital Privé Jean Mermoz Lyon, Lyon, France
Collapse
|
25
|
Conze PH, Kavur AE, Cornec-Le Gall E, Gezer NS, Le Meur Y, Selver MA, Rousseau F. Abdominal multi-organ segmentation with cascaded convolutional and adversarial deep networks. Artif Intell Med 2021; 117:102109. [PMID: 34127239 DOI: 10.1016/j.artmed.2021.102109] [Citation(s) in RCA: 40] [Impact Index Per Article: 13.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/16/2020] [Revised: 01/24/2021] [Accepted: 05/06/2021] [Indexed: 02/05/2023]
Abstract
Abdominal anatomy segmentation is crucial for numerous applications, from computer-assisted diagnosis to image-guided surgery. In this context, we address fully automated multi-organ segmentation from abdominal CT and MR images using deep learning. The proposed model extends standard conditional generative adversarial networks: in addition to the discriminator, which pushes the model to produce realistic organ delineations, it embeds cascaded, partially pre-trained convolutional encoder-decoders as the generator. Fine-tuning encoders pre-trained on a large amount of non-medical images alleviates data-scarcity limitations. The network is trained end-to-end to benefit from simultaneous multi-level segmentation refinements using auto-context. Applied to healthy liver, kidney, and spleen segmentation, our pipeline provides promising results, outperforming state-of-the-art encoder-decoder schemes. Entered in the Combined Healthy Abdominal Organ Segmentation (CHAOS) challenge organized in conjunction with the IEEE International Symposium on Biomedical Imaging 2019, it earned first rank in three competition categories: liver CT, liver MR, and multi-organ MR segmentation. Combining cascaded convolutional and adversarial networks strengthens the ability of deep learning pipelines to automatically delineate multiple abdominal organs with good generalization capability. The comprehensive evaluation provided suggests that better guidance could be achieved to help clinicians in abdominal image interpretation and clinical decision making.
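The auto-context idea mentioned in this abstract — each stage of the cascade receives the image together with the previous stage's probability map and refines it — can be illustrated independently of any particular network. The sketch below uses trivial stand-in callables in place of the paper's trained encoder-decoders; the stage functions and their thresholds are purely illustrative assumptions:

```python
import numpy as np

def cascade(image, stages, init_prob=None):
    """Auto-context cascade: every stage sees the image plus the previous
    stage's probability estimate and returns a refined estimate."""
    prob = np.zeros_like(image, dtype=float) if init_prob is None else init_prob
    for stage in stages:
        prob = stage(image, prob)  # stand-in for an encoder-decoder forward pass
    return prob

# Stand-in "stages": a coarse thresholder, then a refiner that blends the
# previous estimate with a more permissive threshold of the image.
coarse = lambda img, p: (img > 0.5).astype(float)
refine = lambda img, p: np.clip(0.5 * p + 0.5 * (img > 0.4), 0.0, 1.0)

img = np.array([[0.10, 0.60], [0.45, 0.90]])
seg = cascade(img, [coarse, refine]) > 0.5
```

The essential point is only the signature `stage(image, prob)`: because later stages condition on earlier predictions, the whole cascade can be trained end-to-end and each level corrects systematic errors of the previous one.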
Affiliation(s)
- Pierre-Henri Conze
- IMT Atlantique, Technopôle Brest-Iroise, 29238 Brest, France; LaTIM UMR 1101, Inserm, 22 avenue Camille Desmoulins, 29238 Brest, France
- Ali Emre Kavur
- Dokuz Eylul University, Cumhuriyet Bulvarı, 35210 Izmir, Turkey
- Emilie Cornec-Le Gall
- Department of Nephrology, University Hospital, 2 avenue Foch, 29609 Brest, France; UMR 1078, Inserm, 22 avenue Camille Desmoulins, 29238 Brest, France
- Naciye Sinem Gezer
- Dokuz Eylul University, Cumhuriyet Bulvarı, 35210 Izmir, Turkey; Department of Radiology, Faculty of Medicine, Cumhuriyet Bulvarı, 35210 Izmir, Turkey
- Yannick Le Meur
- Department of Nephrology, University Hospital, 2 avenue Foch, 29609 Brest, France; LBAI UMR 1227, Inserm, 5 avenue Foch, 29609 Brest, France
- M Alper Selver
- Dokuz Eylul University, Cumhuriyet Bulvarı, 35210 Izmir, Turkey
- François Rousseau
- IMT Atlantique, Technopôle Brest-Iroise, 29238 Brest, France; LaTIM UMR 1101, Inserm, 22 avenue Camille Desmoulins, 29238 Brest, France
26
Two-stage multi-scale breast mass segmentation for full mammogram analysis without user intervention. Biocybern Biomed Eng 2021. [DOI: 10.1016/j.bbe.2021.03.005] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/13/2022]
27
Ogier AC, Hostin MA, Bellemare ME, Bendahan D. Overview of MR Image Segmentation Strategies in Neuromuscular Disorders. Front Neurol 2021; 12:625308. [PMID: 33841299 PMCID: PMC8027248 DOI: 10.3389/fneur.2021.625308] [Citation(s) in RCA: 21] [Impact Index Per Article: 7.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/02/2020] [Accepted: 02/08/2021] [Indexed: 01/10/2023] Open
Abstract
Neuromuscular disorders (NMD) are rare diseases for which few therapeutic strategies currently exist. Assessment of the efficacy of therapeutic strategies is limited by the lack of biomarkers sensitive to the slow progression of NMD. Magnetic resonance imaging (MRI) has emerged as a tool of choice for developing qualitative scores for the study of NMD. The recent emergence of quantitative MRI has made it possible to provide quantitative biomarkers that are more sensitive to pathological changes in muscle tissue. However, in order to extract these biomarkers from specific regions of interest, muscle segmentation is mandatory. The time-consuming nature of manual segmentation has limited the evaluation of these biomarkers on large cohorts. In recent years, several methods have been proposed to make the segmentation step automatic or semi-automatic. The purpose of this study was to review these methods and discuss their reliability, reproducibility, and limitations in the context of NMD. Particular attention has been paid to recent deep learning methods, as they have emerged as an effective approach to image segmentation in many other clinical contexts.
Affiliation(s)
- Augustin C Ogier
- Aix Marseille Univ, Université de Toulon, CNRS, LIS, Marseille, France
- Marc-Adrien Hostin
- Aix Marseille Univ, Université de Toulon, CNRS, LIS, Marseille, France; Aix Marseille Univ, CNRS, CRMBM, UMR 7339, Marseille, France
- David Bendahan
- Aix Marseille Univ, CNRS, CRMBM, UMR 7339, Marseille, France
28
Kavur AE, Gezer NS, Barış M, Aslan S, Conze PH, Groza V, Pham DD, Chatterjee S, Ernst P, Özkan S, Baydar B, Lachinov D, Han S, Pauli J, Isensee F, Perkonigg M, Sathish R, Rajan R, Sheet D, Dovletov G, Speck O, Nürnberger A, Maier-Hein KH, Bozdağı Akar G, Ünal G, Dicle O, Selver MA. CHAOS Challenge - combined (CT-MR) healthy abdominal organ segmentation. Med Image Anal 2020; 69:101950. [PMID: 33421920 DOI: 10.1016/j.media.2020.101950] [Citation(s) in RCA: 140] [Impact Index Per Article: 35.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/18/2020] [Revised: 10/26/2020] [Accepted: 12/16/2020] [Indexed: 12/11/2022]
Abstract
Segmentation of abdominal organs has been a comprehensive, yet unresolved, research field for many years. In the last decade, intensive developments in deep learning (DL) introduced new state-of-the-art segmentation systems. Despite outperforming the overall accuracy of existing systems, the effects of DL model properties and parameters on performance are hard to interpret. This makes comparative analysis a necessary tool for interpretable studies and systems. Moreover, the performance of DL for emerging learning approaches such as cross-modality and multi-modal semantic segmentation tasks has rarely been discussed. In order to expand the knowledge on these topics, the CHAOS - Combined (CT-MR) Healthy Abdominal Organ Segmentation challenge was organized in conjunction with the IEEE International Symposium on Biomedical Imaging (ISBI), 2019, in Venice, Italy. Abdominal organ segmentation from routine acquisitions plays an important role in several clinical applications, such as pre-surgical planning or morphological and volumetric follow-ups for various diseases. These applications require a certain level of performance on a diverse set of metrics, such as maximum symmetric surface distance (MSSD) to determine surgical error margins, or overlap errors for tracking size and shape differences. Previous abdomen-related challenges mainly focused on tumor/lesion detection and/or classification with a single modality. Conversely, CHAOS provides both abdominal CT and MR data from healthy subjects for single and multiple abdominal organ segmentation. Five different but complementary tasks were designed to analyze the capabilities of participating approaches from multiple perspectives. The results were investigated thoroughly and compared with manual annotations and interactive methods.
The analysis shows that DL models can deliver reliable volumetric performance for single-modality (CT / MR) segmentation (DICE: 0.98 ± 0.00 / 0.95 ± 0.01), but the best MSSD performance remains limited (21.89 ± 13.94 / 20.85 ± 10.63 mm). The performance of participating models decreases dramatically for cross-modality tasks, e.g. for the liver (DICE: 0.88 ± 0.15, MSSD: 36.33 ± 21.97 mm). Despite contrary examples in other applications, multi-tasking DL models designed to segment all organs are observed to perform worse than organ-specific ones (a performance drop of around 5%). Nevertheless, some of the successful models show better performance with their multi-organ versions. We conclude that exploring these pros and cons of single- versus multi-organ and cross-modality segmentation is poised to have an impact on further research toward effective algorithms that would support real-world clinical applications. Finally, with more than 1500 participants and more than 550 submissions, another important contribution of this study is its analysis of shortcomings of challenge organization, such as the effects of multiple submissions and the peeking phenomenon.
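The two metric families this abstract contrasts — volumetric overlap (Dice) and boundary error (MSSD) — are easy to compute for binary masks. A minimal numpy sketch follows; the brute-force surface-distance computation is an illustrative assumption adequate only for toy volumes, not the challenge's evaluation code:

```python
import numpy as np

def dice(a: np.ndarray, b: np.ndarray) -> float:
    """Dice overlap between two binary masks (1.0 if both are empty)."""
    a, b = a.astype(bool), b.astype(bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

def surface_points(mask: np.ndarray) -> np.ndarray:
    """Coordinates of foreground voxels with a background axis-neighbour."""
    m = mask.astype(bool)
    padded = np.pad(m, 1)                    # false border avoids wrap-around
    eroded = padded.copy()
    for axis in range(m.ndim):               # cross-shaped erosion
        eroded &= np.roll(padded, 1, axis) & np.roll(padded, -1, axis)
    border = m & ~eroded[(slice(1, -1),) * m.ndim]
    return np.argwhere(border)

def mssd(a: np.ndarray, b: np.ndarray, spacing=1.0) -> float:
    """Maximum symmetric surface distance between two non-empty masks."""
    pa, pb = surface_points(a), surface_points(b)
    d = np.linalg.norm(pa[:, None, :] - pb[None, :, :], axis=-1) * spacing
    return float(max(d.min(axis=1).max(), d.min(axis=0).max()))

# Toy masks: two 3x3 squares offset by one column
a = np.zeros((5, 5), dtype=bool); a[1:4, 1:4] = True
b = np.zeros((5, 5), dtype=bool); b[1:4, 2:5] = True
```

The contrast the abstract draws falls out of the definitions: Dice averages agreement over the whole volume, so a model can score 0.98 while its worst boundary voxel (which MSSD reports) is still tens of millimetres off — which is why both metric families are needed for surgical-planning use cases.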
Affiliation(s)
- A Emre Kavur
- Graduate School of Natural and Applied Sciences, Dokuz Eylul University, Izmir, Turkey
- N Sinem Gezer
- Department of Radiology, Faculty of Medicine, Dokuz Eylul University, Izmir, Turkey
- Mustafa Barış
- Department of Radiology, Faculty of Medicine, Dokuz Eylul University, Izmir, Turkey
- Sinem Aslan
- Ca' Foscari University of Venice, ECLT and DAIS, Venice, Italy; Ege University, International Computer Institute, Izmir, Turkey
- Duc Duy Pham
- Intelligent Systems, Faculty of Engineering, University of Duisburg-Essen, Germany
- Soumick Chatterjee
- Data and Knowledge Engineering Group, Otto von Guericke University, Magdeburg, Germany; Biomedical Magnetic Resonance, Otto von Guericke University Magdeburg, Germany
- Philipp Ernst
- Data and Knowledge Engineering Group, Otto von Guericke University, Magdeburg, Germany
- Savaş Özkan
- Department of Electrical and Electronics Engineering, Middle East Technical University, Ankara, Turkey
- Bora Baydar
- Department of Electrical and Electronics Engineering, Middle East Technical University, Ankara, Turkey
- Dmitry Lachinov
- Department of Ophthalmology and Optometry, Medical University of Vienna, Austria
- Shuo Han
- Johns Hopkins University, Baltimore, USA
- Josef Pauli
- Intelligent Systems, Faculty of Engineering, University of Duisburg-Essen, Germany
- Fabian Isensee
- Division of Medical Image Computing, German Cancer Research Center, Heidelberg, Germany
- Matthias Perkonigg
- CIR Lab, Department of Biomedical Imaging and Image-guided Therapy, Medical University of Vienna, Austria
- Rachana Sathish
- Department of Electrical Engineering, Indian Institute of Technology, Kharagpur, India
- Ronnie Rajan
- School of Medical Science and Technology, Indian Institute of Technology, Kharagpur, India
- Debdoot Sheet
- Department of Electrical Engineering, Indian Institute of Technology, Kharagpur, India
- Gurbandurdy Dovletov
- Intelligent Systems, Faculty of Engineering, University of Duisburg-Essen, Germany
- Oliver Speck
- Biomedical Magnetic Resonance, Otto von Guericke University Magdeburg, Germany
- Andreas Nürnberger
- Data and Knowledge Engineering Group, Otto von Guericke University, Magdeburg, Germany
- Klaus H Maier-Hein
- Division of Medical Image Computing, German Cancer Research Center, Heidelberg, Germany
- Gözde Bozdağı Akar
- Department of Electrical and Electronics Engineering, Middle East Technical University, Ankara, Turkey
- Gözde Ünal
- Faculty of Computer and Informatics Engineering, İstanbul Technical University, İstanbul, Turkey
- Oğuz Dicle
- Department of Radiology, Faculty of Medicine, Dokuz Eylul University, Izmir, Turkey
- M Alper Selver
- Department of Electrical and Electronics Engineering, Dokuz Eylul University, Izmir, Turkey