1.
Chen C, Miao J, Wu D, Zhong A, Yan Z, Kim S, Hu J, Liu Z, Sun L, Li X, Liu T, Heng PA, Li Q. MA-SAM: Modality-agnostic SAM adaptation for 3D medical image segmentation. Med Image Anal 2024; 98:103310. PMID: 39182302; PMCID: PMC11381141; DOI: 10.1016/j.media.2024.103310.
Abstract
The Segment Anything Model (SAM), a foundation model for general image segmentation, has demonstrated impressive zero-shot performance across numerous natural image segmentation tasks. However, SAM's performance declines significantly when applied to medical images, primarily due to the substantial disparity between the natural and medical image domains. To adapt SAM to medical images effectively, it is important to incorporate critical third-dimensional information, i.e., volumetric or temporal knowledge, during fine-tuning, while harnessing SAM's pre-trained weights within its original 2D backbone to the fullest extent. In this paper, we introduce a modality-agnostic SAM adaptation framework, named MA-SAM, that is applicable to various volumetric and video medical data. Our method is rooted in a parameter-efficient fine-tuning strategy that updates only a small set of weight increments while preserving the majority of SAM's pre-trained weights. By injecting a series of 3D adapters into the transformer blocks of the image encoder, our method enables the pre-trained 2D backbone to extract third-dimensional information from the input data. We comprehensively evaluate our method on five medical image segmentation tasks, using 11 public datasets across CT, MRI, and surgical video data. Remarkably, without using any prompt, our method consistently outperforms various state-of-the-art 3D approaches, surpassing nnU-Net by 0.9%, 2.6%, and 9.9% in Dice for CT multi-organ segmentation, MRI prostate segmentation, and surgical scene segmentation, respectively. Our model also demonstrates strong generalization and excels in challenging tumor segmentation when prompts are used. Our code is available at: https://github.com/cchen-cc/MA-SAM.
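The adapter-based parameter-efficient fine-tuning described in this abstract (freezing the pre-trained backbone and training only small weight increments) can be sketched as a bottleneck residual module. A minimal NumPy illustration follows; the dimensions, names, and zero-initialisation are illustrative assumptions, not details from the paper:

```python
import numpy as np

def adapter_forward(x, W_down, W_up):
    """Bottleneck adapter: down-project, ReLU, up-project, then add the result
    back to the frozen backbone features. Only W_down and W_up are trained."""
    h = np.maximum(x @ W_down, 0.0)
    return x + h @ W_up

rng = np.random.default_rng(0)
d, r = 8, 2                                  # feature dim and bottleneck rank, r << d
x = rng.standard_normal((4, d))              # 4 tokens from the frozen 2D encoder
W_down = 0.01 * rng.standard_normal((d, r))
W_up = np.zeros((r, d))                      # zero-init: adapter starts as the identity
y = adapter_forward(x, W_down, W_up)
```

With `W_up` initialised to zero, the adapter leaves the pre-trained features untouched at the start of fine-tuning, a common way to preserve the backbone's behaviour before the increments are learned.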
Affiliation(s)
- Cheng Chen: Center of Advanced Medical Computing and Analysis, Massachusetts General Hospital and Harvard Medical School, Boston, MA 02114, USA
- Juzheng Miao: Department of Computer Science and Engineering, The Chinese University of Hong Kong, Hong Kong, China
- Dufan Wu: Center of Advanced Medical Computing and Analysis, Massachusetts General Hospital and Harvard Medical School, Boston, MA 02114, USA
- Aoxiao Zhong: Center of Advanced Medical Computing and Analysis, Massachusetts General Hospital and Harvard Medical School, Boston, MA 02114, USA; Harvard John A. Paulson School of Engineering and Applied Sciences, Harvard University, Cambridge, MA 02138, USA
- Zhiling Yan: Department of Computer Science and Engineering, Lehigh University, Bethlehem, PA 18015, USA
- Sekeun Kim: Center of Advanced Medical Computing and Analysis, Massachusetts General Hospital and Harvard Medical School, Boston, MA 02114, USA
- Jiang Hu: Center of Advanced Medical Computing and Analysis, Massachusetts General Hospital and Harvard Medical School, Boston, MA 02114, USA
- Zhengliang Liu: Center of Advanced Medical Computing and Analysis, Massachusetts General Hospital and Harvard Medical School, Boston, MA 02114, USA; School of Computing, The University of Georgia, Athens, GA 30602, USA
- Lichao Sun: Department of Computer Science and Engineering, Lehigh University, Bethlehem, PA 18015, USA
- Xiang Li: Center of Advanced Medical Computing and Analysis, Massachusetts General Hospital and Harvard Medical School, Boston, MA 02114, USA
- Tianming Liu: School of Computing, The University of Georgia, Athens, GA 30602, USA
- Pheng-Ann Heng: Department of Computer Science and Engineering, The Chinese University of Hong Kong, Hong Kong, China
- Quanzheng Li: Center of Advanced Medical Computing and Analysis, Massachusetts General Hospital and Harvard Medical School, Boston, MA 02114, USA
2.
Couchoux T, Jaouen T, Melodelima-Gonindard C, Baseilhac P, Branchu A, Arfi N, Aziza R, Barry Delongchamps N, Bladou F, Bratan F, Brunelle S, Colin P, Correas JM, Cornud F, Descotes JL, Eschwege P, Fiard G, Guillaume B, Grange R, Grenier N, Lang H, Lefèvre F, Malavaud B, Marcelin C, Moldovan PC, Mottet N, Mozer P, Potiron E, Portalez D, Puech P, Renard-Penna R, Roumiguié M, Roy C, Timsit MO, Tricard T, Villers A, Walz J, Debeer S, Mansuy A, Mège-Lechevallier F, Decaussin-Petrucci M, Badet L, Colombel M, Ruffion A, Crouzet S, Rabilloud M, Souchon R, Rouvière O. Performance of a Region of Interest-based Algorithm in Diagnosing International Society of Urological Pathology Grade Group ≥2 Prostate Cancer on the MRI-FIRST Database-CAD-FIRST Study. Eur Urol Oncol 2024; 7:1113-1122. PMID: 38493072; DOI: 10.1016/j.euo.2024.03.003.
Abstract
BACKGROUND AND OBJECTIVE Prostate multiparametric magnetic resonance imaging (MRI) shows high sensitivity for International Society of Urological Pathology grade group (GG) ≥2 cancers, and many artificial intelligence algorithms have shown promising results in diagnosing clinically significant prostate cancer on MRI. The objective was to assess a region-of-interest-based machine-learning algorithm aimed at characterising GG ≥2 prostate cancer on multiparametric MRI. METHODS The lesions targeted at biopsy in the MRI-FIRST dataset were retrospectively delineated and assessed using a previously developed algorithm. The Prostate Imaging-Reporting and Data System version 2 (PI-RADSv2) score assigned prospectively before biopsy and the algorithm score calculated retrospectively in the regions of interest were compared for diagnosing GG ≥2 cancer, using areas under the curve (AUCs) and sensitivities and specificities calculated with predefined thresholds (PI-RADSv2 scores ≥3 and ≥4; algorithm scores yielding 90% sensitivity in the training database). Ten predefined biopsy strategies were assessed retrospectively. KEY FINDINGS AND LIMITATIONS After excluding 19 patients, we analysed 232 patients imaged on 16 different scanners; 85 had GG ≥2 cancer at biopsy. At the patient level, the AUCs of the algorithm and PI-RADSv2 were 77% (95% confidence interval [CI]: 70-82) and 80% (CI: 74-85; p = 0.36), respectively. The algorithm's sensitivity and specificity were 86% (CI: 76-93) and 65% (CI: 54-73), respectively. PI-RADSv2 sensitivities and specificities were 95% (CI: 89-100) and 38% (CI: 26-47), and 89% (CI: 79-96) and 47% (CI: 35-57), for thresholds of ≥3 and ≥4, respectively. Using the PI-RADSv2 score to trigger a biopsy would have avoided 26-34% of biopsies while missing 5-11% of GG ≥2 cancers. Combining prostate-specific antigen density with the PI-RADSv2 and algorithm scores would have avoided 44-47% of biopsies while missing 6-9% of GG ≥2 cancers. Limitations include the retrospective nature of the study and the lack of a PI-RADS version 2.1 assessment. CONCLUSIONS AND CLINICAL IMPLICATIONS The algorithm provided robust results in the multicentre, multiscanner MRI-FIRST database and could help select patients for biopsy. PATIENT SUMMARY An artificial intelligence-based algorithm aimed at diagnosing aggressive cancers on prostate magnetic resonance imaging showed results similar to expert human assessment in a prospectively acquired multicentre test database.
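The sensitivities and specificities reported above are plain threshold statistics over a score and a binary ground truth. As a reminder of how such numbers are computed, here is a small NumPy sketch; the scores, labels, and threshold are invented for illustration:

```python
import numpy as np

def sens_spec(scores, labels, threshold):
    """Sensitivity and specificity of the decision rule `score >= threshold`."""
    pred = scores >= threshold
    labels = labels.astype(bool)
    tp = np.sum(pred & labels)    # cancers correctly flagged
    fn = np.sum(~pred & labels)   # cancers missed
    tn = np.sum(~pred & ~labels)  # benign cases correctly spared
    fp = np.sum(pred & ~labels)   # benign cases flagged
    return tp / (tp + fn), tn / (tn + fp)

scores = np.array([0.9, 0.8, 0.4, 0.3, 0.2, 0.7])   # toy algorithm scores
labels = np.array([1, 1, 1, 0, 0, 0])               # 1 = GG >= 2 at biopsy
sens, spec = sens_spec(scores, labels, threshold=0.5)
```

Varying the threshold trades sensitivity against specificity, which is exactly the trade-off the predefined PI-RADSv2 ≥3 and ≥4 cut-offs represent.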
Affiliation(s)
- Thibaut Couchoux: Department of Urinary and Vascular Imaging, Hôpital Edouard Herriot, Hospices Civils de Lyon, Lyon, France
- Pierre Baseilhac: Department of Urinary and Vascular Imaging, Hôpital Edouard Herriot, Hospices Civils de Lyon, Lyon, France
- Arthur Branchu: Department of Urinary and Vascular Imaging, Hôpital Edouard Herriot, Hospices Civils de Lyon, Lyon, France
- Nicolas Arfi: Department of Urology, Hôpital Saint Joseph Saint Luc, Lyon, France
- Richard Aziza: Department of Radiology, Institut Universitaire du Cancer de Toulouse, Toulouse, France
- Franck Bladou: Department of Urology, Centre Hospitalier Universitaire de Bordeaux, Bordeaux, France
- Flavie Bratan: Department of Diagnostic and Interventional Imaging, Hôpital Saint Joseph Saint Luc, Lyon, France
- Serge Brunelle: Department of Radiology and Medical Imaging, Institut Paoli-Calmettes Cancer Center, Marseille, France
- Pierre Colin: Department of Urology, Hôpital privé La Louvrière, Lille, France
- Jean-Michel Correas: Department of Radiology, Hôpital Necker, Assistance Publique-Hôpitaux de Paris, Paris, France
- François Cornud: Department of Radiology, Hôpital Cochin, Assistance Publique-Hôpitaux de Paris, Paris, France
- Jean-Luc Descotes: Université Grenoble Alpes, Grenoble, France; Department of Urology, Centre Hospitalier Universitaire de Grenoble, Grenoble, France
- Pascal Eschwege: Department of Urology, Centre Hospitalier Régional et Universitaire de Nancy, Vandoeuvre, France
- Gaelle Fiard: Université Grenoble Alpes, Grenoble, France; Department of Urology, Centre Hospitalier Universitaire de Grenoble, Grenoble, France
- Bénédicte Guillaume: Department of Radiology, Centre Hospitalier Universitaire de Grenoble, Université Grenoble Alpes, Grenoble, France
- Rémi Grange: Department of Radiology, University Hospital of Saint-Etienne, Saint-Priest-en-Jarez, France
- Nicolas Grenier: Department of Radiology, Centre Hospitalier Universitaire de Bordeaux, Hôpital Pellegrin, Bordeaux, France
- Hervé Lang: Department of Urology, Centre Hospitalier Universitaire de Strasbourg, Nouvel Hôpital Civil, Strasbourg, France
- Frédéric Lefèvre: Department of Radiology, Centre Hospitalier Régional et Universitaire de Nancy, Vandoeuvre, France
- Bernard Malavaud: Department of Urology, Institut Universitaire du Cancer de Toulouse, Toulouse, France
- Clément Marcelin: Department of Radiology, Centre Hospitalier Universitaire de Bordeaux, Hôpital Pellegrin, Bordeaux, France
- Paul C Moldovan: Department of Urinary and Vascular Imaging, Hôpital Edouard Herriot, Hospices Civils de Lyon, Lyon, France
- Nicolas Mottet: Department of Urology, University Hospital of Saint-Etienne, Saint-Priest-en-Jarez, France
- Pierre Mozer: Department of Urology, Hôpital Pitié-Salpêtrière, Assistance Publique-Hôpitaux de Paris, Paris, France
- Eric Potiron: Clinique Urologique de Nantes, Saint-Herblain, France
- Daniel Portalez: Department of Radiology, Institut Universitaire du Cancer de Toulouse, Toulouse, France
- Philippe Puech: Department of Radiology, Centre Hospitalier Régional et Universitaire de Lille, Lille, France
- Raphaele Renard-Penna: Department of Radiology, Hôpital Pitié-Salpêtrière, Assistance Publique-Hôpitaux de Paris, Paris, France; GRC no 5, ONCOTYPE-URO, Sorbonne Universités, Paris, France
- Matthieu Roumiguié: Department of Urology, Toulouse-Rangueil University Hospital, Toulouse, France
- Catherine Roy: Department of Radiology B, Centre Hospitalier Universitaire de Strasbourg, Nouvel Hôpital Civil, Strasbourg, France
- Marc-Olivier Timsit: Department of Urology, Hôpital Européen Georges Pompidou, Assistance Publique-Hôpitaux de Paris, Paris, France
- Thibault Tricard: Department of Urology, Centre Hospitalier Universitaire de Strasbourg, Nouvel Hôpital Civil, Strasbourg, France
- Arnauld Villers: Department of Urology, Univ. Lille, CHU Lille, Lille, France
- Jochen Walz: Department of Urology, Institut Paoli-Calmettes Cancer Center, Marseille, France
- Sabine Debeer: Department of Urinary and Vascular Imaging, Hôpital Edouard Herriot, Hospices Civils de Lyon, Lyon, France
- Adeline Mansuy: Department of Urinary and Vascular Imaging, Hôpital Edouard Herriot, Hospices Civils de Lyon, Lyon, France
- Lionel Badet: Department of Urology, University Hospital of Saint-Etienne, Saint-Priest-en-Jarez, France; Department of Urology, Hôpital Edouard Herriot, Hospices Civils de Lyon, Lyon, France; Université Lyon 1, Université de Lyon, Lyon, France
- Marc Colombel: Department of Urology, Hôpital Edouard Herriot, Hospices Civils de Lyon, Lyon, France; Université Lyon 1, Université de Lyon, Lyon, France
- Alain Ruffion: Université Lyon 1, Université de Lyon, Lyon, France; Department of Urology, Centre Hospitalier Lyon Sud, Hospices Civils de Lyon, Pierre-Bénite, France
- Sébastien Crouzet: LabTau, INSERM Unit 1032, Lyon, France; Department of Urology, Hôpital Edouard Herriot, Hospices Civils de Lyon, Lyon, France; Université Lyon 1, Université de Lyon, Lyon, France
- Muriel Rabilloud: Université Lyon 1, Université de Lyon, Lyon, France; Pôle Santé Publique, Service de Biostatistique et Bioinformatique, Hospices Civils de Lyon, Lyon, France; CNRS, UMR 5558, Laboratoire de Biométrie et Biologie Évolutive, Équipe Biostatistique-Santé, Villeurbanne, France
- Olivier Rouvière: Department of Urinary and Vascular Imaging, Hôpital Edouard Herriot, Hospices Civils de Lyon, Lyon, France; LabTau, INSERM Unit 1032, Lyon, France; Université Lyon 1, Université de Lyon, Lyon, France
3.
Wu X, Xu Z, Tong RKY. Continual learning in medical image analysis: A survey. Comput Biol Med 2024; 182:109206. PMID: 39332115; DOI: 10.1016/j.compbiomed.2024.109206.
Abstract
In dynamic, practical clinical scenarios, Continual Learning (CL) has gained increasing interest in medical image analysis due to its potential to address major challenges associated with data privacy, model adaptability, memory inefficiency, prediction robustness and detection accuracy. In general, the primary challenge in adapting and advancing CL remains catastrophic forgetting. Beyond this challenge, recent years have witnessed a growing body of work that expands our comprehension and application of continual learning in the medical domain, highlighting its practical significance and intricacy. In this paper, we present an in-depth and up-to-date review of the application of CL in medical image analysis. Our discussion delves into the strategies employed to address specific tasks within the medical domain, categorizing existing CL methods into three settings: Task-Incremental Learning, Class-Incremental Learning, and Domain-Incremental Learning. These settings are further subdivided based on representative learning strategies, allowing us to assess their strengths and weaknesses in the context of various medical scenarios. By establishing a correlation between each medical challenge and the corresponding insights provided by CL, we provide a comprehensive understanding of the potential impact of these techniques. To enhance the utility of our review, we provide an overview of the commonly used benchmark medical datasets and evaluation metrics in the field. Through a comprehensive comparison, we discuss promising future directions for the application of CL in medical image analysis. A continuously updated list of studies is available at https://github.com/xw1519/Continual-Learning-Medical-Adaptation.
Affiliation(s)
- Xinyao Wu: Department of Biomedical Engineering, The Chinese University of Hong Kong, Shatin, NT, Hong Kong, China
- Zhe Xu: Department of Biomedical Engineering, The Chinese University of Hong Kong, Shatin, NT, Hong Kong, China; Massachusetts General Hospital, Harvard Medical School, Boston, MA, USA
- Raymond Kai-Yu Tong: Department of Biomedical Engineering, The Chinese University of Hong Kong, Shatin, NT, Hong Kong, China
4.
Tattersall AG, Goatman KA, Kershaw LE, Semple SIK, Dahdouh S. TIST-Net: style transfer in dynamic contrast enhanced MRI using spatial and temporal information. Phys Med Biol 2024; 69:115035. PMID: 38648788; DOI: 10.1088/1361-6560/ad4193.
Abstract
Objective. Training deep learning models for image registration or segmentation of dynamic contrast enhanced (DCE) MRI data is challenging, mainly due to the wide variations in contrast enhancement within and between patients. To train a model effectively, a large dataset is needed, but acquiring one is expensive and time consuming. Instead, style transfer can be used to generate new images from existing images. In this study, our objective is to develop a style transfer method that incorporates spatio-temporal information to either add or remove contrast enhancement from an existing image. Approach. We propose a temporal image-to-image style transfer network (TIST-Net), consisting of an auto-encoder combined with convolutional long short-term memory networks. This enables disentanglement of the content and style latent spaces of the time series data, using spatio-temporal information to learn and predict key structures. To generate new images, we use deformable and adaptive convolutions, which allow fine-grained control over the combination of the content and style latent spaces. We evaluate our method using popular metrics and a previously proposed contrast-weighted structural similarity index measure. We also perform a clinical evaluation, in which experts are asked to rank images generated by multiple methods. Main Results. Our model achieves state-of-the-art performance on three datasets (kidney, prostate and uterus), achieving an SSIM of 0.91 ± 0.03, 0.73 ± 0.04 and 0.88 ± 0.04, respectively, when performing style transfer between a non-enhanced image and a contrast-enhanced image. Similarly, SSIM results for style transfer from a contrast-enhanced image to a non-enhanced image were 0.89 ± 0.03, 0.82 ± 0.03 and 0.87 ± 0.03. In the clinical evaluation, our method was ranked consistently higher than other approaches. Significance. TIST-Net can be used to generate new DCE-MRI data from existing images. In the future, this may improve models for tasks such as image registration or segmentation by allowing small training datasets to be expanded.
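The SSIM figures quoted above come from the standard structural similarity formula. As a rough reference, a single-window (global) version can be written directly from the means, variances and covariance; real evaluations, including windowed SSIM as used in practice, slide this computation over local patches. The toy image below is invented for illustration:

```python
import numpy as np

def global_ssim(a, b, data_range=1.0, k1=0.01, k2=0.03):
    """SSIM computed over the whole image as a single window (illustrative only)."""
    c1, c2 = (k1 * data_range) ** 2, (k2 * data_range) ** 2
    mu_a, mu_b = a.mean(), b.mean()
    var_a, var_b = a.var(), b.var()
    cov = ((a - mu_a) * (b - mu_b)).mean()
    return ((2 * mu_a * mu_b + c1) * (2 * cov + c2)) / \
           ((mu_a ** 2 + mu_b ** 2 + c1) * (var_a + var_b + c2))

img = np.linspace(0.0, 1.0, 64).reshape(8, 8)
s_same = global_ssim(img, img)        # identical images score exactly 1.0
s_diff = global_ssim(img, 1.0 - img)  # inverted contrast scores far lower
```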
Affiliation(s)
- Adam G Tattersall: University of Edinburgh, Edinburgh, United Kingdom; Canon Medical Research Europe, Edinburgh, United Kingdom
- Sonia Dahdouh: Canon Medical Research Europe, Edinburgh, United Kingdom
5.
Cheng Z, Wang S, Gao Y, Zhu Z, Yan C. Invariant Content Representation for Generalizable Medical Image Segmentation. J Imaging Inform Med 2024. PMID: 38758420; DOI: 10.1007/s10278-024-01088-9.
Abstract
Because privacy preservation restricts data sharing, domain generalization (DG) for medical image segmentation often must learn from a single source domain while remaining robust on unseen target domains. To achieve this goal, previous methods mainly use data augmentation to expand the distribution of samples and learn invariant content from them. However, most of these methods perform global augmentation, leading to limited diversity in the augmented samples. In addition, the style of the augmented images is more scattered than that of the source domain, which may cause the model to overfit the source-domain style. To address these issues, we propose an invariant content representation network (ICRN) to enhance the learning of invariant content and suppress the learning of variable styles. Specifically, we first design a gamma correction-based local style augmentation (LSA) to expand the distribution of samples by augmenting foreground and background styles separately. Then, based on the augmented samples, we introduce invariant content learning (ICL) to learn generalizable invariant content from both augmented and source-domain samples. Finally, we design domain-specific batch normalization (DSBN) based style adversarial learning (SAL) to suppress the learning of preferences for source-domain styles. Experimental results show that our proposed method improves the overall Dice coefficient (Dice) by 8.74% and 11.33% and reduces the overall average surface distance (ASD) by 15.88 mm and 3.87 mm on two publicly available cross-domain datasets, Fundus and Prostate, compared to state-of-the-art DG methods. The code is available at https://github.com/ZMC-IIIM/ICRN-DG.
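The gamma correction-based local style augmentation described above can be illustrated in a few lines of NumPy: different gamma curves are applied to the foreground and background of an intensity-normalised image. The mask, image, and gamma values here are invented for illustration and are not taken from the paper:

```python
import numpy as np

def local_gamma_augment(image, fg_mask, gamma_fg, gamma_bg):
    """Apply separate gamma curves to foreground and background regions.
    Intensities are assumed to be normalised to [0, 1]."""
    return np.where(fg_mask, image ** gamma_fg, image ** gamma_bg)

img = np.full((4, 4), 0.25)        # uniform toy image
mask = np.zeros((4, 4), dtype=bool)
mask[:2] = True                    # top half plays the role of "foreground"
aug = local_gamma_augment(img, mask, gamma_fg=0.5, gamma_bg=2.0)
# gamma < 1 brightens the foreground (0.25**0.5 = 0.5);
# gamma > 1 darkens the background (0.25**2 = 0.0625)
```

Sampling the two gammas independently per image yields region-wise style variation, which is the kind of local (rather than global) augmentation the abstract contrasts with prior work.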
Affiliation(s)
- Zhiming Cheng: School of Automation, Hangzhou Dianzi University, Hangzhou, 310018, China
- Shuai Wang: School of Cyberspace, Hangzhou Dianzi University, Hangzhou, 310018, China; Suzhou Research Institute of Shandong University, Suzhou, 215123, China
- Yuhan Gao: School of Automation, Hangzhou Dianzi University, Hangzhou, 310018, China; Lishui Institute of Hangzhou Dianzi University, Lishui, 323010, China
- Zunjie Zhu: Lishui Institute of Hangzhou Dianzi University, Lishui, 323010, China; School of Communication Engineering, Hangzhou Dianzi University, Hangzhou, 310018, China
- Chenggang Yan: School of Communication Engineering, Hangzhou Dianzi University, Hangzhou, 310018, China
6.
Fechter T, Sachpazidis I, Baltas D. The use of deep learning in interventional radiotherapy (brachytherapy): A review with a focus on open source and open data. Z Med Phys 2024; 34:180-196. PMID: 36376203; PMCID: PMC11156786; DOI: 10.1016/j.zemedi.2022.10.005.
Abstract
Deep learning has advanced to become one of the most important technologies in almost all medical fields, and it plays an especially large role in areas related to medical imaging. However, in interventional radiotherapy (brachytherapy) deep learning is still in an early phase. In this review, we first investigated and scrutinised the role of deep learning in all processes of interventional radiotherapy and directly related fields, and summarised the most recent developments. For better understanding, we provide explanations of key terms and approaches to solving common deep learning problems. To reproduce the results of deep learning algorithms, both source code and training data must be available. Therefore, a second focus of this work is the analysis of the availability of open source, open data and open models. Our analysis shows that deep learning already plays a major role in some areas of interventional radiotherapy, but is still hardly present in others. Nevertheless, its impact is increasing with the years, partly self-propelled but also influenced by closely related fields. Open source, data and models are growing in number but are still scarce and unevenly distributed among different research groups. The reluctance to publish code, data and models limits reproducibility and restricts evaluation to mono-institutional datasets. The conclusion of our analysis is that deep learning can positively change the workflow of interventional radiotherapy, but there is still room for improvement when it comes to reproducible results and standardised evaluation methods.
Affiliation(s)
- Tobias Fechter: Division of Medical Physics, Department of Radiation Oncology, Medical Center University of Freiburg, Germany; Faculty of Medicine, University of Freiburg, Germany; German Cancer Consortium (DKTK), Partner Site Freiburg, Germany
- Ilias Sachpazidis: Division of Medical Physics, Department of Radiation Oncology, Medical Center University of Freiburg, Germany; Faculty of Medicine, University of Freiburg, Germany; German Cancer Consortium (DKTK), Partner Site Freiburg, Germany
- Dimos Baltas: Division of Medical Physics, Department of Radiation Oncology, Medical Center University of Freiburg, Germany; Faculty of Medicine, University of Freiburg, Germany; German Cancer Consortium (DKTK), Partner Site Freiburg, Germany
7.
Zhu Z, Ma X, Wang W, Dong S, Wang K, Wu L, Luo G, Wang G, Li S. Boosting knowledge diversity, accuracy, and stability via tri-enhanced distillation for domain continual medical image segmentation. Med Image Anal 2024; 94:103112. PMID: 38401270; DOI: 10.1016/j.media.2024.103112.
Abstract
Domain continual medical image segmentation plays a crucial role in clinical settings. This approach enables segmentation models to continually learn from a sequential data stream across multiple domains. However, it faces the challenge of catastrophic forgetting. Existing methods based on knowledge distillation show potential to address this challenge via a three-stage process: distillation, transfer, and fusion. Yet, each stage presents its unique issues that, collectively, amplify the problem of catastrophic forgetting. To address these issues at each stage, we propose a tri-enhanced distillation framework. (1) Stochastic Knowledge Augmentation reduces redundancy in knowledge, thereby increasing both the diversity and volume of knowledge derived from the old network. (2) Adaptive Knowledge Transfer selectively captures critical information from the old knowledge, facilitating a more accurate knowledge transfer. (3) Global Uncertainty-Guided Fusion introduces a global uncertainty view of the dataset to fuse the old and new knowledge with reduced bias, promoting a more stable knowledge fusion. Our experimental results not only validate the feasibility of our approach, but also demonstrate its superior performance compared to state-of-the-art methods. We suggest that our innovative tri-enhanced distillation framework may establish a robust benchmark for domain continual medical image segmentation.
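The three enhancements above all build on the basic distillation signal: matching the new network's softened outputs to the old network's. A minimal NumPy sketch of that baseline (temperature-softened KL divergence; the logits and temperature are illustrative, not the paper's) is:

```python
import numpy as np

def softmax(z, temperature=1.0):
    z = z / temperature
    e = np.exp(z - np.max(z))   # subtract max for numerical stability
    return e / e.sum()

def distill_kl(old_logits, new_logits, temperature=2.0):
    """KL(old || new) on temperature-softened class distributions: the penalty
    for the new network drifting from the old network's predictions."""
    p = softmax(old_logits, temperature)
    q = softmax(new_logits, temperature)
    return float(np.sum(p * (np.log(p) - np.log(q))))

old = np.array([1.0, 2.0, 3.0])
kl_same = distill_kl(old, old)        # identical outputs: zero loss
kl_diff = distill_kl(old, old[::-1])  # disagreeing outputs: positive loss
```

Minimising this term while training on the new domain is what preserves old-domain behaviour; the paper's contributions augment, filter, and reweight this signal at the distillation, transfer, and fusion stages respectively.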
Affiliation(s)
- Zhanshi Zhu: Faculty of Computing, Harbin Institute of Technology, Harbin, China
- Xinghua Ma: Faculty of Computing, Harbin Institute of Technology, Harbin, China
- Wei Wang: Faculty of Computing, Harbin Institute of Technology, Shenzhen, China
- Suyu Dong: College of Computer and Control Engineering, Northeast Forestry University, Harbin, China
- Kuanquan Wang: Faculty of Computing, Harbin Institute of Technology, Harbin, China
- Lianming Wu: Department of Radiology, Renji Hospital, School of Medicine, Shanghai Jiao Tong University, Shanghai, China
- Gongning Luo: Faculty of Computing, Harbin Institute of Technology, Harbin, China
- Guohua Wang: College of Computer and Control Engineering, Northeast Forestry University, Harbin, China
- Shuo Li: Department of Biomedical Engineering, Case Western Reserve University, Cleveland, OH 44106, USA
8.
Xu Z, Lu D, Luo J, Zheng Y, Tong RKY. Separated collaborative learning for semi-supervised prostate segmentation with multi-site heterogeneous unlabeled MRI data. Med Image Anal 2024; 93:103095. PMID: 38310678; DOI: 10.1016/j.media.2024.103095.
Abstract
Segmenting prostate from magnetic resonance imaging (MRI) is a critical procedure in prostate cancer staging and treatment planning. Considering the nature of labeled data scarcity for medical images, semi-supervised learning (SSL) becomes an appealing solution since it can simultaneously exploit limited labeled data and a large amount of unlabeled data. However, SSL relies on the assumption that the unlabeled images are abundant, which may not be satisfied when the local institute has limited image collection capabilities. An intuitive solution is to seek support from other centers to enrich the unlabeled image pool. However, this further introduces data heterogeneity, which can impede SSL that works under identical data distribution with certain model assumptions. Aiming at this under-explored yet valuable scenario, in this work, we propose a separated collaborative learning (SCL) framework for semi-supervised prostate segmentation with multi-site unlabeled MRI data. Specifically, on top of the teacher-student framework, SCL exploits multi-site unlabeled data by: (i) Local learning, which advocates local distribution fitting, including the pseudo label learning that reinforces confirmation of low-entropy easy regions and the cyclic propagated real label learning that leverages class prototypes to regularize the distribution of intra-class features; (ii) External multi-site learning, which aims to robustly mine informative clues from external data, mainly including the local-support category mutual dependence learning, which takes the spirit that mutual information can effectively measure the amount of information shared by two variables even from different domains, and the stability learning under strong adversarial perturbations to enhance robustness to heterogeneity. 
Extensive experiments on prostate MRI data from six different clinical centers show that our method can effectively generalize SSL on multi-site unlabeled data and significantly outperform other semi-supervised segmentation methods. Besides, we validate the extensibility of our method on the multi-class cardiac MRI segmentation task with data from four different clinical centers.
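The teacher-student framework that SCL builds on typically maintains the teacher as an exponential moving average (EMA) of the student's weights. A minimal NumPy sketch of that update follows; the decay value and toy parameters are illustrative assumptions, not details from the paper:

```python
import numpy as np

def ema_update(teacher_params, student_params, decay=0.99):
    """Move each teacher parameter a small step toward the student.
    The teacher is never trained by gradient descent directly."""
    return [decay * t + (1.0 - decay) * s
            for t, s in zip(teacher_params, student_params)]

teacher = [np.zeros(3)]   # toy single-tensor "network"
student = [np.ones(3)]
teacher = ema_update(teacher, student, decay=0.9)
# teacher moves 10% of the way toward the student: [0.1, 0.1, 0.1]
```

The slowly moving teacher produces more stable targets on unlabeled images, which is why it is a common backbone for the pseudo-label and consistency objectives described above.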
Affiliation(s)
- Zhe Xu: Department of Biomedical Engineering, The Chinese University of Hong Kong, Shatin, NT, Hong Kong, China
- Donghuan Lu: Tencent Jarvis Research Center, Youtu Lab, Shenzhen, China
- Jie Luo: Massachusetts General Hospital, Harvard Medical School, Boston, USA
- Yefeng Zheng: Tencent Jarvis Research Center, Youtu Lab, Shenzhen, China
- Raymond Kai-Yu Tong: Department of Biomedical Engineering, The Chinese University of Hong Kong, Shatin, NT, Hong Kong, China
9.
Huang Y, Yang X, Liu L, Zhou H, Chang A, Zhou X, Chen R, Yu J, Chen J, Chen C, Liu S, Chi H, Hu X, Yue K, Li L, Grau V, Fan DP, Dong F, Ni D. Segment anything model for medical images? Med Image Anal 2024; 92:103061. PMID: 38086235; DOI: 10.1016/j.media.2023.103061.
Abstract
The Segment Anything Model (SAM) is the first foundation model for general image segmentation. It has achieved impressive results on various natural image segmentation tasks. However, medical image segmentation (MIS) is more challenging because of the complex modalities, fine anatomical structures, uncertain and complex object boundaries, and wide-ranging object scales. To fully validate SAM's performance on medical data, we collected and sorted 53 open-source datasets and built a large medical segmentation dataset with 18 modalities, 84 objects, 125 object-modality paired targets, 1050K 2D images, and 6033K masks. We comprehensively analyzed different models and strategies on the so-called COSMOS 1050K dataset. Our main findings are as follows: (1) SAM showed remarkable performance on some specific objects but was unstable, imperfect, or even failed totally in other situations. (2) SAM with the large ViT-H showed better overall performance than SAM with the small ViT-B. (3) SAM performed better with manual hints, especially boxes, than in the Everything mode. (4) SAM could help human annotation achieve high labeling quality in less time. (5) SAM was sensitive to randomness in the center-point and tight-box prompts, which may cause a serious performance drop. (6) SAM performed better than interactive methods given one or a few points, but was outpaced as the number of points increased. (7) SAM's performance correlated with various factors, including boundary complexity and intensity differences. (8) Fine-tuning SAM on specific medical tasks could improve its average DICE performance by 4.39% and 6.68% for ViT-B and ViT-H, respectively. Codes and models are available at: https://github.com/yuhoo0302/Segment-Anything-Model-for-Medical-Images. We hope that this comprehensive report can help researchers explore the potential of SAM applications in MIS and guide how to appropriately use and develop SAM.
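The DICE numbers throughout this abstract refer to the Dice similarity coefficient between a predicted mask and the ground truth. For reference, it can be computed in a few lines of NumPy; the toy masks below are invented for illustration:

```python
import numpy as np

def dice(pred, target, eps=1e-7):
    """Dice similarity coefficient between two binary masks:
    2 * |intersection| / (|pred| + |target|)."""
    pred, target = pred.astype(bool), target.astype(bool)
    inter = np.sum(pred & target)
    return (2.0 * inter + eps) / (pred.sum() + target.sum() + eps)

a = np.array([[1, 1, 0],
              [0, 1, 0]])   # predicted mask
b = np.array([[1, 0, 0],
              [0, 1, 1]])   # ground-truth mask
d = dice(a, b)              # overlap 2, mask sizes 3 and 3 -> Dice = 4/6
```

The small `eps` keeps the score defined when both masks are empty, a common convention in segmentation evaluation code.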
Affiliation(s)
- Yuhao Huang
- National-Regional Key Technology Engineering Laboratory for Medical Ultrasound, School of Biomedical Engineering, Shenzhen University Medical School, Shenzhen University, Shenzhen, China; Medical UltraSound Image Computing (MUSIC) Lab, Shenzhen University, Shenzhen, China; Marshall Laboratory of Biomedical Engineering, Shenzhen University, Shenzhen, China
- Xin Yang
- National-Regional Key Technology Engineering Laboratory for Medical Ultrasound, School of Biomedical Engineering, Shenzhen University Medical School, Shenzhen University, Shenzhen, China; Medical UltraSound Image Computing (MUSIC) Lab, Shenzhen University, Shenzhen, China; Marshall Laboratory of Biomedical Engineering, Shenzhen University, Shenzhen, China
- Lian Liu
- National-Regional Key Technology Engineering Laboratory for Medical Ultrasound, School of Biomedical Engineering, Shenzhen University Medical School, Shenzhen University, Shenzhen, China; Medical UltraSound Image Computing (MUSIC) Lab, Shenzhen University, Shenzhen, China; Marshall Laboratory of Biomedical Engineering, Shenzhen University, Shenzhen, China
- Han Zhou
- National-Regional Key Technology Engineering Laboratory for Medical Ultrasound, School of Biomedical Engineering, Shenzhen University Medical School, Shenzhen University, Shenzhen, China; Medical UltraSound Image Computing (MUSIC) Lab, Shenzhen University, Shenzhen, China; Marshall Laboratory of Biomedical Engineering, Shenzhen University, Shenzhen, China
- Ao Chang
- National-Regional Key Technology Engineering Laboratory for Medical Ultrasound, School of Biomedical Engineering, Shenzhen University Medical School, Shenzhen University, Shenzhen, China; Medical UltraSound Image Computing (MUSIC) Lab, Shenzhen University, Shenzhen, China; Marshall Laboratory of Biomedical Engineering, Shenzhen University, Shenzhen, China
- Xinrui Zhou
- National-Regional Key Technology Engineering Laboratory for Medical Ultrasound, School of Biomedical Engineering, Shenzhen University Medical School, Shenzhen University, Shenzhen, China; Medical UltraSound Image Computing (MUSIC) Lab, Shenzhen University, Shenzhen, China; Marshall Laboratory of Biomedical Engineering, Shenzhen University, Shenzhen, China
- Rusi Chen
- National-Regional Key Technology Engineering Laboratory for Medical Ultrasound, School of Biomedical Engineering, Shenzhen University Medical School, Shenzhen University, Shenzhen, China; Medical UltraSound Image Computing (MUSIC) Lab, Shenzhen University, Shenzhen, China; Marshall Laboratory of Biomedical Engineering, Shenzhen University, Shenzhen, China
- Junxuan Yu
- National-Regional Key Technology Engineering Laboratory for Medical Ultrasound, School of Biomedical Engineering, Shenzhen University Medical School, Shenzhen University, Shenzhen, China; Medical UltraSound Image Computing (MUSIC) Lab, Shenzhen University, Shenzhen, China; Marshall Laboratory of Biomedical Engineering, Shenzhen University, Shenzhen, China
- Jiongquan Chen
- National-Regional Key Technology Engineering Laboratory for Medical Ultrasound, School of Biomedical Engineering, Shenzhen University Medical School, Shenzhen University, Shenzhen, China; Medical UltraSound Image Computing (MUSIC) Lab, Shenzhen University, Shenzhen, China; Marshall Laboratory of Biomedical Engineering, Shenzhen University, Shenzhen, China
- Chaoyu Chen
- National-Regional Key Technology Engineering Laboratory for Medical Ultrasound, School of Biomedical Engineering, Shenzhen University Medical School, Shenzhen University, Shenzhen, China; Medical UltraSound Image Computing (MUSIC) Lab, Shenzhen University, Shenzhen, China; Marshall Laboratory of Biomedical Engineering, Shenzhen University, Shenzhen, China
- Sijing Liu
- National-Regional Key Technology Engineering Laboratory for Medical Ultrasound, School of Biomedical Engineering, Shenzhen University Medical School, Shenzhen University, Shenzhen, China; Medical UltraSound Image Computing (MUSIC) Lab, Shenzhen University, Shenzhen, China; Marshall Laboratory of Biomedical Engineering, Shenzhen University, Shenzhen, China
- Xindi Hu
- Shenzhen RayShape Medical Technology Co., Ltd, Shenzhen, China
- Kejuan Yue
- Hunan First Normal University, Changsha, China
- Lei Li
- Department of Engineering Science, University of Oxford, Oxford, UK
- Vicente Grau
- Department of Engineering Science, University of Oxford, Oxford, UK
- Deng-Ping Fan
- Computer Vision Lab (CVL), ETH Zurich, Zurich, Switzerland
- Fajin Dong
- Ultrasound Department, the Second Clinical Medical College, Jinan University, China; First Affiliated Hospital, Southern University of Science and Technology, Shenzhen People's Hospital, Shenzhen, China.
- Dong Ni
- National-Regional Key Technology Engineering Laboratory for Medical Ultrasound, School of Biomedical Engineering, Shenzhen University Medical School, Shenzhen University, Shenzhen, China; Medical UltraSound Image Computing (MUSIC) Lab, Shenzhen University, Shenzhen, China; Marshall Laboratory of Biomedical Engineering, Shenzhen University, Shenzhen, China.
10
Hanzlikova P, Vilimek D, Vilimkova Kahankova R, Ladrova M, Skopelidou V, Ruzickova Z, Martinek R, Cvek J. Longitudinal analysis of T2 relaxation time variations following radiotherapy for prostate cancer. Heliyon 2024; 10:e24557. [PMID: 38298676 PMCID: PMC10828070 DOI: 10.1016/j.heliyon.2024.e24557] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/07/2023] [Revised: 12/02/2023] [Accepted: 01/10/2024] [Indexed: 02/02/2024] Open
Abstract
The aim of this paper is to evaluate short- and long-term changes in T2 relaxation times after radiotherapy in patients with low- and intermediate-risk localized prostate cancer. A total of 24 patients were selected for this retrospective study. Each participant underwent 1.5T magnetic resonance imaging on seven separate occasions: initially after the implantation of gold fiducials, a step required for CyberKnife therapy guidance, followed by MRI scans two weeks post-therapy and monthly thereafter. As part of each MRI scan, the prostate region was manually delineated and the T2 relaxation times were calculated for quantitative analysis. The T2 relaxation times between individual follow-ups were analyzed using repeated measures analysis of variance, which revealed a significant difference across all measurements (F(6, 120) = 0.611, p << 0.001). A Bonferroni post hoc test revealed significant differences in median T2 values between the baseline and subsequent measurements, particularly between pre-therapy (M0) and two weeks post-therapy (M1), as well as during the monthly interval checks (M2-M6). Some cases showed a delayed decrease in relaxation times, indicating the prolonged effects of therapy. The changes in T2 values during the course of radiotherapy can help in monitoring radiotherapy response in unconfirmed patients, quantifying the scarring process, and recognizing therapy failure.
Affiliation(s)
- Pavla Hanzlikova
- Department of Radiology, University Hospital Ostrava, Czech Republic
- Department of Imaging Methods, Faculty of Medicine, University of Ostrava, Ostrava, Czech Republic
- Dominik Vilimek
- Department of Cybernetics and Biomedical Engineering, Faculty of Electrical Engineering and Computer Science, VSB - Technical University of Ostrava, 17. listopadu 15, Ostrava – Poruba, 708 00, Czech Republic
- Radana Vilimkova Kahankova
- Department of Cybernetics and Biomedical Engineering, Faculty of Electrical Engineering and Computer Science, VSB - Technical University of Ostrava, 17. listopadu 15, Ostrava – Poruba, 708 00, Czech Republic
- Martina Ladrova
- Department of Cybernetics and Biomedical Engineering, Faculty of Electrical Engineering and Computer Science, VSB - Technical University of Ostrava, 17. listopadu 15, Ostrava – Poruba, 708 00, Czech Republic
- Valeria Skopelidou
- Institute of Molecular and Clinical Pathology and Medical Genetics, University Hospital Ostrava, 70852, Ostrava, Czech Republic
- Institute of Molecular and Clinical Pathology and Medical Genetics, Faculty of Medicine, University of Ostrava, 70300, Ostrava, Czech Republic
- Zuzana Ruzickova
- Faculty of Medicine, University of Ostrava, 70300 Ostrava, Czech Republic
- Department of Oncology, University Hospital Ostrava, 70852 Ostrava, Czech Republic
- Radek Martinek
- Department of Cybernetics and Biomedical Engineering, Faculty of Electrical Engineering and Computer Science, VSB - Technical University of Ostrava, 17. listopadu 15, Ostrava – Poruba, 708 00, Czech Republic
- Jakub Cvek
- Faculty of Medicine, University of Ostrava, 70300 Ostrava, Czech Republic
- Department of Oncology, University Hospital Ostrava, 70852 Ostrava, Czech Republic
11
Mehmood M, Abbasi SH, Aurangzeb K, Majeed MF, Anwar MS, Alhussein M. A classifier model for prostate cancer diagnosis using CNNs and transfer learning with multi-parametric MRI. Front Oncol 2023; 13:1225490. [PMID: 38023149 PMCID: PMC10666634 DOI: 10.3389/fonc.2023.1225490] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/19/2023] [Accepted: 10/16/2023] [Indexed: 12/01/2023] Open
Abstract
Prostate cancer (PCa) is a major global concern, particularly for men, emphasizing the urgency of early detection to reduce mortality. As PCa is the second leading cause of cancer-related deaths in men worldwide, precise and efficient diagnostic methods are crucial. Because PCa assessment relies on high-resolution, multi-parametric MRI, computer-aided diagnosis (CAD) methods have emerged to assist radiologists in identifying anomalies. Meanwhile, the rapid advancement of medical technology has led to the adoption of deep learning methods. These techniques enhance diagnostic efficiency, reduce observer variability, and consistently outperform traditional approaches. Distinguishing aggressive from non-aggressive cancer under resource constraints is a significant problem in PCa treatment. This study aims to identify PCa in MRI images by combining deep learning and transfer learning (TL). Researchers have explored numerous CNN-based deep learning methods for classifying MRI images related to PCa. In this study, we developed an approach for classifying PCa using transfer learning on a limited number of images to achieve high performance and help radiologists identify PCa instantly. The proposed methodology adopts the EfficientNet architecture, pre-trained on the ImageNet dataset, and incorporates three branches for feature extraction from different MRI sequences. The extracted features are then combined, significantly enhancing the model's ability to distinguish MRI images accurately. Our model demonstrated remarkable results in classifying prostate cancer, achieving an accuracy of 88.89%. Furthermore, comparative results indicate that our approach achieves higher accuracy than both traditional hand-crafted feature techniques and existing deep learning techniques in PCa classification. The proposed methodology can learn more distinctive features in prostate images and correctly identify cancer.
Affiliation(s)
- Mubashar Mehmood
- Department of Computer Science, COMSATS Institute of Information Technology, Islamabad, Pakistan
- Khursheed Aurangzeb
- Department of Computer Engineering, College of Computer and Information Sciences, King Saud University, Riyadh, Saudi Arabia
- Musaed Alhussein
- Department of Computer Engineering, College of Computer and Information Sciences, King Saud University, Riyadh, Saudi Arabia
12
Li Z, Kamnitsas K, Dou Q, Qin C, Glocker B. Joint Optimization of Class-Specific Training- and Test-Time Data Augmentation in Segmentation. IEEE TRANSACTIONS ON MEDICAL IMAGING 2023; 42:3323-3335. [PMID: 37276115 DOI: 10.1109/tmi.2023.3282728] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/07/2023]
Abstract
This paper presents an effective and general data augmentation framework for medical image segmentation. We adopt a computationally and data-efficient gradient-based meta-learning scheme to explicitly align the distributions of training and validation data, where the latter is used as a proxy for unseen test data. We improve current data augmentation strategies with two core designs. First, we learn class-specific training-time data augmentation (TRA), effectively increasing the heterogeneity within the training subsets and tackling the class imbalance common in segmentation. Second, we jointly optimize TRA and test-time data augmentation (TEA), which are closely connected as both aim to align the training and test data distributions, but were so far considered separately in previous works. We demonstrate the effectiveness of our method on four medical image segmentation tasks across different scenarios with two state-of-the-art segmentation models, DeepMedic and nnU-Net. Extensive experimentation shows that the proposed data augmentation framework can significantly and consistently improve segmentation performance compared to existing solutions. Code is publicly available at https://github.com/ZerojumpLine/JCSAugment.
13
Chen B, Thandiackal K, Pati P, Goksel O. Generative appearance replay for continual unsupervised domain adaptation. Med Image Anal 2023; 89:102924. [PMID: 37597316 DOI: 10.1016/j.media.2023.102924] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/02/2023] [Revised: 06/19/2023] [Accepted: 08/01/2023] [Indexed: 08/21/2023]
Abstract
Deep learning models can achieve high accuracy when trained on large amounts of labeled data. However, real-world scenarios often involve several challenges: Training data may become available in installments, may originate from multiple different domains, and may not contain labels for training. Certain settings, for instance medical applications, often involve further restrictions that prohibit retention of previously seen data due to privacy regulations. In this work, to address such challenges, we study unsupervised segmentation in continual learning scenarios that involve domain shift. To that end, we introduce GarDA (Generative Appearance Replay for continual Domain Adaptation), a generative-replay based approach that can adapt a segmentation model sequentially to new domains with unlabeled data. In contrast to single-step unsupervised domain adaptation (UDA), continual adaptation to a sequence of domains enables leveraging and consolidation of information from multiple domains. Unlike previous approaches in incremental UDA, our method does not require access to previously seen data, making it applicable in many practical scenarios. We evaluate GarDA on three datasets with different organs and modalities, where it substantially outperforms existing techniques. Our code is available at: https://github.com/histocartography/generative-appearance-replay.
Affiliation(s)
- Boqi Chen
- ETH AI Center, Zurich, Switzerland; Department of Computer Science, ETH Zurich, Switzerland
- Kevin Thandiackal
- IBM Research Europe, Zurich, Switzerland; Computer-Assisted Applications in Medicine, ETH Zurich, Zurich, Switzerland.
- Orcun Goksel
- Computer-Assisted Applications in Medicine, ETH Zurich, Zurich, Switzerland; Department of Information Technology, Uppsala University, Uppsala, Sweden
14
Qiu L, Cheng J, Gao H, Xiong W, Ren H. Federated Semi-Supervised Learning for Medical Image Segmentation via Pseudo-Label Denoising. IEEE J Biomed Health Inform 2023; 27:4672-4683. [PMID: 37155394 DOI: 10.1109/jbhi.2023.3274498] [Citation(s) in RCA: 4] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 05/10/2023]
Abstract
Distributed big data and digital healthcare technologies have great potential to promote medical services, but challenges arise when it comes to learning a predictive model from diverse and complex e-health datasets. Federated Learning (FL), as a collaborative machine learning technique, aims to address these challenges by learning a joint predictive model across multi-site clients, especially for distributed medical institutions or hospitals. However, most existing FL methods assume that clients possess fully labeled data for training, which is often not the case in e-health datasets due to high labeling costs or expertise requirements. Therefore, this work proposes a novel and feasible approach to learning a Federated Semi-Supervised Learning (FSSL) model from distributed medical image domains, where a federated pseudo-labeling strategy for unlabeled clients is developed based on the embedded knowledge learned from labeled clients. This greatly mitigates the annotation deficiency at unlabeled clients and leads to a cost-effective and efficient medical image analysis tool. We demonstrated the effectiveness of our method by achieving significant improvements over the state-of-the-art in both fundus image and prostate MRI segmentation tasks, with the highest Dice scores of 89.23% and 91.95%, respectively, even with only a few labeled clients participating in model training. This reveals the superiority of our method for practical deployment, ultimately facilitating the wider use of FL in healthcare and leading to better patient outcomes.
15
Gao S, Zhou H, Gao Y, Zhuang X. BayeSeg: Bayesian modeling for medical image segmentation with interpretable generalizability. Med Image Anal 2023; 89:102889. [PMID: 37467643 DOI: 10.1016/j.media.2023.102889] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/01/2023] [Revised: 06/27/2023] [Accepted: 06/29/2023] [Indexed: 07/21/2023]
Abstract
Due to the cross-domain distribution shift arising from diverse medical imaging systems, many deep learning segmentation methods fail to perform well on unseen data, which limits their real-world applicability. Recent works have shown the benefits of extracting domain-invariant representations for domain generalization. However, the interpretability of domain-invariant features remains a great challenge. To address this problem, we propose an interpretable Bayesian framework (BayeSeg) through Bayesian modeling of image and label statistics to enhance model generalizability for medical image segmentation. Specifically, we first decompose an image into a spatially correlated variable and a spatially variant variable, assigning hierarchical Bayesian priors to explicitly force them to model the domain-stable shape and domain-specific appearance information, respectively. Then, we model the segmentation as a locally smooth variable related only to the shape. Finally, we develop a variational Bayesian framework to infer the posterior distributions of these explainable variables. The framework is implemented with neural networks, and thus is referred to as deep Bayesian segmentation. Quantitative and qualitative experimental results on prostate segmentation and cardiac segmentation tasks have shown the effectiveness of our proposed method. Moreover, we investigated the interpretability of BayeSeg by explaining the posteriors and analyzed certain factors that affect the generalization ability through further ablation studies. Our code is released via https://zmiclab.github.io/projects.html.
Affiliation(s)
- Shangqi Gao
- School of Data Science, Fudan University, Shanghai, 200433, China
- Hangqi Zhou
- School of Data Science, Fudan University, Shanghai, 200433, China
- Yibo Gao
- School of Data Science, Fudan University, Shanghai, 200433, China
- Xiahai Zhuang
- School of Data Science, Fudan University, Shanghai, 200433, China. https://www.sdspeople.fudan.edu.cn/zhuangxiahai/
16
Mazurowski MA, Dong H, Gu H, Yang J, Konz N, Zhang Y. Segment anything model for medical image analysis: An experimental study. Med Image Anal 2023; 89:102918. [PMID: 37595404 PMCID: PMC10528428 DOI: 10.1016/j.media.2023.102918] [Citation(s) in RCA: 31] [Impact Index Per Article: 31.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/14/2023] [Revised: 07/03/2023] [Accepted: 07/31/2023] [Indexed: 08/20/2023]
Abstract
Training segmentation models for medical images continues to be challenging due to the limited availability of data annotations. The Segment Anything Model (SAM) is a foundation model trained on over 1 billion annotations, predominantly for natural images, that is intended to segment user-defined objects of interest in an interactive manner. While the model's performance on natural images is impressive, medical image domains pose their own set of challenges. Here, we perform an extensive evaluation of SAM's ability to segment medical images on a collection of 19 medical imaging datasets from various modalities and anatomies. In our experiments, we generated point and box prompts for SAM using a standard method that simulates interactive segmentation. We report the following findings: (1) SAM's performance based on single prompts varies widely depending on the dataset and the task, from IoU = 0.1135 for spine MRI to IoU = 0.8650 for hip X-ray. (2) Segmentation performance appears to be better for well-circumscribed objects with less ambiguous prompts, such as organ segmentation in computed tomography, and poorer in various other scenarios, such as brain tumor segmentation. (3) SAM performs notably better with box prompts than with point prompts. (4) SAM outperforms similar methods RITM, SimpleClick, and FocalClick in almost all single-point prompt settings. (5) When multiple point prompts are provided iteratively, SAM's performance generally improves only slightly, while other methods improve to a level that surpasses SAM's point-based performance. We also provide several illustrations of SAM's performance on all tested datasets, iterative segmentation, and SAM's behavior given prompt ambiguity. We conclude that SAM shows impressive zero-shot segmentation performance for certain medical imaging datasets, but moderate to poor performance for others. SAM has the potential to make a significant impact in automated medical image segmentation, but appropriate care needs to be applied when using it. Code for evaluating SAM is made publicly available at https://github.com/mazurowski-lab/segment-anything-medical-evaluation.
Affiliation(s)
- Maciej A Mazurowski
- Department of Radiology, Duke University, Durham, NC, 27708, USA; Department of Electrical and Computer Engineering, Duke University, Durham, NC, 27708, USA; Department of Computer Science, Duke University, Durham, NC, 27708, USA; Department of Biostatistics & Bioinformatics, Duke University, Durham, NC, 27708, USA
- Haoyu Dong
- Department of Electrical and Computer Engineering, Duke University, Durham, NC, 27708, USA.
- Hanxue Gu
- Department of Electrical and Computer Engineering, Duke University, Durham, NC, 27708, USA
- Jichen Yang
- Department of Electrical and Computer Engineering, Duke University, Durham, NC, 27708, USA
- Nicholas Konz
- Department of Electrical and Computer Engineering, Duke University, Durham, NC, 27708, USA
- Yixin Zhang
- Department of Electrical and Computer Engineering, Duke University, Durham, NC, 27708, USA
17
Jaouen T, Souchon R, Moldovan PC, Bratan F, Duran A, Hoang-Dinh A, Di Franco F, Debeer S, Dubreuil-Chambardel M, Arfi N, Ruffion A, Colombel M, Crouzet S, Gonindard-Melodelima C, Rouvière O. Characterization of high-grade prostate cancer at multiparametric MRI using a radiomic-based computer-aided diagnosis system as standalone and second reader. Diagn Interv Imaging 2023; 104:465-476. [PMID: 37345961 DOI: 10.1016/j.diii.2023.04.006] [Citation(s) in RCA: 6] [Impact Index Per Article: 6.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/09/2023] [Revised: 04/16/2023] [Accepted: 04/18/2023] [Indexed: 06/23/2023]
Abstract
PURPOSE: The purpose of this study was to develop and test, across various scanners, a zone-specific region-of-interest (ROI)-based computer-aided diagnosis system (CAD) aimed at characterizing, on MRI, International Society of Urological Pathology (ISUP) grade ≥2 prostate cancers. MATERIALS AND METHODS: ROI-based quantitative models were selected in multi-vendor training (265 pre-prostatectomy MRIs) and pre-test (112 pre-biopsy MRIs) datasets. The best peripheral and transition zone models were combined and retrospectively assessed in internal (158 pre-biopsy MRIs) and external (104 pre-biopsy MRIs) test datasets. Two radiologists (R1/R2) retrospectively delineated the lesions targeted at biopsy in the test datasets. The CAD area under the receiver operating characteristic curve (AUC) for characterizing ISUP grade ≥2 cancers was compared to that of the Prostate Imaging-Reporting and Data System version 2 (PI-RADSv2) score prospectively assigned to targeted lesions. RESULTS: The best models used the 25th apparent diffusion coefficient (ADC) percentile in the transition zone, and the 2nd ADC percentile and normalized wash-in rate in the peripheral zone. The PI-RADSv2 AUCs were 82% (95% confidence interval [CI]: 74-87) and 86% (95% CI: 81-91) in the internal and external test datasets, respectively. They were not different from the CAD AUCs obtained with R1 and R2 delineations in the internal (82% [95% CI: 76-89], P = 0.95 and 85% [95% CI: 78-91], P = 0.55) and external (82% [95% CI: 74-91], P = 0.41 and 86% [95% CI: 78-95], P = 0.98) test datasets. The CAD yielded sensitivities of 86-89% and 90-91%, and specificities of 64-65% and 69-75%, in the internal and external test datasets, respectively. CONCLUSION: The CAD performance for characterizing ISUP grade ≥2 prostate cancers on MRI is not different from that of the PI-RADSv2 score across two test datasets.
Affiliation(s)
- Paul C Moldovan
- Hospices Civils de Lyon, Hôpital Edouard Herriot, Department of Vascular and Urinary Imaging, Lyon, 69003, France
- Flavie Bratan
- Hôpital Saint Joseph Saint Luc, Department of Radiology, Lyon, 69007, France
- Audrey Duran
- Univ Lyon, CNRS, Inserm, INSA Lyon, UCBL, CREATIS, UMR5220, U1294, Villeurbanne, 69100, France
- Au Hoang-Dinh
- INSERM, LabTAU, U1032, Lyon, 69003, France; Hanoi Medical University, Department of Radiology, Hanoi, 116001, Vietnam
- Florian Di Franco
- Hospices Civils de Lyon, Hôpital Edouard Herriot, Department of Vascular and Urinary Imaging, Lyon, 69003, France
- Sabine Debeer
- Hospices Civils de Lyon, Hôpital Edouard Herriot, Department of Vascular and Urinary Imaging, Lyon, 69003, France
- Marine Dubreuil-Chambardel
- Hospices Civils de Lyon, Hôpital Edouard Herriot, Department of Vascular and Urinary Imaging, Lyon, 69003, France
- Nicolas Arfi
- Hôpital Saint Joseph Saint Luc, Department of Urology, Lyon, 69007, France
- Alain Ruffion
- Hospices Civils de Lyon, Centre Hospitalier Lyon Sud, Department of Urology, Pierre-Bénite, 69310, France; Equipe 2 - Centre d'Innovation en Cancérologie de Lyon (EA 3738 CICLY), Pierre-Bénite, 69310, France; Université de Lyon, Lyon, 69003, France; Université Lyon 1, Lyon, 69003, France; Faculté de Médecine Lyon Sud, Pierre-Bénite, 69310, France
- Marc Colombel
- Université de Lyon, Lyon, 69003, France; Université Lyon 1, Lyon, 69003, France; Hospices Civils de Lyon, Hôpital Edouard Herriot, Department of Urology, Lyon, 69003, France; Faculté de Médecine Lyon Est, Lyon, 69003, France
- Sébastien Crouzet
- INSERM, LabTAU, U1032, Lyon, 69003, France; Université de Lyon, Lyon, 69003, France; Université Lyon 1, Lyon, 69003, France; Hospices Civils de Lyon, Hôpital Edouard Herriot, Department of Urology, Lyon, 69003, France; Faculté de Médecine Lyon Est, Lyon, 69003, France
- Christelle Gonindard-Melodelima
- Université Grenoble Alpes, Laboratoire d'Ecologie Alpine, BP 53, Grenoble 38041, France; CNRS, UMR 5553, BP 53, Grenoble, 38041, France
- Olivier Rouvière
- INSERM, LabTAU, U1032, Lyon, 69003, France; Hospices Civils de Lyon, Hôpital Edouard Herriot, Department of Vascular and Urinary Imaging, Lyon, 69003, France; Université de Lyon, Lyon, 69003, France; Université Lyon 1, Lyon, 69003, France; Faculté de Médecine Lyon Est, Lyon, 69003, France.
18
Zhang J, Gu R, Xue P, Liu M, Zheng H, Zheng Y, Ma L, Wang G, Gu L. S³R: Shape and Semantics-Based Selective Regularization for Explainable Continual Segmentation Across Multiple Sites. IEEE TRANSACTIONS ON MEDICAL IMAGING 2023; 42:2539-2551. [PMID: 37030841 DOI: 10.1109/tmi.2023.3260974] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/19/2023]
Abstract
In clinical practice, it is desirable for medical image segmentation models to be able to continually learn on a sequential data stream from multiple sites, rather than a consolidated dataset, due to storage cost and privacy restrictions. However, when learning on a new site, existing methods struggle with a weak memorizability for previous sites with complex shape and semantic information, and a poor explainability for the memory consolidation process. In this work, we propose a novel Shape and Semantics-based Selective Regularization (S³R) method for explainable cross-site continual segmentation to maintain both shape and semantic knowledge of previously learned sites. Specifically, the S³R method adopts a selective regularization scheme to penalize changes of parameters with high Joint Shape and Semantics-based Importance (JSSI) weights, which are estimated based on the parameter sensitivity to shape properties and reliable semantics of the segmentation object. This helps to prevent the related shape and semantic knowledge from being forgotten. Moreover, we propose an Importance Activation Mapping (IAM) method for memory interpretation, which indicates the spatial support for important parameters to visualize the memorized content. We have extensively evaluated our method on prostate segmentation and optic cup and disc segmentation tasks. Our method outperforms other comparison methods in reducing model forgetting and increasing explainability. Our code is available at https://github.com/jingyzhang/S3R.
19
Sánchez Iglesias Á, Morillo Macías V, Picó Peris A, Fuster-Matanzo A, Nogué Infante A, Muelas Soria R, Bellvís Bataller F, Domingo Pomar M, Casillas Meléndez C, Yébana Huertas R, Ferrer Albiach C. Prostate Region-Wise Imaging Biomarker Profiles for Risk Stratification and Biochemical Recurrence Prediction. Cancers (Basel) 2023; 15:4163. [PMID: 37627191 PMCID: PMC10453281 DOI: 10.3390/cancers15164163] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/21/2023] [Revised: 08/10/2023] [Accepted: 08/13/2023] [Indexed: 08/27/2023] Open
Abstract
BACKGROUND Identifying prostate cancer (PCa) patients with a worse prognosis and a higher risk of biochemical recurrence (BCR) is essential to guide treatment choices. Here, we aimed to identify imaging biomarker (perfusion/diffusion + radiomic features) profiles extracted from MRIs that could discriminate patients according to their risk or the occurrence of BCR 10 years after diagnosis, and to evaluate their predictive value with and without clinical data. METHODS Patients with localized PCa receiving neoadjuvant androgen deprivation therapy and radiotherapy were retrospectively evaluated. Imaging features were extracted from MRIs for each prostate region or for the whole gland. Univariate and multivariate analyses were conducted. RESULTS 128 patients (mean [range] age, 71 [50-83] years) were included. Prostate region-wise imaging biomarker profiles, mainly composed of radiomic features, discriminated risk groups and patients experiencing BCR. Heterogeneity-related radiomic features were increased in patients with a worse prognosis and with BCR. Overall, imaging biomarker profiles retained good predictive ability (AUC values above 0.725 in most cases), which generally improved when clinical data were included (particularly for the prediction of BCR, with AUC values ranging from 0.841 to 0.877 for combined models and sensitivity values above 0.960) and when models were built per prostate region rather than for the whole gland. CONCLUSIONS Prostate region-aware imaging profiles enable identification of patients with a worse prognosis and a higher risk of BCR, with higher predictive values when combined with clinical variables.
Affiliation(s)
- Ángel Sánchez Iglesias
- Radiation Oncology Department, Hospital Provincial de Castellón, 12002 Castellón, Spain
- Virginia Morillo Macías
- Radiation Oncology Department, Hospital Provincial de Castellón, 12002 Castellón, Spain
- Alfonso Picó Peris
- Quantitative Imaging Biomarkers in Medicine (Quibim), 46021 Valencia, Spain
- Almudena Fuster-Matanzo
- Quantitative Imaging Biomarkers in Medicine (Quibim), 46021 Valencia, Spain
- Anna Nogué Infante
- Quantitative Imaging Biomarkers in Medicine (Quibim), 46021 Valencia, Spain
- Rodrigo Muelas Soria
- Radiation Oncology Department, Hospital Provincial de Castellón, 12002 Castellón, Spain
- Fuensanta Bellvís Bataller
- Quantitative Imaging Biomarkers in Medicine (Quibim), 46021 Valencia, Spain
- Marcos Domingo Pomar
- Quantitative Imaging Biomarkers in Medicine (Quibim), 46021 Valencia, Spain
- Raúl Yébana Huertas
- Quantitative Imaging Biomarkers in Medicine (Quibim), 46021 Valencia, Spain
- Carlos Ferrer Albiach
- Radiation Oncology Department, Hospital Provincial de Castellón, 12002 Castellón, Spain
|
20
|
González C, Ranem A, Pinto Dos Santos D, Othman A, Mukhopadhyay A. Lifelong nnU-Net: a framework for standardized medical continual learning. Sci Rep 2023; 13:9381. [PMID: 37296233 PMCID: PMC10256748 DOI: 10.1038/s41598-023-34484-2] [Received: 10/25/2022] [Accepted: 05/02/2023] [Indexed: 06/12/2023]
Abstract
As enthusiasm surrounding deep learning grows, both medical practitioners and regulatory bodies are exploring ways to safely introduce image segmentation into clinical practice. One frontier to overcome when translating promising research into the clinical open world is the shift from static to continual learning. Continual learning, the practice of training models throughout their lifecycle, is seeing growing interest but is still in its infancy in healthcare. We present Lifelong nnU-Net, a standardized framework that puts continual segmentation in the hands of researchers and clinicians. Built on top of nnU-Net, widely regarded as the best-performing segmenter for multiple medical applications, and equipped with all necessary modules for training and testing models sequentially, it ensures broad applicability and lowers the barrier to evaluating new methods in a continual fashion. Our benchmark results across three medical segmentation use cases and five continual learning methods give a comprehensive outlook on the current state of the field and constitute a first reproducible benchmark.
Affiliation(s)
- Camila González
- Technical University of Darmstadt, Karolinenpl. 5, 64289, Darmstadt, Germany
- Amin Ranem
- Technical University of Darmstadt, Karolinenpl. 5, 64289, Darmstadt, Germany
- Daniel Pinto Dos Santos
- University Hospital Cologne, Kerpener Str. 62, 50937, Cologne, Germany
- University Hospital Frankfurt, Theodor-Stern-Kai 7, 60590, Frankfurt, Germany
- Ahmed Othman
- University Medical Center Mainz, Langenbeckstraße 1, 55131, Mainz, Germany
|
21
|
Panic J, Defeudis A, Balestra G, Giannini V, Rosati S. Normalization Strategies in Multi-Center Radiomics Abdominal MRI: Systematic Review and Meta-Analyses. IEEE Open J Eng Med Biol 2023; 4:67-76. [PMID: 37283773 PMCID: PMC10241248 DOI: 10.1109/ojemb.2023.3271455] [Received: 02/03/2023] [Revised: 03/18/2023] [Accepted: 04/25/2023] [Indexed: 06/08/2023]
Abstract
Goal: Artificial intelligence applied to medical image analysis has been used extensively to develop non-invasive diagnostic and prognostic signatures. However, these imaging biomarkers must be extensively validated on multi-center datasets to prove their robustness before they can be introduced into clinical practice. The main challenge is the large and unavoidable image variability, which is usually addressed with pre-processing techniques including spatial, intensity, and feature normalization. The purpose of this study is to systematically summarize normalization methods and to evaluate their correlation with radiomics model performance through meta-analyses. The review was carried out according to the PRISMA statement: 4777 papers were collected, but only 74 were included. Two meta-analyses were carried out for two clinical aims: characterization and prediction of response. The findings demonstrate that while some normalization approaches are commonly used, there is no commonly agreed pipeline that improves performance and bridges the gap between bench and bedside.
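As one concrete instance of the intensity-normalization strategies this review surveys, per-image z-score normalization can be sketched as follows (an illustrative sketch only; the review compares many variants, including spatial and feature normalization):

```python
def zscore_normalize(image, eps=1e-8):
    """Per-image z-score intensity normalization: shift to zero mean and
    scale to unit standard deviation. `image` is a flat list of intensities."""
    n = len(image)
    mean = sum(image) / n
    var = sum((v - mean) ** 2 for v in image) / n
    std = var ** 0.5
    return [(v - mean) / (std + eps) for v in image]
```

Applied per scan, this removes additive and multiplicative intensity offsets between scanners; it does not address spatial (resolution) differences, which need resampling.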
Affiliation(s)
- Jovana Panic
- Department of Surgical Science, University of Turin, 10129 Turin, Italy; Department of Electronics and Telecommunications, Polytechnic of Turin, 10129 Turin, Italy
- Arianna Defeudis
- Department of Surgical Science, University of Turin, 10129 Turin, Italy
- Candiolo Cancer Institute, FPO-IRCCS, 10060 Candiolo, Italy
- Gabriella Balestra
- Department of Electronics and Telecommunications, Polytechnic of Turin, 10129 Turin, Italy
- Valentina Giannini
- Department of Surgical Science, University of Turin, 10129 Turin, Italy
- Candiolo Cancer Institute, FPO-IRCCS, 10060 Candiolo, Italy
- Samanta Rosati
- Department of Electronics and Telecommunications, Polytechnic of Turin, 10129 Turin, Italy
|
22
|
Ouyang C, Chen C, Li S, Li Z, Qin C, Bai W, Rueckert D. Causality-Inspired Single-Source Domain Generalization for Medical Image Segmentation. IEEE Trans Med Imaging 2023; 42:1095-1106. [PMID: 36417741 DOI: 10.1109/tmi.2022.3224067] [Indexed: 06/16/2023]
Abstract
Deep learning models usually suffer from the domain shift issue, where models trained on one source domain do not generalize well to other, unseen domains. In this work, we investigate the single-source domain generalization problem: training a deep network that is robust to unseen domains under the condition that training data are available from only one source domain, which is common in medical imaging applications. We tackle this problem in the context of cross-domain medical image segmentation, where domain shifts are mainly caused by different acquisition processes. We propose a simple causality-inspired data augmentation approach that exposes a segmentation model to synthesized domain-shifted training examples. Specifically, (1) to make the deep model robust to discrepancies in image intensities and textures, we employ a family of randomly-weighted shallow networks that augment training images with diverse appearance transformations; (2) we further show that spurious correlations among objects in an image are detrimental to domain robustness: such correlations may be taken by the network as domain-specific clues for making predictions, and they may break on unseen domains. We remove these spurious correlations via causal intervention, achieved by resampling the appearances of potentially correlated objects independently. The proposed approach is validated on three cross-domain segmentation scenarios: cross-modality (CT-MRI) abdominal image segmentation, cross-sequence (bSSFP-LGE) cardiac MRI segmentation, and cross-site prostate MRI segmentation. It yields consistent performance gains compared with competitive methods when tested on unseen domains.
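The first augmentation idea, random appearance transformations, can be sketched in simplified form. This is a hedged illustration: a single randomly sampled intensity mapping stands in for the paper's randomly-weighted shallow networks, which produce far richer transforms.

```python
import random

def random_appearance(image, seed=None):
    """Apply one randomly sampled monotonic intensity transform, a stand-in
    for drawing a randomly weighted shallow network per training image.
    `image` is a flat list of intensities in [0, 1]."""
    rng = random.Random(seed)
    a = rng.uniform(0.8, 1.2)       # random contrast scale
    b = rng.uniform(-0.1, 0.1)      # random brightness shift
    gamma = rng.uniform(0.7, 1.5)   # random gamma curve
    return [min(1.0, max(0.0, a * (v ** gamma) + b)) for v in image]
```

Drawing a fresh transform per training image forces the segmenter to rely on shape rather than absolute intensity, which is what breaks across scanners and modalities.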
|
23
|
Kumar GV, Bellary MI, Reddy TB. Prostate cancer classification with MRI using Taylor-Bird Squirrel Optimization based Deep Recurrent Neural Network. The Imaging Science Journal 2023. [DOI: 10.1080/13682199.2023.2165242] [Indexed: 03/11/2023]
Affiliation(s)
- Goddumarri Vijay Kumar
- Dept. of Computer Science and Technology, Sri Krishnadevaraya University, Ananthapuram, A.P., India
- Mohammed Ismail Bellary
- Department of Artificial Intelligence & Machine Learning, P.A. College of Engineering, Mangalore, Affiliated to Visvesvaraya Technological University, Belagavi, K.A., India
- Thota Bhaskara Reddy
- Dept. of Computer Science and Technology, Sri Krishnadevaraya University, Ananthapuram, A.P., India
|
24
|
Hu S, Liao Z, Zhang J, Xia Y. Domain and Content Adaptive Convolution Based Multi-Source Domain Generalization for Medical Image Segmentation. IEEE Trans Med Imaging 2023; 42:233-244. [PMID: 36155434 DOI: 10.1109/tmi.2022.3210133] [Indexed: 05/10/2023]
Abstract
The domain gap, caused mainly by variable medical image quality, poses a major obstacle on the path between training a segmentation model in the lab and applying the trained model to unseen clinical data. To address this issue, domain generalization methods have been proposed, which, however, usually use static convolutions and are less flexible. In this paper, we propose a multi-source domain generalization model based on domain and content adaptive convolution (DCAC) for the segmentation of medical images across different modalities. Specifically, we design a domain adaptive convolution (DAC) module and a content adaptive convolution (CAC) module and incorporate both into an encoder-decoder backbone. In the DAC module, a dynamic convolutional head is conditioned on the predicted domain code of the input so that our model can adapt to the unseen target domain. In the CAC module, a dynamic convolutional head is conditioned on global image features so that our model can adapt to the test image. We evaluated the DCAC model against the baseline and four state-of-the-art domain generalization methods on prostate segmentation, COVID-19 lesion segmentation, and optic cup/optic disc segmentation tasks. Our results not only indicate that the proposed DCAC model outperforms all competing methods on each segmentation task but also demonstrate the effectiveness of the DAC and CAC modules. Code is available at https://git.io/DCAC.
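The dynamic-convolution idea, kernels generated from a conditioning code rather than fixed, can be illustrated with a minimal 1D sketch. The linear generator `gen_w` and the 1D setting are assumptions for illustration; the paper's modules generate 2D/3D kernels from learned domain or content codes.

```python
import numpy as np

def dynamic_conv1d(x, code, gen_w):
    """Valid 1D convolution whose kernel is produced by a generator from a
    conditioning code (domain or content code in DCAC-style models),
    instead of being a fixed learned constant."""
    kernel = gen_w @ code                      # kernel depends on the code
    k = len(kernel)
    return np.array([float(np.dot(x[i:i + k], kernel))
                     for i in range(len(x) - k + 1)])
```

Because the kernel is recomputed per input, the same layer can behave differently for inputs predicted to come from different domains.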
|
25
|
Alley S, Jackson E, Olivié D, Van der Heide UA, Ménard C, Kadoury S. Effect of magnetic resonance imaging pre-processing on the performance of model-based prostate tumor probability mapping. Phys Med Biol 2022; 67. [PMID: 36223780 DOI: 10.1088/1361-6560/ac99b4] [Received: 03/10/2022] [Accepted: 10/12/2022] [Indexed: 11/07/2022]
Abstract
Objective. Multi-parametric magnetic resonance imaging (mpMRI) has become an important tool for the detection of prostate cancer over the past two decades. Despite the high sensitivity of MRI for tissue characterization, it often suffers from a lack of specificity. Several well-established pre-processing tools are publicly available for improving image quality and removing both intra- and inter-patient variability in order to increase the diagnostic accuracy of MRI. To date, most of these pre-processing tools have largely been assessed individually. In this study we present a systematic evaluation of a multi-step mpMRI pre-processing pipeline to automate tumor localization within the prostate using a previously trained model. Approach. The study was conducted on 31 treatment-naïve prostate cancer patients with a PI-RADS v2-compliant mpMRI examination. Multiple methods were compared for each pre-processing step: (1) bias field correction, (2) normalization, and (3) deformable multi-modal registration. Optimal parameter values were estimated for each step on the basis of relevant individual metrics. Tumor localization was then carried out via a model-based approach that takes both mpMRI and prior clinical knowledge features as input. A sequential optimization approach was adopted for determining the optimal parameters and techniques in each step of the pipeline. Main results. The application of bias field correction alone increased the accuracy of tumor localization (area under the curve (AUC) = 0.77; p = 0.004) over unprocessed data (AUC = 0.74). Adding normalization to the pre-processing pipeline further improved the diagnostic accuracy of the model to an AUC of 0.85 (p = 0.00012). Multi-modal registration of apparent diffusion coefficient images to T2-weighted images improved the alignment of tumor locations in all but one patient, resulting in a slight decrease in accuracy (AUC = 0.84; p = 0.30). Significance. Overall, our findings suggest that combining multiple pre-processing steps with optimal values can improve the quantitative classification of prostate cancer using mpMRI. Clinical trials: NCT03378856 and NCT03367702.
Affiliation(s)
- Edward Jackson
- The Netherlands Cancer Institute, Amsterdam, The Netherlands
- Damien Olivié
- Centre Hospitalier de l'Université de Montréal, Montréal, Québec, Canada
- Cynthia Ménard
- Centre Hospitalier de l'Université de Montréal, Montréal, Québec, Canada
- Samuel Kadoury
- Polytechnique Montréal, Montréal, Québec, Canada; Centre Hospitalier de l'Université de Montréal, Montréal, Québec, Canada
|
26
|
Rouvière O, Jaouen T, Baseilhac P, Benomar ML, Escande R, Crouzet S, Souchon R. Artificial intelligence algorithms aimed at characterizing or detecting prostate cancer on MRI: How accurate are they when tested on independent cohorts? – A systematic review. Diagn Interv Imaging 2022; 104:221-234. [PMID: 36517398 DOI: 10.1016/j.diii.2022.11.005] [Received: 11/20/2022] [Accepted: 11/22/2022] [Indexed: 12/14/2022]
Abstract
PURPOSE The purpose of this study was to perform a systematic review of the literature on the diagnostic performance, in independent test cohorts, of artificial intelligence (AI)-based algorithms aimed at characterizing/detecting prostate cancer on magnetic resonance imaging (MRI). MATERIALS AND METHODS Medline, Embase and Web of Science were searched for studies published between January 2018 and September 2022, using a histological reference standard, and assessing prostate cancer characterization/detection by AI-based MRI algorithms in test cohorts composed of more than 40 patients and with at least one of the following independency criteria as compared to the training cohort: different institution, different population type, different MRI vendor, different magnetic field strength or strict temporal splitting. RESULTS Thirty-five studies were selected. The overall risk of bias was low. However, 23 studies did not use predefined diagnostic thresholds, which may have optimistically biased the results. Test cohorts fulfilled one to three of the five independency criteria. The diagnostic performance of the algorithms used as standalones was good, challenging that of human reading. In the 12 studies with predefined diagnostic thresholds, radiomics-based computer-aided diagnosis systems (assessing regions-of-interest drawn by the radiologist) tended to provide more robust results than deep learning-based computer-aided detection systems (providing probability maps). Two of the six studies comparing unassisted and assisted reading showed significant improvement due to the algorithm, mostly by reducing false positive findings. CONCLUSION Prostate MRI AI-based algorithms showed promising results, especially for the relatively simple task of characterizing predefined lesions. The best management of discrepancies between human reading and algorithm findings still needs to be defined.
Affiliation(s)
- Olivier Rouvière
- Hospices Civils de Lyon, Hôpital Edouard Herriot, Department of Vascular and Urinary Imaging, Lyon 69003, France; Université Lyon 1, Faculté de médecine Lyon Est, Lyon 69003, France; LabTAU, INSERM, U1032, Lyon 69003, France
- Pierre Baseilhac
- Hospices Civils de Lyon, Hôpital Edouard Herriot, Department of Vascular and Urinary Imaging, Lyon 69003, France
- Mohammed Lamine Benomar
- LabTAU, INSERM, U1032, Lyon 69003, France; University of Ain Temouchent, Faculty of Science and Technology, Algeria
- Raphael Escande
- Hospices Civils de Lyon, Hôpital Edouard Herriot, Department of Vascular and Urinary Imaging, Lyon 69003, France
- Sébastien Crouzet
- Université Lyon 1, Faculté de médecine Lyon Est, Lyon 69003, France; LabTAU, INSERM, U1032, Lyon 69003, France; Hospices Civils de Lyon, Hôpital Edouard Herriot, Department of Urology, Lyon 69003, France
|
27
|
Yang H, Chen C, Jiang M, Liu Q, Cao J, Heng PA, Dou Q. DLTTA: Dynamic Learning Rate for Test-Time Adaptation on Cross-Domain Medical Images. IEEE Trans Med Imaging 2022; 41:3575-3586. [PMID: 35839185 DOI: 10.1109/tmi.2022.3191535] [Indexed: 06/15/2023]
Abstract
Test-time adaptation (TTA) has increasingly become an important topic for efficiently tackling the cross-domain distribution shift at test time for medical images from different institutions. Previous TTA methods share the limitation of using a fixed learning rate for all test samples. This practice is sub-optimal for TTA because test data may arrive sequentially, so the scale of the distribution shift can change frequently. To address this problem, we propose a novel dynamic learning rate adjustment method for test-time adaptation, called DLTTA, which dynamically modulates the amount of weight update for each test image to account for differences in distribution shift. Specifically, DLTTA is equipped with a memory bank-based estimation scheme to effectively measure the discrepancy of a given test sample. Based on this estimated discrepancy, a dynamic learning rate adjustment strategy then achieves a suitable degree of adaptation for each test sample. The effectiveness and general applicability of DLTTA are demonstrated extensively on three tasks: retinal optical coherence tomography (OCT) segmentation, histopathological image classification, and prostate 3D MRI segmentation. Our method achieves effective and fast test-time adaptation with consistent performance improvement over current state-of-the-art test-time adaptation methods. Code is available at https://github.com/med-air/DLTTA.
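The dynamic learning-rate rule can be sketched as follows. This is an illustrative sketch, not the paper's implementation: DLTTA estimates discrepancy from model features against a memory bank, whereas here the per-sample discrepancy is simply a supplied scalar compared against a bank of recent values.

```python
from collections import deque

class DynamicLR:
    """Scale a base learning rate by the current test sample's discrepancy
    relative to the running average over a memory bank of recent samples:
    larger-than-usual shift means a larger adaptation step."""
    def __init__(self, base_lr=1e-3, capacity=32):
        self.base_lr = base_lr
        self.bank = deque(maxlen=capacity)

    def step(self, discrepancy):
        avg = sum(self.bank) / len(self.bank) if self.bank else discrepancy
        self.bank.append(discrepancy)
        return self.base_lr * discrepancy / max(avg, 1e-12)
```

A sample whose discrepancy matches the recent average gets the base rate; an unusually shifted sample gets a proportionally larger update.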
|
28
|
González C, Gotkowski K, Fuchs M, Bucher A, Dadras A, Fischbach R, Kaltenborn IJ, Mukhopadhyay A. Distance-based detection of out-of-distribution silent failures for Covid-19 lung lesion segmentation. Med Image Anal 2022; 82:102596. [PMID: 36084564 PMCID: PMC9400372 DOI: 10.1016/j.media.2022.102596] [Received: 03/08/2022] [Revised: 08/04/2022] [Accepted: 08/18/2022] [Indexed: 11/16/2022]
Abstract
Automatic segmentation of ground glass opacities and consolidations in chest computed tomography (CT) scans can potentially ease the burden of radiologists during times of high resource utilisation. However, deep learning models are not trusted in the clinical routine because they fail silently on out-of-distribution (OOD) data. We propose a lightweight OOD detection method that leverages the Mahalanobis distance in the feature space and integrates seamlessly into state-of-the-art segmentation pipelines. This simple approach can even augment pre-trained models with clinically relevant uncertainty quantification. We validate our method across four chest CT distribution shifts and two magnetic resonance imaging applications, namely segmentation of the hippocampus and the prostate. Our results show that the proposed method effectively detects far- and near-OOD samples across all explored scenarios.
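The distance-based check can be sketched as follows. A minimal sketch, assuming the feature vectors come from somewhere like a segmentation network's bottleneck; here they are plain arrays, and the small diagonal term is an assumed regularizer to keep the covariance invertible.

```python
import numpy as np

def fit_gaussian(feats):
    """Fit mean and inverse covariance of in-distribution features (rows)."""
    mu = feats.mean(axis=0)
    cov = np.cov(feats, rowvar=False) + 1e-6 * np.eye(feats.shape[1])
    return mu, np.linalg.inv(cov)

def mahalanobis_score(x, mu, cov_inv):
    """Mahalanobis distance of one feature vector from the fitted
    distribution; high scores flag likely OOD inputs."""
    d = x - mu
    return float(np.sqrt(d @ cov_inv @ d))
```

At test time, any sample scoring above a threshold calibrated on held-out in-distribution data would be flagged rather than silently segmented.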
Affiliation(s)
- Camila González
- Darmstadt University of Technology, Karolinenplatz 5, 64289 Darmstadt, Germany
- Karol Gotkowski
- Darmstadt University of Technology, Karolinenplatz 5, 64289 Darmstadt, Germany
- Moritz Fuchs
- Darmstadt University of Technology, Karolinenplatz 5, 64289 Darmstadt, Germany
- Andreas Bucher
- Uniklinik Frankfurt, Theodor-Stern-Kai 7, 60590 Frankfurt am Main, Germany
- Armin Dadras
- Uniklinik Frankfurt, Theodor-Stern-Kai 7, 60590 Frankfurt am Main, Germany
- Ricarda Fischbach
- Uniklinik Frankfurt, Theodor-Stern-Kai 7, 60590 Frankfurt am Main, Germany
|
29
|
Pang Y, Chen X, Huang Y, Yap PT, Lian J. Weakly Supervised MR-TRUS Image Synthesis for Brachytherapy of Prostate Cancer. Med Image Comput Comput Assist Interv 2022; 13436:485-494. [PMID: 38863462 PMCID: PMC11165422 DOI: 10.1007/978-3-031-16446-0_46] [Indexed: 06/13/2024]
Abstract
Prostate magnetic resonance imaging (MRI) offers accurate details of structures and tumors for prostate cancer brachytherapy. However, it is unsuitable for routine treatment since MR images differ significantly from the trans-rectal ultrasound (TRUS) images conventionally used for radioactive seed implants in brachytherapy. TRUS imaging is fast, convenient, and widely available in the operating room, but is known for low soft-tissue contrast and limited tumor visibility in the prostate area. Conventionally, practitioners rely on prostate segmentation to fuse the two imaging modalities with non-rigid registration. However, prostate delineation is often not available on diagnostic MR images. Besides, the highly non-linear intensity relationship between the two imaging modalities poses a challenge to non-rigid registration. Hence, we propose a method to generate a TRUS-styled image from a prostate MR image to replace the role of the TRUS image in radiation therapy dose pre-planning. We propose a structural constraint to handle non-linear projections of anatomical structures between MR and TRUS images. We further include an adversarial mechanism to enforce the model to preserve anatomical features in an MR image (such as the prostate boundary and dominant intraprostatic lesion (DIL)) while synthesizing the TRUS-styled counterpart image. The proposed method is compared with other state-of-the-art methods with real TRUS images as the reference. The results demonstrate that the TRUS images synthesized by our method can be used for brachytherapy treatment planning for prostate cancer.
Affiliation(s)
- Yunkui Pang
- University of North Carolina, Chapel Hill, NC 27599, USA
- Xu Chen
- College of Computer Science and Technology, Huaqiao University, Xiamen 361021, China
- Yunzhi Huang
- School of Automation, Nanjing University of Information Science and Technology, Nanjing 210044, China
- Pew-Thian Yap
- University of North Carolina, Chapel Hill, NC 27599, USA
- Jun Lian
- University of North Carolina, Chapel Hill, NC 27599, USA
|
30
|
Li C, Li W, Liu C, Zheng H, Cai J, Wang S. Artificial intelligence in multi-parametric magnetic resonance imaging: A review. Med Phys 2022; 49:e1024-e1054. [PMID: 35980348 DOI: 10.1002/mp.15936] [Received: 07/06/2022] [Revised: 08/01/2022] [Accepted: 08/04/2022] [Indexed: 11/06/2022]
Abstract
Multi-parametric magnetic resonance imaging (mpMRI) is an indispensable tool in the clinical workflow for the diagnosis and treatment planning of various diseases. Machine learning-based artificial intelligence (AI) methods, especially those adopting deep learning techniques, have been extensively employed to perform mpMRI image classification, segmentation, registration, detection, reconstruction, and super-resolution. The current availability of increasing computational power and fast-improving AI algorithms has empowered numerous computer-based systems for applying mpMRI to disease diagnosis, imaging-guided radiotherapy, patient risk and overall survival time prediction, and the development of advanced quantitative imaging technology for magnetic resonance fingerprinting. However, the wide application of these systems in the clinic is still limited by a number of factors, including robustness, reliability, and interpretability. This survey aims to provide an overview for new researchers in the field as well as radiologists, with the hope that they can understand the general concepts, main application scenarios, and remaining challenges of AI in mpMRI.
Affiliation(s)
- Cheng Li
- Paul C. Lauterbur Research Center for Biomedical Imaging, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, 518055, China
- Wen Li
- Department of Health Technology and Informatics, The Hong Kong Polytechnic University, Hong Kong SAR, China
- Chenyang Liu
- Department of Health Technology and Informatics, The Hong Kong Polytechnic University, Hong Kong SAR, China
- Hairong Zheng
- Paul C. Lauterbur Research Center for Biomedical Imaging, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, 518055, China
- Jing Cai
- Department of Health Technology and Informatics, The Hong Kong Polytechnic University, Hong Kong SAR, China
- Shanshan Wang
- Paul C. Lauterbur Research Center for Biomedical Imaging, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, 518055, China; Peng Cheng Laboratory, Shenzhen, 518066, China; Guangdong Provincial Key Laboratory of Artificial Intelligence in Medical Image Analysis and Application, Guangzhou, 510080, China
|
31
|
Kumaraswamy AK, Patil CM. Automatic prostate segmentation of magnetic resonance imaging using Res-Net. MAGMA 2022; 35:621-630. [PMID: 34890013 DOI: 10.1007/s10334-021-00979-0] [Received: 01/23/2021] [Revised: 09/18/2021] [Accepted: 11/17/2021] [Indexed: 06/13/2023]
Abstract
OBJECTIVES Segmenting the prostate from magnetic resonance images plays an important role in prostate cancer diagnosis and in evaluating the treatment response. However, the lack of a clear prostate boundary, the heterogeneity of prostate tissue, the large variety of prostate shapes, and the scarcity of annotated training data make automatic segmentation a very challenging task. In this work, we propose a novel two-stage segmentation method, combining U-Net with residual blocks, to automatically segment the prostate and deliver accurate and reproducible results on a multi-site, multi-vendor dataset. METHODS The proposed method comprises a two-stage neural network: the first stage is a 2D U-Net used to find the approximate location of the prostate; the second is a combination of U-Net and Res-Net used for accurate segmentation of the prostate. The network was trained on 116 patient datasets from three publicly available data sources, with 80% of the data used for training, 10% for validation, and 10% for testing. The commonly used segmentation evaluation metrics Dice similarity coefficient (DSC), sensitivity, and specificity were used for quantitative evaluation of the network. RESULTS The proposed method achieved an average DSC of 93.8%, a sensitivity of 94.6%, and a specificity of 99.3% on the test datasets. CONCLUSIONS Our experimental results show that segmentation accuracy can be improved significantly using two-stage neural networks.
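The three evaluation metrics named in METHODS have standard definitions over binary masks and can be computed directly; a minimal sketch over flattened 0/1 masks:

```python
def seg_metrics(pred, truth):
    """Dice, sensitivity, specificity for binary masks given as flat 0/1
    sequences of equal length."""
    tp = sum(1 for p, t in zip(pred, truth) if p and t)
    fp = sum(1 for p, t in zip(pred, truth) if p and not t)
    fn = sum(1 for p, t in zip(pred, truth) if not p and t)
    tn = sum(1 for p, t in zip(pred, truth) if not p and not t)
    dice = 2 * tp / (2 * tp + fp + fn) if (tp + fp + fn) else 1.0
    sens = tp / (tp + fn) if (tp + fn) else 1.0
    spec = tn / (tn + fp) if (tn + fp) else 1.0
    return dice, sens, spec
```

Note that for a structure as small as the prostate against a full image background, specificity is dominated by true-negative background voxels, which is why it sits near 99% even when Dice is lower.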
Affiliation(s)
- Asha Kuppe Kumaraswamy
- Department of Electronics and Communication, Vidyavardhaka College of Engineering, Mysuru, India
- Chandrashekar M Patil
- Department of Electronics and Communication, Vidyavardhaka College of Engineering, Mysuru, India
|
32
|
Adams LC, Makowski MR, Engel G, Rattunde M, Busch F, Asbach P, Niehues SM, Vinayahalingam S, van Ginneken B, Litjens G, Bressem KK. Prostate158 - An expert-annotated 3T MRI dataset and algorithm for prostate cancer detection. Comput Biol Med 2022; 148:105817. [PMID: 35841780 DOI: 10.1016/j.compbiomed.2022.105817] [Received: 04/03/2022] [Revised: 06/12/2022] [Accepted: 07/03/2022] [Indexed: 11/03/2022]
Abstract
BACKGROUND The development of deep learning (DL) models for prostate segmentation on magnetic resonance imaging (MRI) depends on expert-annotated data and reliable baselines, which are often not publicly available. This limits both reproducibility and comparability. METHODS Prostate158 consists of 158 expert annotated biparametric 3T prostate MRIs comprising T2w sequences and diffusion-weighted sequences with apparent diffusion coefficient maps. Two U-ResNets trained for segmentation of anatomy (central gland, peripheral zone) and suspicious lesions for prostate cancer (PCa) with a PI-RADS score of ≥4 served as baseline algorithms. Segmentation performance was evaluated using the Dice similarity coefficient (DSC), the Hausdorff distance (HD), and the average surface distance (ASD). The Wilcoxon test with Bonferroni correction was used to evaluate differences in performance. The generalizability of the baseline model was assessed using the open datasets Medical Segmentation Decathlon and PROSTATEx. RESULTS Compared to Reader 1, the models achieved a DSC/HD/ASD of 0.88/18.3/2.2 for the central gland, 0.75/22.8/1.9 for the peripheral zone, and 0.45/36.7/17.4 for PCa. Compared with Reader 2, the DSC/HD/ASD were 0.88/17.5/2.6 for the central gland, 0.73/33.2/1.9 for the peripheral zone, and 0.4/39.5/19.1 for PCa. Interrater agreement measured in DSC/HD/ASD was 0.87/11.1/1.0 for the central gland, 0.75/15.8/0.74 for the peripheral zone, and 0.6/18.8/5.5 for PCa. Segmentation performances on the Medical Segmentation Decathlon and PROSTATEx were 0.82/22.5/3.4; 0.86/18.6/2.5 for the central gland, and 0.64/29.2/4.7; 0.71/26.3/2.2 for the peripheral zone. CONCLUSIONS We provide an openly accessible, expert-annotated 3T dataset of prostate MRI and a reproducible benchmark to foster the development of prostate segmentation algorithms.
33.
Alkadi R, Abdullah O, Werghi N. The Classification Power of Classical and Intra-voxel Incoherent Motion (IVIM) Fitting Models of Diffusion-weighted Magnetic Resonance Images: An Experimental Study. J Digit Imaging 2022; 35:678-691. [PMID: 35182292] [PMCID: PMC9156596] [DOI: 10.1007/s10278-022-00604-z]
Abstract
This study investigates the classification power of different b-values of diffusion-weighted magnetic resonance imaging (DWI) as an indicator of prostate cancer, examining several techniques for analyzing DWI data acquired at a range of b-values. In the first phase of experiments, we analyze the available data by producing two main parametric maps using two common models: the intra-voxel incoherent motion (IVIM) model and the mono-exponential ADC model. Accordingly, we evaluated the benign/malignant tissue classification potential of several parametric maps produced using different combinations of b-values and fitting models. In the second phase, we utilized the maps that performed best in the first phase of experiments to design a machine learning-based computer-assisted diagnosis system for the detection of early-stage prostate cancer. The system performance was cross-validated using data from 20 patients. On a fivefold cross-validation scheme, a maximum accuracy of 90% and an area under the receiver operating characteristic curve (AUC) of 0.978 were achieved by a system that uses ADC maps fitted using the mono-exponential model at 11 different b-values. The results suggest that the proposed machine learning-based diagnosis system is potentially powerful in differentiating between malignant and benign prostate tissues when combined with carefully generated ADC maps.
34.
Souza SAS, Reis LO, Alves AFF, Silva LC, Medeiros MCK, Andrade DL, Billis A, Amaro JL, Martins DL, Trindade AP, Miranda JRA, Pina DR. Multiple analyses suggests texture features can indicate the presence of tumor in the prostate tissue. Phys Eng Sci Med 2022; 45:525-535. [PMID: 35325377] [DOI: 10.1007/s13246-022-01118-2]
Abstract
Several studies have demonstrated the ability of statistical and texture analysis to differentiate cancerous from healthy tissue in magnetic resonance imaging. This study developed a method based on texture analysis and machine learning to differentiate prostate findings. Forty-eight male patients with PI-RADS classification and subsequent radical prostatectomy histopathological analysis were used as the gold standard. Experienced radiologists delimited the regions of interest in magnetic resonance images. Six different groups of images were used to perform multiple analyses (seven analysis variations). Those analyses were outlined by specialists in urology as those of greatest importance for the classification. Forty texture features were extracted from each image and processed with Random Forest, Support Vector Machine, K-Nearest Neighbors, and Naive Bayes classifiers. The results of the seven analysis variations were described in terms of area under the ROC curve (AUC), accuracy, F-score, precision, and sensitivity. The highest AUC (93.7%) and accuracy (88.8%) were obtained when differentiating the group with both positive MRI and positive histopathology findings from the group with both negative MRI and negative histopathology. When differentiating the group with both positive MRI and histopathology findings from the peripheral image zone group, the AUC value was 86.6%. When differentiating the group with negative MRI/positive histopathology from the group with both negative MRI and histopathology, the AUC value was 80.7%. The evaluation of statistical and texture analysis yielded very suggestive indications for future work on suspicious prostate cancer regions. The method is fast for both region-of-interest selection and classification with machine learning, and the results bring original contributions to the classification of different groups of patients. This tool is low-cost and can be used to assist diagnostic decisions.
35.
Jin J, Zhang L, Leng E, Metzger GJ, Koopmeiners JS. Bayesian spatial models for voxel-wise prostate cancer classification using multi-parametric magnetic resonance imaging data. Stat Med 2022; 41:483-499. [PMID: 34747059] [PMCID: PMC9316890] [DOI: 10.1002/sim.9245]
Abstract
Multi-parametric magnetic resonance imaging (mpMRI) has been playing an increasingly important role in the detection of prostate cancer (PCa). Various computer-aided detection algorithms have been proposed for automated PCa detection by combining information from multiple mpMRI parameters. However, there are specific features of mpMRI, including between-voxel correlation within each prostate and heterogeneity across patients, that have not been fully explored but could potentially improve PCa detection if leveraged appropriately. This article proposes novel Bayesian approaches for voxel-wise PCa classification that account for spatial correlation and between-patient heterogeneity in the mpMRI data. Modeling the spatial correlation is challenging due to the extremely high dimensionality of the data, and we propose three scalable approaches based on a Nearest Neighbor Gaussian Process (NNGP), a reduced-rank approximation, and a conditional autoregressive (CAR) model that approximates a Gaussian process with the Matérn covariance. Our simulation study shows that properly modeling the spatial correlation and between-patient heterogeneity can substantially improve PCa classification. Application to in vivo data illustrates that classification is improved by all three spatial modeling approaches considered, while modeling the between-patient heterogeneity does not further improve our classifiers. Among the proposed models, the NNGP-based model is recommended given its high classification accuracy and computational efficiency.
36.
Rouvière O, Souchon R, Lartizien C, Mansuy A, Magaud L, Colom M, Dubreuil-Chambardel M, Debeer S, Jaouen T, Duran A, Rippert P, Riche B, Monini C, Vlaeminck-Guillem V, Haesebaert J, Rabilloud M, Crouzet S. Detection of ISUP ≥2 prostate cancers using multiparametric MRI: prospective multicentre assessment of the non-inferiority of an artificial intelligence system as compared to the PI-RADS V.2.1 score (CHANGE study). BMJ Open 2022; 12:e051274. [PMID: 35140147] [PMCID: PMC8830410] [DOI: 10.1136/bmjopen-2021-051274]
Abstract
INTRODUCTION: Prostate multiparametric MRI (mpMRI) has shown good sensitivity in detecting cancers with an International Society of Urological Pathology (ISUP) grade of ≥2. However, it lacks specificity, and its inter-reader reproducibility remains moderate. Biomarkers, such as the Prostate Health Index (PHI), may help select patients for prostate biopsy. Computer-aided diagnosis/detection (CAD) systems may also improve mpMRI interpretation. Different prototypes of CAD systems are currently developed under the Recherche Hospitalo-Universitaire en Santé / Personalized Focused Ultrasound Surgery of Localized Prostate Cancer (RHU PERFUSE) research programme, tackling challenging issues such as robustness across imaging protocols and magnetic resonance (MR) vendors, and the ability to characterise cancer aggressiveness. The study's primary objective is to evaluate the non-inferiority of the area under the receiver operating characteristic curve of the final CAD system as compared with the Prostate Imaging-Reporting and Data System V.2.1 (PI-RADS V.2.1) in predicting the presence of ISUP ≥2 prostate cancer in patients undergoing prostate biopsy. METHODS: This prospective, multicentre, non-inferiority trial will include 420 men with suspected prostate cancer, a prostate-specific antigen level of ≤30 ng/mL and a clinical stage ≤T2c. Included men will undergo prostate mpMRI that will be interpreted using the PI-RADS V.2.1 score. Then, they will undergo systematic and targeted biopsy. PHI will be assessed before biopsy. At the end of patient inclusion, MR images will be assessed by the final version of the CAD system developed under the RHU PERFUSE programme. Key secondary outcomes include the prediction of ISUP grade ≥2 prostate cancer during a 3-year follow-up, and the number of biopsy procedures saved and ISUP grade ≥2 cancers missed by several diagnostic pathways combining PHI and MRI findings. ETHICS AND DISSEMINATION: Ethical approval was obtained from the Comité de Protection des Personnes Nord Ouest III (ID-RCB: 2020-A02785-34). After publication of the results, access to MR images will be possible for testing other CAD systems. TRIAL REGISTRATION NUMBER: NCT04732156.
37.
Chen T, Zhang Z, Tan S, Zhang Y, Wei C, Wang S, Zhao W, Qian X, Zhou Z, Shen J, Dai Y, Hu J. MRI Based Radiomics Compared With the PI-RADS V2.1 in the Prediction of Clinically Significant Prostate Cancer: Biparametric vs Multiparametric MRI. Front Oncol 2022; 11:792456. [PMID: 35127499] [PMCID: PMC8810653] [DOI: 10.3389/fonc.2021.792456]
Abstract
PURPOSE: To compare the performance of radiomics to that of the Prostate Imaging Reporting and Data System (PI-RADS) v2.1 scoring system in the detection of clinically significant prostate cancer (csPCa) based on biparametric magnetic resonance imaging (bpMRI) vs. multiparametric MRI (mpMRI). METHODS: A total of 204 patients with pathological results were enrolled between January 2018 and December 2019, with 142 patients in the training cohort and 62 patients in the testing cohort. The radiomics model was compared with the PI-RADS v2.1 for the diagnosis of csPCa based on bpMRI and mpMRI by using receiver operating characteristic (ROC) curve analysis. RESULTS: The radiomics models based on bpMRI and mpMRI signatures showed high predictive efficiency with no significant differences (AUC = 0.975 vs. 0.981, p = 0.687 in the training cohort, and 0.953 vs. 0.968, p = 0.287 in the testing cohort, respectively). In addition, the radiomics model outperformed the PI-RADS v2.1 in the diagnosis of csPCa regardless of whether bpMRI (AUC = 0.975 vs. 0.871, p = 0.030 for the training cohort and AUC = 0.953 vs. 0.853, p = 0.024 for the testing cohort) or mpMRI (AUC = 0.981 vs. 0.880, p = 0.030 for the training cohort and AUC = 0.968 vs. 0.863, p = 0.016 for the testing cohort) was incorporated. CONCLUSIONS: Our study suggests that the performance of bpMRI- and mpMRI-based radiomics models shows no significant difference, which indicates that omitting DCE imaging in radiomics can simplify the analysis process. Adding radiomics to PI-RADS v2.1 may improve the performance in predicting csPCa.
38.
Duran A, Dussert G, Rouvière O, Jaouen T, Jodoin PM, Lartizien C. ProstAttention-Net: a deep attention model for prostate cancer segmentation by aggressiveness in MRI scans. Med Image Anal 2022; 77:102347. [DOI: 10.1016/j.media.2021.102347]
39.
Ferro M, de Cobelli O, Musi G, del Giudice F, Carrieri G, Busetto GM, Falagario UG, Sciarra A, Maggi M, Crocetto F, Barone B, Caputo VF, Marchioni M, Lucarelli G, Imbimbo C, Mistretta FA, Luzzago S, Vartolomei MD, Cormio L, Autorino R, Tătaru OS. Radiomics in prostate cancer: an up-to-date review. Ther Adv Urol 2022; 14:17562872221109020. [PMID: 35814914] [PMCID: PMC9260602] [DOI: 10.1177/17562872221109020]
Abstract
Prostate cancer (PCa) is the most commonly diagnosed malignancy in the male population worldwide. Diagnosis, identification of aggressive disease, and post-treatment follow-up need a more comprehensive and holistic approach. Radiomics is the extraction and interpretation of image phenotypes in a quantitative manner. Radiomics may give an advantage through advancements in imaging modalities and through the potential power of artificial intelligence techniques, by translating those features into clinical outcome predictions. This article gives an overview of the current evidence on methodology and reviews the available literature on radiomics in PCa patients, highlighting its potential for personalized treatment and future applications.
40.
Jin J, Zhang L, Leng E, Metzger GJ, Koopmeiners JS. Multi-resolution super learner for voxel-wise classification of prostate cancer using multi-parametric MRI. J Appl Stat 2021; 50:805-826. [PMID: 36819087] [PMCID: PMC9930806] [DOI: 10.1080/02664763.2021.2017411]
Abstract
Multi-parametric MRI (mpMRI) is a critical tool in prostate cancer (PCa) diagnosis and management. To further advance the use of mpMRI in patient care, computer-aided diagnostic methods are under continuous development for supporting or supplanting standard radiological interpretation. While voxel-wise PCa classification models are the gold standard, few if any approaches have incorporated the inherent structure of the mpMRI data, such as spatial heterogeneity and between-voxel correlation, into PCa classification. We propose a machine learning-based method to fill this gap. Our method uses an ensemble learning approach to capture regional heterogeneity in the data, where classifiers are developed at multiple resolutions and combined using the super learner algorithm, and further accounts for between-voxel correlation through a Gaussian kernel smoother. It allows any type of classifier to be the base learner and can be extended to further classify PCa sub-categories. We introduce the algorithms for binary PCa classification, as well as for classifying the ordinal clinical significance of PCa, for which a weighted likelihood approach is implemented to improve the detection of less prevalent cancer categories. The proposed method has shown important advantages over conventional modeling and machine learning approaches in simulations and in application to our motivating patient data.
41.
Liang S, Beaton D, Arnott SR, Gee T, Zamyadi M, Bartha R, Symons S, MacQueen GM, Hassel S, Lerch JP, Anagnostou E, Lam RW, Frey BN, Milev R, Müller DJ, Kennedy SH, Scott CJM, Strother SC. Magnetic Resonance Imaging Sequence Identification Using a Metadata Learning Approach. Front Neuroinform 2021; 15:622951. [PMID: 34867254] [PMCID: PMC8635782] [DOI: 10.3389/fninf.2021.622951]
Abstract
Despite the wide application of the magnetic resonance imaging (MRI) technique, there are no widely used standards for naming and describing MRI sequences. The absence of consistent naming conventions presents a major challenge in automating image processing, since most MRI software requires a priori knowledge of the type of MRI sequence to be processed. This issue becomes increasingly critical with the current efforts toward open sharing of MRI data in the neuroscience community. This manuscript reports an MRI sequence detection method using imaging metadata and a supervised machine learning technique. Three datasets from the Brain Center for Ontario Data Exploration (Brain-CODE) data platform, each involving MRI data from multiple research institutes, are used to build and test our model. The preliminary results show that a random forest model can be trained to accurately identify MRI sequence types, and to recognize MRI scans that do not belong to any of the known sequence types. Therefore, the proposed approach can be used to automate the processing of MRI data that involves a large number of variations in sequence names, and to help standardize sequence naming in ongoing data collections. This study highlights the potential of machine learning approaches in helping manage health data.
Affiliation(s)
- Shuai Liang: Rotman Research Institute, Baycrest Health Center, Toronto, ON, Canada; Indoc Research, Toronto, ON, Canada
- Derek Beaton: Rotman Research Institute, Baycrest Health Center, Toronto, ON, Canada
- Stephen R. Arnott: Rotman Research Institute, Baycrest Health Center, Toronto, ON, Canada
- Tom Gee: Indoc Research, Toronto, ON, Canada
- Mojdeh Zamyadi: Rotman Research Institute, Baycrest Health Center, Toronto, ON, Canada
- Robert Bartha: Robarts Research Institute, Western University, London, ON, Canada
- Sean Symons: Department of Medical Imaging, Sunnybrook Health Sciences Centre, Toronto, ON, Canada
- Glenda M. MacQueen: Department of Psychiatry, Cumming School of Medicine, University of Calgary, Calgary, AB, Canada
- Stefanie Hassel: Department of Psychiatry, Cumming School of Medicine, University of Calgary, Calgary, AB, Canada
- Jason P. Lerch: Mouse Imaging Centre, Hospital for Sick Children, Toronto, ON, Canada; Wellcome Centre for Integrative Neuroimaging, FMRIB, Nuffield Department of Clinical Neurosciences, University of Oxford, Oxford, United Kingdom
- Evdokia Anagnostou: Bloorview Research Institute, Holland Bloorview Kids Rehabilitation Hospital, Toronto, ON, Canada; Department of Pediatrics, University of Toronto, Toronto, ON, Canada
- Raymond W. Lam: Department of Psychiatry, University of British Columbia, Vancouver, BC, Canada
- Benicio N. Frey: Department of Psychiatry and Behavioral Neurosciences, McMaster University, Hamilton, ON, Canada; Mood Disorders Program, St. Joseph’s Healthcare, Hamilton, ON, Canada
- Roumen Milev: Departments of Psychiatry and Psychology, Providence Care Hospital, Queen’s University, Kingston, ON, Canada
- Daniel J. Müller: Molecular Brain Science, Centre for Addiction and Mental Health, Campbell Family Mental Health Research Institute, Toronto, ON, Canada; Department of Psychiatry, University of Toronto, Toronto, ON, Canada
- Sidney H. Kennedy: Department of Psychiatry, University of Toronto, Toronto, ON, Canada; Department of Psychiatry, Krembil Research Centre, University Health Network, Toronto, ON, Canada; Department of Psychiatry, St. Michael’s Hospital, University of Toronto, Toronto, ON, Canada; Keenan Research Centre for Biomedical Science, St. Michael’s Hospital, Li Ka Shing Knowledge Institute, Toronto, ON, Canada
- Christopher J. M. Scott: L.C. Campbell Cognitive Neurology Research Unit, Toronto, ON, Canada; Heart & Stroke Foundation Centre for Stroke Recovery, Toronto, ON, Canada; Sunnybrook Health Sciences Centre, Brain Sciences Research Program, Sunnybrook Research Institute, Toronto, ON, Canada
- Stephen C. Strother: Rotman Research Institute, Baycrest Health Center, Toronto, ON, Canada; Department of Medical Biophysics, University of Toronto, Toronto, ON, Canada

42
Kalantar R, Lin G, Winfield JM, Messiou C, Lalondrelle S, Blackledge MD, Koh DM. Automatic Segmentation of Pelvic Cancers Using Deep Learning: State-of-the-Art Approaches and Challenges. Diagnostics (Basel) 2021; 11:1964. [PMID: 34829310 PMCID: PMC8625809 DOI: 10.3390/diagnostics11111964] [Citation(s) in RCA: 18] [Impact Index Per Article: 6.0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/09/2021] [Revised: 10/14/2021] [Accepted: 10/19/2021] [Indexed: 12/18/2022] Open
Abstract
The recent rise of deep learning (DL) and its promising capability to capture non-explicit detail from large datasets have attracted substantial research attention in the field of medical image processing. DL provides grounds for the technological development of computer-aided diagnosis and segmentation in radiology and radiation oncology. Amongst the anatomical locations where recent auto-segmentation algorithms have been employed, the pelvis remains one of the most challenging due to large intra- and inter-patient soft-tissue variability. This review provides a comprehensive, non-systematic and clinically oriented overview of 74 DL-based segmentation studies, published between January 2016 and December 2020, for bladder, prostate, cervical and rectal cancers on computed tomography (CT) and magnetic resonance imaging (MRI), highlighting the key findings, challenges and limitations.
Affiliation(s)
- Reza Kalantar: Division of Radiotherapy and Imaging, The Institute of Cancer Research, London SM2 5NG, UK
- Gigin Lin: Department of Medical Imaging and Intervention, Chang Gung Memorial Hospital at Linkou and Chang Gung University, 5 Fuhsing St., Guishan, Taoyuan 333, Taiwan
- Jessica M. Winfield: Division of Radiotherapy and Imaging, The Institute of Cancer Research, London SM2 5NG, UK; Department of Radiology, The Royal Marsden Hospital, London SW3 6JJ, UK
- Christina Messiou: Division of Radiotherapy and Imaging, The Institute of Cancer Research, London SM2 5NG, UK; Department of Radiology, The Royal Marsden Hospital, London SW3 6JJ, UK
- Susan Lalondrelle: Division of Radiotherapy and Imaging, The Institute of Cancer Research, London SM2 5NG, UK; Department of Radiology, The Royal Marsden Hospital, London SW3 6JJ, UK
- Matthew D. Blackledge: Division of Radiotherapy and Imaging, The Institute of Cancer Research, London SM2 5NG, UK
- Dow-Mu Koh: Division of Radiotherapy and Imaging, The Institute of Cancer Research, London SM2 5NG, UK; Department of Radiology, The Royal Marsden Hospital, London SW3 6JJ, UK

43
Multiparametric MRI and Radiomics in Prostate Cancer: A Review of the Current Literature. Diagnostics (Basel) 2021; 11:diagnostics11101829. [PMID: 34679527 PMCID: PMC8534893 DOI: 10.3390/diagnostics11101829] [Citation(s) in RCA: 11] [Impact Index Per Article: 3.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/03/2021] [Revised: 09/26/2021] [Accepted: 09/27/2021] [Indexed: 12/22/2022] Open
Abstract
Prostate cancer (PCa) represents the fourth most common cancer and the fifth leading cause of cancer death in men worldwide. Multiparametric MRI (mp-MRI) has high sensitivity and specificity in the detection of PCa, and it is currently the most widely used imaging technique for tumor localization and cancer staging. mp-MRI plays a key role in risk stratification of naïve patients, in active surveillance for low-risk patients, and in monitoring recurrence after definitive therapy. Radiomics is an emerging and promising tool that allows a quantitative tumor evaluation from radiological images via conversion of digital images into mineable high-dimensional data. The purpose of radiomics is to increase the features available to detect PCa, to avoid unnecessary biopsies, to define tumor aggressiveness, and to monitor post-treatment recurrence of PCa. The integration of radiomics data with different imaging modalities (such as PET-CT) and other clinical and histopathological data could improve the prediction of tumor aggressiveness as well as guide clinical decisions and patient management. The purpose of this review is to describe the current research applications of radiomics in PCa on MR images.
44
Sunoqrot MRS, Selnæs KM, Sandsmark E, Langørgen S, Bertilsson H, Bathen TF, Elschot M. The Reproducibility of Deep Learning-Based Segmentation of the Prostate Gland and Zones on T2-Weighted MR Images. Diagnostics (Basel) 2021; 11:diagnostics11091690. [PMID: 34574031 PMCID: PMC8471645 DOI: 10.3390/diagnostics11091690] [Citation(s) in RCA: 11] [Impact Index Per Article: 3.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/10/2021] [Revised: 09/08/2021] [Accepted: 09/15/2021] [Indexed: 01/02/2023] Open
Abstract
Volume of interest segmentation is an essential step in computer-aided detection and diagnosis (CAD) systems. Deep learning (DL)-based methods provide good performance for prostate segmentation, but little is known about the reproducibility of these methods. In this work, an in-house collected dataset from 244 patients was used to investigate the intra-patient reproducibility of 14 shape features for DL-based segmentation of the whole prostate gland (WP), peripheral zone (PZ), and the remaining prostate zones (non-PZ) on T2-weighted (T2W) magnetic resonance (MR) images, compared to manual segmentations. The DL-based segmentation was performed using three different convolutional neural networks (CNNs): V-Net, nnU-Net-2D, and nnU-Net-3D. The two-way random, single-score intra-class correlation coefficient (ICC) was used to measure the inter-scan reproducibility of each feature for each CNN and for the manual segmentation. We found that the reproducibility of the investigated methods is comparable to that of manual segmentation for all CNNs (14/14 features), except for V-Net in PZ (7/14 features). The ICC score for segmentation volume was 0.888, 0.607, 0.819, and 0.903 in PZ; 0.988, 0.967, 0.986, and 0.983 in non-PZ; and 0.982, 0.975, 0.973, and 0.984 in WP for manual, V-Net, nnU-Net-2D, and nnU-Net-3D, respectively. The results of this work show the feasibility of embedding DL-based segmentation in CAD systems based on multiple T2W MR scans of the prostate, an important step towards clinical implementation.
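The two-way random, single-score ICC(2,1) used in the study above can be computed directly from ANOVA mean squares. A minimal from-scratch sketch (not the authors' code) for an n-subjects by k-scans feature matrix:

```python
import numpy as np

def icc2_1(Y):
    """ICC(2,1): two-way random effects, absolute agreement, single score,
    for an (n subjects x k repeated scans) matrix of feature values."""
    Y = np.asarray(Y, dtype=float)
    n, k = Y.shape
    grand = Y.mean()
    SSR = k * ((Y.mean(axis=1) - grand) ** 2).sum()  # between-subjects sum of squares
    SSC = n * ((Y.mean(axis=0) - grand) ** 2).sum()  # between-scans sum of squares
    SSE = ((Y - grand) ** 2).sum() - SSR - SSC       # residual sum of squares
    MSR = SSR / (n - 1)
    MSC = SSC / (k - 1)
    MSE = SSE / ((n - 1) * (k - 1))
    return (MSR - MSE) / (MSR + (k - 1) * MSE + k * (MSC - MSE) / n)

# Perfect inter-scan agreement yields an ICC of 1
print(icc2_1([[1.0, 1.0], [2.0, 2.0], [3.0, 3.0]]))
```

Because ICC(2,1) penalises systematic offsets between scans (the MSC term), a segmentation method that consistently over-estimates volume on one scan will score lower than one with purely random disagreement.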
Affiliation(s)
- Mohammed R. S. Sunoqrot (corresponding author): Department of Circulation and Medical Imaging, NTNU—Norwegian University of Science and Technology, 7030 Trondheim, Norway
- Kirsten M. Selnæs: Department of Circulation and Medical Imaging, NTNU—Norwegian University of Science and Technology, 7030 Trondheim, Norway; Department of Radiology and Nuclear Medicine, St. Olavs Hospital, Trondheim University Hospital, 7030 Trondheim, Norway
- Elise Sandsmark: Department of Radiology and Nuclear Medicine, St. Olavs Hospital, Trondheim University Hospital, 7030 Trondheim, Norway
- Sverre Langørgen: Department of Radiology and Nuclear Medicine, St. Olavs Hospital, Trondheim University Hospital, 7030 Trondheim, Norway
- Helena Bertilsson: Department of Cancer Research and Molecular Medicine, NTNU—Norwegian University of Science and Technology, 7030 Trondheim, Norway; Department of Urology, St. Olavs Hospital, Trondheim University Hospital, 7030 Trondheim, Norway
- Tone F. Bathen: Department of Circulation and Medical Imaging, NTNU—Norwegian University of Science and Technology, 7030 Trondheim, Norway; Department of Radiology and Nuclear Medicine, St. Olavs Hospital, Trondheim University Hospital, 7030 Trondheim, Norway
- Mattijs Elschot: Department of Circulation and Medical Imaging, NTNU—Norwegian University of Science and Technology, 7030 Trondheim, Norway; Department of Radiology and Nuclear Medicine, St. Olavs Hospital, Trondheim University Hospital, 7030 Trondheim, Norway

45
Wong T, Schieda N, Sathiadoss P, Haroon M, Abreu-Gomez J, Ukwatta E. Fully automated detection of prostate transition zone tumors on T2-weighted and apparent diffusion coefficient (ADC) map MR images using U-Net ensemble. Med Phys 2021; 48:6889-6900. [PMID: 34418108 DOI: 10.1002/mp.15181] [Citation(s) in RCA: 6] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/20/2021] [Revised: 07/19/2021] [Accepted: 08/07/2021] [Indexed: 01/10/2023] Open
Abstract
PURPOSE Accurate detection of transition zone (TZ) prostate cancer (PCa) on magnetic resonance imaging (MRI) remains challenging using clinical subjective assessment due to overlap between PCa and benign prostatic hyperplasia (BPH). The objective of this paper is to describe a deep-learning-based framework for fully automated detection of PCa in the TZ using T2-weighted (T2W) and apparent diffusion coefficient (ADC) map MR images. METHOD This was a single-center IRB-approved cross-sectional study of men undergoing 3T MRI on two systems. The dataset consisted of 196 patients (103 with and 93 without clinically significant [Grade Group 2 or higher] TZ PCa) to train and test our proposed methodology, with an additional 168 patients with peripheral zone PCa used only for training. We proposed an ensemble of classifiers in which multiple U-Net-based models are designed for prediction of TZ PCa location on ADC map MR images, with initial automated segmentation of the prostate to guide detection. We compared the accuracy of ADC alone to T2W and combined ADC+T2W MRI as input images, and investigated improvements from ensembles over their constituent models, with diversity among individual models introduced through hyperparameter configuration, loss function, and model architecture. RESULTS Our developed algorithm reported sensitivity and precision of 0.829 and 0.617 in 56 test cases comprising 31 patients with TZ PCa and 25 patients without clinically significant TZ tumors. Patient-wise classification accuracy had an area under the receiver operating characteristic curve (AUROC) of 0.974. Single U-Net models using ADC alone (sensitivity 0.829, precision 0.534) outperformed assessment using T2W (sensitivity 0.086, precision 0.081) and assessment using combined ADC+T2W (sensitivity 0.687, precision 0.489). While the ensemble of U-Nets with varying hyperparameters demonstrated the highest performance, all ensembles improved PCa detection compared to individual models, with sensitivities and precisions close to the collective best of the constituent models. CONCLUSION We describe a deep-learning-based method for fully automated TZ PCa detection using ADC map MR images that outperformed assessment by T2W and ADC+T2W.
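A common way to realise such an ensemble of segmentation models is to average the constituent models' probability maps and threshold the mean. The sketch below uses hypothetical outputs over a tiny image; the averaging rule is an assumption for illustration, not necessarily the authors' exact combination scheme.

```python
import numpy as np

def ensemble_predict(prob_maps, threshold=0.5):
    """Average per-model tumour probability maps and threshold the mean."""
    mean_map = np.mean(prob_maps, axis=0)
    return (mean_map >= threshold).astype(np.uint8), mean_map

# Three hypothetical model outputs over a tiny 2x2 image
maps = [np.array([[0.9, 0.2], [0.4, 0.7]]),
        np.array([[0.8, 0.1], [0.6, 0.6]]),
        np.array([[0.7, 0.3], [0.2, 0.8]])]
mask, mean_map = ensemble_predict(maps)
print(mask)  # voxels whose mean probability reaches the threshold
```

Averaging tends to suppress false positives that only one model produces, which is consistent with the paper's observation that ensembles track the collective best of their constituents.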
Affiliation(s)
- Timothy Wong: School of Engineering, University of Guelph, Guelph, ON, Canada
- Nicola Schieda: Department of Radiology, University of Ottawa, Ottawa, ON, Canada
- Paul Sathiadoss: Department of Radiology, University of Ottawa, Ottawa, ON, Canada
- Mohammad Haroon: Department of Radiology, University of Ottawa, Ottawa, ON, Canada
- Jorge Abreu-Gomez: Joint Department of Medical Imaging, University of Toronto, Toronto, ON, Canada
- Eranga Ukwatta: School of Engineering, University of Guelph, Guelph, ON, Canada

46
Juneja M, Kaur Saini S, Kaul S, Acharjee R, Thakur N, Jindal P. Denoising of magnetic resonance imaging using Bayes shrinkage based fused wavelet transform and autoencoder based deep learning approach. Biomed Signal Process Control 2021. [DOI: 10.1016/j.bspc.2021.102844] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/21/2022]
47
Investigation of the plaque morphology effect on changes of pulsatile blood flow in a stenosed curved artery induced by an external magnetic field. Comput Biol Med 2021; 135:104600. [PMID: 34214938 DOI: 10.1016/j.compbiomed.2021.104600] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/17/2021] [Revised: 06/09/2021] [Accepted: 06/20/2021] [Indexed: 01/20/2023]
Abstract
In a new therapeutic technique, called magnetic drug targeting (MDT), magnetic particles carrying therapeutic agents are directed to the target tissue by applying an external magnetic field. Meanwhile, this magnetic field also affects the blood as a biomagnetic fluid. Therefore, it is necessary to select a magnetic field with an acceptable range of influence on the blood flow. This study investigates the effect of an external magnetic field on the pulsatile blood flow in a stenosed curved artery to identify a safe magnetic field. The effects of a number of parameters, including the magnetic susceptibility of blood in oxygenated and deoxygenated states and the magnetic field strength, were studied. Moreover, the effect of the plaque morphology, including the occlusion percentage and the chord length of the stenosis, on changes in blood flow induced by the magnetic field was investigated. The results show that applying a magnetic field increases the wall shear stress (WSS) and the pressure of the deoxygenated blood. Comparing the wall shear stresses of the deoxygenated and oxygenated blood shows that the effect of magnetic field on the deoxygenated blood is more significant than its effect on the oxygenated blood due to its higher magnetic susceptibility. The study of the stenosis geometry shows that the influence of magnetic field on the blood flow is increased by decreasing the occlusion percentage of the artery. Furthermore, among the evaluated lengths, the 50° chord length results in the highest variation under the influence of the magnetic field. Finally, the magnetic field of Mn = 2.5 can be utilized as a safe field for MDT purposes in such a stenosed curved artery.
48
Singh D, Kumar V, Das CJ, Singh A, Mehndiratta A. Characterisation of prostate cancer using texture analysis for diagnostic and prognostic monitoring. NMR Biomed 2021; 34:e4495. [PMID: 33638244 DOI: 10.1002/nbm.4495] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 09/28/2020] [Revised: 02/06/2021] [Accepted: 02/08/2021] [Indexed: 06/12/2023]
Abstract
Automated classification of significant prostate cancer (PCa) using MRI plays a potential role in assisting clinical decision-making. Combining multiparametric MRI with a machine-aided approach is a promising way to improve the overall accuracy of PCa diagnosis. The objective of this study was to develop and validate a framework for differentiating Prostate Imaging-Reporting and Data System version 2 (PI-RADS v2) grades (grade 2 to grade 5) of PCa using texture features and machine learning (ML) methods with diffusion-weighted imaging (DWI) and apparent diffusion coefficient (ADC) maps. The study cohort included an MRI dataset of 59 patients with clinically proven PCa. Regions of interest (ROIs) for a total of 435 lesions were delineated from the segmented peripheral zones of DWI and ADC. Six texture methods comprising 98 texture features in total (49 each for DWI and ADC) were extracted from lesion ROIs. Random forest (RF) and correlation-based feature selection methods were applied to the feature vectors to select the best features for classification. Two ML classifiers, support vector machine (SVM) and K-nearest neighbour, were used and validated by 10-fold cross-validation. The proposed framework achieved high diagnostic performance, with a sensitivity of 85.25% ± 3.84%, specificity of 95.71% ± 1.96%, accuracy of 84.90% ± 3.37% and area under the receiver-operating characteristic curve of 0.98 for PI-RADS v2 grade (2 to 5) classification using the RF feature selection method and a Gaussian SVM classifier with combined DWI + ADC features. The proposed computer-assisted framework can distinguish between PCa lesions of different aggressiveness based on PI-RADS v2 standards using texture analysis, improving the efficiency of PCa diagnostic performance.
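The general pattern above (random-forest feature selection feeding a Gaussian, i.e. RBF-kernel, SVM validated by 10-fold cross-validation) can be sketched with scikit-learn. The synthetic data merely mirrors the study's dimensions (435 lesion ROIs, 98 texture features, 4 grades); the data and hyperparameters are assumptions, not the authors' configuration.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import SelectFromModel
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Synthetic stand-in for 435 lesion ROIs x 98 DWI+ADC texture features, 4 grade classes
X, y = make_classification(n_samples=435, n_features=98, n_informative=20,
                           n_classes=4, random_state=0)

pipe = make_pipeline(
    StandardScaler(),
    SelectFromModel(RandomForestClassifier(n_estimators=100, random_state=0)),
    SVC(kernel="rbf"),  # "Gaussian SVM"
)
scores = cross_val_score(pipe, X, y, cv=10)
print(f"10-fold accuracy: {scores.mean():.3f} +/- {scores.std():.3f}")
```

Keeping selection and scaling inside the pipeline ensures they are re-fit on each training fold, avoiding the feature-selection leakage that inflates cross-validated scores.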
Affiliation(s)
- Dharmesh Singh: Centre for Biomedical Engineering, Indian Institute of Technology Delhi, New Delhi, India
- Virendra Kumar: Department of NMR, All India Institute of Medical Sciences, New Delhi, India
- Chandan J Das: Department of Radiodiagnosis, All India Institute of Medical Sciences, New Delhi, India
- Anup Singh: Centre for Biomedical Engineering, Indian Institute of Technology Delhi, New Delhi, India; Department of Biomedical Engineering, All India Institute of Medical Sciences, New Delhi, India
- Amit Mehndiratta: Centre for Biomedical Engineering, Indian Institute of Technology Delhi, New Delhi, India; Department of Biomedical Engineering, All India Institute of Medical Sciences, New Delhi, India

49
Twilt JJ, van Leeuwen KG, Huisman HJ, Fütterer JJ, de Rooij M. Artificial Intelligence Based Algorithms for Prostate Cancer Classification and Detection on Magnetic Resonance Imaging: A Narrative Review. Diagnostics (Basel) 2021; 11:diagnostics11060959. [PMID: 34073627 PMCID: PMC8229869 DOI: 10.3390/diagnostics11060959] [Citation(s) in RCA: 41] [Impact Index Per Article: 13.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/06/2021] [Revised: 05/19/2021] [Accepted: 05/21/2021] [Indexed: 12/14/2022] Open
Abstract
Due to the upfront role of magnetic resonance imaging (MRI) for prostate cancer (PCa) diagnosis, a multitude of artificial intelligence (AI) applications have been suggested to aid in the diagnosis and detection of PCa. In this review, we provide an overview of the current field, including studies between 2018 and February 2021, describing AI algorithms for (1) lesion classification and (2) lesion detection for PCa. Our evaluation of 59 included studies showed that most research has been conducted for the task of PCa lesion classification (66%) followed by PCa lesion detection (34%). Studies showed large heterogeneity in cohort sizes, ranging from 18 to 499 patients (median = 162), combined with different approaches for performance validation. Furthermore, 85% of the studies reported on stand-alone diagnostic accuracy, whereas 15% demonstrated the impact of AI on diagnostic thinking efficacy, indicating limited proof for the clinical utility of PCa AI applications. In order to introduce AI into the clinical workflow of PCa assessment, the robustness and generalizability of AI applications need to be further validated through external validation and clinical workflow experiments.
50
Björeland U, Nyholm T, Jonsson J, Skorpil M, Blomqvist L, Strandberg S, Riklund K, Beckman L, Thellenberg-Karlsson C. Impact of neoadjuvant androgen deprivation therapy on magnetic resonance imaging features in prostate cancer before radiotherapy. Phys Imaging Radiat Oncol 2021; 17:117-123. [PMID: 33898790 PMCID: PMC8058024 DOI: 10.1016/j.phro.2021.01.004] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 06/13/2020] [Revised: 01/19/2021] [Accepted: 01/19/2021] [Indexed: 01/01/2023]
Abstract
Background and purpose In locally advanced prostate cancer (PC), androgen deprivation therapy (ADT) in combination with whole prostate radiotherapy (RT) is the standard treatment. ADT affects both the prostate and the tumour on multiparametric magnetic resonance imaging (MRI), with decreased PC conspicuity and impaired localisation of the prostate lesion. Image texture analysis has been suggested to aid in separating tumour from normal tissue. The aim of the study was to investigate the impact of ADT on baseline-defined MRI features in prostate cancer, to assess whether they might be of use in radiotherapy planning. Materials and methods Fifty PC patients were included. Multiparametric MRI was performed before, and three months after, ADT. At baseline, a tumour volume with suspected tumour content was delineated on apparent diffusion coefficient (ADC) maps, along with a reference volume in normal prostatic tissue. These volumes were transferred to the MRIs acquired after ADT and were analysed with first-order and invariant Haralick features. Results At baseline, the median value and several of the invariant Haralick features of ADC showed a significant difference between tumour and reference volumes. After ADT, only the ADC median value could significantly differentiate the two volumes. Conclusions Invariant Haralick features could not distinguish between baseline-MRI-defined PC and normal tissue after ADT. The first-order median value remained significantly different between tumour and reference volumes after ADT, but the difference was less pronounced than before ADT.
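Rotation-invariant texture features of the kind mentioned above are often built by averaging grey-level co-occurrence matrix (GLCM) statistics over directions. Below is a small illustrative sketch of direction-averaged GLCM contrast on a quantised patch; this is a simplification for intuition, not the paper's invariant Haralick formulation.

```python
import numpy as np

def glcm_contrast(img, levels=8):
    """Grey-level co-occurrence contrast averaged over four directions,
    making the value invariant to 90-degree rotations of the patch."""
    offsets = [(0, 1), (1, 1), (1, 0), (1, -1)]  # right, down-right, down, down-left
    h, w = img.shape
    contrasts = []
    for dy, dx in offsets:
        P = np.zeros((levels, levels))
        for y in range(h):
            for x in range(w):
                y2, x2 = y + dy, x + dx
                if 0 <= y2 < h and 0 <= x2 < w:
                    P[img[y, x], img[y2, x2]] += 1
                    P[img[y2, x2], img[y, x]] += 1  # symmetric co-occurrences
        P /= P.sum()
        i, j = np.indices((levels, levels))
        contrasts.append(float((P * (i - j) ** 2).sum()))
    return float(np.mean(contrasts))

# A flat patch has zero contrast; a textured one does not.
flat = np.zeros((16, 16), dtype=int)
rng = np.random.default_rng(0)
textured = rng.integers(0, 8, size=(16, 16))
print(glcm_contrast(flat), glcm_contrast(textured))
```

The study's finding can be read against this picture: after ADT, second-order statistics like the one above lose their discriminative power, while the simple first-order ADC median retains some.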
Affiliation(s)
- Ulrika Björeland: Department of Radiation Sciences, Umeå University, Umeå, Sweden (corresponding author at: Department of Medical Physics, Sundsvall Hospital, 85186 Sundsvall, Sweden)
- Tufve Nyholm: Department of Radiation Sciences, Umeå University, Umeå, Sweden
- Joakim Jonsson: Department of Radiation Sciences, Umeå University, Umeå, Sweden
- Mikael Skorpil: Department of Molecular Medicine and Surgery, Karolinska Institutet, Stockholm, Sweden
- Lennart Blomqvist: Department of Molecular Medicine and Surgery, Karolinska Institutet, Stockholm, Sweden
- Sara Strandberg: Department of Radiation Sciences, Umeå University, Umeå, Sweden
- Katrine Riklund: Department of Radiation Sciences, Umeå University, Umeå, Sweden
- Lars Beckman: Department of Radiation Sciences, Umeå University, Umeå, Sweden