1
Johnson LA, Harmon SA, Yilmaz EC, Lin Y, Belue MJ, Merriman KM, Lay NS, Sanford TH, Sarma KV, Arnold CW, Xu Z, Roth HR, Yang D, Tetreault J, Xu D, Patel KR, Gurram S, Wood BJ, Citrin DE, Pinto PA, Choyke PL, Turkbey B. Automated prostate gland segmentation in challenging clinical cases: comparison of three artificial intelligence methods. Abdom Radiol (NY) 2024;49:1545-1556. PMID: 38512516. DOI: 10.1007/s00261-024-04242-7.
Abstract
OBJECTIVE Automated methods for prostate segmentation on MRI are typically developed under ideal scanning and anatomical conditions. This study evaluates three prostate segmentation AI algorithms in a challenging population of patients with prior treatments, variable anatomic characteristics, complex clinical histories, or atypical MRI acquisition parameters. MATERIALS AND METHODS A single-institution retrospective database was queried for the following conditions at prostate MRI: prior prostate-specific oncologic treatment, transurethral resection of the prostate (TURP), abdominoperineal resection (APR), hip prosthesis (HP), diverse prostate volumes (large ≥ 150 cc, small ≤ 25 cc), whole-gland tumor burden, magnet strength, noted poor quality, and various scanners (outside institutions/vendors). Final inclusion criteria required an axial T2-weighted (T2W) sequence and a corresponding prostate organ segmentation from an expert radiologist. Three previously developed algorithms were evaluated: (1) a deep learning (DL)-based model, (2) a commercially available shape-based model, and (3) a federated DL-based model. The Dice Similarity Coefficient (DSC) was calculated against the expert segmentation. DSC by model and scan factors was evaluated with the Wilcoxon signed-rank test and a linear mixed-effects (LMER) model. RESULTS 683 scans (651 patients) met inclusion criteria (mean prostate volume 60.1 cc [range 9.05-329 cc]). Overall DSC scores for models 1, 2, and 3 were 0.916 (0.707-0.971), 0.873 (0-0.997), and 0.894 (0.025-0.961), respectively, with the DL-based models demonstrating significantly higher performance (p < 0.01). In sub-group analysis by factors, Model 1 outperformed Model 2 (all p < 0.05) and Model 3 (all p < 0.001). The performance of all models was negatively impacted by prostate volume and poor signal quality (p < 0.01). Shape-based factors influenced the DL models (p < 0.001), while signal factors influenced all models (p < 0.001).
CONCLUSION Factors affecting anatomical and signal conditions of the prostate gland can adversely impact both DL and non-deep learning-based segmentation models.
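As background for the metric used throughout this entry, the Dice Similarity Coefficient compares a predicted mask with a reference mask as twice the overlap divided by the total mask sizes. The sketch below is a generic illustration; the array shapes, toy masks, and the empty-mask convention are assumptions for demonstration, not taken from the study's code:

```python
import numpy as np

def dice_similarity(pred: np.ndarray, ref: np.ndarray) -> float:
    """Dice Similarity Coefficient between two binary segmentation masks.

    DSC = 2|A ∩ B| / (|A| + |B|); returns 1.0 when both masks are empty
    (a common convention, assumed here).
    """
    pred = pred.astype(bool)
    ref = ref.astype(bool)
    denom = pred.sum() + ref.sum()
    if denom == 0:
        return 1.0  # both masks empty: treated as perfect agreement
    return 2.0 * np.logical_and(pred, ref).sum() / denom

# Toy 2D masks (illustrative only): prediction shifted one column from the
# reference, so 6 of 9 voxels overlap -> DSC = 2*6 / (9+9) = 0.667.
ref = np.zeros((5, 5), dtype=int)
ref[1:4, 1:4] = 1
pred = np.zeros((5, 5), dtype=int)
pred[1:4, 2:5] = 1
print(round(dice_similarity(pred, ref), 3))  # -> 0.667
```

A perfect segmentation scores 1.0, and gross failures approach 0, which matches the wide per-model ranges (e.g., 0-0.997) reported above.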
Affiliation(s)
- Latrice A Johnson, Stephanie A Harmon, Enis C Yilmaz, Yue Lin, Mason J Belue, Katie M Merriman, Nathan S Lay, Peter L Choyke, Baris Turkbey (Molecular Imaging Branch, National Cancer Institute, National Institutes of Health, Bethesda, MD, USA)
- Karthik V Sarma (Department of Psychiatry and Behavioral Sciences, University of California, San Francisco, CA, USA)
- Corey W Arnold (Department of Radiology, University of California, Los Angeles, Los Angeles, CA, USA)
- Ziyue Xu, Dong Yang, Daguang Xu (NVIDIA Corporation, Santa Clara, CA, USA)
- Krishnan R Patel, Deborah E Citrin (Radiation Oncology Branch, National Cancer Institute, National Institutes of Health, Bethesda, MD, USA)
- Sandeep Gurram, Peter A Pinto (Urologic Oncology Branch, National Cancer Institute, National Institutes of Health, Bethesda, MD, USA)
- Bradford J Wood (Center for Interventional Oncology, National Cancer Institute, NIH, and Department of Radiology, Clinical Center, NIH, Bethesda, MD, USA)
- Molecular Imaging Branch (B.T.), National Cancer Institute, National Institutes of Health, 10 Center Dr., MSC 1182, Building 10, Room B3B85, Bethesda, MD, 20892, USA
2
Fedorov A, Longabaugh WJR, Pot D, Clunie DA, Pieper SD, Gibbs DL, Bridge C, Herrmann MD, Homeyer A, Lewis R, Aerts HJWL, Krishnaswamy D, Thiriveedhi VK, Ciausu C, Schacherer DP, Bontempi D, Pihl T, Wagner U, Farahani K, Kim E, Kikinis R. National Cancer Institute Imaging Data Commons: Toward Transparency, Reproducibility, and Scalability in Imaging Artificial Intelligence. Radiographics 2023;43:e230180. PMID: 37999984. DOI: 10.1148/rg.230180.
Abstract
The remarkable advances of artificial intelligence (AI) technology are revolutionizing established approaches to the acquisition, interpretation, and analysis of biomedical imaging data. Development, validation, and continuous refinement of AI tools require easy access to large, high-quality annotated datasets that are both representative and diverse. The National Cancer Institute (NCI) Imaging Data Commons (IDC) hosts large and diverse publicly available cancer image data collections. By harmonizing all data based on industry standards and colocating them with analysis and exploration resources, the IDC aims to facilitate the development, validation, and clinical translation of AI tools and to address the well-documented challenges of establishing reproducible and transparent AI processing pipelines. Balanced use of established commercial products and open-source solutions, interconnected by standard interfaces, provides value and performance while preserving sufficient agility to address the evolving needs of the research community. Emphasis on tool development, use cases that demonstrate the utility of uniform data representation, and cloud-based analysis aims to ease adoption and help define best practices. Integration with other data in the broader NCI Cancer Research Data Commons infrastructure opens opportunities for multiomics studies incorporating imaging data, further empowering the research community to accelerate breakthroughs in cancer detection, diagnosis, and treatment.
Affiliation(s)
- Andrey Fedorov, William J R Longabaugh, David Pot, David A Clunie, Steven D Pieper, David L Gibbs, Christopher Bridge, Markus D Herrmann, André Homeyer, Rob Lewis, Hugo J W L Aerts, Deepa Krishnaswamy, Vamsi Krishna Thiriveedhi, Cosmin Ciausu, Daniela P Schacherer, Dennis Bontempi, Todd Pihl, Ulrike Wagner, Keyvan Farahani, Erika Kim, Ron Kikinis
- From the Department of Radiology, Brigham and Women's Hospital and Harvard Medical School, 399 Revolution Dr, Somerville, MA 02145 (A.F., D.K., V.K.T., C.C., R.K.); Institute for Systems Biology, Seattle, Wash (W.J.R.L., D.L.G.); General Dynamics Information Technology, Rockville, Md (D.P.); PixelMed Publishing, Bangor, Pa (D.A.C.); Isomics, Cambridge, Mass (S.D.P.); Departments of Radiology (C.B.) and Pathology (M.D.H.), Massachusetts General Hospital and Harvard Medical School, Boston, Mass; Fraunhofer MEVIS, Bremen, Germany (A.H., D.P.S.); Radical Imaging, Boston, Mass (R.L.); Artificial Intelligence in Medicine Program, Mass General Brigham, Harvard Medical School, Boston, Mass (H.J.W.L.A., D.B.); Radiology and Nuclear Medicine, CARIM & GROW, Maastricht University, Maastricht, the Netherlands (H.J.W.L.A., D.B.); Frederick National Laboratory for Cancer Research, Rockville, Md (T.P., U.W.); and National Cancer Institute, Bethesda, Md (K.F., E.K.)
3
Vesal S, Gayo I, Bhattacharya I, Natarajan S, Marks LS, Barratt DC, Fan RE, Hu Y, Sonn GA, Rusu M. Domain generalization for prostate segmentation in transrectal ultrasound images: A multi-center study. Med Image Anal 2022;82:102620. PMID: 36148705. PMCID: PMC10161676. DOI: 10.1016/j.media.2022.102620.
Abstract
Prostate biopsy and image-guided treatment procedures are often performed under the guidance of ultrasound fused with magnetic resonance imaging (MRI). Accurate image fusion relies on accurate segmentation of the prostate on ultrasound images. Yet the reduced signal-to-noise ratio and artifacts (e.g., speckle and shadowing) in ultrasound images limit the performance of automated prostate segmentation techniques, and generalizing these methods to new image domains is inherently difficult. In this study, we address these challenges by introducing a novel 2.5D deep neural network for prostate segmentation on ultrasound images. Our approach addresses the limitations of transfer learning and fine-tuning methods (i.e., the drop in performance on the original training data when model weights are updated) by combining a supervised domain adaptation technique with a knowledge distillation loss. The knowledge distillation loss preserves previously learned knowledge and reduces the performance drop after model fine-tuning on new datasets. Furthermore, our approach relies on an attention module that incorporates positional information from model features to improve segmentation accuracy. We trained our model on 764 subjects from one institution and fine-tuned it using only ten subjects from subsequent institutions. We analyzed the performance of our method on three large datasets encompassing 2067 subjects from three different institutions. Our method achieved an average Dice Similarity Coefficient (Dice) of 94.0 ± 0.03 and a Hausdorff Distance (HD95) of 2.28 mm in an independent set of subjects from the first institution. Moreover, our model generalized well to the studies from the other two institutions (Dice: 91.0 ± 0.03, HD95: 3.7 mm; and Dice: 82.0 ± 0.03, HD95: 7.1 mm). We introduced an approach that successfully segmented the prostate on ultrasound images in a multi-center study, suggesting its clinical potential to facilitate the accurate fusion of ultrasound and MR images to drive biopsy and image-guided treatments.
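The knowledge-distillation idea described above (keeping a fine-tuned "student" model close to the previously trained "teacher" so performance on the original data is preserved) is commonly implemented as a KL-divergence term on temperature-softened class probabilities. The following is a minimal generic sketch; the temperature, the toy logits, and the use of plain NumPy are illustrative assumptions, not the paper's actual loss or hyperparameters:

```python
import numpy as np

def softmax(logits: np.ndarray, temperature: float = 1.0) -> np.ndarray:
    """Temperature-softened softmax over the last (class) axis."""
    z = logits / temperature
    z = z - z.max(axis=-1, keepdims=True)  # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def distillation_loss(student_logits, teacher_logits, temperature=2.0):
    """KL(teacher || student) on softened class probabilities, averaged over
    voxels. Penalizes the student for drifting from the frozen teacher."""
    p_t = softmax(np.asarray(teacher_logits, float), temperature)
    p_s = softmax(np.asarray(student_logits, float), temperature)
    kl = np.sum(p_t * (np.log(p_t + 1e-12) - np.log(p_s + 1e-12)), axis=-1)
    return float(kl.mean())

# Two voxels, two classes (background / prostate), toy logits.
teacher = np.array([[2.0, -1.0], [0.5, 1.5]])
student_same = teacher.copy()                      # no drift after fine-tuning
student_drifted = np.array([[-1.0, 2.0], [1.5, 0.5]])  # predictions flipped

assert distillation_loss(student_same, teacher) < 1e-9   # identical: no penalty
assert distillation_loss(student_drifted, teacher) > 0.1  # drift is penalized
```

In a real fine-tuning loop this term would be added, with some weight, to the supervised segmentation loss computed on the new institution's labels.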
Affiliation(s)
- Sulaiman Vesal
- Department of Urology, Stanford University, 300 Pasteur Drive, Stanford, CA 94305, USA.
| | - Iani Gayo
- Centre for Medical Image Computing, Wellcome/EPSRC Centre for Interventional & Surgical Sciences, and Department of Medical Physics & Biomedical Engineering, University College London, 66-72 Gower St, London WC1E 6EA, UK
| | - Indrani Bhattacharya
- Department of Urology, Stanford University, 300 Pasteur Drive, Stanford, CA 94305, USA; Department of Radiology, Stanford University, 300 Pasteur Drive, Stanford, CA 94305, USA
| | - Shyam Natarajan
- Department of Urology, University of California Los Angeles, 200 Medical Plaza Driveway, Los Angeles, CA 90024, USA
| | - Leonard S Marks
- Department of Urology, University of California Los Angeles, 200 Medical Plaza Driveway, Los Angeles, CA 90024, USA
| | - Dean C Barratt
- Centre for Medical Image Computing, Wellcome/EPSRC Centre for Interventional & Surgical Sciences, and Department of Medical Physics & Biomedical Engineering, University College London, 66-72 Gower St, London WC1E 6EA, UK
| | - Richard E Fan
- Department of Urology, Stanford University, 300 Pasteur Drive, Stanford, CA 94305, USA
| | - Yipeng Hu
- Centre for Medical Image Computing, Wellcome/EPSRC Centre for Interventional & Surgical Sciences, and Department of Medical Physics & Biomedical Engineering, University College London, 66-72 Gower St, London WC1E 6EA, UK
| | - Geoffrey A Sonn
- Department of Urology, Stanford University, 300 Pasteur Drive, Stanford, CA 94305, USA
| | - Mirabela Rusu
- Department of Radiology, Stanford University, 300 Pasteur Drive, Stanford, CA 94305, USA.
| |
Collapse
4
Dai W, Woo B, Liu S, Marques M, Engstrom C, Greer PB, Crozier S, Dowling JA, Chandra SS. CAN3D: Fast 3D medical image segmentation via compact context aggregation. Med Image Anal 2022; 82:102562. [PMID: 36049450] [DOI: 10.1016/j.media.2022.102562]
Abstract
Direct automatic segmentation of objects in 3D medical imaging, such as magnetic resonance (MR) imaging, is challenging as it often involves accurately identifying multiple individual structures with complex geometries within a large volume under investigation. Most deep learning approaches address these challenges by enhancing their learning capability through a substantial increase in trainable parameters within their models. Increased model complexity incurs high computational costs and large memory requirements, making it unsuitable for real-time implementation on standard clinical workstations, as clinical imaging systems typically have low-end hardware with limited memory and CPU resources. This paper presents a compact convolutional neural network (CAN3D) designed specifically for clinical workstations that allows the segmentation of large 3D MR images in real time. The proposed CAN3D has a small memory footprint, reducing the number of model parameters and the computer memory required for state-of-the-art performance, and maintains data integrity by directly processing full-size 3D input volumes with no patching required. The proposed architecture significantly reduces computational costs, especially for inference on the CPU. We also develop a novel loss function with extra shape constraints to improve segmentation accuracy for imbalanced classes in 3D MR images. Compared to state-of-the-art approaches (U-Net3D, improved U-Net3D, and V-Net), CAN3D reduced the number of parameters by up to two orders of magnitude and achieved up to 5 times faster inference when predicting with a standard commercial CPU (instead of a GPU).
For the open-access OAI-ZIB knee MR dataset, in comparison with manual segmentation, CAN3D achieved Dice coefficients of 0.87 ± 0.02 and 0.85 ± 0.04, with mean surface distance errors of 0.36 ± 0.32 mm and 0.29 ± 0.10 mm, for the imbalanced femoral and tibial cartilage classes respectively, when training volume-wise under only 12 GB of video memory. Similarly, CAN3D demonstrated high accuracy and efficiency on a pelvis 3D MR imaging dataset for prostate cancer consisting of 211 examinations with expert manual semantic labels (bladder, body, bone, rectum, prostate), now released publicly for scientific use as part of this work.
Affiliation(s)
- Wei Dai
- School of Information Technology and Electrical Engineering, The University of Queensland, Australia.
- Boyeong Woo
- School of Information Technology and Electrical Engineering, The University of Queensland, Australia
- Siyu Liu
- School of Information Technology and Electrical Engineering, The University of Queensland, Australia
- Matthew Marques
- School of Information Technology and Electrical Engineering, The University of Queensland, Australia
- Craig Engstrom
- School of Information Technology and Electrical Engineering, The University of Queensland, Australia
- Stuart Crozier
- School of Information Technology and Electrical Engineering, The University of Queensland, Australia
- Shekhar S Chandra
- School of Information Technology and Electrical Engineering, The University of Queensland, Australia
5
A Fusion Biopsy Framework for Prostate Cancer Based on Deformable Superellipses and nnU-Net. Bioengineering (Basel) 2022; 9:bioengineering9080343. [PMID: 35892756] [PMCID: PMC9394419] [DOI: 10.3390/bioengineering9080343]
Abstract
In prostate cancer, fusion biopsy, which couples magnetic resonance imaging (MRI) with transrectal ultrasound (TRUS), forms the basis for targeted biopsy by allowing information from both imaging modalities to be compared at the same time. Compared with the standard clinical procedure, it provides a less invasive option for patients and increases the likelihood of sampling cancerous tissue regions for subsequent pathology analyses. As a prerequisite to image fusion, segmentation must be achieved in both the MRI and TRUS domains. Automatic contour delineation of the prostate gland from TRUS images is a challenging task due to several factors, including unclear boundaries, speckle noise, and the variety of prostate anatomical shapes. Automatic methodologies, such as those based on deep learning, require a huge quantity of training data to achieve satisfactory results. In this paper, the authors propose a novel optimization formulation to find the best superellipse, a deformable model that can accurately represent the prostate shape. The advantage of the proposed approach is that it does not require extensive annotations and can be used independently of the specific transducer employed during prostate biopsies. Moreover, to show the clinical applicability of the method, this study also presents a module for the automatic segmentation of the prostate gland from MRI, exploiting the nnU-Net framework. Lastly, segmented contours from both imaging domains are fused with a customized registration algorithm to create a tool that can help the physician perform a targeted prostate biopsy by interacting with the graphical user interface.
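For reference, a superellipse is the curve |x/a|^n + |y/b|^n = 1, which reduces to an ellipse at n = 2 and grows boxier as n increases. A hedged NumPy sketch (illustrative of the shape model only, not the paper's optimization code) samples such a contour:

```python
import numpy as np

def superellipse_contour(a: float, b: float, n: float, num_points: int = 200) -> np.ndarray:
    """Sample points on |x/a|^n + |y/b|^n = 1 via the parametric form
    x = a*sign(cos t)|cos t|^(2/n), y = b*sign(sin t)|sin t|^(2/n)."""
    t = np.linspace(0.0, 2.0 * np.pi, num_points, endpoint=False)
    x = a * np.sign(np.cos(t)) * np.abs(np.cos(t)) ** (2.0 / n)
    y = b * np.sign(np.sin(t)) * np.abs(np.sin(t)) ** (2.0 / n)
    return np.column_stack([x, y])

# A prostate-like contour: 50 x 40 mm axes, mild "squareness"
pts = superellipse_contour(a=25.0, b=20.0, n=2.5)
# Every sampled point satisfies the implicit equation (up to float error)
residual = np.abs(pts[:, 0] / 25.0) ** 2.5 + np.abs(pts[:, 1] / 20.0) ** 2.5
```

A fitting procedure like the one described above would then optimize (a, b, n), plus pose parameters, to minimize the distance between this contour and the image evidence.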
6
Mata C, Walker P, Oliver A, Martí J, Lalande A. Usefulness of Collaborative Work in the Evaluation of Prostate Cancer from MRI. Clin Pract 2022; 12:350-362. [PMID: 35645317] [PMCID: PMC9149964] [DOI: 10.3390/clinpract12030040]
Abstract
The aim of this study is to show the usefulness of collaborative work in the evaluation of prostate cancer from T2-weighted MRI using a dedicated software tool. The variability of annotations on images of the prostate gland (central and peripheral zones as well as tumour) by two independent experts was first evaluated, and then compared with a consensus between these two experts. Using a prostate MRI database, the experts drew regions of interest (ROIs) corresponding to healthy prostate (peripheral and central zones) and cancer. One of the experts then drew the ROI with knowledge of the other expert's ROI. The surface area of each ROI was compared, and the Hausdorff distance and Dice coefficient were measured from the respective contours. They were evaluated between the different experiments, taking the annotations of the second expert as the reference. The results showed that the significant differences between the two experts disappeared with collaborative work. To conclude, this study shows that collaborative work with a dedicated tool enables consensus between experts in the evaluation of prostate cancer from T2-weighted MRI.
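The Hausdorff distance used to compare the experts' contours is the largest of all nearest-neighbour distances between the two contours; a toy NumPy sketch (not the study's software tool):

```python
import numpy as np

def hausdorff_distance(A: np.ndarray, B: np.ndarray) -> float:
    """Symmetric Hausdorff distance between point sets A (N,2) and B (M,2):
    max of the two directed distances max_a min_b ||a-b|| and max_b min_a ||b-a||."""
    d = np.linalg.norm(A[:, None, :] - B[None, :, :], axis=-1)  # (N, M) pairwise distances
    return max(d.min(axis=1).max(), d.min(axis=0).max())

square = np.array([[0, 0], [1, 0], [1, 1], [0, 1]], dtype=float)
shifted = square + [0.0, 0.5]  # same contour, shifted 0.5 in y
print(hausdorff_distance(square, shifted))  # 0.5
```

Unlike Dice, which measures overlap, the Hausdorff distance is sensitive to the single worst local disagreement between two delineations, which is why the two metrics are usually reported together.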
Affiliation(s)
- Christian Mata
- Pediatric Computational Imaging Research Group, Hospital Sant Joan de Déu, 08950 Esplugues de Llobregat, Spain
- Research Centre for Biomedical Engineering (CREB), Barcelona East School of Engineering, Universitat Politècnica de Catalunya, 08019 Barcelona, Spain
- Paul Walker
- ImViA Laboratory, Université de Bourgogne Franche-Comté, 64 Rue de Sully, 21000 Dijon, France
- Arnau Oliver
- Institute of Computer Vision and Robotics, University of Girona, Campus Montilivi, Ed. P-IV, 17003 Girona, Spain
- Joan Martí
- Institute of Computer Vision and Robotics, University of Girona, Campus Montilivi, Ed. P-IV, 17003 Girona, Spain
- Alain Lalande
- ImViA Laboratory, Université de Bourgogne Franche-Comté, 64 Rue de Sully, 21000 Dijon, France
7
Jiang J, Guo Y, Bi Z, Huang Z, Yu G, Wang J. Segmentation of prostate ultrasound images: the state of the art and the future directions of segmentation algorithms. Artif Intell Rev 2022. [DOI: 10.1007/s10462-022-10179-4]
8

9
Autonomous Prostate Segmentation in 2D B-Mode Ultrasound Images. Appl Sci (Basel) 2022. [DOI: 10.3390/app12062994]
Abstract
Prostate brachytherapy is a treatment for prostate cancer; during the planning of the procedure, ultrasound images of the prostate are taken. The prostate must be segmented out in each of the ultrasound images, and to assist with the procedure, an autonomous prostate segmentation algorithm is proposed. The prostate contouring system presented here is based on a novel superpixel algorithm, whereby pixels in the ultrasound image are grouped into superpixel regions that are optimized based on statistical similarity measures, so that the various structures within the ultrasound image can be differentiated. An active shape prostate contour model is developed and then used to delineate the prostate within the image based on the superpixel regions. Before segmentation, this contour model was fit to a series of point-based clinician-segmented prostate contours exported from conventional prostate brachytherapy planning software to develop a statistical model of the shape of the prostate. The algorithm was evaluated on nine sets of in vivo prostate ultrasound images and compared with manually segmented contours from a clinician, where the algorithm had an average volume difference of 4.49 mL or 10.89%.
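The statistical shape model described above is conventionally built as a point-distribution model: the clinician-segmented contours are stacked (each flattened to one row of point coordinates), the mean shape is taken, and PCA extracts the leading modes of variation. A minimal NumPy sketch under that assumption (not the authors' code):

```python
import numpy as np

def build_shape_model(contours, keep: int = 2):
    """Point-distribution model: mean shape plus top `keep` PCA modes via SVD.
    `contours` is (num_shapes, 2*num_points), each row a flattened contour."""
    X = np.asarray(contours, dtype=float)
    mean = X.mean(axis=0)
    _, _, Vt = np.linalg.svd(X - mean, full_matrices=False)
    return mean, Vt[:keep]  # modes are rows of Vt

def synthesize(mean, modes, weights):
    """A plausible new shape is the mean plus a weighted sum of the modes."""
    return mean + np.asarray(weights) @ modes

# Toy training set: three 2-point contours varying along one direction
contours = [[0, 0, 0, 0], [1, 1, 0, 0], [2, 2, 0, 0]]
mean, modes = build_shape_model(contours, keep=1)
```

During segmentation, an active shape model restricts the fitted contour to shapes reachable with small `weights`, which is what keeps the delineation anatomically plausible despite ultrasound noise.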
10
Estimation of the Prostate Volume from Abdominal Ultrasound Images by Image-Patch Voting. Appl Sci (Basel) 2022. [DOI: 10.3390/app12031390]
Abstract
Estimation of the prostate volume with ultrasound offers many advantages such as portability, low cost, harmlessness, and suitability for real-time operation. Abdominal ultrasound (AUS) is a practical procedure that deserves more attention in automated prostate-volume-estimation studies. As experts usually consider automatic end-to-end volume-estimation procedures to be non-transparent and uninterpretable systems, we proposed an expert-in-the-loop automatic system that follows the classical prostate-volume-estimation procedure. Our system directly estimates the diameter parameters of the standard ellipsoid formula to produce the prostate volume. To obtain the diameters, our system detects four diameter endpoints in the transverse and two diameter endpoints in the sagittal AUS images, as defined by the classical procedure. These endpoints are estimated using a new image-patch voting method to address characteristic problems of AUS images. We formed a novel prostate AUS data set from 305 patients with both transverse and sagittal planes. The data set includes MRI images for 75 of these patients. At least one expert manually marked all the data. Extensive experiments performed on this data set showed that the proposed system's estimates fell within the range of the experts' volume estimations, and that our system can be used in clinical practice.
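The "standard ellipsoid formula" referenced above multiplies the three orthogonal diameters: V = π/6 · L · W · H (roughly 0.52 times their product). In Python:

```python
import math

def ellipsoid_volume_ml(length_mm: float, width_mm: float, height_mm: float) -> float:
    """Classical ellipsoid formula for prostate volume from three diameters
    in mm, returned in millilitres (1 mL = 1000 mm^3)."""
    return math.pi / 6.0 * length_mm * width_mm * height_mm / 1000.0

# e.g. a 50 x 40 x 45 mm gland
print(round(ellipsoid_volume_ml(50, 40, 45), 1))  # 47.1 mL
```

The system described above automates only the detection of the six diameter endpoints; the volume itself still comes from this transparent formula, which is what keeps the pipeline interpretable to experts.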
11
Bhattacharya I, Khandwala YS, Vesal S, Shao W, Yang Q, Soerensen SJ, Fan RE, Ghanouni P, Kunder CA, Brooks JD, Hu Y, Rusu M, Sonn GA. A review of artificial intelligence in prostate cancer detection on imaging. Ther Adv Urol 2022; 14:17562872221128791. [PMID: 36249889] [PMCID: PMC9554123] [DOI: 10.1177/17562872221128791]
Abstract
A multitude of studies have explored the role of artificial intelligence (AI) in providing diagnostic support to radiologists, pathologists, and urologists in prostate cancer detection, risk-stratification, and management. This review provides a comprehensive overview of relevant literature regarding the use of AI models in (1) detecting prostate cancer on radiology images (magnetic resonance and ultrasound imaging), (2) detecting prostate cancer on histopathology images of prostate biopsy tissue, and (3) assisting in supporting tasks for prostate cancer detection (prostate gland segmentation, MRI-histopathology registration, MRI-ultrasound registration). We discuss both the potential of these AI models to assist in the clinical workflow of prostate cancer diagnosis, as well as the current limitations including variability in training data sets, algorithms, and evaluation criteria. We also discuss ongoing challenges and what is needed to bridge the gap between academic research on AI for prostate cancer and commercial solutions that improve routine clinical care.
Affiliation(s)
- Indrani Bhattacharya
- Department of Radiology, Stanford University School of Medicine, 1201 Welch Road, Stanford, CA 94305, USA
- Department of Urology, Stanford University School of Medicine, Stanford, CA, USA
- Yash S. Khandwala
- Department of Urology, Stanford University School of Medicine, Stanford, CA, USA
- Sulaiman Vesal
- Department of Urology, Stanford University School of Medicine, Stanford, CA, USA
- Wei Shao
- Department of Radiology, Stanford University School of Medicine, Stanford, CA, USA
- Qianye Yang
- Centre for Medical Image Computing, University College London, London, UK
- Wellcome / EPSRC Centre for Interventional and Surgical Sciences, University College London, London, UK
- Simon J.C. Soerensen
- Department of Urology, Stanford University School of Medicine, Stanford, CA, USA
- Department of Epidemiology & Population Health, Stanford University School of Medicine, Stanford, CA, USA
- Richard E. Fan
- Department of Urology, Stanford University School of Medicine, Stanford, CA, USA
- Pejman Ghanouni
- Department of Radiology, Stanford University School of Medicine, Stanford, CA, USA
- Department of Urology, Stanford University School of Medicine, Stanford, CA, USA
- Christian A. Kunder
- Department of Pathology, Stanford University School of Medicine, Stanford, CA, USA
- James D. Brooks
- Department of Urology, Stanford University School of Medicine, Stanford, CA, USA
- Yipeng Hu
- Centre for Medical Image Computing, University College London, London, UK
- Wellcome / EPSRC Centre for Interventional and Surgical Sciences, University College London, London, UK
- Mirabela Rusu
- Department of Radiology, Stanford University School of Medicine, Stanford, CA, USA
- Geoffrey A. Sonn
- Department of Radiology, Stanford University School of Medicine, Stanford, CA, USA
- Department of Urology, Stanford University School of Medicine, Stanford, CA, USA
12
Wang S, Liu X, Zhao J, Liu Y, Liu S, Liu Y, Zhao J. Computer auxiliary diagnosis technique of detecting cholangiocarcinoma based on medical imaging: A review. Comput Methods Programs Biomed 2021; 208:106265. [PMID: 34311415] [DOI: 10.1016/j.cmpb.2021.106265]
Abstract
BACKGROUND AND OBJECTIVES Cholangiocarcinoma (CCA) is one of the most aggressive human malignant tumors and is becoming a major cause of death and disability globally. Approximately 60% to 70% of CCA patients are diagnosed with local invasion or distant metastasis, and have thus lost the chance of radical operation; the overall median survival time is less than 12 months. As a non-invasive diagnostic technology, medical imaging, consisting of computed tomography (CT), magnetic resonance imaging (MRI), and ultrasound (US) imaging, is the most effective and commonly used method to detect CCA. Computer-aided diagnosis (CAD) systems based on medical imaging are helpful for rapid diagnosis and provide a credible "second opinion" for specialists. The purpose of this review is to categorize and review CAD techniques for detecting CCA on medical imaging. METHODS This work applies a four-level screening process to choose suitable publications. 125 research papers published in different academic research databases were selected and analyzed according to specific criteria. Across the five steps of CAD combined with artificial intelligence algorithms (medical image acquisition, processing, analysis, understanding, and verification), we collect the most advanced insights related to CCA detection. RESULTS This work provides a comprehensive analysis and comparison of current CAD systems for detecting CCA. After careful investigation, we find that the main detection methods are traditional machine learning and deep learning. The most commonly used approach combines a semi-automatic segmentation algorithm with a support vector machine classifier, a combination with good detection performance. The end-to-end training mode is making deep learning methods increasingly popular in CAD systems; however, due to limited medical training data, the accuracy of deep learning methods remains unsatisfactory.
CONCLUSIONS Based on this analysis of artificial intelligence methods applied to CCA, the reviewed techniques are expected to be applied in clinical practice in the future to improve the diagnosis and treatment of the disease. The work concludes with a prediction of future trends, which will be of significance for researchers working on medical imaging of CCA and artificial intelligence.
Affiliation(s)
- Shiyu Wang
- School of Electronic and Electric Engineering, Shanghai University of Engineering Science, Shanghai 201620, China
- Xiang Liu
- School of Electronic and Electric Engineering, Shanghai University of Engineering Science, Shanghai 201620, China.
- Jingwen Zhao
- School of Electronic and Electric Engineering, Shanghai University of Engineering Science, Shanghai 201620, China
- Yiwen Liu
- School of Electronic and Electric Engineering, Shanghai University of Engineering Science, Shanghai 201620, China
- Shuhong Liu
- Department of Pathology and Hepatology, The Fifth Medical Centre of Chinese PLA General Hospital, Beijing 100039, China
- Yisi Liu
- Department of Pathology and Hepatology, The Fifth Medical Centre of Chinese PLA General Hospital, Beijing 100039, China
- Jingmin Zhao
- Department of Pathology and Hepatology, The Fifth Medical Centre of Chinese PLA General Hospital, Beijing 100039, China.
13
Salvaggio G, Comelli A, Portoghese M, Cutaia G, Cannella R, Vernuccio F, Stefano A, Dispensa N, La Tona G, Salvaggio L, Calamia M, Gagliardo C, Lagalla R, Midiri M. Deep Learning Network for Segmentation of the Prostate Gland With Median Lobe Enlargement in T2-weighted MR Images: Comparison With Manual Segmentation Method. Curr Probl Diagn Radiol 2021; 51:328-333. [PMID: 34315623] [DOI: 10.1067/j.cpradiol.2021.06.006]
Abstract
PURPOSE The aim of this study was to evaluate a fully automated deep learning network, Efficient Neural Network (ENet), for segmentation of the prostate gland with median lobe enlargement, compared with manual segmentation. MATERIALS AND METHODS One hundred and three patients with median lobe enlargement on prostate MRI were retrospectively included. The ellipsoid formula, manual segmentation, and automatic segmentation were used for prostate volume estimation on T2-weighted MR images. ENet, a deep learning network originally developed for fast inference and high accuracy in augmented reality and automotive scenarios, was used for automatic segmentation. Student's t-test was performed to compare prostate volumes obtained with the ellipsoid formula, manual segmentation, and automated segmentation. To evaluate similarity to manual segmentation, sensitivity, positive predictive value (PPV), Dice similarity coefficient (DSC), volume overlap error (VOE), and volumetric difference (VD) were calculated. RESULTS Differences between the prostate volume obtained from the ellipsoid formula and those from manual and automatic segmentation were statistically significant (P < 0.049318 and P < 0.034305, respectively), while no statistical difference was found between the volumes obtained from manual and automatic segmentation (P = 0.438045). The performance of ENet versus manual segmentation was good, providing a sensitivity of 93.51%, a PPV of 87.93%, a DSC of 90.38%, a VOE of 17.32%, and a VD of 6.85%. CONCLUSION The presence of median lobe enlargement may lead to MRI volume overestimation when using the ellipsoid formula, so a segmentation method is recommended. ENet volume estimation showed accuracy similar to that of manual segmentation.
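The evaluation metrics listed (sensitivity, PPV, DSC, VOE, VD) all derive from the voxel-wise overlap of the predicted and manual masks. A hedged NumPy sketch of the standard definitions (the study's exact conventions, e.g. the sign of VD, may differ):

```python
import numpy as np

def overlap_metrics(pred: np.ndarray, truth: np.ndarray) -> dict:
    """Sensitivity = TP/(TP+FN), PPV = TP/(TP+FP), DSC = 2TP/(2TP+FP+FN),
    VOE = 1 - |A∩B|/|A∪B|, VD = (|pred| - |truth|)/|truth|."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    tp = np.logical_and(pred, truth).sum()      # true-positive voxels
    fp = np.logical_and(pred, ~truth).sum()     # false positives
    fn = np.logical_and(~pred, truth).sum()     # false negatives
    union = np.logical_or(pred, truth).sum()
    return {
        "sensitivity": tp / (tp + fn),
        "ppv": tp / (tp + fp),
        "dsc": 2 * tp / (2 * tp + fp + fn),
        "voe": 1 - tp / union,
        "vd": (pred.sum() - truth.sum()) / truth.sum(),
    }
```

Note that VOE and DSC are monotonically related (VOE = 2(1 - DSC)/(2 - DSC)), so a DSC of 90.38% and a VOE of 17.32% are two views of the same overlap.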
Affiliation(s)
- Giuseppe Salvaggio
- Section of Radiology - BiND, University Hospital "Paolo Giaccone", Via del Vespro 129, 90127, Palermo, Italy
- Albert Comelli
- Ri.Med Foundation, Palermo, Italy; Institute of Molecular Bioimaging and Physiology, National Research Council (IBFM-CNR), Cefalù, Italy
- Marzia Portoghese
- Section of Radiology - BiND, University Hospital "Paolo Giaccone", Via del Vespro 129, 90127, Palermo, Italy
- Giuseppe Cutaia
- Section of Radiology - BiND, University Hospital "Paolo Giaccone", Via del Vespro 129, 90127, Palermo, Italy; Department of Health Promotion, Mother and Child Care, Internal Medicine and Medical Specialties (PROMISE), University of Palermo, Palermo, Italy.
- Roberto Cannella
- Section of Radiology - BiND, University Hospital "Paolo Giaccone", Via del Vespro 129, 90127, Palermo, Italy
- Federica Vernuccio
- Section of Radiology - BiND, University Hospital "Paolo Giaccone", Via del Vespro 129, 90127, Palermo, Italy
- Alessandro Stefano
- Institute of Molecular Bioimaging and Physiology, National Research Council (IBFM-CNR), Cefalù, Italy
- Nino Dispensa
- Discipline Chirurgiche, Oncologiche e Stomatologiche - Unità operativa di Urologia, Università degli Studi di Palermo, Palermo, Italy
- Giuseppe La Tona
- Section of Radiology - BiND, University Hospital "Paolo Giaccone", Via del Vespro 129, 90127, Palermo, Italy
- Leonardo Salvaggio
- Section of Radiology - BiND, University Hospital "Paolo Giaccone", Via del Vespro 129, 90127, Palermo, Italy
- Mauro Calamia
- Section of Radiology - BiND, University Hospital "Paolo Giaccone", Via del Vespro 129, 90127, Palermo, Italy
- Cesare Gagliardo
- Section of Radiology - BiND, University Hospital "Paolo Giaccone", Via del Vespro 129, 90127, Palermo, Italy
- Roberto Lagalla
- Section of Radiology - BiND, University Hospital "Paolo Giaccone", Via del Vespro 129, 90127, Palermo, Italy
- Massimo Midiri
- Section of Radiology - BiND, University Hospital "Paolo Giaccone", Via del Vespro 129, 90127, Palermo, Italy
14
Massanova M, Robertson S, Barone B, Dutto L, Caputo VF, Bhatt JR, Ahmad I, Bada M, Obeidallah A, Crocetto F. The Comparison of Imaging and Clinical Methods to Estimate Prostate Volume: A Single-Centre Retrospective Study. Urol Int 2021; 105:804-810. [PMID: 34247169] [DOI: 10.1159/000516681]
Abstract
BACKGROUND Prostate volume (PV) is a useful tool in risk stratification, diagnosis, and follow-up of numerous prostatic diseases, including prostate cancer and benign prostatic hypertrophy. There is currently no accepted ideal PV measurement method. OBJECTIVE This study compares multiple means of PV estimation, including digital rectal examination (DRE), transrectal ultrasound (TRUS), magnetic resonance imaging (MRI), and radical prostatectomy specimens, to determine the best volume measurement method. METHODS A retrospective, observational, single-site study was performed with patients identified using an institutional database. A total of 197 patients who underwent robot-assisted radical prostatectomy were considered. Data collected included age, serum PSA at the time of prostate biopsy, clinical T stage, Gleason score, and PVs for each of the following methods: DRE, TRUS, MRI, and surgical specimen weight (SPW) and volume. RESULTS A paired t-test reported a statistically significant difference between PV measures (DRE, TRUS, MRI ellipsoid, MRI bullet, SP ellipsoid, and SP bullet) and the actual prostate weight. The lowest differences were reported for SP ellipsoid volume (M = -2.37; standard deviation [SD] = 10.227; t[167] = -3.011; and p = 0.003), MRI ellipsoid volume (M = -4.318; SD = 9.53; t[167] = -5.87; and p = 0.000), and MRI bullet volume (M = 5.31; SD = 10.77; t[167] = 6.387; and p = 0.000). CONCLUSION The PV obtained by MRI correlates with the PV obtained via auto-segmentation software as well as with the actual SPW, while also being more cost-effective and time-efficient, demonstrating that MRI estimation of PV is an adequate method for use in clinical practice for therapeutic planning and patient follow-up.
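The paired t-test used in this comparison reduces to a one-sample test on the per-patient differences; a stdlib-only sketch with hypothetical volumes (not the study's data):

```python
import math

def paired_t(x, y):
    """Paired Student t statistic: t = mean(d) / (sd(d)/sqrt(n)),
    where d_i = x_i - y_i; returns (t, degrees of freedom = n - 1)."""
    d = [a - b for a, b in zip(x, y)]
    n = len(d)
    mean = sum(d) / n
    var = sum((v - mean) ** 2 for v in d) / (n - 1)  # sample variance of the differences
    return mean / math.sqrt(var / n), n - 1

# Hypothetical per-patient volumes (mL): MRI-ellipsoid estimate vs. specimen
mri = [44.0, 51.2, 39.8, 60.5, 47.3]
specimen = [46.1, 55.0, 41.2, 63.9, 50.0]
t, dof = paired_t(mri, specimen)
```

The statistic is then compared against the Student t distribution with `dof` degrees of freedom to obtain p-values like those quoted above (t[167] corresponds to 167 degrees of freedom, i.e. 168 paired measurements).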
Affiliation(s)
- Matteo Massanova
- Department of Urology, Queen Elizabeth University Hospital, Glasgow, United Kingdom
- Sophie Robertson
- Department of Urology, Queen Elizabeth University Hospital, Glasgow, United Kingdom
- Biagio Barone
- Department of Neuroscience, Reproductive Sciences and Dentistry, School of Medicine, University of Naples "Federico II," Naples, Italy
- Lorenzo Dutto
- Department of Urology, Queen Elizabeth University Hospital, Glasgow, United Kingdom
- Vincenzo Francesco Caputo
- Department of Neuroscience, Reproductive Sciences and Dentistry, School of Medicine, University of Naples "Federico II," Naples, Italy
- Jaimin R Bhatt
- Department of Urology, Queen Elizabeth University Hospital, Glasgow, United Kingdom
- Imran Ahmad
- Department of Urology, Queen Elizabeth University Hospital, Glasgow, United Kingdom
- Maida Bada
- Department of Urology, Ospedale San Bassiano, Bassano del Grappa, Italy
- Alison Obeidallah
- Department of Urology, Queen Elizabeth University Hospital, Glasgow, United Kingdom
- Felice Crocetto
- Department of Neuroscience, Reproductive Sciences and Dentistry, School of Medicine, University of Naples "Federico II," Naples, Italy
15
Abstract
PURPOSE OF REVIEW The purpose of this review was to identify the most recent lines of research focusing on the application of artificial intelligence (AI) in the diagnosis and staging of prostate cancer (PCa) with imaging. RECENT FINDINGS The majority of studies focused on the improvement in the interpretation of bi-parametric and multiparametric magnetic resonance imaging, and in the planning of image guided biopsy. These initial studies showed that AI methods based on convolutional neural networks could achieve a diagnostic performance close to that of radiologists. In addition, these methods could improve segmentation and reduce inter-reader variability. Methods based on both clinical and imaging findings could help in the identification of high-grade PCa and more aggressive disease, thus guiding treatment decisions. Though these initial results are promising, only few studies addressed the repeatability and reproducibility of the investigated AI tools. Further, large-scale validation studies are missing and no diagnostic phase III or higher studies proving improved outcomes regarding clinical decision making have been conducted. SUMMARY AI techniques have the potential to significantly improve and simplify diagnosis, risk stratification and staging of PCa. Larger studies with a focus on quality standards are needed to allow a widespread introduction of AI in clinical practice.
Affiliation(s)
- Pascal A T Baltzer
- Department of Biomedical Imaging and Image-guided Therapy, Medical University of Vienna, Vienna, Austria
16
Meyer A, Mehrtash A, Rak M, Bashkanov O, Langbein B, Ziaei A, Kibel AS, Tempany CM, Hansen C, Tokuda J. Domain adaptation for segmentation of critical structures for prostate cancer therapy. Sci Rep 2021; 11:11480. [PMID: 34075061] [PMCID: PMC8169882] [DOI: 10.1038/s41598-021-90294-4]
Abstract
Preoperative assessment of the proximity of critical structures to the tumors is crucial in avoiding unnecessary damage during prostate cancer treatment. A patient-specific 3D anatomical model of those structures, namely the neurovascular bundles (NVB) and the external urethral sphincters (EUS), can enable physicians to perform such assessments intuitively. As a crucial step to generate a patient-specific anatomical model from preoperative MRI in a clinical routine, we propose a multi-class automatic segmentation based on an anisotropic convolutional network. Our specific challenge is to train the network model on a unique source dataset only available at a single clinical site and deploy it to another target site without sharing the original images or labels. As network models trained on data from a single source suffer from quality loss due to the domain shift, we propose a semi-supervised domain adaptation (DA) method to refine the model's performance in the target domain. Our DA method combines transfer learning (TL) and uncertainty-guided self-learning based on deep ensembles. Experiments on the segmentation of the prostate, NVB, and EUS show significant performance gain with the combination of those techniques compared with pure TL and the combination of TL with simple self-learning (Wilcoxon signed-rank test for all structures). Results on a different task and data (pancreas CT segmentation) demonstrate our method's generic application capabilities. Our method has the advantage that it does not require any further data from the source domain, unlike the majority of recent domain adaptation strategies. This makes our method suitable for clinical applications, where the sharing of patient data is restricted.
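One common reading of "uncertainty-guided self-learning based on deep ensembles" (an assumed sketch here, not the paper's published code) is to pseudo-label target-domain pixels only where the ensemble members agree:

```python
import numpy as np

def confident_pseudo_labels(member_probs, var_threshold: float = 0.1):
    """Average K ensemble foreground-probability maps, threshold the mean
    for a hard pseudo-label, and mark high-variance (disagreeing) pixels
    as -1 so they are ignored during self-training."""
    probs = np.stack(member_probs)             # (K, H, W)
    mean, var = probs.mean(axis=0), probs.var(axis=0)
    labels = (mean > 0.5).astype(np.int8)      # hard pseudo-label
    labels[var > var_threshold] = -1           # too uncertain to trust
    return labels

# Two pixels: the members agree on the first, disagree on the second
members = [np.array([[0.90, 0.10]]),
           np.array([[0.95, 0.90]]),
           np.array([[0.92, 0.10]])]
labels = confident_pseudo_labels(members)  # first pixel kept, second ignored
```

Restricting self-training to low-variance pixels is what keeps the target-domain fine-tuning from amplifying the ensemble's own domain-shift errors.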
Affiliation(s)
- Anneke Meyer: Department of Simulation and Graphics and Research Campus STIMULATE, University of Magdeburg, Magdeburg, Germany
- Alireza Mehrtash: Department of Radiology, Brigham and Women's Hospital, Harvard Medical School, Boston, MA, USA
- Marko Rak: Department of Simulation and Graphics and Research Campus STIMULATE, University of Magdeburg, Magdeburg, Germany
- Oleksii Bashkanov: Department of Simulation and Graphics and Research Campus STIMULATE, University of Magdeburg, Magdeburg, Germany
- Bjoern Langbein: Department of Radiology, Brigham and Women's Hospital, Harvard Medical School, Boston, MA, USA
- Alireza Ziaei: Department of Radiology, Brigham and Women's Hospital, Harvard Medical School, Boston, MA, USA
- Adam S Kibel: Division of Urology, Department of Surgery, Brigham and Women's Hospital, Harvard Medical School, Boston, MA, USA
- Clare M Tempany: Department of Radiology, Brigham and Women's Hospital, Harvard Medical School, Boston, MA, USA
- Christian Hansen: Department of Simulation and Graphics and Research Campus STIMULATE, University of Magdeburg, Magdeburg, Germany
- Junichi Tokuda: Department of Radiology, Brigham and Women's Hospital, Harvard Medical School, Boston, MA, USA
|
17
|
Girum KB, Crehange G, Lalande A. Learning With Context Feedback Loop for Robust Medical Image Segmentation. IEEE TRANSACTIONS ON MEDICAL IMAGING 2021; 40:1542-1554. [PMID: 33606627 DOI: 10.1109/tmi.2021.3060497] [Citation(s) in RCA: 11] [Impact Index Per Article: 3.7] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/12/2023]
Abstract
Deep learning has successfully been leveraged for medical image segmentation. It employs convolutional neural networks (CNN) to learn distinctive image features from a defined pixel-wise objective function. However, this approach can lead to reduced interdependence among output pixels, producing incomplete and unrealistic segmentation results. In this paper, we present a fully automatic deep learning method for robust medical image segmentation by formulating the segmentation problem as a recurrent framework using two systems. The first one is a forward system of an encoder-decoder CNN that predicts the segmentation result from the input image. The predicted probabilistic output of the forward system is then encoded by a fully convolutional network (FCN)-based context feedback system. The encoded feature space of the FCN is then integrated back into the forward system's feed-forward learning process. Using the FCN-based context feedback loop allows the forward system to learn and extract more high-level image features and fix previous mistakes, thereby improving prediction accuracy over time. Experimental results on four different clinical datasets demonstrate our method's potential application for single and multi-structure medical image segmentation by outperforming state-of-the-art methods. With the feedback loop, deep learning methods can now produce results that are both anatomically plausible and robust to low-contrast images. Therefore, formulating image segmentation as a recurrent framework of two interconnected networks via a context feedback loop can be a potential method for robust and efficient medical image analysis.
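The recurrent two-system formulation can be caricatured in a few lines of NumPy (a toy stand-in, not the paper's CNN/FCN pair; all names and weights here are invented for illustration): a pixel-wise "forward system" is re-run with a context feature encoded from its own previous prediction, where a simple blur stands in for the FCN feedback encoder.

```python
import numpy as np

def box_blur(p):
    # 3x3 mean filter via shifted copies (wrap-around boundaries, for brevity)
    s = sum(np.roll(np.roll(p, i, axis=0), j, axis=1)
            for i in (-1, 0, 1) for j in (-1, 0, 1))
    return s / 9.0

def segment_with_feedback(img, steps=3, w=6.0, v=2.0, b=-3.0):
    """Toy recurrent segmentation: the 'forward system' scores each pixel from
    the image plus a context feature; the 'feedback system' (here just a blur
    of the previous prediction, standing in for the FCN encoder) supplies that
    context on every iteration, reinforcing spatially coherent regions."""
    pred = 1.0 / (1.0 + np.exp(-(w * img + b)))        # initial forward pass
    for _ in range(steps):
        ctx = box_blur(pred)                           # encode previous output
        pred = 1.0 / (1.0 + np.exp(-(w * img + v * ctx + b)))
    return pred
```

The point of the sketch is only the data flow: prediction, encode, feed back, predict again.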
|
18
|
Meyer A, Chlebus G, Rak M, Schindele D, Schostak M, van Ginneken B, Schenk A, Meine H, Hahn HK, Schreiber A, Hansen C. Anisotropic 3D Multi-Stream CNN for Accurate Prostate Segmentation from Multi-Planar MRI. COMPUTER METHODS AND PROGRAMS IN BIOMEDICINE 2021; 200:105821. [PMID: 33218704 DOI: 10.1016/j.cmpb.2020.105821] [Citation(s) in RCA: 14] [Impact Index Per Article: 4.7] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 01/31/2020] [Accepted: 10/26/2020] [Indexed: 06/11/2023]
Abstract
BACKGROUND AND OBJECTIVE Accurate and reliable segmentation of the prostate gland in MR images can support the clinical assessment of prostate cancer, as well as the planning and monitoring of focal and loco-regional therapeutic interventions. Despite the availability of multi-planar MR scans due to standardized protocols, the majority of segmentation approaches presented in the literature consider the axial scans only. In this work, we investigate whether a neural network processing anisotropic multi-planar images could work in the context of a semantic segmentation task, and if so, how this additional information would improve the segmentation quality. METHODS We propose an anisotropic 3D multi-stream CNN architecture, which processes additional scan directions to produce a high-resolution isotropic prostate segmentation. We investigate two variants of our architecture, which work on two (dual-plane) and three (triple-plane) image orientations, respectively. The influence of additional information used by these models is evaluated by comparing them with a single-plane baseline processing only axial images. To realize a fair comparison, we employ a hyperparameter optimization strategy to select optimal configurations for the individual approaches. RESULTS Training and evaluation on two datasets spanning multiple sites show a statistically significant improvement over the plain axial segmentation (p<0.05 on the Dice similarity coefficient). The improvement can be observed especially at the base (0.898 single-plane vs. 0.906 triple-plane) and apex (0.888 single-plane vs. 0.901 dual-plane). CONCLUSION This study indicates that models employing two or three scan directions are superior to plain axial segmentation. The knowledge of precise boundaries of the prostate is crucial for the preservation of risk structures. Thus, the proposed models have the potential to improve the outcome of prostate cancer diagnosis and therapies.
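The gain from extra scan directions comes from combining orthogonal anisotropic volumes on a common isotropic grid. A minimal sketch under assumed axis conventions (nearest-neighbour upsampling and plain averaging of probabilities; the actual multi-stream model fuses learned feature maps, not output probabilities):

```python
import numpy as np

def upsample_axis(vol, axis, factor):
    """Nearest-neighbour upsampling along the thick (through-plane) axis."""
    return np.repeat(vol, factor, axis=axis)

def fuse_multiplanar(axial, sagittal, coronal, factor):
    """Fuse three anisotropic probability volumes into one isotropic estimate.
    Assumed layout: axial is low-resolution in z (axis 0), sagittal in x
    (axis 2), coronal in y (axis 1)."""
    ax = upsample_axis(axial, 0, factor)
    sa = upsample_axis(sagittal, 2, factor)
    co = upsample_axis(coronal, 1, factor)
    return (ax + sa + co) / 3.0
```

Each orientation is sharp along two axes and blurry along one, so their average is better resolved everywhere than any single plane.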
Affiliation(s)
- Anneke Meyer: Faculty of Computer Science and Research Campus STIMULATE, University of Magdeburg, Germany
- Grzegorz Chlebus: Fraunhofer Institute for Digital Medicine MEVIS, Bremen, Germany; Radboud University Medical Center, Nijmegen, The Netherlands
- Marko Rak: Faculty of Computer Science and Research Campus STIMULATE, University of Magdeburg, Germany
- Daniel Schindele: Clinic of Urology and Pediatric Urology, University Hospital Magdeburg, Germany
- Martin Schostak: Clinic of Urology and Pediatric Urology, University Hospital Magdeburg, Germany
- Bram van Ginneken: Radboud University Medical Center, Nijmegen, The Netherlands; Fraunhofer Institute for Digital Medicine MEVIS, Bremen, Germany
- Andrea Schenk: Fraunhofer Institute for Digital Medicine MEVIS, Bremen, Germany
- Hans Meine: University of Bremen, Medical Image Computing Group, Bremen, Germany; Fraunhofer Institute for Digital Medicine MEVIS, Bremen, Germany
- Horst K Hahn: Fraunhofer Institute for Digital Medicine MEVIS, Bremen, Germany
- Christian Hansen: Faculty of Computer Science and Research Campus STIMULATE, University of Magdeburg, Germany
|
19
|
Magnetic Resonance Imaging Based Radiomic Models of Prostate Cancer: A Narrative Review. Cancers (Basel) 2021; 13:cancers13030552. [PMID: 33535569 PMCID: PMC7867056 DOI: 10.3390/cancers13030552] [Citation(s) in RCA: 20] [Impact Index Per Article: 6.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/06/2020] [Revised: 01/18/2021] [Accepted: 01/27/2021] [Indexed: 12/11/2022] Open
Abstract
Simple Summary: The increasing interest in implementing artificial intelligence in radiomic models has occurred alongside advancement in the tools used for computer-aided diagnosis. Such tools typically apply both statistical and machine learning methodologies to assess the various modalities used in medical image analysis. Specific to prostate cancer, the radiomics pipeline has multiple facets that are amenable to improvement. This review discusses the steps of a magnetic resonance imaging based radiomics pipeline. Present successes, existing opportunities for refinement, and the most pertinent pending steps leading to clinical validation are highlighted.
Abstract: The management of prostate cancer (PCa) is dependent on biomarkers of biological aggression. This includes an invasive biopsy to facilitate a histopathological assessment of the tumor's grade. This review explores the technical processes of applying magnetic resonance imaging based radiomic models to the evaluation of PCa. By exploring how a deep radiomics approach further optimizes the prediction of a PCa grade group, it will become clear how this integration of artificial intelligence mitigates the major technological challenges faced by a traditional radiomic model: image acquisition, small data sets, image processing, labeling/segmentation, informative features, predicting molecular features, and incorporating predictive models. Other potential impacts of artificial intelligence on the personalized treatment of PCa will also be discussed. The role of deep radiomics analysis (a deep texture analysis that extracts features from convolutional neural network layers) will be highlighted. Existing clinical work and upcoming clinical trials will be reviewed, directing investigators to pertinent future directions in the field.
For future progress to result in clinical translation, the field will likely require multi-institutional collaboration in producing prospectively populated and expertly labeled imaging libraries.
|
20
|
Comelli A, Dahiya N, Stefano A, Vernuccio F, Portoghese M, Cutaia G, Bruno A, Salvaggio G, Yezzi A. Deep Learning-Based Methods for Prostate Segmentation in Magnetic Resonance Imaging. APPLIED SCIENCES (BASEL, SWITZERLAND) 2021; 11:782. [PMID: 33680505 PMCID: PMC7932306 DOI: 10.3390/app11020782] [Citation(s) in RCA: 29] [Impact Index Per Article: 9.7] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Indexed: 12/13/2022]
Abstract
Magnetic Resonance Imaging-based prostate segmentation is an essential task for adaptive radiotherapy and for radiomics studies whose purpose is to identify associations between imaging features and patient outcomes. Because manual delineation is a time-consuming task, we present three deep-learning (DL) approaches, namely UNet, efficient neural network (ENet), and efficient residual factorized convNet (ERFNet), whose aim is to tackle the fully-automated, real-time, and 3D delineation process of the prostate gland on T2-weighted MRI. While UNet is used in many biomedical image delineation applications, ENet and ERFNet are mainly applied in self-driving cars to compensate for limited hardware availability while still achieving accurate segmentation. We apply these models to a limited set of 85 manual prostate segmentations using the k-fold validation strategy and the Tversky loss function, and we compare their results. We find that ENet and UNet are more accurate than ERFNet, with ENet much faster than UNet. Specifically, ENet obtains a Dice similarity coefficient of 90.89% and a segmentation time of about 6 s using central processing unit (CPU) hardware to simulate real clinical conditions where a graphics processing unit (GPU) is not always available. In conclusion, ENet could be efficiently applied for prostate delineation even in small image training datasets with potential benefit for patient management personalization.
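The Tversky loss used above generalizes the Dice coefficient by weighting false positives and false negatives separately; a minimal NumPy sketch of the underlying index on binary masks (the training loss is typically 1 minus a soft version of this, computed on probabilities):

```python
import numpy as np

def tversky_index(pred, target, alpha=0.5, beta=0.5, eps=1e-7):
    """Tversky index between binary masks; alpha penalizes false positives,
    beta penalizes false negatives. With alpha = beta = 0.5 this reduces to
    the Dice similarity coefficient."""
    pred = np.asarray(pred, dtype=bool)
    target = np.asarray(target, dtype=bool)
    tp = np.logical_and(pred, target).sum()
    fp = np.logical_and(pred, ~target).sum()
    fn = np.logical_and(~pred, target).sum()
    return tp / (tp + alpha * fp + beta * fn + eps)
```

Raising beta above alpha biases training toward recall, which is a common choice when missing gland tissue is costlier than over-segmenting.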
Affiliation(s)
- Albert Comelli: Ri.MED Foundation, Via Bandiera, 11, 90133 Palermo, Italy; Institute of Molecular Bioimaging and Physiology, National Research Council (IBFM-CNR), 90015 Cefalù, Italy
- Navdeep Dahiya: Department of Electrical and Computer Engineering, Georgia Institute of Technology, Atlanta, GA 30332, USA
- Alessandro Stefano: Institute of Molecular Bioimaging and Physiology, National Research Council (IBFM-CNR), 90015 Cefalù, Italy
- Federica Vernuccio: Dipartimento di Biomedicina, Neuroscienze e Diagnostica avanzata (BIND), University of Palermo, 90127 Palermo, Italy
- Marzia Portoghese: Dipartimento di Biomedicina, Neuroscienze e Diagnostica avanzata (BIND), University of Palermo, 90127 Palermo, Italy
- Giuseppe Cutaia: Dipartimento di Biomedicina, Neuroscienze e Diagnostica avanzata (BIND), University of Palermo, 90127 Palermo, Italy
- Alberto Bruno: Dipartimento di Biomedicina, Neuroscienze e Diagnostica avanzata (BIND), University of Palermo, 90127 Palermo, Italy
- Giuseppe Salvaggio: Dipartimento di Biomedicina, Neuroscienze e Diagnostica avanzata (BIND), University of Palermo, 90127 Palermo, Italy
- Anthony Yezzi: Department of Electrical and Computer Engineering, Georgia Institute of Technology, Atlanta, GA 30332, USA
|
21
|
A 3D-2D Hybrid U-Net Convolutional Neural Network Approach to Prostate Organ Segmentation of Multiparametric MRI. AJR Am J Roentgenol 2020; 216:111-116. [PMID: 32812797 DOI: 10.2214/ajr.19.22168] [Citation(s) in RCA: 29] [Impact Index Per Article: 7.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 02/08/2023]
Abstract
OBJECTIVE Prostate cancer is the most commonly diagnosed cancer in men in the United States with more than 200,000 new cases in 2018. Multiparametric MRI (mpMRI) is increasingly used for prostate cancer evaluation. Prostate organ segmentation is an essential step of surgical planning for prostate fusion biopsies. Deep learning convolutional neural networks (CNNs) are the predominant method of machine learning for medical image recognition. In this study, we describe a deep learning approach, a subset of artificial intelligence, for automatic localization and segmentation of prostates from mpMRI. MATERIALS AND METHODS This retrospective study included patients who underwent prostate MRI and ultrasound-MRI fusion transrectal biopsy between September 2014 and December 2016. Axial T2-weighted images were manually segmented by two abdominal radiologists, which served as ground truth. These manually segmented images were used for training on a customized hybrid 3D-2D U-Net CNN architecture in a fivefold cross-validation paradigm for neural network training and validation. The Dice score, a measure of overlap between manually segmented and automatically derived segmentations, and Pearson linear correlation coefficient of prostate volume were used for statistical evaluation. RESULTS The CNN was trained on 299 MRI examinations (total number of MR images = 7774) of 287 patients. The customized hybrid 3D-2D U-Net had a mean Dice score of 0.898 (range, 0.890-0.908) and a Pearson correlation coefficient for prostate volume of 0.974. CONCLUSION A deep learning CNN can automatically segment the prostate organ from clinical MR images. Further studies should examine developing pattern recognition for lesion localization and quantification.
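The two evaluation statistics reported above (overlap per case, plus agreement of derived volumes across cases) can be sketched in NumPy; this is a generic illustration of the metrics, not the study's code, and `evaluate` with its `voxel_volume_cc` parameter is an assumed interface:

```python
import numpy as np

def dice_score(pred, gt):
    """Dice overlap between two boolean masks."""
    inter = np.logical_and(pred, gt).sum()
    return 2.0 * inter / (pred.sum() + gt.sum())

def evaluate(cases, voxel_volume_cc):
    """cases: list of (predicted_mask, manual_mask) boolean arrays.
    Returns (mean Dice, Pearson r between predicted and manual volumes)."""
    dices = [dice_score(p, g) for p, g in cases]
    v_pred = [p.sum() * voxel_volume_cc for p, _ in cases]
    v_man = [g.sum() * voxel_volume_cc for _, g in cases]
    r = np.corrcoef(v_pred, v_man)[0, 1]
    return float(np.mean(dices)), float(r)
```

A high volume correlation with a mediocre Dice would indicate systematic boundary disagreement rather than size error, which is why both are reported.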
|
22
|
Prostate lesion segmentation in MR images using radiomics based deeply supervised U-Net. Biocybern Biomed Eng 2020. [DOI: 10.1016/j.bbe.2020.07.011] [Citation(s) in RCA: 8] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 02/08/2023]
|
23
|
A Prostate MRI Segmentation Tool Based on Active Contour Models Using a Gradient Vector Flow. APPLIED SCIENCES-BASEL 2020. [DOI: 10.3390/app10186163] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 11/17/2022]
Abstract
Medical support systems that assist in the diagnosis of prostate lesions, generally by way of prostate segmentation, are a major focus of interest in the recent literature. The main problem encountered in the diagnosis of a prostate study is the localization of regions of interest (ROIs) containing tumor tissue. In this paper, a new GUI tool based on semi-automatic prostate segmentation is presented. The main rationale behind this tool, and the focus of this article, is to facilitate the time-consuming segmentation process used for annotating images in clinical practice, enabling radiologists to use novel, easy-to-use semi-automatic segmentation techniques instead of manual segmentation. In this work, a detailed specification of the proposed segmentation algorithm, an active contour model (ACM) aided by a gradient vector flow (GVF) component, is given. The purpose is to support the manual segmentation of the main ROIs of the prostate gland zones. Finally, an experimental use case and a discussion of the results are presented.
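The GVF component mentioned above diffuses the edge-map gradient so that the external force of the active contour reaches into flat regions far from the boundary. A minimal NumPy sketch of the classic GVF iteration (Xu and Prince's formulation; boundary handling and parameters here are simplifications, not the tool's implementation):

```python
import numpy as np

def gradient_vector_flow(f, mu=0.2, iters=50):
    """Compute a Gradient Vector Flow field (u, v) from a 2D edge map f.
    Iterates: v <- v + mu * Laplacian(v) - |grad f|^2 * (v - grad f),
    i.e. pure diffusion where the edge map is flat, data fidelity near edges."""
    fy, fx = np.gradient(f)
    mag2 = fx ** 2 + fy ** 2
    u, v = fx.copy(), fy.copy()
    for _ in range(iters):
        # 5-point Laplacian via shifted copies (wrap-around boundaries)
        lap_u = (np.roll(u, 1, 0) + np.roll(u, -1, 0)
                 + np.roll(u, 1, 1) + np.roll(u, -1, 1) - 4 * u)
        lap_v = (np.roll(v, 1, 0) + np.roll(v, -1, 0)
                 + np.roll(v, 1, 1) + np.roll(v, -1, 1) - 4 * v)
        u += mu * lap_u - mag2 * (u - fx)
        v += mu * lap_v - mag2 * (v - fy)
    return u, v
```

The resulting field pulls a contour toward the edge even when it is initialized several pixels away, which is the practical advantage over the raw image gradient.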
|
24
|
Yang W, Shi Y, Park SH, Yang M, Gao Y, Shen D. An Effective MR-Guided CT Network Training for Segmenting Prostate in CT Images. IEEE J Biomed Health Inform 2020; 24:2278-2291. [DOI: 10.1109/jbhi.2019.2960153] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/08/2022]
|
25
|
Girum KB, Lalande A, Hussain R, Créhange G. A deep learning method for real-time intraoperative US image segmentation in prostate brachytherapy. Int J Comput Assist Radiol Surg 2020; 15:1467-1476. [PMID: 32691302 DOI: 10.1007/s11548-020-02231-x] [Citation(s) in RCA: 13] [Impact Index Per Article: 3.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/16/2019] [Accepted: 07/08/2020] [Indexed: 01/28/2023]
Abstract
PURPOSE This paper addresses detection of the clinical target volume (CTV) in intraoperative transrectal ultrasound (TRUS) image-guided permanent prostate brachytherapy. Developing a robust, automatic method to detect the CTV on intraoperative TRUS images is clinically important for faster and more reproducible interventions, which can benefit both the clinical workflow and patient health. METHODS We present a multi-task deep learning method for automatic prostate CTV boundary detection in intraoperative TRUS images by leveraging both low-level and high-level (prior shape) information. Our method includes a channel-wise feature calibration strategy for low-level feature extraction and learning-based prior knowledge modeling for prostate CTV shape reconstruction. It employs CTV shape reconstruction from automatically sampled boundary surface coordinates (pseudo-landmarks) to detect low-contrast and noisy regions across the prostate boundary, while being less biased by shadowing, inherent speckle, and artifact signals from the needle and implanted radioactive seeds. RESULTS The proposed method was evaluated on a clinical database of 145 patients who underwent permanent prostate brachytherapy under TRUS guidance. Our method achieved a mean accuracy of [Formula: see text] and a mean surface distance error of [Formula: see text]. Extensive ablation and comparison studies show that our method outperformed previous deep learning-based methods by more than 7% in Dice similarity coefficient and reduced the 3D Hausdorff distance error by 6.9 mm. CONCLUSION Our study demonstrates the potential of shape model-based deep learning methods for efficient and accurate CTV segmentation in ultrasound-guided interventions. Moreover, learning both low-level features and prior shape knowledge with channel-wise feature calibration can significantly improve the performance of deep learning methods in medical image segmentation.
Affiliation(s)
- Kibrom Berihu Girum: ImViA Laboratory, University of Burgundy, Batiment I3M, 64b rue sully, 21000, Dijon, France; Radiation Oncology Department, CGFL, Dijon, France
- Alain Lalande: ImViA Laboratory, University of Burgundy, Batiment I3M, 64b rue sully, 21000, Dijon, France; Medical Imaging Department, CHU Dijon, Dijon, France
- Raabid Hussain: ImViA Laboratory, University of Burgundy, Batiment I3M, 64b rue sully, 21000, Dijon, France
- Gilles Créhange: ImViA Laboratory, University of Burgundy, Batiment I3M, 64b rue sully, 21000, Dijon, France; Radiation Oncology Department, CGFL, Dijon, France
|
26
|
da Silva GLF, Diniz PS, Ferreira JL, França JVF, Silva AC, de Paiva AC, de Cavalcanti EAA. Superpixel-based deep convolutional neural networks and active contour model for automatic prostate segmentation on 3D MRI scans. Med Biol Eng Comput 2020; 58:1947-1964. [DOI: 10.1007/s11517-020-02199-5] [Citation(s) in RCA: 8] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/11/2019] [Accepted: 05/22/2020] [Indexed: 10/24/2022]
|
27
|
Wildeboer RR, van Sloun RJG, Wijkstra H, Mischi M. Artificial intelligence in multiparametric prostate cancer imaging with focus on deep-learning methods. COMPUTER METHODS AND PROGRAMS IN BIOMEDICINE 2020; 189:105316. [PMID: 31951873 DOI: 10.1016/j.cmpb.2020.105316] [Citation(s) in RCA: 29] [Impact Index Per Article: 7.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 07/03/2019] [Revised: 12/09/2019] [Accepted: 01/04/2020] [Indexed: 05/16/2023]
Abstract
Prostate cancer is today the most typical example of a pathology whose diagnosis requires multiparametric imaging, a strategy in which multiple imaging techniques are combined to reach acceptable diagnostic performance. However, the reviewing, weighing and coupling of multiple images not only places an additional burden on the radiologist, it also complicates the reviewing process. Prostate cancer imaging has therefore been an important target for the development of computer-aided diagnostic (CAD) tools. In this survey, we discuss the advances in CAD for prostate cancer over the last decades, with special attention to the deep-learning techniques that have been designed in the last few years. Moreover, we elaborate on and compare the methods employed to deliver the CAD output to the operator for further medical decision making.
Affiliation(s)
- Rogier R Wildeboer: Lab of Biomedical Diagnostics, Department of Electrical Engineering, Eindhoven University of Technology, De Zaale, 5600 MB, Eindhoven, the Netherlands
- Ruud J G van Sloun: Lab of Biomedical Diagnostics, Department of Electrical Engineering, Eindhoven University of Technology, De Zaale, 5600 MB, Eindhoven, the Netherlands
- Hessel Wijkstra: Lab of Biomedical Diagnostics, Department of Electrical Engineering, Eindhoven University of Technology, De Zaale, 5600 MB, Eindhoven, the Netherlands; Department of Urology, Amsterdam University Medical Centers, University of Amsterdam, Meibergdreef 9, 1105 AZ, Amsterdam, the Netherlands
- Massimo Mischi: Lab of Biomedical Diagnostics, Department of Electrical Engineering, Eindhoven University of Technology, De Zaale, 5600 MB, Eindhoven, the Netherlands
|
28
|
Wang S, Nie D, Qu L, Shao Y, Lian J, Wang Q, Shen D. CT Male Pelvic Organ Segmentation via Hybrid Loss Network With Incomplete Annotation. IEEE TRANSACTIONS ON MEDICAL IMAGING 2020; 39:2151-2162. [PMID: 31940526 PMCID: PMC8195629 DOI: 10.1109/tmi.2020.2966389] [Citation(s) in RCA: 8] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/13/2023]
Abstract
Sufficient data with complete annotation is essential for training deep models to perform automatic and accurate segmentation of CT male pelvic organs, especially when such data presents great challenges such as low contrast and large shape variation. However, manual annotation is expensive in terms of both cost and human effort, which usually results in insufficient completely annotated data in real applications. To this end, we propose a novel deep framework to segment male pelvic organs in CT images with incomplete annotation delineated in a very user-friendly manner. Specifically, we design a hybrid loss network derived from both voxel classification and boundary regression to jointly improve organ segmentation performance in an iterative way. Moreover, we introduce a label completion strategy to complete the labels of the rich unannotated voxels and then embed them into the training data to enhance the model capability. To reduce computational complexity and improve segmentation performance, we locate the pelvic region based on salient bone structures to focus on the candidate segmentation organs. Experimental results on a large planning CT pelvic organ dataset show that our proposed method with incomplete annotation achieves segmentation performance comparable to state-of-the-art methods with complete annotation. Moreover, our proposed method requires much less manual contouring effort from medical professionals, so that an institution-specific model can be more easily established.
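The hybrid of voxel classification and boundary regression under incomplete annotation can be sketched as a masked two-term loss (a schematic NumPy illustration under assumed inputs, not the paper's network: `sdf_pred`/`sdf_true` stand for predicted and reference signed boundary distances, and `annotated` marks where labels exist):

```python
import numpy as np

def hybrid_loss(prob, sdf_pred, label, sdf_true, annotated):
    """Masked hybrid loss: per-voxel binary cross-entropy (classification)
    plus L1 boundary-distance regression, averaged only over voxels that
    actually carry annotation."""
    eps = 1e-7
    ce = -(label * np.log(prob + eps) + (1 - label) * np.log(1 - prob + eps))
    l1 = np.abs(sdf_pred - sdf_true)
    m = annotated.astype(float)
    return float(((ce + l1) * m).sum() / (m.sum() + eps))
```

Masking the loss, rather than treating unlabeled voxels as background, is what keeps partial annotations from poisoning training before the label completion step fills them in.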
|
29
|
Bardis MD, Houshyar R, Chang PD, Ushinsky A, Glavis-Bloom J, Chahine C, Bui TL, Rupasinghe M, Filippi CG, Chow DS. Applications of Artificial Intelligence to Prostate Multiparametric MRI (mpMRI): Current and Emerging Trends. Cancers (Basel) 2020; 12:E1204. [PMID: 32403240 PMCID: PMC7281682 DOI: 10.3390/cancers12051204] [Citation(s) in RCA: 21] [Impact Index Per Article: 5.3] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/02/2020] [Revised: 05/02/2020] [Accepted: 05/08/2020] [Indexed: 01/13/2023] Open
Abstract
Prostate carcinoma is one of the most prevalent cancers worldwide. Multiparametric magnetic resonance imaging (mpMRI) is a non-invasive tool that can improve prostate lesion detection, classification, and volume quantification. Machine learning (ML), a branch of artificial intelligence, can rapidly and accurately analyze mpMRI images. ML could provide better standardization and consistency in identifying prostate lesions and enhance prostate carcinoma management. This review summarizes ML applications to prostate mpMRI and focuses on prostate organ segmentation, lesion detection and segmentation, and lesion characterization. A literature search was conducted to find studies that have applied ML methods to prostate mpMRI. To date, prostate organ segmentation and volume approximation have been well executed using various ML techniques. Prostate lesion detection and segmentation are much more challenging tasks for ML and were attempted in several studies. They largely remain unsolved problems due to data scarcity and the limitations of current ML algorithms. By contrast, prostate lesion characterization has been successfully completed in several studies because of better data availability. Overall, ML is well situated to become a tool that enhances radiologists' accuracy and speed.
Affiliation(s)
- Michelle D. Bardis: Department of Radiology, University of California, Irvine, Orange, CA 92868-3201, USA
- Roozbeh Houshyar: Department of Radiology, University of California, Irvine, Orange, CA 92868-3201, USA
- Peter D. Chang: Department of Radiology, University of California, Irvine, Orange, CA 92868-3201, USA
- Alexander Ushinsky: Mallinckrodt Institute of Radiology, Washington University Saint Louis, St. Louis, MO 63110, USA
- Justin Glavis-Bloom: Department of Radiology, University of California, Irvine, Orange, CA 92868-3201, USA
- Chantal Chahine: Department of Radiology, University of California, Irvine, Orange, CA 92868-3201, USA
- Thanh-Lan Bui: Department of Radiology, University of California, Irvine, Orange, CA 92868-3201, USA
- Mark Rupasinghe: Department of Radiology, University of California, Irvine, Orange, CA 92868-3201, USA
- Daniel S. Chow: Department of Radiology, University of California, Irvine, Orange, CA 92868-3201, USA
|
30
|
Lei Y, Dong X, Tian Z, Liu Y, Tian S, Wang T, Jiang X, Patel P, Jani AB, Mao H, Curran WJ, Liu T, Yang X. CT prostate segmentation based on synthetic MRI-aided deep attention fully convolution network. Med Phys 2020; 47:530-540. [PMID: 31745995 PMCID: PMC7764436 DOI: 10.1002/mp.13933] [Citation(s) in RCA: 52] [Impact Index Per Article: 13.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/14/2019] [Revised: 10/10/2019] [Accepted: 11/13/2019] [Indexed: 01/02/2023] Open
Abstract
PURPOSE Accurate segmentation of the prostate on computed tomography (CT) for treatment planning is challenging due to CT's poor soft tissue contrast. Magnetic resonance imaging (MRI) has been used to aid prostate delineation, but its final accuracy is limited by MRI-CT registration errors. We developed a deep attention-based segmentation strategy on CT-based synthetic MRI (sMRI) to deal with the CT prostate delineation challenge without MRI acquisition. METHODS AND MATERIALS We developed a prostate segmentation strategy which employs an sMRI-aided deep attention network to accurately segment the prostate on CT. Our method consists of three major steps. First, a cycle generative adversarial network was used to estimate an sMRI from CT images. Second, a deep attention fully convolution network was trained based on sMRI and the prostate contours deformed from MRIs. Attention models were introduced to pay more attention to prostate boundary. The prostate contour for a query patient was obtained by feeding the patient's CT images into the trained sMRI generation model and segmentation model. RESULTS The segmentation technique was validated with a clinical study of 49 patients by leave-one-out experiments and validated with an additional 50 patients by hold-out test. The Dice similarity coefficient, Hausdorff distance, and mean surface distance indices between our segmented and deformed MRI-defined prostate manual contours were 0.92 ± 0.09, 4.38 ± 4.66, and 0.62 ± 0.89 mm, respectively, with leave-one-out experiments, and were 0.91 ± 0.07, 4.57 ± 3.03, and 0.62 ± 0.65 mm, respectively, with hold-out test. CONCLUSIONS We have proposed a novel CT-only prostate segmentation strategy using CT-based sMRI, and validated its accuracy against the prostate contours that were manually drawn on MRI images and deformed to CT images. 
This technique could provide accurate prostate volume for treatment planning without requiring MRI acquisition, greatly facilitating the routine clinical workflow.
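The surface metrics reported above (Hausdorff and mean surface distance) can be computed directly from boundary point sets. A brute-force NumPy sketch, adequate for small contours (production pipelines typically use KD-trees or distance transforms instead):

```python
import numpy as np

def directed_distances(a, b):
    """Distance from every point of surface a to its nearest point of b.
    a, b: (N, d) and (M, d) arrays of boundary point coordinates."""
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)
    return d.min(axis=1)

def hausdorff(a, b):
    """Symmetric Hausdorff distance: worst-case boundary disagreement."""
    return max(directed_distances(a, b).max(), directed_distances(b, a).max())

def mean_surface_distance(a, b):
    """Symmetric mean surface distance: average boundary disagreement."""
    return 0.5 * (directed_distances(a, b).mean()
                  + directed_distances(b, a).mean())
```

Hausdorff captures the single worst outlier point, while the mean surface distance summarizes typical boundary error; the study reports both for exactly that reason.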
Affiliation(s)
- Yang Lei, Xue Dong, Zhen Tian, Yingzi Liu, Sibo Tian, Tonghe Wang, Xiaojun Jiang, Pretesh Patel, Ashesh B Jani, Walter J Curran, Tian Liu, Xiaofeng Yang: Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA 30322, USA
- Hui Mao: Department of Radiology and Imaging Sciences and Winship Cancer Institute, Emory University, Atlanta, GA 30322, USA
31
CNN-Based Prostate Zonal Segmentation on T2-Weighted MR Images: A Cross-Dataset Study. Neural Approaches to Dynamics of Signal Exchanges 2020. [DOI: 10.1007/978-981-13-8950-4_25]
32
Wang Y, Dou H, Hu X, Zhu L, Yang X, Xu M, Qin J, Heng PA, Wang T, Ni D. Deep Attentive Features for Prostate Segmentation in 3D Transrectal Ultrasound. IEEE Trans Med Imaging 2019; 38:2768-2778. [PMID: 31021793] [DOI: 10.1109/tmi.2019.2913184]
Abstract
Automatic prostate segmentation in transrectal ultrasound (TRUS) images is of essential importance for image-guided prostate interventions and treatment planning. However, developing such automatic solutions remains very challenging due to the missing/ambiguous boundary and inhomogeneous intensity distribution of the prostate in TRUS, as well as the large variability in prostate shapes. This paper develops a novel 3D deep neural network equipped with attention modules for better prostate segmentation in TRUS by fully exploiting the complementary information encoded in different layers of the convolutional neural network (CNN). Our attention module uses the attention mechanism to selectively leverage the multi-level features integrated from different layers to refine the features at each individual layer, suppressing non-prostate noise at shallow layers of the CNN and enriching the deep-layer features with more prostate detail. Experimental results on challenging 3D TRUS volumes show that our method attains satisfactory segmentation performance. The proposed attention mechanism is a general strategy to aggregate multi-level deep features and has the potential to be used for other medical image segmentation tasks. The code is publicly available at https://github.com/wulalago/DAF3D.
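The attention idea above, using an aggregated multi-level summary to gate one layer's feature map, can be illustrated with a minimal single-layer sketch; the sigmoid gating form, the weights, and the array sizes are illustrative assumptions, not the paper's architecture:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def attention_refine(feature, summary, w_f=1.0, w_s=1.0):
    """Hypothetical single-layer attention gate: a multi-level feature
    summary modulates one layer's feature map through a sigmoid mask,
    attenuating responses the aggregate evidence does not support."""
    gate = sigmoid(w_f * feature + w_s * summary)  # per-pixel weight in (0, 1)
    return gate * feature                          # suppress non-prostate noise

rng = np.random.default_rng(0)
feat = rng.standard_normal((8, 8))   # one layer's feature map (toy)
summ = rng.standard_normal((8, 8))   # aggregated multi-level summary (toy)
refined = attention_refine(feat, summ)
```

Because the gate stays strictly between 0 and 1, the refined map can only attenuate, never amplify, each response.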
33
Cerrolaza JJ, Picazo ML, Humbert L, Sato Y, Rueckert D, Ballester MÁG, Linguraru MG. Computational anatomy for multi-organ analysis in medical imaging: A review. Med Image Anal 2019; 56:44-67. [DOI: 10.1016/j.media.2019.04.002]
34
Karimi D, Zeng Q, Mathur P, Avinash A, Mahdavi S, Spadinger I, Abolmaesumi P, Salcudean SE. Accurate and robust deep learning-based segmentation of the prostate clinical target volume in ultrasound images. Med Image Anal 2019; 57:186-196. [PMID: 31325722] [DOI: 10.1016/j.media.2019.07.005]
Abstract
The goal of this work was to develop a method for accurate and robust automatic segmentation of the prostate clinical target volume in transrectal ultrasound (TRUS) images for brachytherapy. These images can be difficult to segment because of weak or insufficient landmarks or strong artifacts. We devise a method, based on convolutional neural networks (CNNs), that produces accurate segmentations on easy and difficult images alike. We propose two strategies to achieve improved segmentation accuracy on difficult images. First, for CNN training we adopt an adaptive sampling strategy, whereby the training process is encouraged to pay more attention to images that are difficult to segment. Secondly, we train a CNN ensemble and use the disagreement among this ensemble to identify uncertain segmentations and to estimate a segmentation uncertainty map. We improve uncertain segmentations by utilizing the prior shape information in the form of a statistical shape model. Our method achieves Hausdorff distance of 2.7 ± 2.3 mm and Dice score of 93.9 ± 3.5%. Comparisons with several competing methods show that our method achieves significantly better results and reduces the likelihood of committing large segmentation errors. Furthermore, our experiments show that our approach to estimating segmentation uncertainty is better than or on par with recent methods for estimation of prediction uncertainty in deep learning models. Our study demonstrates that estimation of model uncertainty and use of prior shape information can significantly improve the performance of CNN-based medical image segmentation methods, especially on difficult images.
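The ensemble-disagreement uncertainty map described above can be sketched as the per-pixel spread of the members' soft predictions; the standard-deviation measure and the toy probability maps below are illustrative choices, not necessarily the paper's exact disagreement metric:

```python
import numpy as np

def uncertainty_map(prob_maps):
    """Per-pixel disagreement of an ensemble of soft segmentations,
    measured here as the standard deviation across members: high values
    flag pixels where the CNNs disagree and the segmentation is uncertain."""
    stack = np.stack(prob_maps, axis=0)  # (members, H, W)
    return stack.std(axis=0)

# Two toy ensemble members that agree on the first row, disagree on the second.
members = [np.array([[0.90, 0.10], [0.80, 0.20]]),
           np.array([[0.95, 0.05], [0.20, 0.80]])]
u = uncertainty_map(members)
```

Pixels with high disagreement would then be the ones handed to the statistical shape model for correction.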
Affiliation(s)
- Davood Karimi, Qi Zeng, Prateek Mathur, Apeksha Avinash, Purang Abolmaesumi, Septimiu E Salcudean: Department of Electrical and Computer Engineering, University of British Columbia, Vancouver, BC, Canada
35
Lei Y, Tian S, He X, Wang T, Wang B, Patel P, Jani AB, Mao H, Curran WJ, Liu T, Yang X. Ultrasound prostate segmentation based on multidirectional deeply supervised V-Net. Med Phys 2019; 46:3194-3206. [PMID: 31074513] [PMCID: PMC6625925] [DOI: 10.1002/mp.13577]
Abstract
PURPOSE Transrectal ultrasound (TRUS) is a versatile and real-time imaging modality that is commonly used in image-guided prostate cancer interventions (e.g., biopsy and brachytherapy). Accurate segmentation of the prostate is key to biopsy needle placement, brachytherapy treatment planning, and motion management. Manual segmentation during these interventions is time-consuming and subject to inter- and intraobserver variation. To address these drawbacks, we aimed to develop a deep learning-based method that integrates deep supervision into a three-dimensional (3D) patch-based V-Net for prostate segmentation. METHODS AND MATERIALS We developed a multidirectional deep-learning-based method to automatically segment the prostate for ultrasound-guided radiation therapy. A 3D supervision mechanism is integrated into the V-Net stages to deal with the optimization difficulties of training a deep network with limited training data. We combine a binary cross-entropy (BCE) loss and a batch-based Dice loss into a stage-wise hybrid loss function for deep supervision training. During the segmentation stage, patches extracted from a newly acquired ultrasound image are fed to the trained network, which adaptively labels the prostate tissue. The final segmented prostate volume is reconstructed using patch fusion and further refined through contour refinement post-processing. RESULTS Forty-four patients' TRUS images were used to test our segmentation method. Our segmentation results were compared with the manually segmented contours (ground truth). The mean prostate volume Dice similarity coefficient (DSC), Hausdorff distance (HD), mean surface distance (MSD), and residual mean surface distance (RMSD) were 0.92 ± 0.03, 3.94 ± 1.55 mm, 0.60 ± 0.23 mm, and 0.90 ± 0.38 mm, respectively.
CONCLUSION We developed a novel deeply supervised deep learning-based approach with reliable contour refinement to automatically segment the TRUS prostate, demonstrated its clinical feasibility, and validated its accuracy compared to manual segmentation. The proposed technique could be a useful tool for diagnostic and therapeutic applications in prostate cancer.
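The hybrid loss described above, BCE plus a batch-based soft Dice term, can be sketched as follows; the equal 50/50 weighting and the toy arrays are illustrative assumptions, not the paper's tuned stage-wise values:

```python
import numpy as np

def bce_loss(p, y, eps=1e-7):
    """Binary cross-entropy between soft prediction p and binary label y."""
    p = np.clip(p, eps, 1 - eps)
    return -np.mean(y * np.log(p) + (1 - y) * np.log(1 - p))

def soft_dice_loss(p, y, eps=1e-7):
    """1 - soft Dice overlap; 'batch-based' when p and y hold a whole batch."""
    inter = np.sum(p * y)
    return 1.0 - (2.0 * inter + eps) / (np.sum(p) + np.sum(y) + eps)

def hybrid_loss(p, y, w=0.5):
    """Hybrid loss; equal weighting w is an illustrative choice."""
    return w * bce_loss(p, y) + (1 - w) * soft_dice_loss(p, y)

y = np.array([0.0, 1.0, 1.0, 0.0])   # toy voxel labels
p = np.array([0.1, 0.9, 0.8, 0.2])   # toy soft predictions
loss = hybrid_loss(p, y)
```

A perfect prediction drives both terms, and therefore the hybrid loss, toward zero.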
Affiliation(s)
- Yang Lei, Sibo Tian, Xiuxiu He, Tonghe Wang, Bo Wang, Pretesh Patel, Ashesh B. Jani, Walter J. Curran, Tian Liu, Xiaofeng Yang: Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA 30322, USA
- Hui Mao: Department of Radiology and Imaging Sciences and Winship Cancer Institute, Emory University, Atlanta, GA 30322, USA
36
van Sloun RJG, Wildeboer RR, Mannaerts CK, Postema AW, Gayet M, Beerlage HP, Salomon G, Wijkstra H, Mischi M. Deep Learning for Real-time, Automatic, and Scanner-adapted Prostate (Zone) Segmentation of Transrectal Ultrasound, for Example, Magnetic Resonance Imaging-transrectal Ultrasound Fusion Prostate Biopsy. Eur Urol Focus 2019; 7:78-85. [PMID: 31028016] [DOI: 10.1016/j.euf.2019.04.009]
Abstract
BACKGROUND Although recent advances in multiparametric magnetic resonance imaging (MRI) have led to an increase in MRI-transrectal ultrasound (TRUS) fusion prostate biopsies, these are time consuming, laborious, and costly. Introduction of a deep-learning approach could improve prostate segmentation. OBJECTIVE To exploit deep learning to perform automatic, real-time prostate (zone) segmentation on TRUS images from different scanners. DESIGN, SETTING, AND PARTICIPANTS Three datasets with TRUS images were collected at different institutions, using an iU22 (Philips Healthcare, Bothell, WA, USA), a Pro Focus 2202a (BK Medical), and an Aixplorer (SuperSonic Imagine, Aix-en-Provence, France) ultrasound scanner. The datasets contained 436 images from 181 men. OUTCOME MEASUREMENTS AND STATISTICAL ANALYSIS Manual delineations from an expert panel were used as ground truth. The (zonal) segmentation performance was evaluated in terms of pixel-wise accuracy, Jaccard index, and Hausdorff distance. RESULTS AND LIMITATIONS The developed deep-learning approach significantly improved prostate segmentation compared with a conventional automated technique, reaching a median accuracy of 98% (95% confidence interval 95-99%), a Jaccard index of 0.93 (0.80-0.96), and a Hausdorff distance of 3.0 (1.3-8.7) mm. Zonal segmentation yielded pixel-wise accuracies of 97% (95-99%) and 98% (96-99%) for the peripheral and transition zones, respectively. Supervised domain adaptation resulted in retention of high performance when applied to images from different ultrasound scanners (p > 0.05). Moreover, the algorithm's assessment of its own segmentation performance showed a strong correlation with the actual segmentation performance (Pearson's correlation 0.72, p < 0.001), indicating that possible incorrect segmentations can be identified swiftly. CONCLUSIONS Fusion-guided prostate biopsies, targeting suspicious lesions on MRI using TRUS, are increasingly performed.
The requirement for (semi)manual prostate delineation places a substantial burden on clinicians. Deep learning provides a means for fast and accurate (zonal) prostate segmentation of TRUS images that translates to different scanners. PATIENT SUMMARY Artificial intelligence for automatic delineation of the prostate on ultrasound was shown to be reliable and applicable to different scanners. This method can, for example, be applied to speed up, and possibly improve, guided prostate biopsies using magnetic resonance imaging-transrectal ultrasound fusion.
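The Jaccard index and pixel-wise accuracy reported above follow directly from the overlap counts of two masks; a minimal sketch on toy masks (not study data):

```python
import numpy as np

def jaccard(pred, truth):
    """Jaccard index |A intersect B| / |A union B| between binary masks."""
    pred, truth = np.asarray(pred, bool), np.asarray(truth, bool)
    union = np.logical_or(pred, truth).sum()
    return np.logical_and(pred, truth).sum() / union if union else 1.0

def pixel_accuracy(pred, truth):
    """Fraction of pixels on which the two masks agree (both classes)."""
    pred, truth = np.asarray(pred, bool), np.asarray(truth, bool)
    return (pred == truth).mean()

# Toy example: two overlapping 6x6 squares on a 10x10 grid.
a = np.zeros((10, 10), dtype=bool); a[2:8, 2:8] = True
b = np.zeros((10, 10), dtype=bool); b[3:9, 3:9] = True
```

Note that pixel accuracy counts the (usually large) background as agreement, which is why it runs much higher than the Jaccard index on the same pair of masks.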
Affiliation(s)
- Ruud J G van Sloun, Rogier R Wildeboer, Massimo Mischi: Laboratory of Biomedical Diagnostics, Department of Electrical Engineering, Eindhoven University of Technology, Eindhoven, The Netherlands
- Christophe K Mannaerts, Arnoud W Postema: Department of Urology, Amsterdam University Medical Centers, University of Amsterdam, Amsterdam, The Netherlands
- Maudy Gayet: Department of Urology, Jeroen Bosch Hospital, 's-Hertogenbosch, The Netherlands
- Harrie P Beerlage, Hessel Wijkstra: Laboratory of Biomedical Diagnostics, Eindhoven University of Technology, Eindhoven, and Department of Urology, Amsterdam University Medical Centers, Amsterdam, The Netherlands
- Georg Salomon: Martini Klinik-Prostate Cancer Center, University Hospital Hamburg Eppendorf, Hamburg, Germany
37
Jaouen V, Bert J, Mountris KA, Boussion N, Schick U, Pradier O, Valeri A, Visvikis D. Prostate Volume Segmentation in TRUS Using Hybrid Edge-Bhattacharyya Active Surfaces. IEEE Trans Biomed Eng 2019; 66:920-933. [DOI: 10.1109/tbme.2018.2865428]
38
Liu C, Gardner SJ, Wen N, Elshaikh MA, Siddiqui F, Movsas B, Chetty IJ. Automatic Segmentation of the Prostate on CT Images Using Deep Neural Networks (DNN). Int J Radiat Oncol Biol Phys 2019; 104:924-932. [PMID: 30890447] [DOI: 10.1016/j.ijrobp.2019.03.017]
Abstract
PURPOSE Recent advances in deep neural networks (DNNs) have unlocked opportunities for their application for automatic image segmentation. We have evaluated a DNN-based algorithm for automatic segmentation of the prostate gland on a large cohort of patient images. METHODS AND MATERIALS Planning-CT data sets for 1114 patients with prostate cancer were retrospectively selected and divided into 2 groups. Group A contained 1104 data sets, with 1 physician-generated prostate gland contour for each data set. Among these image sets, 771 were used for training, 193 for validation, and 140 for testing. Group B contained 10 data sets, each including prostate contours delineated by 5 independent physicians and a consensus contour generated using the STAPLE method in the CERR software package. All images were resampled to a spatial resolution of 1 × 1 × 1.5 mm. A region (128 × 128 × 64 voxels) containing the prostate was selected to train a DNN. The best-performing model on the validation data sets was used to segment the prostate on all testing images. Results were compared between DNN and physician-generated contours using the Dice similarity coefficient, Hausdorff distances, regional contour distances, and center-of-mass distances. RESULTS The mean Dice similarity coefficients between DNN-based prostate segmentation and physician-generated contours for test data in Group A, Group B, and Group B-consensus were 0.85 ± 0.06 (range, 0.65-0.93), 0.85 ± 0.04 (range, 0.80-0.91), and 0.88 ± 0.03 (range, 0.82-0.92), respectively. The Hausdorff distance was 7.0 ± 3.5 mm, 7.3 ± 2.0 mm, and 6.3 ± 2.0 mm for Group A, Group B, and Group B-consensus, respectively. The mean center-of-mass distances for all 3 data set groups were within 5 mm. CONCLUSIONS A DNN-based algorithm was used to automatically segment the prostate for a large cohort of patients with prostate cancer. 
DNN-based prostate segmentations were compared to the consensus contour for a smaller group of patients; the agreement between DNN segmentations and consensus contour was similar to the agreement reported in a previous study. Clinical use of DNNs is promising, but further investigation is warranted.
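The center-of-mass distance used in the comparison above can be sketched as follows, assuming the 1 × 1 × 1.5 mm spacing the images were resampled to; the toy volumes are illustrative:

```python
import numpy as np

def center_of_mass(mask):
    """Centroid of a binary mask in voxel coordinates."""
    return np.argwhere(mask).mean(axis=0)

def com_distance_mm(pred, truth, spacing=(1.0, 1.0, 1.5)):
    """Euclidean distance between mask centroids, scaled to millimetres
    by the per-axis voxel spacing."""
    d = (center_of_mass(pred) - center_of_mass(truth)) * np.asarray(spacing)
    return float(np.linalg.norm(d))

# Toy volumes: identical cubes shifted two slices apart along the 1.5 mm axis.
a = np.zeros((6, 6, 8), dtype=bool); a[1:3, 1:3, 1:3] = True
b = np.zeros((6, 6, 8), dtype=bool); b[1:3, 1:3, 3:5] = True
print(com_distance_mm(a, b))  # 3.0
```

The "within 5 mm" criterion in the abstract would then be a simple threshold on this value.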
Affiliation(s)
- Chang Liu, Stephen J Gardner, Ning Wen, Mohamed A Elshaikh, Farzan Siddiqui, Benjamin Movsas, Indrin J Chetty: Department of Radiation Oncology, Josephine Ford Cancer Institute, Henry Ford Health System, Detroit, Michigan
39
Yan K, Wang X, Kim J, Khadra M, Fulham M, Feng D. A propagation-DNN: Deep combination learning of multi-level features for MR prostate segmentation. Comput Methods Programs Biomed 2019; 170:11-21. [PMID: 30712600] [DOI: 10.1016/j.cmpb.2018.12.031]
Abstract
BACKGROUND AND OBJECTIVE Prostate segmentation on Magnetic Resonance (MR) imaging is problematic because disease changes the shape and boundaries of the gland and it can be difficult to separate the prostate from surrounding tissues. We propose an automated model that extracts and combines multi-level features in a deep neural network to segment the prostate on MR images. METHODS Our proposed model, the Propagation Deep Neural Network (P-DNN), incorporates the optimal combination of multi-level feature extraction as a single model. High-level features from the convolved data are extracted for prostate localization and shape recognition, while label propagation, driven by low-level cues, is embedded into a deep layer to delineate the prostate boundary. RESULTS A well-recognized benchmarking dataset (50 training and 30 testing cases from patients) was used to evaluate the P-DNN. When compared to existing DNN methods, the P-DNN statistically outperformed the baseline DNN models, with an average improvement in DSC of 3.19%. When compared to state-of-the-art non-DNN prostate segmentation methods, the P-DNN was competitive, achieving 89.9 ± 2.8% DSC and 6.84 ± 2.5 mm HD on training sets and 84.13 ± 5.18% DSC and 9.74 ± 4.21 mm HD on testing sets. CONCLUSION Our results show that the P-DNN maximizes multi-level feature extraction for prostate segmentation of MR images.
Affiliation(s)
- Ke Yan, Xiuying Wang, Jinman Kim, Dagan Feng: Biomedical and Multimedia Information Technology Research Group, School of Computer Science, University of Sydney, Sydney, Australia
- Mohamed Khadra: Department of Urology, Nepean Hospital, Kingswood, Australia
- Michael Fulham: Department of Molecular Imaging, Royal Prince Alfred Hospital, Sydney, Australia
40
Sun Y, Reynolds HM, Parameswaran B, Wraith D, Finnegan ME, Williams S, Haworth A. Multiparametric MRI and radiomics in prostate cancer: a review. Australas Phys Eng Sci Med 2019; 42:3-25. [PMID: 30762223] [DOI: 10.1007/s13246-019-00730-z]
Abstract
Multiparametric MRI (mpMRI) is an imaging modality that combines anatomical MR imaging with one or more functional MRI sequences. It has become a versatile tool for detecting and characterising prostate cancer (PCa). The traditional role of mpMRI was confined to PCa staging, but thanks to advanced imaging techniques, its role has expanded to various stages of clinical practice, including tumour detection, disease monitoring during active surveillance, and sequential imaging for patient follow-up. Meanwhile, with the growing speed of data generation and the increasing volume of imaging data, computerised methods are in high demand to process mpMRI data and extract useful information. Hence quantitative analysis of imaging data using radiomics has become an emerging paradigm. The application of radiomics approaches to prostate cancer has not only enabled automatic localisation of the disease but also provided a non-invasive solution for assessing tumour biology (e.g. aggressiveness and the presence of hypoxia). This article reviews mpMRI and its expanding role in PCa detection, staging and patient management. Following that, an overview of prostate radiomics is provided, with a special focus on its current applications as well as its future directions.
Affiliation(s)
- Yu Sun: University of Sydney, Sydney, Australia, and Peter MacCallum Cancer Centre, Melbourne, Australia
- Darren Wraith: Queensland University of Technology, Brisbane, Australia
- Mary E Finnegan: Imperial College Healthcare NHS Trust and Imperial College London, London, UK
41
Evaluation of an improved tool for non-invasive prediction of neonatal respiratory morbidity based on fully automated fetal lung ultrasound analysis. Sci Rep 2019; 9:1950. [PMID: 30760806] [PMCID: PMC6374419] [DOI: 10.1038/s41598-019-38576-w]
Abstract
The objective of this study was to evaluate the performance of a new version of quantusFLM®, a software tool for prediction of neonatal respiratory morbidity (NRM) by ultrasound, which incorporates a fully automated fetal lung delineation based on Deep Learning techniques. A set of 790 fetal lung ultrasound images obtained at 24 + 0–38 + 6 weeks’ gestation was evaluated. Perinatal outcomes and the occurrence of NRM were recorded. quantusFLM® version 3.0 was applied to all images to automatically delineate the fetal lung and predict NRM risk. The test was compared with the same technology but using a manual delineation of the fetal lung, and with a scenario where only gestational age was available. The software predicted NRM with a sensitivity, specificity, and positive and negative predictive value of 71.0%, 94.7%, 67.9%, and 95.4%, respectively, with an accuracy of 91.5%. The accuracy for predicting NRM obtained with the same texture analysis but using a manual delineation of the lung was 90.3%, and using only gestational age was 75.6%. To sum up, automated and non-invasive software predicted NRM with a performance similar to that reported for tests based on amniotic fluid analysis and much greater than that of gestational age alone.
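The sensitivity, specificity, predictive values, and accuracy reported above all derive from a single 2 × 2 confusion table; a minimal sketch (the counts in the example are illustrative, not the study's):

```python
def screening_metrics(tp, fp, tn, fn):
    """Standard screening-test metrics from a 2x2 confusion table."""
    total = tp + fp + tn + fn
    return {
        "sensitivity": tp / (tp + fn),   # true-positive rate
        "specificity": tn / (tn + fp),   # true-negative rate
        "ppv": tp / (tp + fp),           # positive predictive value
        "npv": tn / (tn + fn),           # negative predictive value
        "accuracy": (tp + tn) / total,
    }

# Hypothetical counts for a rare outcome: 10 positives among 100 cases.
m = screening_metrics(tp=8, fp=2, tn=88, fn=2)
```

With a rare outcome like NRM, accuracy and NPV are dominated by the many true negatives, which is why they can sit far above sensitivity and PPV.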
42
Fedorov A, Schwier M, Clunie D, Herz C, Pieper S, Kikinis R, Tempany C, Fennessy F. An annotated test-retest collection of prostate multiparametric MRI. Sci Data 2018; 5:180281. [PMID: 30512014] [PMCID: PMC6278692] [DOI: 10.1038/sdata.2018.281]
Abstract
Multiparametric Magnetic Resonance Imaging (mpMRI) is widely used for characterizing prostate cancer. Standard of care use of mpMRI in clinic relies on visual interpretation of the images by an expert. mpMRI is also increasingly used as a quantitative imaging biomarker of the disease. Little is known about repeatability of such quantitative measurements, and no test-retest datasets have been available publicly to support investigation of the technical characteristics of the MRI-based quantification in the prostate. Here we present an mpMRI dataset consisting of baseline and repeat prostate MRI exams for 15 subjects, manually annotated to define regions corresponding to lesions and anatomical structures, and accompanied by region-based measurements. This dataset aims to support further investigation of the repeatability of mpMRI-derived quantitative prostate measurements, study of the robustness and reliability of the automated analysis approaches, and to support development and validation of new image analysis techniques. The manuscript can also serve as an example of the use of DICOM for standardized encoding of the image annotation and quantification results.
Affiliation(s)
- Andriy Fedorov, Michael Schwier, Christian Herz, Clare Tempany, Fiona Fennessy: Brigham and Women’s Hospital, Harvard Medical School, Boston, MA, USA
- Ron Kikinis: Brigham and Women’s Hospital, Harvard Medical School, Boston, MA, USA; Fraunhofer MEVIS, Bremen, Germany; Mathematics/Computer Science Faculty, University of Bremen, Bremen, Germany
43
Determination of Prostate Volume: A Comparison of Contemporary Methods. Acad Radiol 2018; 25:1582-1587. [PMID: 29609953] [DOI: 10.1016/j.acra.2018.03.014]
Abstract
RATIONALE AND OBJECTIVES Prostate volume (PV) determination provides important clinical information. We compared PVs determined by digital rectal examination (DRE), transrectal ultrasound (TRUS), magnetic resonance imaging (MRI) with or without three-dimensional (3D) segmentation software, and surgical prostatectomy weight (SPW) and volume (SPV). MATERIALS AND METHODS This retrospective review from 2010 to 2016 included patients who underwent radical prostatectomy ≤1 year after multiparametric prostate MRI. PVs from DRE and TRUS were obtained from urology clinic notes. MRI-based PVs were calculated using bullet and ellipsoid formulas, automated 3D segmentation software (MRI-A3D), manual segmentation by a radiologist (MRI-R3D), and a third-year medical student (MRI-S3D). SPW and SPV were derived from pathology reports. Intraclass correlation coefficients compared the relative accuracy of each volume measurement. RESULTS Ninety-nine patients were analyzed. Median PVs were DRE 35 mL, TRUS 35 mL, MRI-bullet 49 mL, MRI-ellipsoid 39 mL, MRI-A3D 37 mL, MRI-R3D 36 mL, MRI-S3D 36 mL, SPW 54 mL, SPV-bullet 47 mL, and SPV-ellipsoid 37 mL. SPW and bullet formulas had consistently large PV, and formula-based PV had a wider spread than PV based on segmentation. Compared to MRI-R3D, the intraclass correlation coefficient was 0.91 for MRI-S3D, 0.90 for MRI-ellipsoid, 0.73 for SPV-ellipsoid, 0.72 for MRI-bullet, 0.71 for TRUS, 0.70 for SPW, 0.66 for SPV-bullet, 0.38 for MRI-A3D, and 0.33 for DRE. CONCLUSIONS With MRI-R3D measurement as the reference, the most reliable methods for PV estimation were MRI-S3D and MRI-ellipsoid formula. Automated segmentations must be individually assessed for accuracy, as they are not always truly representative of the prostate anatomy. Manual segmentation of the prostate does not require expert training.
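The two formula-based estimates compared above are simple products of the three orthogonal prostate dimensions; a sketch, where the ellipsoid coefficient π/6 is the standard prolate-ellipsoid form and 5π/24 is the commonly quoted "bullet" coefficient (stated here as an assumption, since the study does not spell out its constants):

```python
import math

def ellipsoid_volume(length, width, height):
    """Prolate-ellipsoid prostate volume estimate: (pi / 6) * L * W * H."""
    return math.pi / 6 * length * width * height

def bullet_volume(length, width, height):
    """'Bullet' prostate volume estimate, commonly (5 * pi / 24) * L * W * H."""
    return 5 * math.pi / 24 * length * width * height

# A hypothetical 5 x 4 x 4 cm gland: ellipsoid ~41.9 mL, bullet ~52.4 mL.
# The bullet estimate is always 25% larger, matching the study's observation
# that bullet-formula volumes run consistently high.
```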
44
Jaouen V, Bert J, Boussion N, Fayad H, Hatt M, Visvikis D. Image enhancement with PDEs and nonconservative advection flow fields. IEEE Trans Image Process 2018; 28:3075-3088. [PMID: 30452364] [DOI: 10.1109/tip.2018.2881838]
45
Lee YJ, Kim SH, Kim H, Lee JS, Piao S, Oh SJ. Value of computed tomography in calculating prostate volume when transrectal ultrasonography is not applicable. Low Urin Tract Symptoms 2018; 11:O147-O151. [PMID: 30010252] [DOI: 10.1111/luts.12236]
Abstract
OBJECTIVE The aim of this study was to evaluate the value of computed tomography (CT) in determining total prostate volume (TPV), as an alternative to transrectal ultrasonography (TRUS) when TRUS is not available. METHODS The cohort included patients who underwent both CT and TRUS within a 3-month interval from January 2012 to December 2013 at a single institution. In all, 67 non-contrast and 217 contrast-enhanced CT images were reviewed twice by 3 independent observers, with the second review 2 months after the first. Prostate length and width were measured on axial images and height on sagittal images. To compare differences between CT and TRUS in TPV estimation, the CT/TRUS ratio of TPV was calculated and a Bland-Altman plot was constructed. Inter- and intraobserver variabilities and the effect of contrast enhancement were also evaluated statistically. RESULTS The mean (± SD) age of patients was 64.5 ± 10.8 years and the mean time interval between CT and TRUS was 16.3 ± 22.6 days. The mean TRUS-measured TPV was 44.7 ± 24.9 mL and the mean CT/TRUS TPV ratio was 0.80 ± 0.20, indicating that TPV estimated by CT is 20% lower than that determined by TRUS, regardless of contrast enhancement (P > .05). The mean difference in TPV between TRUS and CT was 11.3 ± 14.3 mL, with differences of 1.7, 9.9, and 32.9 mL for prostate volumes of ≤30, >30-60, and >60 mL, respectively. Interobserver variability was excellent (r > 0.9), whereas intraobserver variability was very good (r > 0.7). CONCLUSION CT is a reliable method for prostate volume measurement and correlates well with TRUS. Although CT estimates of TPV are 20% lower than those obtained using TRUS, CT can be used as an alternative when TRUS is not available.
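The agreement analysis described above (mean difference plus a Bland-Altman plot) reduces to a bias and 95% limits of agreement over paired volume measurements; a generic sketch of that computation, not the study's code, with hypothetical toy inputs:

```python
import statistics

def bland_altman_limits(ct_volumes, trus_volumes):
    """Return (bias, (lower, upper)): the mean CT-minus-TRUS difference and
    its 95% limits of agreement (bias +/- 1.96 * SD of the differences)."""
    diffs = [c - t for c, t in zip(ct_volumes, trus_volumes)]
    bias = statistics.mean(diffs)
    sd = statistics.stdev(diffs)  # sample standard deviation
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)

# Hypothetical paired volumes (mL); CT reads systematically lower, as in the study.
bias, (lower, upper) = bland_altman_limits([40, 50, 60], [50, 60, 75])
```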
Affiliation(s)
- Young Ju Lee, Department of Urology, Seoul National University Bundang Hospital, Seongnam, Republic of Korea
- Sung Han Kim, Urology, Prostate Cancer, Research Institute and National Cancer Center, Goyang, Republic of Korea
- Hwanik Kim, Department of Urology, Seoul National University Hospital, Seoul, Republic of Korea
- Joong Sub Lee, Department of Urology, Seoul National University Hospital, Seoul, Republic of Korea
- Songzhe Piao, Department of Urology, Taizhou Central Hospital, Zhejiang Sheng, China
- Seung-June Oh, Department of Urology, Seoul National University Hospital, Seoul, Republic of Korea
|
46
|
A deep learning approach for real time prostate segmentation in freehand ultrasound guided biopsy. Med Image Anal 2018; 48:107-116. [PMID: 29886268 DOI: 10.1016/j.media.2018.05.010] [Citation(s) in RCA: 44] [Impact Index Per Article: 7.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/19/2018] [Revised: 05/30/2018] [Accepted: 05/31/2018] [Indexed: 12/14/2022]
Abstract
Targeted prostate biopsy, incorporating multi-parametric magnetic resonance imaging (mp-MRI) and its registration with ultrasound, is currently the state-of-the-art in prostate cancer diagnosis. The registration process in most targeted biopsy systems today relies heavily on accurate segmentation of ultrasound images. Automatic or semi-automatic segmentation is typically performed offline prior to the start of the biopsy procedure. In this paper, we present a deep neural network based real-time prostate segmentation technique during the biopsy procedure, hence paving the way for dynamic registration of mp-MRI and ultrasound data. In addition to using convolutional networks for extracting spatial features, the proposed approach employs recurrent networks to exploit the temporal information among a series of ultrasound images. One of the key contributions in the architecture is to use residual convolution in the recurrent networks to improve optimization. We also exploit recurrent connections within and across different layers of the deep networks to maximize the utilization of the temporal information. Furthermore, we perform dense and sparse sampling of the input ultrasound sequence to make the network robust to ultrasound artifacts. Our architecture is trained on 2,238 labeled transrectal ultrasound images, with an additional 637 and 1,017 unseen images used for validation and testing, respectively. We obtain a mean Dice similarity coefficient of 93%, a mean surface distance error of 1.10 mm and a mean Hausdorff distance error of 3.0 mm. A comparison of the reported results with those of a state-of-the-art technique indicates statistically significant improvement achieved by the proposed approach.
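The headline metric above, the Dice similarity coefficient, is a standard overlap measure between a predicted and a reference binary mask; a minimal generic definition (not the paper's evaluation code):

```python
def dice_coefficient(mask_a, mask_b):
    """Dice similarity coefficient between two equal-length binary masks:
    2 * |A intersect B| / (|A| + |B|); defined as 1.0 when both masks are empty."""
    a = [bool(x) for x in mask_a]
    b = [bool(x) for x in mask_b]
    intersection = sum(x and y for x, y in zip(a, b))
    total = sum(a) + sum(b)
    return 2 * intersection / total if total else 1.0

# Two flattened toy masks: one overlapping foreground pixel out of
# 2 + 1 foreground pixels gives a Dice of 2/3.
```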
|
47
|
Yang X, Yang JD, Hwang HP, Yu HC, Ahn S, Kim BW, You H. Segmentation of liver and vessels from CT images and classification of liver segments for preoperative liver surgical planning in living donor liver transplantation. Comput Methods Programs Biomed 2018; 158:41-52. [PMID: 29544789 DOI: 10.1016/j.cmpb.2017.12.008] [Citation(s) in RCA: 23] [Impact Index Per Article: 3.8] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 03/16/2017] [Revised: 11/13/2017] [Accepted: 12/11/2017] [Indexed: 06/08/2023]
Abstract
BACKGROUND AND OBJECTIVE The present study developed an effective surgical planning method consisting of a liver extraction stage, a vessel extraction stage, and a liver segment classification stage based on abdominal computerized tomography (CT) images. METHODS An automatic seed point identification method, customized level set methods, and an automated thresholding method were applied in this study to extraction of the liver, portal vein (PV), and hepatic vein (HV) from CT images. Then, a semi-automatic method was developed to separate PV and HV. Lastly, a local searching method was proposed for identification of PV branches and the nearest neighbor approximation method was applied to classifying liver segments. RESULTS Onsite evaluation of liver segmentation provided by the SLIVER07 website showed that the liver segmentation method achieved an average volumetric overlap accuracy of 95.2%. An expert radiologist evaluation of vessel segmentation showed no false positive errors or misconnections between PV and HV in the extracted vessel trees. Clinical evaluation of liver segment classification using 43 CT datasets from two medical centers showed that the proposed method achieved high accuracy in liver graft volumetry (absolute error, AE = 45.2 ± 20.9 ml; percentage of AE, %AE = 6.8% ± 3.2%; percentage of %AE > 10% = 16.3%; percentage of %AE > 20% = none) and the classified segment boundaries agreed with the intraoperative surgical cutting boundaries by visual inspection. CONCLUSIONS The method in this study is effective in segmentation of liver and vessels and classification of liver segments and can be applied to preoperative liver surgical planning in living donor liver transplantation.
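The volumetry accuracy figures above (AE and %AE) follow the usual definitions: absolute volume error, and that error as a percentage of the reference volume. The abstract does not spell these out, so the following is an assumed, generic sketch rather than the authors' evaluation code:

```python
def volumetry_errors(estimated_ml, reference_ml):
    """Absolute error (mL) and percentage absolute error (%AE) of an
    estimated volume against a reference volume."""
    ae = abs(estimated_ml - reference_ml)
    pct_ae = 100 * ae / reference_ml
    return ae, pct_ae

# A 950 mL graft estimate against a 1000 mL reference yields
# AE = 50 mL and %AE = 5%, which would fall under the 10% threshold
# the study uses to flag large errors.
```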
Affiliation(s)
- Xiaopeng Yang, Department of Industrial Management and Engineering, Pohang University of Science and Technology, Pohang, 37673, South Korea
- Jae Do Yang, Department of Surgery, Chonbuk National University Medical School, Jeonju, 54907, South Korea; Research Institute of Clinical Medicine of Chonbuk National University-Biomedical Research Institute of Chonbuk University Hospital, Jeonju, 54907, South Korea; Research Institute for Endocrine Sciences, Chonbuk National University, Jeonju, 54907, South Korea
- Hong Pil Hwang, Department of Surgery, Chonbuk National University Medical School, Jeonju, 54907, South Korea; Research Institute of Clinical Medicine of Chonbuk National University-Biomedical Research Institute of Chonbuk University Hospital, Jeonju, 54907, South Korea; Research Institute for Endocrine Sciences, Chonbuk National University, Jeonju, 54907, South Korea
- Hee Chul Yu, Department of Surgery, Chonbuk National University Medical School, Jeonju, 54907, South Korea; Research Institute of Clinical Medicine of Chonbuk National University-Biomedical Research Institute of Chonbuk University Hospital, Jeonju, 54907, South Korea; Research Institute for Endocrine Sciences, Chonbuk National University, Jeonju, 54907, South Korea
- Sungwoo Ahn, Department of Surgery, Chonbuk National University Medical School, Jeonju, 54907, South Korea; Research Institute of Clinical Medicine of Chonbuk National University-Biomedical Research Institute of Chonbuk University Hospital, Jeonju, 54907, South Korea; Research Institute for Endocrine Sciences, Chonbuk National University, Jeonju, 54907, South Korea
- Bong-Wan Kim, Department of Liver Transplantation and Hepatobiliary Surgery, Ajou University School of Medicine, Suwon, 16499, South Korea
- Heecheon You, Department of Industrial Management and Engineering, Pohang University of Science and Technology, Pohang, 37673, South Korea
|
48
|
Zeng Q, Samei G, Karimi D, Kesch C, Mahdavi SS, Abolmaesumi P, Salcudean SE. Prostate segmentation in transrectal ultrasound using magnetic resonance imaging priors. Int J Comput Assist Radiol Surg 2018; 13:749-757. [DOI: 10.1007/s11548-018-1742-6] [Citation(s) in RCA: 8] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/27/2018] [Accepted: 03/19/2018] [Indexed: 10/17/2022]
|
49
|
Atlas registration and ensemble deep convolutional neural network-based prostate segmentation using magnetic resonance imaging. Neurocomputing 2018. [DOI: 10.1016/j.neucom.2017.09.084] [Citation(s) in RCA: 58] [Impact Index Per Article: 9.7] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/22/2022]
|
50
|
Meiburger KM, Acharya UR, Molinari F. Automated localization and segmentation techniques for B-mode ultrasound images: A review. Comput Biol Med 2017; 92:210-235. [PMID: 29247890 DOI: 10.1016/j.compbiomed.2017.11.018] [Citation(s) in RCA: 56] [Impact Index Per Article: 8.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/10/2017] [Revised: 11/30/2017] [Accepted: 11/30/2017] [Indexed: 12/14/2022]
Abstract
B-mode ultrasound imaging is used extensively in medicine. Hence, there is a need for efficient segmentation tools to aid computer-aided diagnosis, image-guided interventions, and therapy. This paper presents a comprehensive review of automated localization and segmentation techniques for B-mode ultrasound images. The paper first describes the general characteristics of B-mode ultrasound images. It then provides insight into the localization and segmentation of tissues, both in the case in which organ/tissue localization provides the final segmentation and in the case in which a two-step segmentation process is needed because the desired boundaries are too fine to locate within the entire ultrasound frame. Subsequently, examples of the main techniques found in the literature are presented, including but not limited to shape priors, superpixel and classification, local pixel statistics, active contours, edge-tracking, dynamic programming, and data mining. Ten selected applications (abdomen/kidney, breast, cardiology, thyroid, liver, vascular, musculoskeletal, obstetrics, gynecology, prostate) are then investigated in depth, and the performances of a few specific applications are compared. In conclusion, future perspectives for B-mode-based segmentation are discussed, such as the integration of RF information, the use of higher-frequency probes where possible, the focus on fully automatic algorithms, and the growth of available data.
Affiliation(s)
- Kristen M Meiburger, Biolab, Department of Electronics and Telecommunications, Politecnico di Torino, Torino, Italy
- U Rajendra Acharya, Department of Electronic & Computer Engineering, Ngee Ann Polytechnic, Singapore; Department of Biomedical Engineering, School of Science and Technology, SUSS University, Singapore; Department of Biomedical Imaging, Faculty of Medicine, University of Malaya, Kuala Lumpur, Malaysia
- Filippo Molinari, Biolab, Department of Electronics and Telecommunications, Politecnico di Torino, Torino, Italy
|