1
Park GE, Kim SH, Nam Y, Kang J, Park M, Kang BJ. 3D Breast Cancer Segmentation in DCE-MRI Using Deep Learning With Weak Annotation. J Magn Reson Imaging 2024; 59:2252-2262. [PMID: 37596823] [DOI: 10.1002/jmri.28960]
Abstract
BACKGROUND Deep learning models require large-scale training to perform confidently, but obtaining annotated datasets in medical imaging is challenging. Weak annotation has emerged as a way to save time and effort. PURPOSE To develop a deep learning model with reliable performance for 3D breast cancer segmentation in dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI) using weak annotation. STUDY TYPE Retrospective. POPULATION Seven hundred and thirty-six women with breast cancer from a single institution, divided into a development dataset (N = 544) and a test dataset (N = 192). FIELD STRENGTH/SEQUENCE 3.0-T, 3D fat-saturated gradient-echo axial T1-weighted FLASH 3D volumetric interpolated breath-hold examination (VIBE) sequences. ASSESSMENT Two radiologists performed weak annotation of the ground truth using bounding boxes. Based on this, the ground truth annotation was completed through automatic and manual correction. A deep learning model using the 3D U-Net transformer (UNETR) was trained on this annotated dataset. The segmentation results of the test set were analyzed quantitatively and qualitatively, with the regions divided into the whole breast and the region of interest (ROI) within the bounding box. STATISTICAL TESTS Quantitatively, the Dice similarity coefficient was used to evaluate the segmentation result, and the volume correlation with the ground truth was evaluated with the Spearman correlation coefficient. Qualitatively, three readers independently evaluated the visual score on a four-point scale. A P-value <0.05 was considered statistically significant. RESULTS The developed deep learning model achieved median Dice similarity coefficients of 0.75 and 0.89 for the whole breast and ROI, respectively. The volume correlation coefficients with respect to the ground truth volume were 0.82 and 0.86 for the whole breast and ROI, respectively. The mean visual score, as evaluated by three readers, was 3.4.
DATA CONCLUSION The proposed deep learning model with weak annotation may show good performance for 3D segmentation of breast cancer using DCE-MRI. LEVEL OF EVIDENCE 3. TECHNICAL EFFICACY Stage 2.
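The Dice similarity coefficient reported in this abstract measures voxel-wise overlap between a predicted and a ground-truth mask. A minimal sketch with toy masks (the arrays below are illustrative only, not the study's data):

```python
import numpy as np

def dice_coefficient(pred: np.ndarray, truth: np.ndarray) -> float:
    """Dice similarity coefficient between two binary masks."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    total = pred.sum() + truth.sum()
    # Convention: two empty masks count as a perfect match
    return 2.0 * intersection / total if total > 0 else 1.0

# Toy 3D masks (illustrative, not from the study)
a = np.zeros((4, 4, 4), dtype=bool); a[1:3, 1:3, 1:3] = True  # 8 voxels
b = np.zeros((4, 4, 4), dtype=bool); b[1:3, 1:3, 1:4] = True  # 12 voxels
print(dice_coefficient(a, b))  # 2*8 / (8+12) = 0.8
```

The coefficient ranges from 0 (no overlap) to 1 (identical masks), which is why the ROI score of 0.89 above indicates closer agreement than the whole-breast score of 0.75.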
Affiliation(s)
- Ga Eun Park
- Department of Radiology, Seoul St. Mary's Hospital, College of Medicine, The Catholic University of Korea, Seoul, Republic of Korea
- Sung Hun Kim
- Department of Radiology, Seoul St. Mary's Hospital, College of Medicine, The Catholic University of Korea, Seoul, Republic of Korea
- Yoonho Nam
- Division of Biomedical Engineering, Hankuk University of Foreign Studies, Yongin, Republic of Korea
- Junghwa Kang
- Division of Biomedical Engineering, Hankuk University of Foreign Studies, Yongin, Republic of Korea
- Minjeong Park
- Division of Biomedical Engineering, Hankuk University of Foreign Studies, Yongin, Republic of Korea
- Bong Joo Kang
- Department of Radiology, Seoul St. Mary's Hospital, College of Medicine, The Catholic University of Korea, Seoul, Republic of Korea
2
Wyatt CR, Huang W. Editorial for "Deep Learning-Based Segmentation of Locally Advanced Breast Cancer on MRI in Relation to Residual Cancer Burden: A Multi-Institutional Cohort Study". J Magn Reson Imaging 2023; 58:1750-1751. [PMID: 36939778] [DOI: 10.1002/jmri.28680]
Affiliation(s)
- Cory R Wyatt
- Department of Diagnostic Radiology, Oregon Health & Science University, Portland, Oregon, USA
- Advanced Imaging Research Center, Oregon Health & Science University, Portland, Oregon, USA
- Wei Huang
- Advanced Imaging Research Center, Oregon Health & Science University, Portland, Oregon, USA
3
Ostmeier S, Axelrod B, Verhaaren BFJ, Christensen S, Mahammedi A, Liu Y, Pulli B, Li LJ, Zaharchuk G, Heit JJ. Non-inferiority of deep learning ischemic stroke segmentation on non-contrast CT within 16-hours compared to expert neuroradiologists. Sci Rep 2023; 13:16153. [PMID: 37752162] [PMCID: PMC10522706] [DOI: 10.1038/s41598-023-42961-x]
Abstract
We determined if a convolutional neural network (CNN) deep learning model can accurately segment acute ischemic changes on non-contrast CT compared to neuroradiologists. Non-contrast CT (NCCT) examinations from 232 acute ischemic stroke patients who were enrolled in the DEFUSE 3 trial were included in this study. Three experienced neuroradiologists independently segmented hypodensity that reflected the ischemic core on each scan. The neuroradiologist with the most experience (expert A) served as the ground truth for deep learning model training. Two additional neuroradiologists' (experts B and C) segmentations were used for testing. The 232 studies were randomly split into training and test sets. The training set was further randomly divided into 5 folds with training and validation sets. A 3-dimensional CNN architecture was trained and optimized to predict the segmentations of expert A from NCCT. The performance of the model was assessed using a set of volume, overlap, and distance metrics with non-inferiority thresholds of 20%, 3 ml, and 3 mm, respectively. The optimized model trained on expert A was compared to test experts B and C. We used a one-sided Wilcoxon signed-rank test to test for non-inferiority of the model-expert agreement compared to the inter-expert agreement. The final model for the ischemic core segmentation task reached a performance of 0.46 ± 0.09 Surface Dice at Tolerance 5 mm and 0.47 ± 0.13 Dice when trained on expert A. Compared to the two test neuroradiologists, the model-expert agreement was non-inferior to the inter-expert agreement, [Formula: see text]. Therefore, the CNN accurately delineates the hypodense ischemic core on NCCT in acute ischemic stroke patients with an accuracy comparable to neuroradiologists.
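The non-inferiority comparison described above pairs per-patient agreement scores and applies a one-sided Wilcoxon signed-rank test. A minimal sketch with synthetic data (the scores below are made up for illustration and are not the study's; the exact non-inferiority margin construction in the paper is not reproduced here):

```python
import numpy as np
from scipy.stats import wilcoxon

rng = np.random.default_rng(42)

# Hypothetical per-patient Dice agreement scores (synthetic)
inter_expert = rng.uniform(0.30, 0.60, size=30)             # expert B vs expert C
model_expert = inter_expert + rng.normal(0.02, 0.03, 30)    # model vs expert B

# One-sided Wilcoxon signed-rank test on the paired differences.
# H1: model-expert agreement tends to exceed inter-expert agreement.
stat, p = wilcoxon(model_expert - inter_expert, alternative="greater")
print(f"p = {p:.4f}")
```

The test is non-parametric, so it only assumes the paired differences come from a symmetric distribution, which suits bounded, non-Gaussian metrics like Dice.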
Collapse
Affiliation(s)
- Brian Axelrod
- Department of Computer Science, Stanford University, Stanford, USA
- Li-Jia Li
- Stanford School of Medicine, Stanford, USA
4
Müller-Franzes G, Müller-Franzes F, Huck L, Raaff V, Kemmer E, Khader F, Arasteh ST, Lemainque T, Kather JN, Nebelung S, Kuhl C, Truhn D. Fibroglandular tissue segmentation in breast MRI using vision transformers: a multi-institutional evaluation. Sci Rep 2023; 13:14207. [PMID: 37648728] [PMCID: PMC10468506] [DOI: 10.1038/s41598-023-41331-x]
Abstract
Accurate and automatic segmentation of fibroglandular tissue in breast MRI screening is essential for the quantification of breast density and background parenchymal enhancement. In this retrospective study, we developed and evaluated a transformer-based neural network for breast segmentation (TraBS) in multi-institutional MRI data, and compared its performance to the well-established convolutional neural network nnUNet. TraBS and nnUNet were trained and tested on 200 internal and 40 external breast MRI examinations using manual segmentations generated by experienced human readers. Segmentation performance was assessed in terms of the Dice score and the average symmetric surface distance. The Dice score for nnUNet was lower than for TraBS on the internal test set (0.909 ± 0.069 versus 0.916 ± 0.067, P < 0.001) and on the external test set (0.824 ± 0.144 versus 0.864 ± 0.081, P = 0.004). Moreover, the average symmetric surface distance was higher (i.e., worse) for nnUNet than for TraBS on the internal (0.657 ± 2.856 versus 0.548 ± 2.195, P = 0.001) and on the external test set (0.727 ± 0.620 versus 0.584 ± 0.413, P = 0.03). Our study demonstrates that transformer-based networks improve the quality of fibroglandular tissue segmentation in breast MRI compared to convolution-based models like nnUNet. These findings might help to enhance the accuracy of breast density and parenchymal enhancement quantification in breast MRI screening.
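The average symmetric surface distance (ASSD) used above averages, over both mask surfaces, the distance from each surface voxel of one mask to the nearest surface voxel of the other. A rough sketch using SciPy distance transforms (the helper names, default spacing, and toy masks are assumptions for illustration, not the paper's implementation):

```python
import numpy as np
from scipy.ndimage import binary_erosion, distance_transform_edt

def surface(mask: np.ndarray) -> np.ndarray:
    """Boundary voxels: in the mask but not in its erosion."""
    return mask & ~binary_erosion(mask)

def assd(a: np.ndarray, b: np.ndarray, spacing=(1.0, 1.0)) -> float:
    """Average symmetric surface distance between two binary masks."""
    sa, sb = surface(a), surface(b)
    # Distance of every voxel to the nearest surface voxel of the other mask
    d_to_b = distance_transform_edt(~sb, sampling=spacing)
    d_to_a = distance_transform_edt(~sa, sampling=spacing)
    return float((d_to_b[sa].sum() + d_to_a[sb].sum()) / (sa.sum() + sb.sum()))

# Toy 2D masks: a 3x3 square and the same square shifted by one voxel
a = np.zeros((8, 8), dtype=bool); a[2:5, 2:5] = True
b = np.zeros((8, 8), dtype=bool); b[2:5, 3:6] = True
print(assd(a, a))  # 0.0 for identical masks; assd(a, b) > 0
```

Passing the voxel spacing via `sampling` is what makes the result a physical distance (e.g., mm) rather than a voxel count.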
Affiliation(s)
- Gustav Müller-Franzes
- Department of Diagnostic and Interventional Radiology, University Hospital RWTH, Aachen, Germany
- Fritz Müller-Franzes
- Department of Diagnostic and Interventional Radiology, University Hospital RWTH, Aachen, Germany
- Luisa Huck
- Department of Diagnostic and Interventional Radiology, University Hospital RWTH, Aachen, Germany
- Vanessa Raaff
- Department of Diagnostic and Interventional Radiology, University Hospital RWTH, Aachen, Germany
- Eva Kemmer
- Department of Diagnostic and Interventional Radiology, University Hospital RWTH, Aachen, Germany
- Firas Khader
- Department of Diagnostic and Interventional Radiology, University Hospital RWTH, Aachen, Germany
- Soroosh Tayebi Arasteh
- Department of Diagnostic and Interventional Radiology, University Hospital RWTH, Aachen, Germany
- Teresa Lemainque
- Department of Diagnostic and Interventional Radiology, University Hospital RWTH, Aachen, Germany
- Jakob Nikolas Kather
- Else Kroener Fresenius Center for Digital Health, Technical University, Dresden, Germany
- Department of Medicine III, University Hospital RWTH, Aachen, Germany
- Sven Nebelung
- Department of Diagnostic and Interventional Radiology, University Hospital RWTH, Aachen, Germany
- Christiane Kuhl
- Department of Diagnostic and Interventional Radiology, University Hospital RWTH, Aachen, Germany
- Daniel Truhn
- Department of Diagnostic and Interventional Radiology, University Hospital RWTH, Aachen, Germany
5
Kokalj Ž, Džeroski S, Šprajc I, Štajdohar J, Draksler A, Somrak M. Machine learning-ready remote sensing data for Maya archaeology. Sci Data 2023; 10:558. [PMID: 37612295] [PMCID: PMC10447422] [DOI: 10.1038/s41597-023-02455-x]
Abstract
In our study, we set out to collect a multimodal annotated dataset for remote sensing of Maya archaeology that is suitable for deep learning. The dataset covers the area around Chactún, one of the largest ancient Maya urban centres in the central Yucatán Peninsula. It includes five types of data records: raster visualisations and a canopy height model from airborne laser scanning (ALS) data, Sentinel-1 and Sentinel-2 satellite data, and manual data annotations. The manual annotations (used as binary masks) represent three different types of ancient Maya structures (class labels: buildings, platforms, and aguadas, i.e., artificial reservoirs) within the study area, along with their exact locations and boundaries. The dataset is ready for use with machine learning, including convolutional neural networks (CNNs), for object recognition, object localization (detection), and semantic segmentation. We provide this dataset to help more research teams develop their own computer vision models for investigations of Maya archaeology or improve existing ones.
Affiliation(s)
- Žiga Kokalj
- Research Centre of the Slovenian Academy of Sciences and Arts (ZRC SAZU), Novi trg 2, 1000, Ljubljana, Slovenia
- Sašo Džeroski
- Information and Communication Technologies, Jožef Stefan International Postgraduate School, Jamova cesta 39, 1000, Ljubljana, Slovenia
- Jožef Stefan Institute, Jamova cesta 39, 1000, Ljubljana, Slovenia
- Ivan Šprajc
- Research Centre of the Slovenian Academy of Sciences and Arts (ZRC SAZU), Novi trg 2, 1000, Ljubljana, Slovenia
- Jasmina Štajdohar
- Research Centre of the Slovenian Academy of Sciences and Arts (ZRC SAZU), Novi trg 2, 1000, Ljubljana, Slovenia
- Andrej Draksler
- Research Centre of the Slovenian Academy of Sciences and Arts (ZRC SAZU), Novi trg 2, 1000, Ljubljana, Slovenia
- Maja Somrak
- Research Centre of the Slovenian Academy of Sciences and Arts (ZRC SAZU), Novi trg 2, 1000, Ljubljana, Slovenia
- Information and Communication Technologies, Jožef Stefan International Postgraduate School, Jamova cesta 39, 1000, Ljubljana, Slovenia
6
Salih M, Austin C, Warty RR, Tiktin C, Rolnik DL, Momeni M, Rezatofighi H, Reddy S, Smith V, Vollenhoven B, Horta F. Embryo selection through artificial intelligence versus embryologists: a systematic review. Hum Reprod Open 2023; 2023:hoad031. [PMID: 37588797] [PMCID: PMC10426717] [DOI: 10.1093/hropen/hoad031]
Abstract
STUDY QUESTION What is the present performance of artificial intelligence (AI) decision support during embryo selection compared to standard embryo selection by embryologists? SUMMARY ANSWER AI consistently outperformed the clinical teams in all the studies focused on embryo morphology and clinical outcome prediction during embryo selection assessment. WHAT IS KNOWN ALREADY The ART success rate is ∼30%, with a worrying trend of increasing female age correlating with considerably worse results. As such, there have been ongoing efforts to address this low success rate through the development of new technologies. With the advent of AI, machine learning can potentially be applied so that areas limited by human subjectivity, such as embryo selection, are enhanced through increased objectivity. Given the potential of AI to improve IVF success rates, it remains crucial to review the performance of AI against embryologists during embryo selection. STUDY DESIGN SIZE DURATION The search was conducted across PubMed, EMBASE, Ovid Medline, and IEEE Xplore from 1 June 2005 up to and including 7 January 2022. Included articles were restricted to those written in English. Search terms utilized across all databases were: ('Artificial intelligence' OR 'Machine Learning' OR 'Deep learning' OR 'Neural network') AND ('IVF' OR 'in vitro fertili*' OR 'assisted reproductive techn*' OR 'embryo'), where the character '*' instructs the search engine to include any completion of the search term. PARTICIPANTS/MATERIALS SETTING METHODS A literature search was conducted for literature relating to AI applications in IVF. Primary outcomes of interest were the accuracy, sensitivity, and specificity of embryo morphology grade assessments and of the likelihood of clinical outcomes, such as clinical pregnancy after IVF treatments. Risk of bias was assessed using the Modified Downs and Black Checklist.
MAIN RESULTS AND THE ROLE OF CHANCE Twenty articles were included in this review. There was no specific embryo assessment day across the studies; embryo development from Day 1 until Day 5/6 was investigated. The types of input for training the AI algorithms were images and time-lapse data (10/20), clinical information (6/20), and both images and clinical information (4/20). Each AI model demonstrated promise when compared to an embryologist's visual assessment. On average, the models predicted the likelihood of successful clinical pregnancy with greater accuracy than clinical embryologists, signifying greater reliability than human prediction. The AI models performed at a median accuracy of 75.5% (range 59-94%) in predicting embryo morphology grade. The correct prediction (ground truth) was defined using embryo images according to the embryologists' assessment following the respective local guidelines. Using blind test datasets, the embryologists' prediction accuracy was 65.4% (range 47-75%) with the same ground truth provided by the original local assessment. Similarly, AI models had a median accuracy of 77.8% (range 68-90%) in predicting clinical pregnancy from patient clinical treatment information, compared to 64% (range 58-76%) when performed by embryologists. When both images/time-lapse and clinical information inputs were combined, the median accuracy of the AI models was higher at 81.5% (range 67-98%), while clinical embryologists had a median accuracy of 51% (range 43-59%). LIMITATIONS REASONS FOR CAUTION The findings of this review are based on studies that have not been prospectively evaluated in a clinical setting. Additionally, a fair comparison of all the studies was deemed unfeasible owing to the heterogeneity of the studies, the development of the AI models, the databases employed, and the study design and quality. WIDER IMPLICATIONS OF THE FINDINGS AI holds considerable promise for the IVF field and embryo selection.
However, there needs to be a shift in developers' perception of the clinical outcome from successful implantation towards ongoing pregnancy or live birth. Additionally, existing models focus on locally generated databases and many lack external validation. STUDY FUNDING/COMPETING INTERESTS This study was funded by the Monash Data Futures Institute. All authors have no conflicts of interest to declare. REGISTRATION NUMBER CRD42021256333.
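The review's primary outcomes (accuracy, sensitivity, specificity) follow directly from a 2x2 confusion matrix over binary predictions. A minimal sketch with made-up labels (not data from any included study):

```python
import numpy as np

def accuracy_sensitivity_specificity(y_true, y_pred):
    """Accuracy, sensitivity (recall), and specificity from binary labels."""
    y_true = np.asarray(y_true, dtype=bool)
    y_pred = np.asarray(y_pred, dtype=bool)
    tp = np.sum(y_true & y_pred)      # true positives
    tn = np.sum(~y_true & ~y_pred)    # true negatives
    fp = np.sum(~y_true & y_pred)     # false positives
    fn = np.sum(y_true & ~y_pred)     # false negatives
    return (tp + tn) / y_true.size, tp / (tp + fn), tn / (tn + fp)

# Hypothetical embryo-level outcomes (1 = clinical pregnancy achieved)
y_true = [1, 1, 1, 0, 0, 0, 1, 0]
y_pred = [1, 1, 0, 0, 0, 1, 1, 0]
acc, sens, spec = accuracy_sensitivity_specificity(y_true, y_pred)
print(acc, sens, spec)  # 0.75 0.75 0.75
```

Reporting all three matters here because accuracy alone is misleading when pregnancy and non-pregnancy classes are imbalanced.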
Affiliation(s)
- M Salih
- Department of Obstetrics and Gynaecology, Monash University, Clayton, Victoria, Australia
- Department of Data Science and Artificial Intelligence, Faculty of Information Technology, Monash University, Clayton, Victoria, Australia
- C Austin
- Department of Obstetrics and Gynaecology, Monash University, Clayton, Victoria, Australia
- Department of Data Science and Artificial Intelligence, Faculty of Information Technology, Monash University, Clayton, Victoria, Australia
- R R Warty
- Department of Obstetrics and Gynaecology, Monash University, Clayton, Victoria, Australia
- C Tiktin
- School of Engineering, RMIT University, Melbourne, Victoria, Australia
- D L Rolnik
- Department of Obstetrics and Gynaecology, Monash University, Clayton, Victoria, Australia
- Women’s and Newborn Program, Monash Health, Melbourne, Victoria, Australia
- M Momeni
- Department of Obstetrics and Gynaecology, Monash University, Clayton, Victoria, Australia
- H Rezatofighi
- Department of Data Science and Artificial Intelligence, Faculty of Information Technology, Monash University, Clayton, Victoria, Australia
- Monash Data Futures Institute, Monash University, Clayton, Victoria, Australia
- S Reddy
- School of Medicine, Deakin University, Geelong, Victoria, Australia
- V Smith
- Department of Obstetrics and Gynaecology, Monash University, Clayton, Victoria, Australia
- B Vollenhoven
- Department of Obstetrics and Gynaecology, Monash University, Clayton, Victoria, Australia
- Women’s and Newborn Program, Monash Health, Melbourne, Victoria, Australia
- Monash IVF, Melbourne, Victoria, Australia
- F Horta
- Department of Obstetrics and Gynaecology, Monash University, Clayton, Victoria, Australia
- Monash Data Futures Institute, Monash University, Clayton, Victoria, Australia
- City Fertility, Melbourne, Victoria, Australia
7
Zhao X, Bai JW, Guo Q, Ren K, Zhang GJ. Clinical applications of deep learning in breast MRI. Biochim Biophys Acta Rev Cancer 2023; 1878:188864. [PMID: 36822377] [DOI: 10.1016/j.bbcan.2023.188864]
Abstract
Deep learning (DL) is one of the most powerful data-driven machine-learning techniques in artificial intelligence (AI). It can automatically learn from raw data without manual feature selection. DL models have led to remarkable advances in data extraction and analysis for medical imaging. Magnetic resonance imaging (MRI) has proven useful in delineating the characteristics and extent of breast lesions and tumors. This review summarizes the current state-of-the-art applications of DL models in breast MRI. Many recent DL models were examined in this field, along with several advanced learning approaches and methods for data normalization and breast and lesion segmentation. For clinical applications, DL-based breast MRI models were proven useful in five aspects: diagnosis of breast cancer, classification of molecular types, classification of histopathological types, prediction of neoadjuvant chemotherapy response, and prediction of lymph node metastasis. For subsequent studies, further improvement in data acquisition and preprocessing is necessary, additional DL techniques in breast MRI should be investigated, and wider clinical applications need to be explored.
Affiliation(s)
- Xue Zhao
- Fujian Key Laboratory of Precision Diagnosis and Treatment in Breast Cancer, Xiang'an Hospital of Xiamen University, School of Medicine, Xiamen University, Xiamen, China; National Institute for Data Science in Health and Medicine, Xiamen University, Xiamen, China; Department of Breast-Thyroid-Surgery and Cancer Center, Xiang'an Hospital of Xiamen University, School of Medicine, Xiamen University, Xiamen, China; Xiamen Research Center of Clinical Medicine in Breast & Thyroid Cancers, Xiang'an Hospital of Xiamen University, School of Medicine, Xiamen University, Xiamen, China; Xiamen Key Laboratory of Endocrine-Related Cancer Precision Medicine, Xiang'an Hospital of Xiamen University, School of Medicine, Xiamen University, Xiamen, China
- Jing-Wen Bai
- Fujian Key Laboratory of Precision Diagnosis and Treatment in Breast Cancer, Xiang'an Hospital of Xiamen University, School of Medicine, Xiamen University, Xiamen, China; Xiamen Research Center of Clinical Medicine in Breast & Thyroid Cancers, Xiang'an Hospital of Xiamen University, School of Medicine, Xiamen University, Xiamen, China; Xiamen Key Laboratory of Endocrine-Related Cancer Precision Medicine, Xiang'an Hospital of Xiamen University, School of Medicine, Xiamen University, Xiamen, China; Department of Oncology, Xiang'an Hospital of Xiamen University, School of Medicine, Xiamen University, Xiamen, China; Cancer Research Center, School of Medicine, Xiamen University, Xiamen, China
- Qiu Guo
- Department of Radiology, Xiang'an Hospital of Xiamen University, School of Medicine, Xiamen University, Xiamen, China
- Ke Ren
- Department of Radiology, Xiang'an Hospital of Xiamen University, School of Medicine, Xiamen University, Xiamen, China
- Guo-Jun Zhang
- Fujian Key Laboratory of Precision Diagnosis and Treatment in Breast Cancer, Xiang'an Hospital of Xiamen University, School of Medicine, Xiamen University, Xiamen, China; Department of Breast-Thyroid-Surgery and Cancer Center, Xiang'an Hospital of Xiamen University, School of Medicine, Xiamen University, Xiamen, China; Xiamen Research Center of Clinical Medicine in Breast & Thyroid Cancers, Xiang'an Hospital of Xiamen University, School of Medicine, Xiamen University, Xiamen, China; Xiamen Key Laboratory of Endocrine-Related Cancer Precision Medicine, Xiang'an Hospital of Xiamen University, School of Medicine, Xiamen University, Xiamen, China; Cancer Research Center, School of Medicine, Xiamen University, Xiamen, China
8
Visual ensemble selection of deep convolutional neural networks for 3D segmentation of breast tumors on dynamic contrast enhanced MRI. Eur Radiol 2023; 33:959-969. [PMID: 36074262] [PMCID: PMC9889463] [DOI: 10.1007/s00330-022-09113-7]
Abstract
OBJECTIVES To develop a visual ensemble selection of deep convolutional neural networks (CNN) for 3D segmentation of breast tumors using T1-weighted dynamic contrast-enhanced (T1-DCE) MRI. METHODS Multi-center 3D T1-DCE MRI scans (n = 141) were acquired for a cohort of patients diagnosed with locally advanced or aggressive breast cancer. Tumor lesions of 111 scans were equally divided between two radiologists and segmented for training. The additional 30 scans were segmented independently by both radiologists for testing. Three 3D U-Net models were trained using either post-contrast images or a combination of post-contrast and subtraction images fused at either the image or the feature level. Segmentation accuracy was evaluated quantitatively using the Dice similarity coefficient (DSC) and the Hausdorff distance (HD95) and scored qualitatively by a radiologist as excellent, useful, helpful, or unacceptable. Based on this score, a visual ensemble approach selecting the best segmentation among these three models was proposed. RESULTS The mean and standard deviation of DSC and HD95 between the two radiologists were 77.8 ± 10.0% and 5.2 ± 5.9 mm. Using the visual ensemble selection, a DSC of 78.1 ± 16.2% and an HD95 of 14.1 ± 40.8 mm were reached. The qualitative assessment was rated excellent in 50% of cases and excellent or useful in 77%. CONCLUSION Using subtraction images in addition to post-contrast images provided complementary information for 3D segmentation of breast lesions by CNN. A visual ensemble selection allowing the radiologist to select the optimal segmentation obtained by the three 3D U-Net models achieved results comparable to inter-radiologist agreement, yielding 77% of segmented volumes considered excellent or useful. KEY POINTS • Deep convolutional neural networks were developed using T1-weighted post-contrast and subtraction MRI to perform automated 3D segmentation of breast tumors.
• A visual ensemble selection allowing the radiologist to choose the best segmentation among the three 3D U-Net models outperformed each of the three models. • The visual ensemble selection provided clinically useful segmentations in 77% of cases, potentially allowing for a valuable reduction of the manual 3D segmentation workload for the radiologist and greatly facilitating quantitative studies on non-invasive biomarkers in breast MRI.
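The HD95 metric quoted in this abstract is the 95th percentile of symmetric surface-to-surface distances, a robust variant of the Hausdorff distance that discounts stray outlier voxels. A sketch under a common surface-voxel convention (the function, default spacing, and toy masks are assumptions for illustration, not the paper's code):

```python
import numpy as np
from scipy.ndimage import binary_erosion, distance_transform_edt

def hd95(a: np.ndarray, b: np.ndarray, spacing=(1.0, 1.0, 1.0)) -> float:
    """95th-percentile symmetric Hausdorff distance between binary masks."""
    sa = a & ~binary_erosion(a)  # surface voxels of a
    sb = b & ~binary_erosion(b)  # surface voxels of b
    d_ab = distance_transform_edt(~sb, sampling=spacing)[sa]  # a-surface -> b-surface
    d_ba = distance_transform_edt(~sa, sampling=spacing)[sb]  # b-surface -> a-surface
    return float(np.percentile(np.concatenate([d_ab, d_ba]), 95))

# Toy 3D masks: identical cubes give HD95 = 0
a = np.zeros((6, 6, 6), dtype=bool); a[1:4, 1:4, 1:4] = True
print(hd95(a, a.copy()))  # 0.0
```

Taking the 95th percentile instead of the maximum is what keeps the metric stable when a model produces a few isolated false-positive voxels far from the lesion.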
9
Fazekas S, Budai BK, Stollmayer R, Kaposi PN, Bérczi V. Artificial intelligence and neural networks in radiology – Basics that all radiology residents should know. IMAGING 2022. [DOI: 10.1556/1647.2022.00104]
Abstract
The field of Artificial Intelligence is developing at a high rate. In medicine, an enormous amount of data is created every day. Because images and reports are quantifiable, the field of radiology aspires to deliver better, more efficient clinical care. Artificial intelligence (AI) means the simulation of human intelligence by a system or machine. It has been developed to enable machines to "think": to learn, reason, predict, categorize, and solve problems concerning large amounts of data, and to make decisions more effectively than before. Different AI methods can help radiologists with pre-screening images and identifying features. In this review, we summarize the basic concepts needed to understand AI. As AI methods are expected to exceed the threshold for clinical usefulness soon, it will be inevitable to use AI in medicine in the near future.
Affiliation(s)
- Szuzina Fazekas
- Department of Radiology, Medical Imaging Centre, Faculty of Medicine, Semmelweis University, Budapest, Hungary
- Bettina Katalin Budai
- Department of Radiology, Medical Imaging Centre, Faculty of Medicine, Semmelweis University, Budapest, Hungary
- Róbert Stollmayer
- Department of Radiology, Medical Imaging Centre, Faculty of Medicine, Semmelweis University, Budapest, Hungary
- Pál Novák Kaposi
- Department of Radiology, Medical Imaging Centre, Faculty of Medicine, Semmelweis University, Budapest, Hungary
- Viktor Bérczi
- Department of Radiology, Medical Imaging Centre, Faculty of Medicine, Semmelweis University, Budapest, Hungary
10
deSouza NM, van der Lugt A, Deroose CM, Alberich-Bayarri A, Bidaut L, Fournier L, Costaridou L, Oprea-Lager DE, Kotter E, Smits M, Mayerhoefer ME, Boellaard R, Caroli A, de Geus-Oei LF, Kunz WG, Oei EH, Lecouvet F, Franca M, Loewe C, Lopci E, Caramella C, Persson A, Golay X, Dewey M, O'Connor JPB, deGraaf P, Gatidis S, Zahlmann G. Standardised lesion segmentation for imaging biomarker quantitation: a consensus recommendation from ESR and EORTC. Insights Imaging 2022; 13:159. [PMID: 36194301] [PMCID: PMC9532485] [DOI: 10.1186/s13244-022-01287-4]
Abstract
BACKGROUND Lesion/tissue segmentation on digital medical images enables biomarker extraction, image-guided therapy delivery, treatment response measurement, and training/validation for developing artificial intelligence algorithms and workflows. To ensure data reproducibility, criteria for standardised segmentation are critical but currently unavailable. METHODS A modified Delphi process initiated by the European Imaging Biomarker Alliance (EIBALL) of the European Society of Radiology (ESR) and the European Organisation for Research and Treatment of Cancer (EORTC) Imaging Group was undertaken. Three multidisciplinary task forces addressed modality and image acquisition, segmentation methodology itself, and standards and logistics. Devised survey questions were fed via a facilitator to expert participants. The 58 respondents to Round 1 were invited to participate in Rounds 2-4, and subsequent rounds were informed by the responses of previous rounds. RESULTS/CONCLUSIONS Items with ≥ 75% consensus are considered a recommendation. These include system performance certification; thresholds for image signal-to-noise, contrast-to-noise, and tumour-to-background ratios; spatial resolution; and artefact levels. Direct, iterative, and machine or deep learning reconstruction methods and the use of a mixture of CE-marked and verified research tools were agreed upon, and the use of specified reference standards and validation processes was considered essential. Operator training and refreshment were considered mandatory for clinical trials and clinical research. Items with 60-74% agreement require reporting (site-specific accreditation for clinical research, minimal pixel number within the segmented lesion, use of post-reconstruction algorithms, operator training refreshment for clinical practice).
Items with ≤ 60% agreement are outside current recommendations for segmentation (frequency of system performance tests, use of only CE-marked tools, board certification of operators, frequency of operator refresher training). Recommendations by anatomical area are also specified.
Affiliation(s)
- Nandita M deSouza
- Division of Radiotherapy and Imaging, The Institute of Cancer Research and Royal Marsden NHS Foundation Trust, London, UK
- Aad van der Lugt
- Department of Radiology and Nuclear Medicine, Erasmus MC, University Medical Center, Rotterdam, The Netherlands
- Christophe M Deroose
- Nuclear Medicine, University Hospitals Leuven, Leuven, Belgium; Nuclear Medicine and Molecular Imaging, Department of Imaging and Pathology, KU Leuven, Leuven, Belgium
- Luc Bidaut
- College of Science, University of Lincoln, Lincoln, LN6 7TS, UK
- Laure Fournier
- INSERM, Radiology Department, AP-HP, Hôpital Européen Georges Pompidou, Université de Paris, PARCC, 75015, Paris, France
- Lena Costaridou
- School of Medicine, University of Patras, University Campus, Rio, 26 500, Patras, Greece
- Daniela E Oprea-Lager
- Department of Radiology and Nuclear Medicine, Amsterdam UMC, Vrije Universiteit Amsterdam, Amsterdam, The Netherlands
- Elmar Kotter
- Department of Radiology, University Medical Center Freiburg, Freiburg, Germany
- Marion Smits
- Department of Radiology and Nuclear Medicine, Erasmus MC, University Medical Center, Rotterdam, The Netherlands
- Marius E Mayerhoefer
- Department of Biomedical Imaging and Image-Guided Therapy, Medical University of Vienna, Vienna, Austria; Memorial Sloan Kettering Cancer Center, New York, NY, USA
- Ronald Boellaard
- Department of Radiology and Nuclear Medicine, Amsterdam UMC, Vrije Universiteit Amsterdam, Amsterdam, The Netherlands
- Anna Caroli
- Department of Biomedical Engineering, Istituto di Ricerche Farmacologiche Mario Negri IRCCS, Bergamo, Italy
- Lioe-Fee de Geus-Oei
- Department of Radiology, Leiden University Medical Center, Leiden, The Netherlands; Biomedical Photonic Imaging Group, University of Twente, Enschede, The Netherlands
- Wolfgang G Kunz
- Department of Radiology, University Hospital, LMU Munich, Munich, Germany
- Edwin H Oei
- Department of Radiology and Nuclear Medicine, Erasmus MC, University Medical Center, Rotterdam, The Netherlands
- Frederic Lecouvet
- Department of Radiology, Institut de Recherche Expérimentale et Clinique (IREC), Cliniques Universitaires Saint Luc, Université Catholique de Louvain (UCLouvain), 10 Avenue Hippocrate, 1200, Brussels, Belgium
- Manuela Franca
- Department of Radiology, Centro Hospitalar Universitário do Porto, Instituto de Ciências Biomédicas de Abel Salazar, University of Porto, Porto, Portugal
- Christian Loewe
- Division of Cardiovascular and Interventional Radiology, Department for Bioimaging and Image-Guided Therapy, Medical University of Vienna, Vienna, Austria
- Egesta Lopci
- Nuclear Medicine, IRCCS - Humanitas Research Hospital, via Manzoni 56, Rozzano, MI, Italy
- Caroline Caramella
- Radiology Department, Hôpital Marie Lannelongue, Institut d'Oncologie Thoracique, Université Paris-Saclay, Le Plessis-Robinson, France
- Anders Persson
- Department of Radiology, and Department of Health, Medicine and Caring Sciences, Center for Medical Image Science and Visualization (CMIV), Linköping University, Linköping, Sweden
- Xavier Golay
- Queen Square Institute of Neurology, University College London, London, UK
- Marc Dewey
- Department of Radiology, Charité Universitätsmedizin Berlin, Berlin, Germany
- James P B O'Connor
- Division of Radiotherapy and Imaging, The Institute of Cancer Research and Royal Marsden NHS Foundation Trust, London, UK
- Pim de Graaf
- Department of Radiology and Nuclear Medicine, Amsterdam UMC, Vrije Universiteit Amsterdam, Amsterdam, The Netherlands
- Sergios Gatidis
- Department of Radiology, University of Tübingen, Tübingen, Germany
- Gudrun Zahlmann
- Radiological Society of North America (RSNA), Oak Brook, IL, USA