1. Bridge J, Meng Y, Zhu W, Fitzmaurice T, McCann C, Addison C, Wang M, Merritt C, Franks S, Mackey M, Messenger S, Sun R, Zhao Y, Zheng Y. Development and external validation of a mixed-effects deep learning model to diagnose COVID-19 from CT imaging. Front Med (Lausanne) 2023; 10:1113030. [PMID: 37680621; PMCID: PMC10481527; DOI: 10.3389/fmed.2023.1113030; Received: 11/30/2022; Accepted: 08/08/2023]
Abstract
Background The automatic analysis of medical images has the potential to improve diagnostic accuracy while reducing the strain on clinicians. Current methods for analyzing 3D imaging data, such as computerized tomography (CT), often treat each image slice independently, which may fail to model the relationships between slices appropriately. Methods Our proposed method uses a mixed-effects model within a deep learning framework to model the relationship between slices. We externally validated this method on a dataset from a different country and compared our results against other proposed methods. We evaluated the discrimination, calibration, and clinical usefulness of our model using a range of measures. Finally, we carried out a sensitivity analysis to demonstrate our method's robustness to noise and missing data. Results In the external geographic validation set, our model showed excellent performance, with an AUROC of 0.930 (95% CI: 0.914, 0.947) and a sensitivity, specificity, PPV, and NPV of 0.778 (0.720, 0.828), 0.882 (0.853, 0.908), 0.744 (0.686, 0.797), and 0.900 (0.872, 0.924), respectively, at the 0.5 probability cut-off. Our model also maintained good calibration in the external validation dataset, while other methods showed poor calibration. Conclusion Deep learning can reduce stress on healthcare systems by automatically screening CT imaging for COVID-19. Our method showed improved generalizability in external validation compared with previously published methods. However, deep learning models must be robustly assessed using various performance measures and externally validated in each setting. In addition, best practice guidelines for developing and reporting predictive models are vital for the safe adoption of such models.
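The random-intercept idea behind this abstract can be illustrated with a toy numpy sketch (an illustration only, not the authors' network; the closed-form shrinkage estimator and the sigmoid read-out are assumptions): slice-level scores from one scan are treated as repeated measures, and the scan-level random intercept shrinks the scan mean toward the population mean before a probability is produced.

```python
import numpy as np

def random_intercept_blup(slice_scores, mu, tau2, sigma2):
    """BLUP of the scan-level random intercept in y_ij = mu + u_i + e_ij,
    with u_i ~ N(0, tau2) and e_ij ~ N(0, sigma2).

    More slices per scan means less shrinkage toward the population mean mu."""
    n = len(slice_scores)
    shrinkage = n * tau2 / (n * tau2 + sigma2)  # lies in [0, 1)
    return shrinkage * (np.mean(slice_scores) - mu)

def scan_probability(slice_scores, mu=0.0, tau2=1.0, sigma2=1.0):
    """Scan-level probability: sigmoid of the population mean plus the BLUP."""
    u = random_intercept_blup(slice_scores, mu, tau2, sigma2)
    return 1.0 / (1.0 + np.exp(-(mu + u)))
```

With four slice scores all equal to 1 and mu = 0, tau2 = sigma2 = 1, the shrinkage factor is 4/5, so the intercept estimate is 0.8 rather than the raw scan mean of 1; a scan with more slices would be shrunk less.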
Affiliation(s)
- Joshua Bridge
  - Institute of Life Course and Medical Sciences, University of Liverpool, Liverpool, United Kingdom
- Yanda Meng
  - Institute of Life Course and Medical Sciences, University of Liverpool, Liverpool, United Kingdom
- Wenyue Zhu
  - Institute of Life Course and Medical Sciences, University of Liverpool, Liverpool, United Kingdom
- Thomas Fitzmaurice
  - Institute of Life Course and Medical Sciences, University of Liverpool, Liverpool, United Kingdom
  - Department of Respiratory Medicine, Liverpool Heart and Chest Hospital NHS Foundation Trust, Liverpool, United Kingdom
- Caroline McCann
  - Department of Radiology, Liverpool Heart and Chest Hospital NHS Foundation Trust, Liverpool, United Kingdom
- Cliff Addison
  - Advanced Research Computing, University of Liverpool, Liverpool, United Kingdom
- Manhui Wang
  - Advanced Research Computing, University of Liverpool, Liverpool, United Kingdom
- Stu Franks
  - Alces Flight Limited, Bicester, United Kingdom
- Renrong Sun
  - Department of Radiology, Hubei Provincial Hospital of Integrated Chinese and Western Medicine, Hubei University of Chinese Medicine, Wuhan, China
- Yitian Zhao
  - Cixi Institute of Biomedical Engineering, Ningbo Institute of Materials Technology and Engineering, Chinese Academy of Sciences, Ningbo, China
- Yalin Zheng
  - Institute of Life Course and Medical Sciences, University of Liverpool, Liverpool, United Kingdom
2. Wilson KJ, Dhalla A, Meng Y, Tu Z, Zheng Y, Mhango P, Seydel KB, Beare NAV. Retinal imaging technologies in cerebral malaria: a systematic review. Malar J 2023; 22:139. [PMID: 37101295; PMCID: PMC10131356; DOI: 10.1186/s12936-023-04566-7; Received: 12/15/2022; Accepted: 04/20/2023]
Abstract
BACKGROUND Cerebral malaria (CM) continues to present a major health challenge, particularly in sub-Saharan Africa. CM is associated with a characteristic malarial retinopathy (MR) with diagnostic and prognostic significance. Advances in retinal imaging have allowed researchers to better characterize the changes seen in MR and to make inferences about the pathophysiology of the disease. This study aimed to explore the role of retinal imaging in diagnosis and prognostication in CM, to establish insights into the pathophysiology of CM from retinal imaging, and to identify future research directions. METHODS The literature was systematically reviewed using the African Index Medicus, MEDLINE, Scopus and Web of Science databases. A total of 35 full texts were included in the final analysis. The descriptive nature and heterogeneity of the included studies precluded meta-analysis. RESULTS The available research clearly shows that retinal imaging is useful both as a clinical tool for the assessment of CM and as a scientific instrument to aid understanding of the condition. Modalities that can be performed at the bedside, such as fundus photography and optical coherence tomography, are best positioned to take advantage of artificial intelligence-assisted image analysis. This could unlock the clinical potential of retinal imaging for real-time diagnosis in low-resource environments, where extensively trained clinicians may be few in number, and for guiding adjunctive therapies as they develop. CONCLUSIONS Further research into retinal imaging technologies in CM is justified. In particular, co-ordinated interdisciplinary work shows promise in unpicking the pathophysiology of a complex disease.
Affiliation(s)
- Kyle J Wilson
  - Department of Eye & Vision Sciences, University of Liverpool, Liverpool, UK
  - Malawi-Liverpool-Wellcome Trust, Blantyre, Malawi
- Amit Dhalla
  - Department of Ophthalmology, Sheffield Teaching Hospitals, Sheffield, UK
- Yanda Meng
  - Department of Eye & Vision Sciences, University of Liverpool, Liverpool, UK
- Zhanhan Tu
  - School of Psychology and Vision Sciences, College of Life Science, The University of Leicester Ulverscroft Eye Unit, Robert Kilpatrick Clinical Sciences Building, Leicester Royal Infirmary, Leicester, UK
- Yalin Zheng
  - Department of Eye & Vision Sciences, University of Liverpool, Liverpool, UK
  - St. Paul's Eye Unit, Royal Liverpool University Hospitals, Liverpool, UK
  - Liverpool Centre for Cardiovascular Science, University of Liverpool and Liverpool Heart and Chest Hospital, Liverpool, UK
- Priscilla Mhango
  - Department of Ophthalmology, Kamuzu University of Health Sciences, Blantyre, Malawi
- Karl B Seydel
  - College of Osteopathic Medicine, Michigan State University, East Lansing, MI, USA
  - Blantyre Malaria Project, Kamuzu University of Health Sciences, Blantyre, Malawi
- Nicholas A V Beare
  - Department of Eye & Vision Sciences, University of Liverpool, Liverpool, UK
  - St. Paul's Eye Unit, Royal Liverpool University Hospitals, Liverpool, UK
3. Zhu W, Kolamunnage-Dona R, Zheng Y, Harding S, Czanner G. Spatial and spatio-temporal statistical analyses of retinal images: a review of methods and applications. BMJ Open Ophthalmol 2020; 5:e000479. [PMID: 32537517; PMCID: PMC7264837; DOI: 10.1136/bmjophth-2020-000479; Received: 03/21/2020; Accepted: 04/28/2020]
Abstract
Background Clinical research and management of retinal diseases depend greatly on the interpretation of retinal images, which are often collected longitudinally. Retinal images provide spatial context, namely the location of specific pathologies within the retina, and longitudinally collected images can show how clinical events at one point affect the retina over time. In this review, we aimed to assess statistical approaches to spatial and spatio-temporal data in retinal images. We also reviewed the spatio-temporal modelling approaches used for other types of medical images. Methods We conducted a comprehensive literature review of both spatial and spatio-temporal approaches and non-spatial approaches to the statistical analysis of retinal images. The key methodological and clinical characteristics of the published papers were extracted. We also investigated whether clinical variables and spatial correlation were accounted for in the analysis. Results Thirty-four papers that included retinal imaging data were identified for full-text information extraction. Only 11 (32.4%) papers used spatial or spatio-temporal statistical methods to analyse images; the remaining 23 (67.6%) used non-spatial methods. Twenty-eight (82.4%) papers reported images collected cross-sectionally, while 6 (17.6%) reported analyses of images collected longitudinally. In imaging areas outside of ophthalmology, 19 papers using spatio-temporal analysis were identified, and multiple statistical methods were recorded. Conclusions Future statistical analyses of retinal images would benefit from clearly defining and reporting the spatial distributions studied, reporting the spatial correlations, combining imaging data with clinical variables where available, and clearly stating the software or packages used.
Affiliation(s)
- Wenyue Zhu
  - Department of Eye and Vision Science, Institute of Life Course and Medical Sciences, University of Liverpool, a member of Liverpool Health Partners, Liverpool, UK
- Ruwanthi Kolamunnage-Dona
  - Department of Health Data Science, Institute of Population Health Sciences, University of Liverpool, a member of Liverpool Health Partners, Liverpool, UK
- Yalin Zheng
  - Department of Eye and Vision Science, Institute of Life Course and Medical Sciences, University of Liverpool, a member of Liverpool Health Partners, Liverpool, UK
  - St Paul's Eye Unit, Liverpool University Hospitals Foundation Trust, a member of Liverpool Health Partners, Liverpool, UK
- Simon Harding
  - Department of Eye and Vision Science, Institute of Life Course and Medical Sciences, University of Liverpool, a member of Liverpool Health Partners, Liverpool, UK
  - St Paul's Eye Unit, Liverpool University Hospitals Foundation Trust, a member of Liverpool Health Partners, Liverpool, UK
- Gabriela Czanner
  - Department of Eye and Vision Science, Institute of Life Course and Medical Sciences, University of Liverpool, a member of Liverpool Health Partners, Liverpool, UK
  - St Paul's Eye Unit, Liverpool University Hospitals Foundation Trust, a member of Liverpool Health Partners, Liverpool, UK
  - Department of Applied Mathematics, Liverpool John Moores University, Liverpool, UK
4. Spatial Linear Mixed Effects Modelling for OCT Images: SLME Model. J Imaging 2020; 6:jimaging6060044. [PMID: 34460590; PMCID: PMC8321139; DOI: 10.3390/jimaging6060044; Received: 04/28/2020; Accepted: 05/30/2020]
Abstract
Much recent research focuses on how to make disease detection more accurate as well as “slimmer”, i.e., allowing analysis with smaller datasets. Explanatory models are a topic of active research because they explain how the data are generated. We propose a spatial explanatory modelling approach that combines optical coherence tomography (OCT) retinal imaging data with clinical information. Our model consists of a spatial linear mixed effects inference framework, which innovatively models the spatial topography of key information via mixed effects and spatial error structures, thus effectively modelling the shape of the thickness map. We show that our spatial linear mixed effects (SLME) model outperforms traditional analysis-of-variance approaches in the analysis of Heidelberg OCT retinal thickness data from a prospective observational study involving 300 participants with diabetes and 50 age-matched controls. Our SLME model has higher power for detecting differences between disease groups, and it shows where the shape of retinal thickness profiles differs between the eyes of participants with diabetes and those of healthy controls. In simulated data, the SLME model demonstrates how incorporating spatial correlations can increase the accuracy of statistical inferences. This model can aid understanding of the progression of retinal thickness changes in diabetic maculopathy and help clinicians plan effective treatment early. It can be extended to disease monitoring and prognosis in other diseases and with other imaging technologies.
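The benefit of modelling spatial error structure can be sketched with generalized least squares in numpy (a simplified stand-in for the SLME framework described above, not the authors' implementation; the exponential correlogram and 1-D sector coordinates are assumptions for illustration):

```python
import numpy as np

def exp_corr(coords, rho):
    """Exponential spatial correlation: corr(i, j) = rho ** distance(i, j).

    Nearby retinal sectors are more strongly correlated than distant ones."""
    d = np.abs(coords[:, None] - coords[None, :])
    return rho ** d

def gls(X, y, V):
    """Generalized least squares: beta = (X' V^-1 X)^-1 X' V^-1 y,
    where V encodes the spatial error covariance across sectors."""
    Vi = np.linalg.inv(V)
    return np.linalg.solve(X.T @ Vi @ X, X.T @ Vi @ y)
```

With rho = 0 the correlation matrix reduces to the identity and GLS coincides with ordinary least squares; with rho > 0, neighbouring sectors are down-weighted for the redundant information they share, which is the mechanism by which accounting for spatial correlation sharpens inference.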
5. MacCormick IJC, Williams BM, Zheng Y, Li K, Al-Bander B, Czanner S, Cheeseman R, Willoughby CE, Brown EN, Spaeth GL, Czanner G. Accurate, fast, data efficient and interpretable glaucoma diagnosis with automated spatial analysis of the whole cup to disc profile. PLoS One 2019; 14:e0209409. [PMID: 30629635; PMCID: PMC6328156; DOI: 10.1371/journal.pone.0209409; Received: 08/10/2018; Accepted: 12/05/2018]
Abstract
Background Glaucoma is the leading cause of irreversible blindness worldwide. It is a heterogeneous group of conditions with a common optic neuropathy and associated loss of peripheral vision. Both over- and under-diagnosis carry high costs in terms of healthcare spending and preventable blindness. The characteristic clinical feature of glaucoma is asymmetrical optic nerve rim narrowing, which is difficult for humans to quantify reliably. Strategies to improve and automate optic disc assessment are therefore needed to prevent sight loss. Methods We developed a novel glaucoma detection algorithm that segments and analyses colour photographs to quantify optic nerve rim consistency around the whole disc at 15-degree intervals. This provides a profile of the cup/disc ratio, in contrast to the vertical cup/disc ratio in common use. We introduce a spatial probabilistic model to account for optic nerve shape, and then use this model to derive a disc deformation index and a decision rule for glaucoma. We tested our algorithm on two separate image datasets (ORIGA and RIM-ONE). Results The spatial algorithm accurately distinguished glaucomatous from healthy discs on internal and external validation (AUROC 99.6% and 91.0%, respectively). It achieves this using a dataset 100 times smaller than that required for deep learning algorithms, is flexible to the type of cup and disc segmentation (automated or semi-automated), utilises images with missing data, and is correlated with disc size (p = 0.02) and the rim-to-disc ratio at the narrowest rim (p < 0.001, in external validation). Discussion The spatial probabilistic algorithm is highly accurate and highly data efficient, and it extends to any imaging hardware in which the boundaries of cup and disc can be segmented, making it particularly applicable to research into disease mechanisms and to glaucoma screening in low-resource settings.
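The whole-profile sampling step can be sketched in numpy (a hypothetical reimplementation of the sampling only; the segmentation, spatial probabilistic model, and decision rule are omitted): given cup and disc boundary radii around the disc centre, the cup/disc ratio is read off every 15 degrees.

```python
import numpy as np

def cdr_profile(cup_radii, disc_radii, step_deg=15):
    """Cup/disc ratio sampled every `step_deg` degrees around the disc centre.

    cup_radii, disc_radii: length-360 arrays of radial distances from the
    disc centre to the segmented cup and disc boundaries (1-degree sampling).
    Returns 360 // step_deg ratios, e.g. 24 values for 15-degree intervals."""
    angles = np.arange(0, 360, step_deg)
    return cup_radii[angles] / disc_radii[angles]
```

A healthy disc with a uniformly small cup yields a flat profile, whereas localized glaucomatous rim narrowing appears as peaks in the 24-point profile that a single vertical cup/disc ratio would average away.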
Affiliation(s)
- Ian J. C. MacCormick
  - Department of Eye & Vision Science, Institute of Ageing and Chronic Disease, University of Liverpool, Liverpool, United Kingdom
  - Centre for Clinical Brain Sciences, University of Edinburgh, Chancellor's Building, Edinburgh, United Kingdom
- Bryan M. Williams
  - Department of Eye & Vision Science, Institute of Ageing and Chronic Disease, University of Liverpool, Liverpool, United Kingdom
- Yalin Zheng
  - Department of Eye & Vision Science, Institute of Ageing and Chronic Disease, University of Liverpool, Liverpool, United Kingdom
  - St Paul’s Eye Unit, Royal Liverpool University Hospitals NHS Trust, Liverpool, United Kingdom
- Kun Li
  - Medical Information Engineering Department, Taishan Medical School, TaiAn City, ShanDong Province, China
- Baidaa Al-Bander
  - Department of Electrical Engineering and Electronics, University of Liverpool, Brownlow Hill, Liverpool, United Kingdom
- Silvester Czanner
  - School of Computing, Mathematics and Digital Technology, Faculty of Science and Engineering, Manchester Metropolitan University, Manchester, United Kingdom
- Rob Cheeseman
  - St Paul’s Eye Unit, Royal Liverpool University Hospitals NHS Trust, Liverpool, United Kingdom
- Colin E. Willoughby
  - Biomedical Sciences Research Institute, Faculty of Life & Health Sciences, Ulster University, Coleraine, Northern Ireland
  - Department of Ophthalmology, Royal Victoria Hospital, Belfast, Northern Ireland
- Emery N. Brown
  - Institute for Medical Engineering and Science, Massachusetts Institute of Technology, Cambridge, Massachusetts, United States of America
  - Anesthesia, Critical Care and Pain Medicine, Massachusetts General Hospital, Harvard Medical School, Boston, Massachusetts, United States of America
- George L. Spaeth
  - Glaucoma Research Center, Wills Eye Hospital, Philadelphia, Pennsylvania, United States of America
- Gabriela Czanner
  - Department of Eye & Vision Science, Institute of Ageing and Chronic Disease, University of Liverpool, Liverpool, United Kingdom
  - St Paul’s Eye Unit, Royal Liverpool University Hospitals NHS Trust, Liverpool, United Kingdom
  - Department of Applied Mathematics, Faculty of Engineering and Technology, Liverpool John Moores University, Liverpool, United Kingdom