1. Gonçalves MB, Nakayama LF, Ferraz D, Faber H, Korot E, Malerbi FK, Regatieri CV, Maia M, Celi LA, Keane PA, Belfort R. Image quality assessment of retinal fundus photographs for diabetic retinopathy in the machine learning era: a review. Eye (Lond) 2024; 38:426-433. PMID: 37667028; PMCID: PMC10858054; DOI: 10.1038/s41433-023-02717-3.
Abstract
This study evaluated the image quality assessment (IQA) practices and quality criteria employed in publicly available datasets for diabetic retinopathy (DR). A literature search identified 20 datasets for inclusion in the analysis. Of these, 12 mentioned performing IQA, but only eight specified the quality criteria used. The reported criteria varied widely across datasets, and accessing the information was often challenging. The findings highlight the importance of IQA for AI model development while emphasizing the need for clear and accessible reporting of IQA information. Given the importance of IQA for developing, validating, and implementing deep learning (DL) algorithms, this information should be reported in a clear, specific, and accessible way whenever possible; at the same time, strict data quality standards must not limit data sharing. Automated quality assessments are a valid alternative to the traditional manual labeling process, and quality standards should be determined according to population characteristics, clinical use, and research purpose.
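As one concrete, purely hypothetical example of the automated quality assessments the review discusses, a naive blur check can score photographs by the variance of a Laplacian filter; the function, images, and any threshold one might apply are illustrative only and are not drawn from the reviewed datasets.

```python
# Illustrative sketch, not the review's method: flag blurry images by the
# variance of a 4-neighbour Laplacian (low variance = little fine detail).
import numpy as np

def laplacian_variance(gray: np.ndarray) -> float:
    """Variance of a 4-neighbour Laplacian over a 2D grayscale array."""
    lap = (-4 * gray[1:-1, 1:-1]
           + gray[:-2, 1:-1] + gray[2:, 1:-1]
           + gray[1:-1, :-2] + gray[1:-1, 2:])
    return float(lap.var())

rng = np.random.default_rng(0)
sharp = rng.random((64, 64))     # synthetic image with high-frequency detail
blurry = np.full((64, 64), 0.5)  # flat, detail-free image

score_sharp = laplacian_variance(sharp)
score_blurry = laplacian_variance(blurry)
```

A real pipeline would tune a rejection threshold per camera and population, as the review's conclusion suggests.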
Affiliation(s)
- Mariana Batista Gonçalves
- Department of Ophthalmology, Sao Paulo Federal University, São Paulo, SP, Brazil
- Instituto Paulista de Estudos e Pesquisas em Oftalmologia, IPEPO, Vision Institute, São Paulo, SP, Brazil
- NIHR Biomedical Research Centre for Ophthalmology, Moorfields Eye Hospital NHS Foundation Trust, and UCL Institute of Ophthalmology, London, UK
- Luis Filipe Nakayama
- Department of Ophthalmology, Sao Paulo Federal University, São Paulo, SP, Brazil
- Massachusetts Institute of Technology, Laboratory for Computational Physiology, Cambridge, MA, USA
- Daniel Ferraz
- Department of Ophthalmology, Sao Paulo Federal University, São Paulo, SP, Brazil
- Instituto Paulista de Estudos e Pesquisas em Oftalmologia, IPEPO, Vision Institute, São Paulo, SP, Brazil
- NIHR Biomedical Research Centre for Ophthalmology, Moorfields Eye Hospital NHS Foundation Trust, and UCL Institute of Ophthalmology, London, UK
- Hanna Faber
- Department of Ophthalmology, University Medical Center Hamburg-Eppendorf, Hamburg, Germany
- Department of Ophthalmology, University of Tuebingen, Tuebingen, Germany
- Edward Korot
- Retina Specialists of Michigan, Grand Rapids, MI, USA
- Stanford University Byers Eye Institute, Palo Alto, CA, USA
- Mauricio Maia
- Department of Ophthalmology, Sao Paulo Federal University, São Paulo, SP, Brazil
- Leo Anthony Celi
- Massachusetts Institute of Technology, Laboratory for Computational Physiology, Cambridge, MA, USA
- Harvard TH Chan School of Public Health, Department of Biostatistics, Boston, MA, USA
- Beth Israel Deaconess Medical Center, Department of Medicine, Boston, MA, USA
- Pearse A Keane
- NIHR Biomedical Research Centre for Ophthalmology, Moorfields Eye Hospital NHS Foundation Trust, and UCL Institute of Ophthalmology, London, UK
- Rubens Belfort
- Department of Ophthalmology, Sao Paulo Federal University, São Paulo, SP, Brazil
- Instituto Paulista de Estudos e Pesquisas em Oftalmologia, IPEPO, Vision Institute, São Paulo, SP, Brazil
2. Chia MA, Hersch F, Sayres R, Bavishi P, Tiwari R, Keane PA, Turner AW. Validation of a deep learning system for the detection of diabetic retinopathy in Indigenous Australians. Br J Ophthalmol 2024; 108:268-273. PMID: 36746615; PMCID: PMC10850716; DOI: 10.1136/bjo-2022-322237.
Abstract
BACKGROUND/AIMS Deep learning systems (DLSs) for diabetic retinopathy (DR) detection show promising results but can underperform in racial and ethnic minority groups; therefore, external validation within these populations is critical for health equity. This study evaluates the performance of a DLS for DR detection among Indigenous Australians, an understudied ethnic group who suffer disproportionately from DR-related blindness. METHODS We performed a retrospective external validation study comparing the performance of a DLS against a retina specialist for the detection of more-than-mild DR (mtmDR), vision-threatening DR (vtDR) and all-cause referable DR. The validation set consisted of 1682 consecutive, single-field, macula-centred retinal photographs from 864 patients with diabetes (mean age 54.9 years, 52.4% women) at an Indigenous primary care service in Perth, Australia. Three-person adjudication by a panel of specialists served as the reference standard. RESULTS For mtmDR detection, the sensitivity of the DLS was superior to that of the retina specialist (98.0% (95% CI 96.5 to 99.4) vs 87.1% (95% CI 83.6 to 90.6), McNemar's test p<0.001), with a small reduction in specificity (95.1% (95% CI 93.6 to 96.4) vs 97.0% (95% CI 95.9 to 98.0), p=0.006). For vtDR, the DLS's sensitivity was again superior (96.2% (95% CI 93.4 to 98.6) vs 84.4% (95% CI 79.7 to 89.2), p<0.001), with a slight drop in specificity (95.8% (95% CI 94.6 to 96.9) vs 97.8% (95% CI 96.9 to 98.6), p=0.002). For all-cause referable DR, there was a substantial increase in sensitivity (93.7% (95% CI 91.8 to 95.5) vs 74.4% (95% CI 71.1 to 77.5), p<0.001) and a smaller reduction in specificity (91.7% (95% CI 90.0 to 93.3) vs 96.3% (95% CI 95.2 to 97.4), p<0.001). CONCLUSION The DLS showed improved sensitivity and similar specificity compared with a retina specialist for DR detection, demonstrating its potential to support DR screening among Indigenous Australians, an underserved population with a high burden of diabetic eye disease.
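The paired comparisons in this abstract rely on McNemar's test, which uses only the discordant pairs (cases where exactly one grader was correct). A minimal stdlib sketch with made-up counts, not the study's data:

```python
# Hedged illustration: exact two-sided McNemar p-value from discordant counts.
# b = cases the DLS got right and the specialist missed; c = the reverse.
# The counts below are invented for demonstration only.
from math import comb

def mcnemar_exact(b: int, c: int) -> float:
    """Two-sided exact (binomial) McNemar p-value from discordant pairs b, c."""
    n, k = b + c, min(b, c)
    tail = sum(comb(n, i) for i in range(k + 1)) / 2 ** n
    return min(1.0, 2 * tail)

b, c = 45, 5                 # hypothetical discordant counts
p_value = mcnemar_exact(b, c)
```

With a large imbalance between the discordant counts, the p-value is far below 0.001, which is the kind of result the abstract reports.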
Affiliation(s)
- Mark A Chia
- Institute of Ophthalmology, University College London, London, UK
- Moorfields Eye Hospital NHS Foundation Trust, London, UK
- Lions Outback Vision, Lions Eye Institute, Nedlands, Western Australia, Australia
- Pearse A Keane
- Institute of Ophthalmology, University College London, London, UK
- Moorfields Eye Hospital NHS Foundation Trust, London, UK
- Angus W Turner
- Lions Outback Vision, Lions Eye Institute, Nedlands, Western Australia, Australia
- Centre for Ophthalmology and Visual Science, The University of Western Australia, Nedlands, Western Australia, Australia
3. Widner K, Virmani S, Krause J, Nayar J, Tiwari R, Pedersen ER, Jeji D, Hammel N, Matias Y, Corrado GS, Liu Y, Peng L, Webster DR. Lessons learned from translating AI from development to deployment in healthcare. Nat Med 2023. PMID: 37248297; DOI: 10.1038/s41591-023-02293-9.
Affiliation(s)
- Yun Liu
- Google Health, Palo Alto, CA, USA
- Lily Peng
- Google Health, Palo Alto, CA, USA
- Verily, South San Francisco, CA, USA
4. Brady CJ, Cockrell RC, Aldrich LR, Wolle MA, West SK. A Virtual Reading Center Model Using Crowdsourcing to Grade Photographs for Trachoma: Validation Study. J Med Internet Res 2023; 25:e41233. PMID: 37023420; PMCID: PMC10132003; DOI: 10.2196/41233.
Abstract
BACKGROUND As trachoma is eliminated, skilled field graders become less adept at correctly identifying active disease (trachomatous inflammation-follicular [TF]). Deciding whether trachoma has been eliminated from a district, or whether treatment strategies need to be continued or reinstated, is of critical public health importance. Telemedicine solutions require both connectivity, which can be poor in the resource-limited regions where trachoma occurs, and accurate grading of the images. OBJECTIVE Our purpose was to develop and validate a cloud-based "virtual reading center" (VRC) model using crowdsourcing for image interpretation. METHODS The Amazon Mechanical Turk (AMT) platform was used to recruit lay graders to interpret 2299 gradable images from a prior field trial of a smartphone-based camera system. Each image received 7 grades at US $0.05 per grade in this VRC. The resulting dataset was divided into training and test sets to internally validate the VRC. In the training set, crowdsourcing scores were summed, and the optimal raw score cutoff was chosen to optimize kappa agreement and the resulting prevalence of TF. The best method was then applied to the test set, and the sensitivity, specificity, kappa, and TF prevalence were calculated. RESULTS In this trial, over 16,000 grades were rendered in just over 60 minutes for US $1098, including AMT fees. After choosing an AMT raw score cut point to bring kappa near the World Health Organization (WHO)-endorsed level of 0.7 (with a simulated TF prevalence of 40%), crowdsourcing was 95% sensitive and 87% specific for TF in the training set, with a kappa of 0.797. All 196 crowdsourced-positive images received a skilled overread to mimic a tiered reading center; specificity improved to 99%, while sensitivity remained above 78%. Kappa for the entire sample improved from 0.162 to 0.685 with overreads, and the skilled grader burden was reduced by over 80%. This tiered VRC model was then applied to the test set and produced a sensitivity of 99% and a specificity of 76%, with a kappa of 0.775 in the entire set. The prevalence estimated by the VRC was 2.70% (95% CI 1.84%-3.80%), compared with the ground truth prevalence of 2.87% (95% CI 1.98%-4.01%). CONCLUSIONS A VRC model using crowdsourcing as a first pass, with skilled grading of positive images, identified TF rapidly and accurately in a low-prevalence setting. These findings support further validation of a VRC and crowdsourcing for image grading and estimation of trachoma prevalence from field-acquired images, although further prospective field testing is required to determine whether diagnostic characteristics are acceptable in real-world surveys with low disease prevalence.
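The cutoff-selection step in the METHODS above can be sketched as follows. This is an illustrative reconstruction with toy data, not the study's code: each image's raw score is the number of lay graders (out of 7) calling it TF-positive, and the cutoff is chosen to maximize Cohen's kappa against expert labels.

```python
# Hedged sketch of summed-score cutoff selection; all data below are toy values.
def cohen_kappa(y_true, y_pred):
    """Cohen's kappa for binary labels (chance-corrected agreement)."""
    n = len(y_true)
    po = sum(t == p for t, p in zip(y_true, y_pred)) / n   # observed agreement
    p1t, p1p = sum(y_true) / n, sum(y_pred) / n            # positive rates
    pe = p1t * p1p + (1 - p1t) * (1 - p1p)                 # chance agreement
    return (po - pe) / (1 - pe)

def best_cutoff(raw_scores, expert_labels, n_graders=7):
    """Scan raw-score cutoffs; return the (cutoff, kappa) best matching experts."""
    best = (0, -1.0)
    for cutoff in range(1, n_graders + 1):
        preds = [int(s >= cutoff) for s in raw_scores]
        kappa = cohen_kappa(expert_labels, preds)
        if kappa > best[1]:
            best = (cutoff, kappa)
    return best

# Toy data: raw score = number of lay graders (of 7) calling the image positive
scores = [0, 1, 6, 7, 2, 5, 0, 7, 3, 6]
experts = [0, 0, 1, 1, 0, 1, 0, 1, 0, 1]
cutoff, kappa = best_cutoff(scores, experts)
```

The study additionally constrained the cutoff so that the implied TF prevalence stayed plausible; that constraint is omitted here for brevity.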
Affiliation(s)
- Christopher J Brady
- Division of Ophthalmology, Department of Surgery, Larner College of Medicine at The University of Vermont, Burlington, VT, United States
- R Chase Cockrell
- Division of Surgical Research, Department of Surgery, Larner College of Medicine at The University of Vermont, Burlington, VT, United States
- Lindsay R Aldrich
- Larner College of Medicine at The University of Vermont, Burlington, VT, United States
- Meraf A Wolle
- Dana Center for Preventive Ophthalmology, Wilmer Eye Institute, Baltimore, MD, United States
- Sheila K West
- Dana Center for Preventive Ophthalmology, Wilmer Eye Institute, Baltimore, MD, United States
5. Detecting glaucoma with only OCT: Implications for the clinic, research, screening, and AI development. Prog Retin Eye Res 2022; 90:101052. PMID: 35216894; DOI: 10.1016/j.preteyeres.2022.101052.
Abstract
A method for detecting glaucoma based only on optical coherence tomography (OCT) is of potential value for routine clinical decisions, for inclusion criteria for research studies and trials, for large-scale clinical screening, as well as for the development of artificial intelligence (AI) decision models. Recent work suggests that the OCT probability (p-) maps, also known as deviation maps, can play a key role in an OCT-based method. However, artifacts seen on the p-maps of healthy control eyes can resemble patterns of damage due to glaucoma. We document in section 2 that these glaucoma-like artifacts are relatively common and are probably due to normal anatomical variations in healthy eyes. We also introduce a simple anatomical artifact model based upon known anatomical variations to help distinguish these artifacts from actual glaucomatous damage. In section 3, we apply this model to an OCT-based method for detecting glaucoma that starts with an examination of the retinal nerve fiber layer (RNFL) p-map. While this method requires a judgment by the clinician, sections 4 and 5 describe automated methods that do not. In section 4, the simple model helps explain the relatively poor performance of commonly employed summary statistics, including circumpapillary RNFL thickness. In section 5, the model helps account for the success of an AI deep learning model, which in turn validates our focus on the RNFL p-map. Finally, in section 6 we consider the implications of OCT-based methods for the clinic, research, screening, and the development of AI models.
6. Dee EC, Yu RC, Celi LA, Nehal US. AIM and Business Models of Healthcare. Artif Intell Med 2022. DOI: 10.1007/978-3-030-64573-1_247.
7. Trustworthy AI: Closing the gap between development and integration of AI systems in ophthalmic practice. Prog Retin Eye Res 2021; 90:101034. PMID: 34902546; DOI: 10.1016/j.preteyeres.2021.101034.
Abstract
An increasing number of artificial intelligence (AI) systems are being proposed in ophthalmology, motivated by the variety and amount of clinical and imaging data, as well as their potential benefits at the different stages of patient care. Despite achieving performance close or even superior to that of experts, there is a critical gap between the development and the integration of AI systems in ophthalmic practice. This work focuses on the importance of trustworthy AI in closing that gap. We identify the main aspects and challenges that need to be considered along the AI design pipeline so as to generate systems that meet the requirements to be deemed trustworthy, including those concerning accuracy, resiliency, reliability, safety, and accountability. We elaborate on mechanisms and considerations to address those challenges, and define the roles and responsibilities of the different stakeholders involved in AI for ophthalmic care: AI developers, reading centers, healthcare providers, healthcare institutions, ophthalmological societies and working groups or committees, patients, regulatory bodies, and payers. Generating trustworthy AI is not the responsibility of a sole stakeholder; there is a pressing need for a collaborative approach in which the different stakeholders are represented along the AI design pipeline, from the definition of the intended use to post-market surveillance after regulatory approval. This work contributes to establishing such multi-stakeholder interaction and identifies the main action points to be taken so that the potential benefits of AI reach real-world ophthalmic settings.
8. A Survey of Domain Knowledge Elicitation in Applied Machine Learning. Multimodal Technologies and Interaction 2021. DOI: 10.3390/mti5120073.
Abstract
Eliciting knowledge from domain experts can play an important role throughout the machine learning process, from correctly specifying the task to evaluating model results. However, knowledge elicitation is also fraught with challenges. In this work, we consider why and how machine learning researchers elicit knowledge from experts in the model development process. We develop a taxonomy to characterize elicitation approaches according to the elicitation goal, elicitation target, elicitation process, and use of elicited knowledge. We analyze the elicitation trends observed in 28 papers with this taxonomy and identify opportunities for adding rigor to these elicitation approaches. We suggest future directions for research in elicitation for machine learning by highlighting avenues for further exploration and drawing on what we can learn from elicitation research in other fields.
9. Korot E, Gonçalves MB, Khan SM, Struyven R, Wagner SK, Keane PA. Clinician-driven artificial intelligence in ophthalmology: resources enabling democratization. Curr Opin Ophthalmol 2021; 32:445-451. PMID: 34265784; DOI: 10.1097/icu.0000000000000785.
Abstract
PURPOSE OF REVIEW This article aims to discuss the current state of resources enabling the democratization of artificial intelligence (AI) in ophthalmology. RECENT FINDINGS Open datasets, efficient labeling techniques, code-free automated machine learning (AutoML) and cloud-based platforms for deployment are resources that enable clinicians with scarce resources to drive their own AI projects. SUMMARY Clinicians are the use-case experts who are best suited to drive AI projects tackling patient-relevant outcome measures. Taken together, open datasets, efficient labeling techniques, code-free AutoML and cloud platforms break the barriers for clinician-driven AI. As AI becomes increasingly democratized through such tools, clinicians and patients stand to benefit greatly.
Affiliation(s)
- Edward Korot
- Stanford University Byers Eye Institute, Palo Alto, California, USA
- Moorfields Eye Hospital, London, UK
- Mariana B Gonçalves
- Moorfields Eye Hospital, London, UK
- Federal University of São Paulo (UNIFESP), São Paulo, Brazil
- Vision Institute (IPEPO), São Paulo, Brazil
- Robbert Struyven
- Moorfields Eye Hospital, London, UK
- University College London, London, UK
10. Morya AK, Gowdar J, Kaushal A, Makwana N, Biswas S, Raj P, Singh S, Hegde S, Vaishnav R, Shetty S, S P V, Shah V, Paul S, Muralidhar S, Velis G, Padua W, Waghule T, Nazm N, Jeganathan S, Reddy Mallidi A, Susan John D, Sen S, Choudhary S, Parashar N, Sharma B, Raghav P, Udawat R, Ram S, Salodia UP. Evaluating the Viability of a Smartphone-Based Annotation Tool for Faster and Accurate Image Labelling for Artificial Intelligence in Diabetic Retinopathy. Clin Ophthalmol 2021; 15:1023-1039. PMID: 33727785; PMCID: PMC7953891; DOI: 10.2147/opth.s289425.
Abstract
Introduction Deep learning (DL) and artificial intelligence (AI) have become widespread owing to advances in technology and the availability of digital data. Supervised learning algorithms have shown human-level or better performance and are better feature extractors and quantifiers than unsupervised learning algorithms. Building large datasets with good quality control requires an annotation tool with a customizable feature set. This paper evaluates the viability of an in-house annotation tool that runs on a smartphone and can be used in a healthcare setting. Methods We developed a smartphone-based grading system to help researchers grade large numbers of retinal fundus images. The process consisted of designing the user interface (UI) flow based on feedback from experts. We performed quantitative and qualitative analyses of the change in grading speed over time and of feature usage statistics. The dataset comprised approximately 16,000 images with labels adjudicated by a minimum of two doctors. An AI model was trained on images graded with this tool and validated on several public datasets. Results We created a DL model and analysed its performance on a binary classification task: whether a retinal image shows referrable diabetic retinopathy (DR) or not. A total of 32 doctors used the tool, each grading a minimum of 20 images. Usage analytics suggested substantial portability and flexibility of the tool. Inter-grader agreement, assessed on 550 images, averaged 75.9%. Conclusion Our aim was to make annotation of medical images easier and to minimize annotation time without degrading quality. User feedback and feature usage statistics support our hypothesis that brightness and contrast adjustment, green-channel viewing, and zooming add-ons are useful for certain disease types. Simulating multiple review cycles and establishing quality control could boost the accuracy of AI models even further. Although this study developed an annotation tool for diagnosing and classifying diabetic retinopathy fundus images, the same concept can be applied to fundus images of other ocular diseases, as well as to other image-based fields of medicine such as radiology.
Affiliation(s)
- Arvind Kumar Morya
- Department of Ophthalmology, All India Institute of Medical Sciences, Jodhpur, Rajasthan, 342005, India
- Jaitra Gowdar
- Radiate Healthcare Innovations Private Limited, Bangalore, Karnataka, 560038, India
- Abhishek Kaushal
- Radiate Healthcare Innovations Private Limited, Bangalore, Karnataka, 560038, India
- Nachiket Makwana
- Radiate Healthcare Innovations Private Limited, Bangalore, Karnataka, 560038, India
- Saurav Biswas
- Radiate Healthcare Innovations Private Limited, Bangalore, Karnataka, 560038, India
- Puneeth Raj
- Radiate Healthcare Innovations Private Limited, Bangalore, Karnataka, 560038, India
- Shabnam Singh
- Sri Narayani Hospital & Research Centre, Vellore, Tamilnadu, 632 055, India
- Sharat Hegde
- Prasad Netralaya, Udupi, Karnataka, 576101, India
- Raksha Vaishnav
- Bhaktivedanta Hospital, Mira Bhayandar, Maharashtra, 401107, India
- Sharan Shetty
- Prime Retina Eye Care Centre, Hyderabad, Telangana, 500029, India
- Vedang Shah
- Shree Netra Eye Foundation, Kolkata, West Bengal, 700020, India
- Winston Padua
- St. John's Medical College & Hospital, Bengaluru, 560034, India
- Tushar Waghule
- Reti Vision Eye Clinic, KK Eye Institute, Pune, Maharashtra, 411001, India
- Nazneen Nazm
- ESI PGIMSR, ESI Medical College and Hospital, Kolkata, West Bengal, 700104, India
- Sangeetha Jeganathan
- Srinivas Institute of Medical Sciences and Research Centre, Mangalore, Karnataka, 574146, India
- Dona Susan John
- Diya Speciality Eye Care, Bengaluru, Karnataka, 560061, India
- Sagnik Sen
- Aravind Eye Hospital, Madurai, Tamil Nadu, 625 020, India
- Sandeep Choudhary
- Department of Ophthalmology, All India Institute of Medical Sciences, Jodhpur, Rajasthan, 342005, India
- Nishant Parashar
- Department of Ophthalmology, All India Institute of Medical Sciences, Jodhpur, Rajasthan, 342005, India
- Bhavana Sharma
- All India Institute of Medical Sciences, Bhopal, Madhya Pradesh, 462020, India
- Pankaja Raghav
- Department of Ophthalmology, All India Institute of Medical Sciences, Jodhpur, Rajasthan, 342005, India
- Raghuveer Udawat
- Department of Ophthalmology, All India Institute of Medical Sciences, Jodhpur, Rajasthan, 342005, India
- Sampat Ram
- Department of Ophthalmology, All India Institute of Medical Sciences, Jodhpur, Rajasthan, 342005, India
- Umang P Salodia
- Department of Ophthalmology, All India Institute of Medical Sciences, Jodhpur, Rajasthan, 342005, India
11. Esteva A, Chou K, Yeung S, Naik N, Madani A, Mottaghi A, Liu Y, Topol E, Dean J, Socher R. Deep learning-enabled medical computer vision. NPJ Digit Med 2021; 4:5. PMID: 33420381; PMCID: PMC7794558; DOI: 10.1038/s41746-020-00376-2.
Abstract
A decade of unprecedented progress in artificial intelligence (AI) has demonstrated the potential for many fields, including medicine, to benefit from the insights that AI techniques can extract from data. Here we survey recent progress in the development of modern computer vision techniques, powered by deep learning, for medical applications, focusing on medical imaging, medical video, and clinical deployment. We start by briefly summarizing a decade of progress in convolutional neural networks, including the vision tasks they enable, in the context of healthcare. Next, we discuss several example medical imaging applications that stand to benefit, including cardiology, pathology, dermatology, and ophthalmology, and propose new avenues for continued work. We then expand into general medical video, highlighting ways in which clinical workflows can integrate computer vision to enhance care. Finally, we discuss the challenges that must be overcome for real-world clinical deployment of these technologies.
Affiliation(s)
- Nikhil Naik
- Salesforce AI Research, San Francisco, CA, USA
- Ali Madani
- Salesforce AI Research, San Francisco, CA, USA
- Yun Liu
- Google Research, Mountain View, CA, USA
- Eric Topol
- Scripps Research Translational Institute, La Jolla, CA, USA
- Jeff Dean
- Google Research, Mountain View, CA, USA
12. Dee EC, Yu RC, Celi LA, Nehal US. AIM and Business Models of Healthcare. Artif Intell Med 2021. DOI: 10.1007/978-3-030-58080-3_247-1.