1
Maxin AJ, Kush S, Gulek BG, Winston GM, Chae J, Shaibani R, McGrath LB, Abecassis IJ, Levitt MR. Smartphone pupillometry for detection of cerebral vasospasm after aneurysmal subarachnoid hemorrhage. J Stroke Cerebrovasc Dis 2024; 33:107922. [PMID: 39128501] [DOI: 10.1016/j.jstrokecerebrovasdis.2024.107922]
Abstract
OBJECTIVES Vasospasm is a complication of aneurysmal subarachnoid hemorrhage (aSAH) that can change the trajectory of recovery and is associated with morbidity and mortality. Earlier detection of vasospasm could improve patient outcomes. Our objective is to evaluate the accuracy of smartphone-based quantitative pupillometry in the detection of radiographic vasospasm and delayed cerebral ischemia (DCI) after aSAH. MATERIALS AND METHODS We prospectively collected pupillary light reflex (PLR) parameters twice daily from patients with aSAH admitted to a neurocritical care unit at a single hospital using quantitative smartphone pupillometry recordings. PLR parameters included: maximum pupil diameter (MAX), minimum pupil diameter (MIN), percent change in pupil diameter, latency from light stimulus to the onset of pupil constriction, mean constriction velocity, maximum constriction velocity, and mean dilation velocity. Two-tailed t-tests for independent samples were performed to compare average concurrent PLR parameter values between: (1) patients with and without radiographic vasospasm (defined by angiography with the need for endovascular intervention) and (2) patients with and without DCI. RESULTS 49 subjects with aSAH underwent 323 total PLR recordings. For PLR recordings taken with (n=35) and without (n=241) radiographic vasospasm, a significant difference was observed in MIN (35.0 ± 7.5 pixels with vasospasm versus 31.6 ± 6.2 pixels without; p=0.002). For PLR recordings taken with (n=43) and without (n=241) DCI, a significant difference was observed in MAX (48.9 ± 14.3 pixels with DCI versus 42.5 ± 9.2 pixels without; p<0.001). CONCLUSIONS Quantitative smartphone pupillometry has the potential to detect radiographic vasospasm and DCI after aSAH.
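A minimal sketch of the group comparison described in the methods (a two-tailed independent-samples t-test on one PLR parameter) is shown below. The arrays and their values are illustrative placeholders, not the study's data, and the use of Welch's unequal-variance variant is an assumption since the abstract does not state whether equal variances were assumed.

```python
import numpy as np
from scipy import stats

# Hypothetical minimum-pupil-diameter (MIN) values, in pixels, for PLR
# recordings taken with and without radiographic vasospasm (illustrative only).
min_with_vasospasm = np.array([34.2, 36.1, 33.8, 37.0, 35.5, 33.9])
min_without_vasospasm = np.array([30.8, 32.4, 31.1, 29.9, 32.0, 31.7, 30.5])

# Two-tailed t-test for independent samples; Welch's variant avoids
# assuming equal variances between the two groups of recordings.
t_stat, p_value = stats.ttest_ind(min_with_vasospasm,
                                  min_without_vasospasm,
                                  equal_var=False)

print(f"MIN with vasospasm:    {min_with_vasospasm.mean():.1f} "
      f"± {min_with_vasospasm.std(ddof=1):.1f} px")
print(f"MIN without vasospasm: {min_without_vasospasm.mean():.1f} "
      f"± {min_without_vasospasm.std(ddof=1):.1f} px")
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
```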
Affiliation(s)
- Anthony J Maxin
- Department of Neurological Surgery, University of Washington, Seattle, WA, United States; School of Medicine, Creighton University, Omaha, NE, United States.
- Sophie Kush
- School of Medicine, Creighton University, Omaha, NE, United States; Department of Neurological Surgery, Weill Cornell Medicine, New York, NY, United States.
- Bernice G Gulek
- Department of Neurological Surgery, University of Washington, Seattle, WA, United States; Department of Neurological Surgery, Weill Cornell Medicine, New York, NY, United States.
- Graham M Winston
- Department of Neurological Surgery, Weill Cornell Medicine, New York, NY, United States; Department of Psychiatry and Behavioral Sciences, Stanford University, Stanford, CA, United States.
- John Chae
- Department of Neurological Surgery, Weill Cornell Medicine, New York, NY, United States; Department of Neurosurgery, University of Louisville, Louisville, KY, United States.
- Rami Shaibani
- Department of Psychiatry and Behavioral Sciences, Stanford University, Stanford, CA, United States; Department of Radiology, University of Washington, Seattle, WA, United States.
- Lynn B McGrath
- Department of Neurological Surgery, Weill Cornell Medicine, New York, NY, United States; Department of Mechanical Engineering, University of Washington, Seattle, WA, United States.
- Isaac J Abecassis
- Department of Neurosurgery, University of Louisville, Louisville, KY, United States; Stroke and Applied Neuroscience Center, University of Washington, Seattle, WA, United States.
- Michael R Levitt
- Department of Neurological Surgery, University of Washington, Seattle, WA, United States; Department of Radiology, University of Washington, Seattle, WA, United States; Department of Mechanical Engineering, University of Washington, Seattle, WA, United States; Stroke and Applied Neuroscience Center, University of Washington, Seattle, WA, United States; Department of Neurology, University of Washington, Seattle, WA, United States.
2
Sousa AI, Marques-Neves C, Vieira PM. Development of a Smartphone-Based System for Intrinsically Photosensitive Retinal Ganglion Cells Targeted Chromatic Pupillometry. Bioengineering (Basel) 2024; 11:267. [PMID: 38534541] [DOI: 10.3390/bioengineering11030267]
Abstract
Chromatic pupillometry, which assesses the pupil light reflex (PLR) to a coloured light stimulus, has regained interest since the discovery of melanopsin in intrinsically photosensitive retinal ganglion cells (ipRGCs). The technique has shown potential as a screening tool for neuro-ophthalmological diseases; however, most available pupillometers are expensive and not portable, which limits their use for widespread screening. In this study, we developed a smartphone-based system for chromatic pupillometry that allows targeted stimulation of the ipRGCs. Because it uses a smartphone, the system is portable and accessible, and it takes advantage of the perifoveal location of the ipRGCs. The system incorporates a 3D-printed support for the smartphone and an illumination system. Preliminary tests were carried out on a single individual and then validated on eleven healthy individuals with two different LED intensities. The average post-illumination pupil response 6 s after stimulus offset (PIPR-6s) differed between the blue and red stimuli by 9.5% at both intensities, in line with studies using full-field stimulators. These results validate the system for targeted stimulation of the ipRGCs in chromatic pupillometry, with the potential to be a portable and accessible screening tool for neuro-ophthalmological diseases.
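As a rough illustration of the PIPR-6s metric described above, the sketch below normalizes a pupil trace to its pre-stimulus baseline and samples it 6 s after light offset for a blue and a red stimulus. The normalization formula, the sampling rate, and the synthetic traces are assumptions for illustration; the authors' exact computation may differ.

```python
import numpy as np

def pipr_6s(time_s, diameter_mm, stimulus_offset_s, baseline_window_s=(0.0, 1.0)):
    """Pupil size 6 s after stimulus offset, as a percent of the pre-stimulus
    baseline diameter (one common PIPR normalization; an assumption here)."""
    baseline_mask = (time_s >= baseline_window_s[0]) & (time_s < baseline_window_s[1])
    baseline = diameter_mm[baseline_mask].mean()
    idx = np.argmin(np.abs(time_s - (stimulus_offset_s + 6.0)))
    return 100.0 * diameter_mm[idx] / baseline

# Synthetic 30 Hz traces: blue light drives a sustained (melanopsin-mediated)
# constriction that outlasts the stimulus more than red light does.
t = np.arange(0, 20, 1 / 30)            # 20 s recording
offset = 6.0                            # light turns off at t = 6 s
blue = 4.0 - 1.2 * np.exp(-((t - offset).clip(min=0)) / 8.0) * (t > 1.0)
red  = 4.0 - 1.2 * np.exp(-((t - offset).clip(min=0)) / 2.0) * (t > 1.0)

pipr_blue = pipr_6s(t, blue, offset)
pipr_red = pipr_6s(t, red, offset)
print(f"PIPR-6s blue: {pipr_blue:.1f}%  red: {pipr_red:.1f}%  "
      f"blue-red difference: {pipr_red - pipr_blue:.1f} percentage points")
```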
Affiliation(s)
- Ana Isabel Sousa
- Department of Physics, NOVA School of Science and Technology, NOVA University of Lisbon, 2829-516 Caparica, Portugal
- Pedro Manuel Vieira
- Department of Physics, NOVA School of Science and Technology, NOVA University of Lisbon, 2829-516 Caparica, Portugal
3
Maxin AJ, Gulek BG, Chae J, Winston G, Weisbeek P, McGrath LB, Levitt MR. A smartphone pupillometry tool for detection of acute large vessel occlusion. J Stroke Cerebrovasc Dis 2023; 32:107430. [PMID: 37857150] [DOI: 10.1016/j.jstrokecerebrovasdis.2023.107430]
Abstract
OBJECTIVES Pupillary light reflex (PLR) parameters can be used as quantitative biomarkers of neurological function. Since digital infrared pupillometry is expensive, we sought to examine alterations in PLR parameters using a smartphone quantitative pupillometry platform in subjects with acute ischemic stroke (AIS). MATERIALS AND METHODS Patients were enrolled if they presented to the emergency department as a stroke code activation and had evidence of a large vessel occlusion (LVO) on computed tomography angiography. Controls were enrolled from hospital staff. A smartphone pupillometer was used in AIS patients with LVO before mechanical thrombectomy, immediately post-thrombectomy, and at 24 h post-thrombectomy. Clinical and demographic data were collected, along with the proprietary Neurological Pupil index (NPi) score from the NPi-200 digital infrared pupillometer. PLR parameters were compared using mean differences. The absolute and signed inter-eye differences in each parameter for each subject were also analyzed by computing 1 - (R:L), to detect alterations in the equilibrium between the two pupils. The NPi was analyzed for mean differences between cohorts. RESULTS Healthy controls (n = 132) and AIS patients (n = 31) were enrolled. Significant differences between healthy subjects and pre-thrombectomy subjects were observed in both mean PLR parameter values and absolute inter-eye differences after post hoc Bonferroni correction. The proprietary NPi score did not differ significantly for any group or comparison. CONCLUSIONS Significant alterations in the PLR were observed in AIS patients with LVO before thrombectomy, indicating the potential use of smartphone pupillometry for detection of LVO.
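The inter-eye comparison in this abstract is a simple ratio-based asymmetry index. The sketch below computes 1 - (R:L) and its absolute value for each PLR parameter of one recording, interpreting "R:L" as the right-to-left ratio (an assumption, since the abstract does not spell out the formula) and using made-up values.

```python
# Hypothetical right- and left-eye PLR parameters for a single recording;
# the parameter names and values are illustrative, not the study's data.
plr_right = {"max_diameter_px": 46.0, "min_diameter_px": 33.5,
             "mean_constriction_velocity": 1.8, "latency_s": 0.24}
plr_left = {"max_diameter_px": 43.0, "min_diameter_px": 31.0,
            "mean_constriction_velocity": 2.1, "latency_s": 0.22}

for name in plr_right:
    asymmetry = 1.0 - plr_right[name] / plr_left[name]   # 1 - (R:L)
    print(f"{name:28s} 1-(R:L) = {asymmetry:+.3f}   "
          f"|1-(R:L)| = {abs(asymmetry):.3f}")
```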
Affiliation(s)
- Anthony J Maxin
- Department of Neurological Surgery, University of Washington, Seattle, WA, USA; School of Medicine, Creighton University, Omaha, NE, USA
- Bernice G Gulek
- Department of Neurological Surgery, University of Washington, Seattle, WA, USA
- John Chae
- Department of Neurological Surgery, Weill Cornell Medicine, New York, NY, USA
- Graham Winston
- Department of Neurological Surgery, Weill Cornell Medicine, New York, NY, USA
- Lynn B McGrath
- Department of Neurological Surgery, Weill Cornell Medicine, New York, NY, USA
- Michael R Levitt
- Department of Neurological Surgery, University of Washington, Seattle, WA, USA; Department of Radiology, University of Washington, Seattle, WA, USA; Department of Mechanical Engineering, University of Washington, Seattle, WA, USA; Stroke & Applied Neuroscience Center, University of Washington, Seattle, WA, USA.
4
Barry C, Wang E. Racially fair pupillometry measurements for RGB smartphone cameras using the far red spectrum. Sci Rep 2023; 13:13841. [PMID: 37620445] [PMCID: PMC10449795] [DOI: 10.1038/s41598-023-40796-0]
Abstract
Pupillometry is a measurement of pupil dilation commonly performed as part of neurological assessments. Prior work has demonstrated the potential of pupillometry for screening or diagnosing a number of neurological disorders, including Alzheimer's disease, schizophrenia, and traumatic brain injury. Unfortunately, the expense and inaccessibility of specialized pupillometers that image in the near-infrared spectrum limit the measurement to high-resource clinics or institutions. Ideally, this measurement could be available via ubiquitous devices such as smartphones or tablets with integrated visible-spectrum imaging systems. In the visible spectrum of RGB cameras, the melanin in the iris absorbs light such that it is difficult to distinguish the pupil aperture, which appears black. In this paper, we propose a novel pupillometry technique that enables smartphone RGB cameras to effectively differentiate the pupil from the iris. The proposed system utilizes a 630 nm long-pass filter to image in the far-red (630-700 nm) spectrum, where the melanin in the iris reflects light and appears brighter in contrast to the dark pupil. Using a convolutional neural network, the proposed system measures pupil diameter as it dynamically changes, frame by frame, in video. Across 4 different smartphone models, the pupil-iris contrast of N = 12 participants increased by an average of 451% with the proposed system. In a validation study of N = 11 participants comparing the relative pupil change from the proposed system to a Neuroptics PLR-3000 pupillometer during a pupillary light response test, the prototype system achieved a mean absolute error of 2.4%.
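The key quantity in this abstract is the pupil-iris contrast in the far-red band. Below is a minimal sketch of how such a contrast could be computed from mean pixel intensities; the Weber-style formula and the intensity values are assumptions, since the abstract does not define the exact metric.

```python
import numpy as np

def pupil_iris_contrast(pupil_pixels, iris_pixels):
    """Weber-style contrast between iris and pupil intensities
    (one plausible definition; the paper's exact metric may differ)."""
    i_pupil = np.mean(pupil_pixels)
    i_iris = np.mean(iris_pixels)
    return (i_iris - i_pupil) / i_pupil

# Hypothetical mean intensities (0-255). In a visible-spectrum RGB image a
# dark iris barely differs from the pupil; behind a 630 nm long-pass filter
# the iris reflects far-red light and appears much brighter.
rgb_contrast = pupil_iris_contrast(np.array([18, 20, 19]), np.array([25, 27, 24]))
far_red_contrast = pupil_iris_contrast(np.array([18, 20, 19]), np.array([110, 118, 105]))

print(f"visible RGB contrast: {rgb_contrast:.2f}")
print(f"far-red contrast:     {far_red_contrast:.2f}")
print(f"relative increase:    {100 * (far_red_contrast / rgb_contrast - 1):.0f}%")
```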
Affiliation(s)
- Colin Barry
- Electrical and Computer Engineering Department, University of California San Diego, La Jolla, CA, USA.
- Design Lab, University of California San Diego, La Jolla, CA, USA.
- Edward Wang
- Electrical and Computer Engineering Department, University of California San Diego, La Jolla, CA, USA.
- Design Lab, University of California San Diego, La Jolla, CA, USA.
5
Barry C, De Souza J, Xuan Y, Holden J, Granholm E, Wang EJ. At-Home Pupillometry using Smartphone Facial Identification Cameras. Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (CHI) 2022; 2022:235. [PMID: 38031623] [PMCID: PMC10686294] [DOI: 10.1145/3491102.3502493]
Abstract
With recent developments in medical and psychiatric research surrounding pupillary response, cheap and accessible pupillometers could enable medical benefits ranging from early neurological disease detection to measurement of cognitive load. In this paper, we introduce a novel smartphone-based pupillometer to allow for future development in clinical research surrounding at-home pupil measurements. Our solution utilizes an NIR front-facing camera for facial recognition, paired with the RGB selfie camera, to track absolute pupil dilation with sub-millimeter accuracy. In comparison to a gold-standard pupillometer during a pupillary light reflex test, the smartphone-based system achieves a median MAE of 0.27 mm for absolute pupil dilation tracking and a median error of 3.52% for pupil dilation change tracking. Additionally, we remotely deployed the system to older adults as part of a usability study, demonstrating promise for future smartphone deployments that remotely collect data from older, inexperienced adult users operating the system themselves.
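The two error figures quoted above (a median MAE for absolute pupil dilation in mm and a median error for relative dilation change in percent) can be reproduced from paired smartphone and reference readings as sketched below; the pairing scheme and the synthetic values are illustrative assumptions, not the paper's evaluation protocol.

```python
import numpy as np

# Hypothetical paired pupil-diameter measurements (mm) at matching time
# points from the smartphone system and a reference pupillometer.
phone = np.array([4.10, 3.62, 3.05, 2.88, 3.40, 3.95])
reference = np.array([4.30, 3.80, 3.30, 3.10, 3.55, 4.20])

# Median absolute error of absolute pupil diameter, in mm.
mae_mm = np.median(np.abs(phone - reference))

# Error of relative (percent) dilation change from the first sample.
phone_change = 100 * (phone - phone[0]) / phone[0]
ref_change = 100 * (reference - reference[0]) / reference[0]
change_error = np.median(np.abs(phone_change - ref_change))

print(f"median absolute error: {mae_mm:.2f} mm")
print(f"median relative-change error: {change_error:.2f} percentage points")
```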
Affiliation(s)
- Colin Barry
- Department of Electrical and Computer Engineering, University of California San Diego, La Jolla, California, USA
- Jessica De Souza
- Department of Electrical and Computer Engineering, University of California San Diego, La Jolla, California, USA
- Yinan Xuan
- Department of Electrical and Computer Engineering, University of California San Diego, La Jolla, California, USA
- Jason Holden
- Center for Mental Health Technology, University of California San Diego, La Jolla, California, USA
- Eric Granholm
- Center for Mental Health Technology, University of California San Diego, La Jolla, California, USA
- Edward Jay Wang
- Department of Electrical and Computer Engineering, University of California San Diego, La Jolla, California, USA
6
Solyman O, Abushanab MMI, Carey AR, Henderson AD. Pilot Study of Smartphone Infrared Pupillography and Pupillometry. Clin Ophthalmol 2022; 16:303-310. [PMID: 35173409] [PMCID: PMC8840836] [DOI: 10.2147/opth.s331989]
Affiliation(s)
- Omar Solyman
- Research Institute of Ophthalmology, Giza, Egypt
- Wilmer Eye Institute, Johns Hopkins University School of Medicine, Baltimore, MD, USA
- Department of Ophthalmology, Qassim University Medical City, Al-Qassim, Saudi Arabia
- Correspondence: Omar Solyman, Tel +20 1009350101
- Andrew R Carey
- Wilmer Eye Institute, Johns Hopkins University School of Medicine, Baltimore, MD, USA
- Amanda D Henderson
- Wilmer Eye Institute, Johns Hopkins University School of Medicine, Baltimore, MD, USA
7
Piaggio D, Namm G, Melillo P, Simonelli F, Iadanza E, Pecchia L. Pupillometry via smartphone for low-resource settings. Biocybern Biomed Eng 2021. [DOI: 10.1016/j.bbe.2021.05.012]
8
Pinheiro HM, da Costa RM. Pupillary light reflex as a diagnostic aid from computational viewpoint: A systematic literature review. J Biomed Inform 2021; 117:103757. [PMID: 33826949] [DOI: 10.1016/j.jbi.2021.103757]
Abstract
This work presents a detailed and complete review of publications on the pupillary light reflex (PLR) used to aid diagnosis, covering the computational techniques used in the evaluation of pupillometry and their application in computer-aided diagnosis (CAD) of pathologies or physiological conditions that can be studied by observing the miosis and mydriasis movements of the human pupil. A careful survey was carried out of all studies published over the last 10 years that investigated electronic devices, recording protocols, image treatment, computational algorithms, and the pathologies related to the PLR. We present the frontier of existing knowledge regarding the methods and techniques used in this field, which has been expanding due to the possibility of performing diagnoses with high precision, at low cost, and with a non-invasive method.
9
Zhu X, Xia W, Bao Z, Zhong Y, Fang Y, Yang F, Gu X, Ye J, Huang W. Artificial Intelligence Segmented Dynamic Video Images for Continuity Analysis in the Detection of Severe Cardiovascular Disease. Front Neurosci 2021; 14:618481. [PMID: 33642970] [PMCID: PMC7902880] [DOI: 10.3389/fnins.2020.618481]
Abstract
In this paper, artificial-intelligence segmentation of dynamic video images for intensive cardiovascular and cerebrovascular disease monitoring is investigated in depth. A sparse autoencoding deep neural network with a four-layer stack structure is designed to automatically extract deep features from the segmented dynamic video image shots, and hierarchical training and optimization yield classification of heartbeats into six categories (normal, atrial premature, ventricular premature, right bundle branch block, left bundle branch block, and paced) with an average accuracy of 99.5%. This provides technical support for the intelligent prediction of high-risk cardiovascular conditions such as ventricular fibrillation. An intelligent prediction algorithm for sudden cardiac death based on an echolocation network is also proposed: by designing an echolocation network with a multilayer serial structure, sudden-cardiac-death signals are distinguished from non-sudden-death signals, and the event is predicted 5 min before sudden death occurs, with an average prediction accuracy of 94.32%. Using the self-learning capability of the stacked sparse autoencoding network, a large amount of unlabeled data is used to train the network to automatically extract deep representations of plaque features, and a small amount of labeled data is then introduced to fine-tune the entire network. Through automatic analysis of fibrous-cap thickness in the plaques, vulnerable plaques with thin fibrous caps are automatically identified, with an average overlap of 87% for the vulnerable regions. The overall running time of the automatic plaque and vulnerable-plaque recognition algorithm is 0.54 s. This provides theoretical support for accurate diagnosis and endogenous analysis of high-risk cardiovascular diseases.
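The feature-learning pipeline described above (greedy layer-wise pretraining of a four-layer stacked sparse autoencoder on unlabeled data, then fine-tuning the stacked network with a small labeled set and a softmax head) can be sketched as follows. The layer sizes, data shapes, sparsity penalty, and training settings are placeholder assumptions; the paper's actual architecture and data are not reproduced.

```python
import numpy as np
from tensorflow.keras import layers, models, regularizers

# Hypothetical data: each row is a flattened feature vector from a segmented
# video frame / heartbeat; labels cover six rhythm classes (placeholders).
x_unlabeled = np.random.rand(2000, 256).astype("float32")
x_labeled = np.random.rand(200, 256).astype("float32")
y_labeled = np.random.randint(0, 6, size=200)

layer_sizes = [128, 64, 32, 16]          # four stacked encoder layers
encoder_models, codes = [], x_unlabeled

# Greedy layer-wise pretraining: each sparse autoencoder reconstructs the
# previous layer's code, with an L1 activity penalty encouraging sparsity.
for size in layer_sizes:
    inp = layers.Input(shape=(codes.shape[1],))
    code = layers.Dense(size, activation="sigmoid",
                        activity_regularizer=regularizers.l1(1e-5))(inp)
    recon = layers.Dense(codes.shape[1], activation="sigmoid")(code)
    ae = models.Model(inp, recon)
    ae.compile(optimizer="adam", loss="mse")
    ae.fit(codes, codes, epochs=5, batch_size=64, verbose=0)
    enc = models.Model(inp, code)
    encoder_models.append(enc)
    codes = enc.predict(codes, verbose=0)    # codes feed the next layer

# Stack the pretrained encoders, add a softmax head, and fine-tune the whole
# network with the small labeled set.
clf_in = layers.Input(shape=(256,))
h = clf_in
for enc in encoder_models:
    h = enc(h)
clf_out = layers.Dense(6, activation="softmax")(h)
clf = models.Model(clf_in, clf_out)
clf.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
            metrics=["accuracy"])
clf.fit(x_labeled, y_labeled, epochs=10, batch_size=32, verbose=0)
```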
Affiliation(s)
- Xi Zhu
- Clinical Medical College, Yangzhou University, Yangzhou, China
- Wei Xia
- Clinical Medical College, Yangzhou University, Yangzhou, China
- Zhuqing Bao
- Clinical Medical College, Yangzhou University, Yangzhou, China
- Yaohui Zhong
- Department of Computer Science and Technology, Nanjing University, Nanjing, China
- Yu Fang
- Clinical Medical College, Yangzhou University, Yangzhou, China
- Fei Yang
- Clinical Medical College, Yangzhou University, Yangzhou, China
- Xiaohua Gu
- Clinical Medical College, Yangzhou University, Yangzhou, China
- Jing Ye
- Clinical Medical College, Yangzhou University, Yangzhou, China
- Wennuo Huang
- Clinical Medical College, Yangzhou University, Yangzhou, China
10
Zhang X, Liang Y, Li W, Liu C, Gu D, Sun W, Miao L. Development and evaluation of deep learning for screening dental caries from oral photographs. Oral Dis 2020; 28:173-181. [PMID: 33244805] [DOI: 10.1111/odi.13735]
Affiliation(s)
- Xuan Zhang
- Department of Periodontology, Nanjing Stomatological Hospital, Medical School of Nanjing University, Nanjing, China
- Yuan Liang
- University of California, Los Angeles, CA, USA
- Wen Li
- Department of Endodontics, Nanjing Stomatological Hospital, Medical School of Nanjing University, Nanjing, China
- Chao Liu
- Department of Orthodontics, Nanjing Stomatological Hospital, Medical School of Nanjing University, Nanjing, China
- Deao Gu
- Department of Orthodontics, Nanjing Stomatological Hospital, Medical School of Nanjing University, Nanjing, China
- Weibin Sun
- Department of Periodontology, Nanjing Stomatological Hospital, Medical School of Nanjing University, Nanjing, China
- Leiying Miao
- Department of Endodontics, Nanjing Stomatological Hospital, Medical School of Nanjing University, Nanjing, China
11
Guidelines and Benchmarks for Deployment of Deep Learning Models on Smartphones as Real-Time Apps. Machine Learning and Knowledge Extraction 2019. [DOI: 10.3390/make1010027]
Abstract
Deep learning solutions are increasingly used in mobile applications. Although there are many open-source software tools for developing deep learning solutions, there are no unified guidelines gathered in one place for using these tools to deploy such solutions in real time on smartphones. From the variety of available deep learning tools, the most suitable ones are used in this paper to enable real-time deployment of deep learning inference networks on smartphones. A uniform implementation flow is devised for both Android and iOS smartphones. The advantage of using multi-threading to achieve or improve real-time throughput is also showcased. A benchmarking framework consisting of accuracy, CPU/GPU consumption, and real-time throughput is considered for validation purposes. The developed deployment approach allows deep learning models to be turned into real-time smartphone apps with ease, based on publicly available deep learning and smartphone software tools. This approach is applied to six popular or representative convolutional neural network models, and the validation results based on the benchmarking metrics are reported.
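As one concrete illustration of the kind of deployment and throughput benchmarking this paper discusses, the sketch below times repeated inferences with a TensorFlow Lite interpreter using a configurable thread count. The model file name and input handling are placeholders, and this desktop-side sketch does not reproduce the paper's own Android/iOS pipelines.

```python
import time
import numpy as np
import tensorflow as tf

# Load a converted TFLite model and expose the multi-threading knob discussed
# above. "mobilenet_v2.tflite" is a hypothetical file name.
interpreter = tf.lite.Interpreter(model_path="mobilenet_v2.tflite",
                                  num_threads=4)
interpreter.allocate_tensors()
input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()

# Random input with the model's expected shape and dtype (placeholder data).
frame = np.random.rand(*input_details[0]["shape"]).astype(
    input_details[0]["dtype"])

n_runs = 100
start = time.perf_counter()
for _ in range(n_runs):
    interpreter.set_tensor(input_details[0]["index"], frame)
    interpreter.invoke()
    _ = interpreter.get_tensor(output_details[0]["index"])
elapsed = time.perf_counter() - start

print(f"average latency: {1000 * elapsed / n_runs:.1f} ms")
print(f"throughput: {n_runs / elapsed:.1f} inferences/s")
```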