1
Baek J, Park HJ. Bayesian adaptive method for estimating speed-accuracy tradeoff functions of multiple task conditions. Behav Res Methods 2024;56:4403-4420. PMID: 37550467; PMCID: PMC11289146; DOI: 10.3758/s13428-023-02192-4.
Abstract
The speed-accuracy tradeoff (SAT) often makes psychophysical data difficult to interpret. The SAT experimental procedure and model were accordingly proposed to provide an integrated account of the speed and accuracy of responses, but the extensive data collection a SAT experiment requires has limited its adoption. For quick estimation of the SAT function (SATf), we previously developed a Bayesian adaptive SAT method with an online stimulus selection strategy. Simulations showed the method to be efficient, achieving high accuracy and precision with minimal trials, which makes it practical for single-condition tasks. However, it needed to be extended to more general designs with multiple conditions, revised for improved estimation performance, and validated in real experiments with human participants. In the current study, we propose an improved method that measures SATfs for multiple task conditions concurrently and is more robust in general designs. Its performance was evaluated in simulation studies and in a psychophysical experiment using a flanker task. Simulation results revealed that the proposed method with the adaptive stimulus selection strategy efficiently estimated multiple SATfs and improved performance even in cases with an extreme parameter value. In the psychophysical experiment, SATfs estimated from minimal adaptive trials (1/8 of conventional trials) agreed closely with those estimated from the number of conventional trials required to reliably estimate multiple SATfs. These results indicate that the Bayesian adaptive SAT method is reliable and efficient in estimating SATfs in most experimental settings and may apply to SATf estimation in general behavioral research designs.
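For readers unfamiliar with the SATf, a shifted-exponential form is a common parameterization in the SAT literature (a minimal sketch with hypothetical parameter values, not necessarily the exact model the authors fit): accuracy (d') rises from chance after an intercept delta toward an asymptote at some rate.

```python
import numpy as np

def satf(t, lam, beta, delta):
    """Shifted-exponential speed-accuracy tradeoff function.

    Accuracy (d') is 0 until processing time t exceeds the intercept
    delta, then rises at rate beta toward the asymptote lam.
    """
    t = np.asarray(t, dtype=float)
    return np.where(t > delta, lam * (1.0 - np.exp(-beta * (t - delta))), 0.0)

# Two hypothetical task conditions sharing an intercept but differing
# in asymptote and rate, as in a multiple-condition design.
times = np.linspace(0.0, 2.0, 9)   # seconds of processing time
easy = satf(times, lam=3.0, beta=4.0, delta=0.3)
hard = satf(times, lam=2.0, beta=2.0, delta=0.3)
```

Estimating one (lam, beta, delta) triple per condition is what the multi-condition adaptive method targets.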
Affiliation(s)
- Jongsoo Baek
- Center for Systems and Translational Brain Sciences, Institute of Human Complexity and Systems Science, Yonsei University, Seoul, Republic of Korea
- Hae-Jeong Park
- Center for Systems and Translational Brain Sciences, Institute of Human Complexity and Systems Science, Yonsei University, Seoul, Republic of Korea
- Department of Nuclear Medicine, Department of Psychiatry, Yonsei University College of Medicine, Seoul, Republic of Korea
- Graduate School of Medical Science, Brain Korea 21 Project, Yonsei University College of Medicine, 50-1 Yonsei-ro, Sinchon-dong, Seodaemun-gu, Seoul, 03722, Republic of Korea
- Department of Cognitive Science, Yonsei University College of Medicine, 50-1 Yonsei-ro, Sinchon-dong, Seodaemun-gu, Seoul, 03722, Republic of Korea
2
Kwak Y, Lu ZL, Carrasco M. How the window of visibility varies around polar angle. bioRxiv [Preprint] 2024:2024.07.12.603257. PMID: 39071431; PMCID: PMC11275830; DOI: 10.1101/2024.07.12.603257.
Abstract
Contrast sensitivity, the amount of contrast required to detect or discriminate an object, depends on spatial frequency (SF): the contrast sensitivity function (CSF) peaks at intermediate SFs and drops at lower and higher SFs, and it is the basis of computational models of visual object recognition. The CSF varies from foveal to peripheral vision, but only a couple of studies have assessed how it changes with polar angle around the visual field. Sensitivity is generally better along the horizontal than the vertical meridian, and better at the lower than the upper vertical meridian, yielding polar angle asymmetries. Here, we investigate CSF attributes at different polar angle locations at both group and individual levels, using hierarchical Bayesian modeling. This method enables precise estimation of CSF parameters by decomposing the variability of the dataset into multiple levels and analyzing covariance across observers. At the group level, peak contrast sensitivity and the spatial frequency at which sensitivity peaks are higher at the horizontal than the vertical meridian, and at the lower than the upper vertical meridian. At the individual level, CSF attributes (e.g., maximum sensitivity, the most preferred SF) across locations are highly correlated, indicating that although CSFs differ across locations, the CSF at one location is predictive of the CSF at another location. Within each location, CSF attributes co-vary, indicating that CSFs across individuals vary in a consistent manner (e.g., as maximum sensitivity increases, so does the SF at which sensitivity peaks), but more so at the horizontal than the vertical meridian locations. These results reveal similarities and some critical polar angle differences across locations and individuals, suggesting that the CSF should not be generalized across iso-eccentric locations around the visual field. Our window of visibility varies with polar angle: it is enhanced and more consistent at the horizontal meridian.
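A log-parabola is a common parameterization of the CSF's shape (a minimal sketch; the paper's hierarchical model may use a different or truncated form, and the meridian values below are hypothetical illustrations of the reported asymmetry, not the study's estimates):

```python
import numpy as np

def csf(f, gain_max, f_max, bandwidth):
    """Log-parabola contrast sensitivity function.

    gain_max  : peak contrast sensitivity
    f_max     : spatial frequency (cpd) of the peak
    bandwidth : full width at half maximum, in octaves
    """
    f = np.asarray(f, dtype=float)
    log2 = np.log10(2.0)
    log_s = (np.log10(gain_max)
             - 4.0 * log2 * (np.log10(f / f_max) / (bandwidth * log2)) ** 2)
    return 10.0 ** log_s

# Hypothetical meridian differences in the spirit of the abstract:
# higher peak sensitivity and peak frequency at the horizontal meridian.
freqs = np.array([0.5, 1.0, 2.0, 4.0, 8.0, 16.0])
horizontal = csf(freqs, gain_max=120.0, f_max=3.0, bandwidth=4.0)
vertical = csf(freqs, gain_max=90.0, f_max=2.2, bandwidth=4.0)
```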
Affiliation(s)
- Yuna Kwak
- Department of Psychology, New York University, New York, United States
- Zhong-Lin Lu
- Department of Arts & Sciences, New York University Shanghai, Shanghai, China
- Center for Neural Science, New York University, New York, United States
- Marisa Carrasco
- Department of Psychology, New York University, New York, United States
- Center for Neural Science, New York University, New York, United States
3
Yang J, Alshaikh E, Yu D, Kerwin T, Rundus C, Zhang F, Wrabel CG, Perry L, Lu ZL. Visual Function and Driving Performance Under Different Lighting Conditions in Older Drivers: Preliminary Results From an Observational Study. JMIR Form Res 2024;8:e58465. PMID: 38922681; PMCID: PMC11237778; DOI: 10.2196/58465.
Abstract
BACKGROUND Age-related vision changes contribute significantly to fatal nighttime crashes among older drivers. However, the effects of lighting conditions on age-related vision changes and the associated driving performance remain unclear. OBJECTIVE This pilot study examined the associations between visual function and driving performance, assessed on a high-fidelity driving simulator, among drivers aged 60 years and older across 3 lighting conditions: daytime (photopic), nighttime (mesopic), and nighttime with glare. METHODS Active drivers aged 60 years or older completed visual function assessments and simulated driving on a high-fidelity driving simulator. Visual acuity (VA), contrast sensitivity function (CSF), and visual field map (VFM) were measured using quantitative VA, quantitative CSF, and quantitative VFM procedures under photopic and mesopic conditions. VA and CSF were also obtained in the presence of glare in the mesopic condition. Two summary metrics, the area under the log CSF (AULCSF) and the volume under the surface of the VFM (VUSVFM), quantified the CSF and VFM. Driving performance measures (average speed, SD of speed (SDspeed), SD of lane position (SDLP), and reaction time) were assessed under daytime, nighttime, and nighttime-with-glare conditions. Pearson correlations determined the associations between visual function and driving performance across the 3 lighting conditions. RESULTS Of the 20 drivers included, the average age was 70.3 years; 55% were male. Poor photopic VA was significantly correlated with greater SDspeed (r=0.26; P<.001) and greater SDLP (r=0.31; P<.001). Poor photopic AULCSF was correlated with greater SDLP (r=-0.22; P=.01). Poor mesopic VUSVFM was significantly correlated with slower average speed (r=-0.24; P=.007), larger SDspeed (r=-0.19; P=.04), greater SDLP (r=-0.22; P=.007), and longer reaction times (r=-0.22; P=.04) while driving at night. For functional vision in the mesopic condition with glare, poor VA was significantly correlated with longer reaction times (r=0.21; P=.046) while driving at night with glare; poor AULCSF was significantly correlated with slower speed (r=-0.32; P<.001), greater SDLP (r=-0.26; P=.001), and longer reaction times (r=-0.2; P=.04) while driving at night with glare. No other significant correlations were observed between visual function and driving performance under the same lighting conditions. CONCLUSIONS Visual functions differentially affect driving performance under different lighting conditions among older drivers, with more substantial impacts on nighttime driving, especially with glare. Additional research with larger sample sizes is needed to confirm these results.
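The AULCSF summary metric used above is typically computed by integrating log sensitivity over log spatial frequency. A minimal sketch with hypothetical sensitivity values (the study's exact integration limits and sampling may differ):

```python
import numpy as np

def aulcsf(freqs, sensitivities):
    """Area under the log CSF: trapezoidal integral of log10
    sensitivity over log10 spatial frequency (one common recipe)."""
    x = np.log10(np.asarray(freqs, dtype=float))
    y = np.log10(np.asarray(sensitivities, dtype=float))
    # Trapezoidal rule, written out to avoid version-specific numpy names.
    return float(np.sum((y[1:] + y[:-1]) / 2.0 * np.diff(x)))

# Hypothetical sensitivities at six spatial frequencies (cpd), showing
# the expected drop from photopic to mesopic viewing.
freqs = np.array([1.5, 3.0, 6.0, 12.0, 18.0, 24.0])
photopic = np.array([80.0, 120.0, 100.0, 40.0, 15.0, 5.0])
mesopic = np.array([40.0, 60.0, 45.0, 12.0, 4.0, 1.5])
```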
Affiliation(s)
- Jingzhen Yang
- Center for Injury Research and Policy at the Abigail Wexner Research Institute, Nationwide Children's Hospital, Columbus, OH, United States
- Department of Pediatrics, The Ohio State University, Columbus, OH, United States
- Enas Alshaikh
- Center for Injury Research and Policy at the Abigail Wexner Research Institute, Nationwide Children's Hospital, Columbus, OH, United States
- Deyue Yu
- College of Optometry, The Ohio State University, Columbus, OH, United States
- Thomas Kerwin
- Driving Simulation Laboratory, The Ohio State University, Columbus, OH, United States
- Christopher Rundus
- Center for Injury Research and Policy at the Abigail Wexner Research Institute, Nationwide Children's Hospital, Columbus, OH, United States
- Fangda Zhang
- Center for Injury Research and Policy at the Abigail Wexner Research Institute, Nationwide Children's Hospital, Columbus, OH, United States
- Cameron G Wrabel
- Driving Simulation Laboratory, The Ohio State University, Columbus, OH, United States
- Landon Perry
- College of Optometry, The Ohio State University, Columbus, OH, United States
- Zhong-Lin Lu
- Division of Arts and Sciences, NYU Shanghai, Shanghai, China
- Center for Neural Science and Department of Psychology, New York University, New York, NY, United States
- NYU-ECNU Institute of Brain and Cognitive Science at NYU Shanghai, Shanghai, China
4
Zhao Y, Liu J, Dosher BA, Lu ZL. Enabling identification of component processes in perceptual learning with nonparametric hierarchical Bayesian modeling. J Vis 2024;24:8. PMID: 38780934; PMCID: PMC11131338; DOI: 10.1167/jov.24.5.8.
Abstract
Perceptual learning is a multifaceted process, encompassing general learning, between-session forgetting or consolidation, and within-session fast relearning and deterioration. The learning curve constructed from threshold estimates in blocks or sessions, based on tens or hundreds of trials, may obscure component processes; high temporal resolution is necessary. We developed two nonparametric inference procedures: a Bayesian inference procedure (BIP), which estimates the posterior distribution of contrast threshold in each learning block for each learner independently, and a hierarchical Bayesian model (HBM), which computes the joint posterior distribution of contrast threshold across all learning blocks at the population, subject, and test levels via the covariance of contrast thresholds across blocks. We applied the procedures to data from two studies that investigated the interaction between feedback and training accuracy in Gabor orientation identification over 1920 trials across six sessions and estimated the learning curve with block sizes of L = 10, 20, 40, 80, 160, and 320 trials. The HBM generated significantly better fits to the data, smaller standard deviations, and more precise estimates than the BIP across all block sizes. In addition, the HBM generated unbiased estimates, whereas the BIP only generated unbiased estimates with large block sizes and exhibited increasing bias with smaller block sizes. With L = 10, 20, and 40, we were able to consistently identify general learning, between-session forgetting, and rapid relearning and adaptation within sessions. The nonparametric HBM provides a general framework for fine-grained assessment of the learning curve and enables identification of component processes in perceptual learning.
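The BIP described above treats each block and each learner independently. A minimal grid-based sketch of such a per-block posterior, using an illustrative Weibull psychometric function and made-up parameter values rather than the paper's exact likelihood and priors:

```python
import numpy as np

rng = np.random.default_rng(0)

def weibull_p(c, threshold, slope=3.0, guess=0.25, lapse=0.02):
    """Probability correct at contrast c (Weibull psychometric
    function; parameter values here are illustrative)."""
    return guess + (1.0 - guess - lapse) * (
        1.0 - np.exp(-(c / threshold) ** slope))

def block_posterior(contrasts, correct, grid):
    """Grid posterior over the contrast threshold for one learning
    block, with a flat prior over the grid (a BIP-style sketch)."""
    log_post = np.zeros_like(grid)
    for c, r in zip(contrasts, correct):
        p = weibull_p(c, grid)
        log_post += np.log(p) if r else np.log(1.0 - p)
    post = np.exp(log_post - log_post.max())
    return post / post.sum()

# One simulated 40-trial block with a true threshold of 0.1.
contrasts = rng.uniform(0.02, 0.3, 40)
correct = rng.random(40) < weibull_p(contrasts, 0.1)
grid = np.linspace(0.01, 0.5, 200)
post = block_posterior(contrasts, correct, grid)
threshold_hat = float(np.sum(grid * post))
```

The HBM improves on this by sharing information across blocks and learners instead of starting from a flat prior each block.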
Affiliation(s)
- Yukai Zhao
- Center for Neural Science, New York University, New York, NY, USA
- Jiajuan Liu
- Department of Cognitive Sciences and Institute of Mathematical Behavioral Sciences, University of California, Irvine, CA, USA
- Barbara Anne Dosher
- Department of Cognitive Sciences and Institute of Mathematical Behavioral Sciences, University of California, Irvine, CA, USA
- Zhong-Lin Lu
- Division of Arts and Sciences, NYU Shanghai, Shanghai, China
- Center for Neural Science and Department of Psychology, New York University, New York, NY, USA
- NYU-ECNU Institute of Brain and Cognitive Neuroscience, Shanghai, China
5
Marticorena DCP, Wong QW, Browning J, Wilbur K, Jayakumar S, Davey PG, Seitz AR, Gardner JR, Barbour DL. Contrast response function estimation with nonparametric Bayesian active learning. J Vis 2024;24:6. PMID: 38197739; PMCID: PMC10790677; DOI: 10.1167/jov.24.1.6.
Abstract
Multidimensional psychometric functions can typically be estimated nonparametrically for greater accuracy or parametrically for greater efficiency. By recasting the estimation problem from regression to classification, however, powerful machine learning tools can be leveraged to provide an adjustable balance between accuracy and efficiency. Contrast sensitivity functions (CSFs) are behaviorally estimated curves that provide insight into both peripheral and central visual function. Because estimation can be impractically long, current clinical workflows must make compromises, such as limited sampling across spatial frequency or strong assumptions on CSF shape. This article describes the development of the machine learning contrast response function (MLCRF) estimator, which quantifies the expected probability of success in performing a contrast detection or discrimination task. A machine learning CSF can then be derived from the MLCRF. Using simulated eyes created from canonical CSF curves and actual human contrast response data, the accuracy and efficiency of the machine learning contrast sensitivity function (MLCSF) were evaluated to determine its potential utility for research and clinical applications. With stimuli selected randomly, the MLCSF estimator converged slowly toward ground truth. With optimal stimulus selection via Bayesian active learning, convergence was nearly an order of magnitude faster, requiring only tens of stimuli to achieve reasonable estimates. Inclusion of an informative prior provided no consistent advantage to the estimator as configured. The MLCSF achieved efficiencies on par with quickCSF, a conventional parametric estimator, but with systematically higher accuracy. Because the MLCSF design allows accuracy to be traded off against efficiency, it should be explored further to uncover its full potential.
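The core idea of recasting estimation from regression to classification can be illustrated without Gaussian process machinery. The sketch below substitutes quadratic-feature logistic regression for the MLCRF's actual estimator, with a hypothetical ground-truth response surface and randomly selected stimuli (the slow baseline in the abstract), to show how labeled (frequency, contrast, success) trials yield a probabilistic contrast response surface:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical ground truth: success probability rises with contrast
# above a frequency-dependent threshold (values are illustrative).
def true_p(log_f, log_c):
    log_thresh = -2.0 + 0.8 * (log_f - 0.6) ** 2   # U-shaped threshold
    return 0.5 + 0.48 / (1.0 + np.exp(-6.0 * (log_c - log_thresh)))

# Randomly selected stimuli and stochastic trial outcomes.
n = 300
log_f = rng.uniform(-0.3, 1.5, n)    # log10 spatial frequency
log_c = rng.uniform(-3.0, 0.0, n)    # log10 contrast
y = (rng.random(n) < true_p(log_f, log_c)).astype(float)

# Classification view: logistic regression on quadratic features,
# fit by gradient descent on the mean log-loss.
X = np.column_stack([np.ones(n), log_f, log_c,
                     log_f ** 2, log_f * log_c, log_c ** 2])
w = np.zeros(X.shape[1])
for _ in range(3000):
    p = 1.0 / (1.0 + np.exp(-X @ w))
    w -= 0.05 * X.T @ (p - y) / n

p_hat = 1.0 / (1.0 + np.exp(-X @ w))   # estimated response surface
```

A CSF could then be read off this surface as the contrast at which the estimated success probability crosses a criterion.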
Affiliation(s)
- Dom C P Marticorena
- Department of Biomedical Engineering, Washington University, St. Louis, MO, USA
- Quinn Wai Wong
- Department of Biomedical Engineering, Washington University, St. Louis, MO, USA
- Jake Browning
- Department of Computer Science and Engineering, Washington University, St. Louis, MO, USA
- Ken Wilbur
- Department of Computer Science and Engineering, Washington University, St. Louis, MO, USA
- Samyukta Jayakumar
- Department of Psychology, University of California, Riverside, Riverside, CA, USA
- Aaron R Seitz
- Department of Psychology, Northeastern University, Boston, MA, USA
- Jacob R Gardner
- Department of Computer and Information Science, University of Pennsylvania, Philadelphia, PA, USA
- Dennis L Barbour
- Department of Biomedical Engineering, Washington University, St. Louis, MO, USA
6
Zhao Y, Liu J, Dosher BA, Lu ZL. Estimating the Trial-by-Trial Learning Curve in Perceptual Learning with Hierarchical Bayesian Modeling. Research Square [Preprint] 2023:rs.3.rs-3649060. PMID: 38045291; PMCID: PMC10690334; DOI: 10.21203/rs.3.rs-3649060/v1.
Abstract
The learning curve serves as a crucial metric for assessing human performance in perceptual learning. It may encompass various component processes, including general learning, between-session forgetting or consolidation, and within-session rapid relearning and adaptation or deterioration. Typically, empirical learning curves are constructed by aggregating tens or hundreds of trials of data in blocks or sessions. Here, we devised three inference procedures for estimating the trial-by-trial learning curve based on the multi-component functional form identified in Zhao et al. (submitted): general learning, between-session forgetting, and within-session rapid relearning and adaptation. These procedures include a Bayesian inference procedure (BIP), which estimates the posterior distribution of parameters for each learner independently, and two hierarchical Bayesian models (HBMv and HBMc), which compute the joint posterior distribution of parameters and hyperparameters at the population, subject, and test levels. The HBMv and HBMc incorporate variance and covariance hyperparameters, respectively, between and within subjects. We applied these procedures to data from two studies investigating the interaction between feedback and training accuracy in Gabor orientation identification across about 2000 trials spanning six sessions (Liu et al., 2010, 2012) and estimated the trial-by-trial learning curves at both the subject and population levels. The HBMc generated the best fits to the data and the smallest half-widths of the 68.2% credible intervals of the learning curves, compared to the BIP and HBMv. The parametric HBMc with the multi-component functional form provides a general framework for trial-by-trial analysis of the component processes in perceptual learning and for predicting the learning curve at unmeasured time points.
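One plausible composition of the named components can be sketched as follows. This is a hypothetical parameterization for illustration only; the exact multi-component functional form is specified in the cited submitted work, not reproduced here:

```python
import numpy as np

def learning_curve(n_sessions=6, trials_per_session=320,
                   a0=2.0, a_inf=0.8, tau=800.0,
                   forget=0.25, tau_fast=30.0):
    """Trial-by-trial threshold built from three components:
    a slow exponential decay (general learning), an upward jump at
    each session start (between-session forgetting), and a fast
    within-session decay of that jump (rapid relearning)."""
    thresholds = []
    t = 0
    for s in range(n_sessions):
        jump = forget if s > 0 else 0.0
        for i in range(trials_per_session):
            slow = a_inf + (a0 - a_inf) * np.exp(-t / tau)
            fast = jump * np.exp(-i / tau_fast)   # relearned quickly
            thresholds.append(slow + fast)
            t += 1
    return np.array(thresholds)

curve = learning_curve()   # 6 sessions x 320 trials = 1920 thresholds
```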
7
Lu ZL, Zhao Y, Lesmes LA, Dorr M. Quantification of expected information gain in visual acuity and contrast sensitivity tests. Sci Rep 2023;13:16795. PMID: 37798305; PMCID: PMC10556053; DOI: 10.1038/s41598-023-43913-1.
Abstract
We make use of expected information gain to quantify the amount of knowledge obtained from measurements in a population. In the first application, we compared the expected information gain in the Snellen, ETDRS, and qVA visual acuity (VA) tests, as well as in the Pelli-Robson, CSV-1000, and qCSF contrast sensitivity (CS) tests. For the VA tests, ETDRS generated more expected information gain than Snellen. Additionally, the qVA test with 15 rows (or 45 optotypes) generated more expected information gain than ETDRS, whether scored with VA threshold alone or with both VA threshold and VA range. Regarding the CS tests, CSV-1000 generated more expected information gain than Pelli-Robson, and the qCSF test with 25 trials generated more expected information gain than CSV-1000, whether scored with AULCSF or with CSF at six spatial frequencies. The active learning-based qVA and qCSF tests have the potential to generate more expected information gain than traditional paper chart tests. Although we have specifically applied it to compare VA and CS tests, expected information gain is a general concept that can be used to compare measurements in any domain.
Affiliation(s)
- Zhong-Lin Lu
- Division of Arts and Sciences, NYU Shanghai, Shanghai, China.
- Center for Neural Science and Department of Psychology, New York University, 4 Washington Place, New York, NY, 10003, USA.
- NYU-ECNU Institute of Brain and Cognitive Neuroscience at NYU Shanghai, Shanghai, China.
- Yukai Zhao
- Center for Neural Science, New York University, New York, USA
- Michael Dorr
- Adaptive Sensory Technology Inc., San Diego, CA, USA
8
Lu ZL, Zhao Y, Lesmes LA, Dorr M. Quantification of Expected Information Gain in Visual Acuity and Contrast Sensitivity Tests. Research Square [Preprint] 2023:rs.3.rs-3031340. PMID: 37333239; PMCID: PMC10275059; DOI: 10.21203/rs.3.rs-3031340/v1.
Abstract
We introduce expected information gain to quantify measurements and apply it to compare visual acuity (VA) and contrast sensitivity (CS) tests. We simulated observers with parameters covered by the visual acuity and contrast sensitivity tests, and observers based on distributions of normal observers tested in three luminance and four Bangerter foil conditions. We first generated the probability distributions of test scores for each individual in each population in the Snellen, ETDRS, and qVA visual acuity tests and the Pelli-Robson, CSV-1000, and qCSF contrast sensitivity tests, and constructed the probability distributions of all possible test scores of the entire population. We then computed expected information gain by subtracting expected residual entropy from the total entropy of the population. For acuity tests, ETDRS generated more expected information gain than Snellen; scored with VA threshold only or with both VA threshold and VA range, qVA with 15 rows (or 45 optotypes) generated more expected information gain than ETDRS. For contrast sensitivity tests, CSV-1000 generated more expected information gain than Pelli-Robson; scored with AULCSF or with CS at six spatial frequencies, qCSF with 25 trials generated more expected information gain than CSV-1000. The active learning-based qVA and qCSF tests can generate more expected information gain than the traditional paper chart tests. Although we only applied it to compare visual acuity and contrast sensitivity tests, expected information gain is a general concept that can be used to compare measurements and data analytics in any domain.
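The entropy bookkeeping described here can be sketched directly: expected information gain is the entropy of the pooled population score distribution minus the mean per-observer (residual) entropy. A minimal sketch with toy score distributions (the numbers are hypothetical, not from the study):

```python
import numpy as np

def entropy(p):
    """Shannon entropy in bits of a discrete distribution."""
    p = np.asarray(p, dtype=float)
    p = p[p > 0]
    return float(-np.sum(p * np.log2(p)))

def expected_information_gain(score_dists):
    """Total entropy of the pooled population score distribution
    minus the expected residual (per-observer) entropy.
    score_dists: one row per simulated observer; columns are the
    probabilities of each possible test score."""
    score_dists = np.asarray(score_dists, dtype=float)
    total = entropy(score_dists.mean(axis=0))        # population mixture
    residual = np.mean([entropy(row) for row in score_dists])
    return total - residual

# Toy comparison: a noisy test (broad per-observer distributions)
# vs. a precise one that separates the two observers cleanly.
noisy = [[0.4, 0.4, 0.2, 0.0],
         [0.0, 0.2, 0.4, 0.4]]
precise = [[0.9, 0.1, 0.0, 0.0],
           [0.0, 0.0, 0.1, 0.9]]
```

The precise test yields higher expected information gain because each observer's scores are concentrated while the population's pooled scores remain spread out.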
Affiliation(s)
- Zhong-Lin Lu
- Division of Arts and Sciences, NYU Shanghai, Shanghai, China
- Center for Neural Science and Department of Psychology, New York University, New York, USA
- NYU-ECNU Institute of Brain and Cognitive Neuroscience, Shanghai, China
- Yukai Zhao
- Center for Neural Science, New York University, New York, USA
- Michael Dorr
- Adaptive Sensory Technology Inc., San Diego, CA, USA
9
Zhao Y, Lesmes LA, Dorr M, Lu ZL. Collective endpoint of visual acuity and contrast sensitivity function from hierarchical Bayesian joint modeling. J Vis 2023;23:13. PMID: 37378989; DOI: 10.1167/jov.23.6.13.
Abstract
Clinical trials typically analyze multiple endpoints for signals of efficacy. To improve signal detection for treatment effects using the high-dimensional data collected in trials, we developed a hierarchical Bayesian joint model (HBJM) to compute a five-dimensional collective endpoint (CE5D) of the contrast sensitivity function (CSF) and visual acuity (VA). The HBJM analyzes row-by-row CSF and VA data across multiple conditions and describes visual functions across a hierarchy of population, individuals, and tests. It generates joint posterior distributions of CE5D, which combines CSF (peak gain, peak frequency, and bandwidth) and VA (threshold and range) parameters. The HBJM was applied to an existing dataset of 14 eyes, each tested with the quantitative VA and quantitative CSF procedures in four Bangerter foil conditions. The HBJM recovered strong correlations among CE5D components at all levels. With 15 qVA and 25 qCSF rows, it reduced the variance of the estimated components by 72% on average. By combining signals from VA and CSF and reducing noise, CE5D exhibited significantly higher sensitivity and accuracy in discriminating performance differences between foil conditions at both the group and test levels than the original tests. The HBJM extracts valuable information about the covariance of CSF and VA parameters, improves the precision of the estimated parameters, and increases the statistical power for detecting vision changes. By combining signals and reducing noise from multiple tests, the HBJM framework has the potential to increase statistical power when combining multi-modality data in ophthalmic trials.
Affiliation(s)
- Yukai Zhao
- Center for Neural Science, New York University, New York, NY, USA
- Michael Dorr
- Adaptive Sensory Technology Inc., San Diego, CA, USA
- Zhong-Lin Lu
- Division of Arts and Sciences, NYU Shanghai, Shanghai, China
- Center for Neural Science and Department of Psychology, New York University, New York, NY, USA
- NYU-ECNU Institute of Brain and Cognitive Neuroscience, Shanghai, China
10
Lu ZL, Dosher BA. Hierarchical Bayesian perceptual template modeling of mechanisms of spatial attention in central and peripheral cuing. J Vis 2023;23:12. PMID: 36826825; PMCID: PMC9973531; DOI: 10.1167/jov.23.2.12.
Abstract
The external noise paradigm and perceptual template model (PTM) have successfully been applied to characterize observer properties and mechanisms of observer state changes (e.g., attention and perceptual learning) in several research domains, focusing on individual-level analysis. In this study, we developed a new hierarchical Bayesian perceptual template model (HBPTM) to model the trial-by-trial data from all individuals and conditions in a published spatial cuing study within a single structure and compared its performance to that of a Bayesian inference procedure (BIP), which separately infers the posterior distributions of the model parameters for each individual subject without the hierarchical structure. The HBPTM allowed us to compute the joint posterior distribution of the hyperparameters and parameters at the population, observer, and experiment levels and to make statistical inferences at all these levels. In addition, we ran a large simulation study that varied the number of observers and the number of trials in each condition and demonstrated the advantage of the HBPTM over the BIP across all the simulated datasets. Although it was developed in the context of spatial attention, the HBPTM and its extensions can be used to model data from the external noise paradigm in other domains and enable predictions of human performance at both the population and individual levels.
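The PTM referenced above relates discriminability to contrast and external noise. A sketch using one common statement of the model (versions in the literature differ in how multiplicative noise enters, and the parameter values below are illustrative, not fitted):

```python
import numpy as np

def ptm_dprime(c, n_ext, beta=1.2, gamma=2.0, n_mul=0.3, n_add=0.01):
    """Discriminability under one common PTM form: the
    template-filtered signal (beta*c)**gamma divided by the root sum
    of external, multiplicative-internal, and additive-internal
    noise variances."""
    c = np.asarray(c, dtype=float)
    signal = (beta * c) ** gamma
    variance = n_ext ** (2 * gamma) + n_mul ** 2 * signal ** 2 + n_add ** 2
    return signal / np.sqrt(variance)

# Signature PTM behavior: d-prime grows with contrast and falls as
# external noise grows, producing threshold-vs-noise (TvC) curves.
low_noise = float(ptm_dprime(0.2, 0.1))
high_noise = float(ptm_dprime(0.2, 0.5))
```

Attention mechanisms in the cuing study are then expressed as changes to these parameters (e.g., external noise exclusion or internal noise reduction).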
Affiliation(s)
- Zhong-Lin Lu
- Division of Arts and Sciences, NYU Shanghai, Shanghai, China
- Center for Neural Science and Department of Psychology, New York University, New York, NY, USA
- NYU-ECNU Institute of Brain and Cognitive Science, Shanghai, China
- Barbara Anne Dosher
- Department of Cognitive Sciences, University of California, Irvine, CA, USA