1
Madison A, Callahan-Flintoft C, Thurman SM, Hoffing RAC, Touryan J, Ries AJ. Fixation-related potentials during a virtual navigation task: The influence of image statistics on early cortical processing. Atten Percept Psychophys 2025. [PMID: 39849263] [DOI: 10.3758/s13414-024-03002-5] [Accepted: 12/04/2024] [Indexed: 01/25/2025]
Abstract
Historically, electrophysiological correlates of scene processing have been studied in experiments using static stimuli presented for discrete durations while participants maintain a fixed eye position. Gaps remain in generalizing these findings to real-world conditions, where eye movements are made to select new visual information and where the environment remains stable but changes with our position and orientation in space, driving dynamic visual stimulation. Co-recording eye movements and electroencephalography (EEG) makes it possible to use fixations as time-locking events in the EEG recording under free-viewing conditions, yielding fixation-related potentials (FRPs): neural snapshots with which to study visual processing under naturalistic conditions. The current experiment explored the influence of low-level image statistics, specifically luminance and a metric of spatial frequency (the slope of the amplitude spectrum), on the early visual components evoked by fixation onsets in a free-viewing visual search and navigation task in a virtual environment. This research combines FRPs with an optimized approach to removing ocular artifacts and with deconvolution modeling to correct for the overlapping neural activity inherent in any free-viewing paradigm. The results suggest that the early visual components of the FRP, namely the lambda response and N1, are sensitive to luminance and spatial frequency around fixation, separate from modulation due to underlying differences in eye-movement characteristics. Together, our results demonstrate the utility of studying the influence of image statistics on FRPs using a deconvolution modeling approach to control for overlapping neural activity and oculomotor covariates.
Affiliation(s)
- Anna Madison
- U.S. DEVCOM Army Research Laboratory, Humans in Complex Systems, Aberdeen Proving Ground, MD, USA
- Warfighter Effectiveness Research Center, Department of Behavioral Sciences & Leadership, 2354 Fairchild Drive, Suite 6, U.S. Air Force Academy, CO, 80840, USA
- Chloe Callahan-Flintoft
- Warfighter Effectiveness Research Center, Department of Behavioral Sciences & Leadership, 2354 Fairchild Drive, Suite 6, U.S. Air Force Academy, CO, 80840, USA
- Steven M Thurman
- U.S. DEVCOM Army Research Laboratory, Humans in Complex Systems, Aberdeen Proving Ground, MD, USA
- Russell A Cohen Hoffing
- U.S. DEVCOM Army Research Laboratory, Humans in Complex Systems, Aberdeen Proving Ground, MD, USA
- Jonathan Touryan
- U.S. DEVCOM Army Research Laboratory, Humans in Complex Systems, Aberdeen Proving Ground, MD, USA
- Anthony J Ries
- U.S. DEVCOM Army Research Laboratory, Humans in Complex Systems, Aberdeen Proving Ground, MD, USA.
- Warfighter Effectiveness Research Center, Department of Behavioral Sciences & Leadership, 2354 Fairchild Drive, Suite 6, U.S. Air Force Academy, CO, 80840, USA.
2
Silvestri F, Odisho N, Kumar A, Grigoriadis A. Examining gaze behavior in undergraduate students and educators during the evaluation of tooth preparation: an eye-tracking study. BMC Med Educ 2024; 24:1030. [PMID: 39300488] [DOI: 10.1186/s12909-024-06019-4] [Received: 04/26/2024] [Accepted: 09/12/2024] [Indexed: 09/22/2024]
Abstract
BACKGROUND Gaze behavior can serve as an objective tool in undergraduate pre-clinical dental education, helping to identify key areas of interest and common pitfalls in the routine evaluation of tooth preparations. This study therefore investigated the gaze behavior of undergraduate dental students and dental educators while they evaluated a single-crown tooth preparation. METHODS Thirty-five participants volunteered for the study and were divided into a novice group (dental students, n = 18) and an expert group (dental educators, n = 17). Each participant wore a binocular eye-tracking device, and the total duration of fixation was used as the metric of gaze behavior. Sixty photographs of twenty different tooth preparations in three views (buccal, lingual, and occlusal) were prepared and displayed during the experimental session. Participants rated the tooth preparations on a 100-mm visual analog scale and judged whether each preparation was ready for an impression. Each view was divided into different areas of interest. Statistical analysis was performed with a three-way repeated-measures analysis of variance. RESULTS Based on the participants' mean ratings, the "best" and "worst" tooth preparations were selected for analysis. Novices took significantly longer to reach a decision than experts (P = 0.003), and both groups took significantly longer for the best tooth preparation than for the worst (P = 0.002). The total duration of fixations was also significantly longer on the margin than on all other areas of interest in both the buccal (P < 0.012) and lingual (P < 0.001) views. CONCLUSIONS The study showed distinct differences in gaze behavior between novices and experts during the evaluation of a single-crown tooth preparation. Understanding these differences could help improve tooth preparation skills and support constructive, customized feedback.
Affiliation(s)
- Frédéric Silvestri
- Department of Prosthodontics, School of Dental Medicine, ADES, CNRS, Aix-Marseille University, EFS, Marseille, France
- Division of Oral Rehabilitation, Department of Dental Medicine, Karolinska Institutet, Huddinge, Sweden
- Nabil Odisho
- Division of Oral Rehabilitation, Department of Dental Medicine, Karolinska Institutet, Huddinge, Sweden
- Abhishek Kumar
- Division of Oral Rehabilitation, Department of Dental Medicine, Karolinska Institutet, Alfred Nobels Allé 8, Box 4064, 141 04, Huddinge, Sweden.
- Academic Center for Geriatric Dentistry, Stockholm, Sweden.
- Anastasios Grigoriadis
- Division of Oral Rehabilitation, Department of Dental Medicine, Karolinska Institutet, Huddinge, Sweden
3
Aivar MP, Li CL, Tong MH, Kit DM, Hayhoe MM. Knowing where to go: Spatial memory guides eye and body movements in a naturalistic visual search task. J Vis 2024; 24:1. [PMID: 39226069] [PMCID: PMC11373708] [DOI: 10.1167/jov.24.9.1] [Indexed: 09/04/2024]
Abstract
Most research on visual search has used simple tasks presented on a computer screen. However, in natural situations visual search almost always involves eye, head, and body movements in a three-dimensional (3D) environment. The different constraints imposed by these two types of search task might explain some of the discrepancies in our understanding of the use of memory resources and of the role of contextual objects during search. To explore this issue, we analyzed a visual search task performed in an immersive virtual reality apartment. Participants searched for a series of geometric 3D objects while eye movements and head coordinates were recorded. Participants explored the apartment to locate target objects whose location and visibility were manipulated. For objects with reliable locations, repeated searches led to a decrease in search time and number of fixations and to a reduction in errors. Searching for objects that had been visible in previous trials but were tested only at the end of the experiment was also easier than finding objects for the first time, indicating incidental learning of context. More importantly, body movements showed changes that reflected memory for target location: trajectories were shorter and movement velocities higher, but only for objects that had been searched for multiple times. We conclude that memory of 3D space and target location is a critical component of visual search that also modifies movement kinematics. In natural search, memory is used to optimize movement control and reduce energetic costs.
Affiliation(s)
- M Pilar Aivar
- Facultad de Psicología, Universidad Autónoma de Madrid, Madrid, Spain
- https://www.psicologiauam.es/aivar/
- Chia-Ling Li
- Institute of Neuroscience, The University of Texas at Austin, Austin, TX, USA
- Present address: Apple Inc., Cupertino, California, USA
- Matthew H Tong
- Center for Perceptual Systems, The University of Texas at Austin, Austin, TX, USA
- Present address: IBM Research, Cambridge, Massachusetts, USA
- Dmitry M Kit
- Center for Perceptual Systems, The University of Texas at Austin, Austin, TX, USA
- Present address: F5, Boston, Massachusetts, USA
- Mary M Hayhoe
- Center for Perceptual Systems, The University of Texas at Austin, Austin, TX, USA
4
Gordon SM, Dalangin B, Touryan J. Saccade size predicts onset time of object processing during visual search of an open world virtual environment. Neuroimage 2024; 298:120781. [PMID: 39127183] [DOI: 10.1016/j.neuroimage.2024.120781] [Received: 02/16/2024] [Revised: 08/02/2024] [Accepted: 08/08/2024] [Indexed: 08/12/2024]
Abstract
OBJECTIVE To date, the vast majority of research in the visual neurosciences has been forced to adopt a highly constrained perspective of the visual system in which stimuli are processed in an open-loop, reactive fashion (i.e., abrupt stimulus presentation followed by an evoked neural response). While such constraints enable high construct validity for neuroscientific investigation, the primary outcome has been a reductionistic approach to isolating the component processes of visual perception. In electrophysiology, of the many neural processes studied under this rubric, the most well known is, arguably, the P300 evoked response. Relatively little is known, however, about the real-world corollary of this component in free-viewing paradigms, where visual stimuli are connected to neural function in a closed loop. While growing evidence suggests that neural activity analogous to the P300 does occur in such paradigms, it remains an open question when this response occurs and which behavioral or environmental factors could be used to isolate it. APPROACH The current work uses convolutional networks to decode neural signals during a free-viewing visual search task in a closed-loop paradigm within an open-world virtual environment. From the decoded activity we construct fixation-locked response profiles that enable estimation of the variable latency of any P300 analogue around the moment of fixation. We then use these estimates to investigate which factors best reduce variable latency and thus predict the onset time of the response. We consider measurable, search-related factors encompassing top-down (i.e., goal-driven) and bottom-up (i.e., stimulus-driven) processes, such as fixation duration and salience. We also consider saccade size as an intermediate factor reflecting the integration of these two systems. MAIN RESULTS The results show that, of these factors, only saccade size reliably determines the onset time of P300-analogous activity for this task. Specifically, we find that for large saccades the variability in response onset is small enough to enable analysis using traditional ensemble-averaging methods. SIGNIFICANCE The results show that P300-analogous activity does occur during closed-loop, free-viewing visual search, while highlighting distinct differences between the open-loop version of this response and its real-world analogue. The results also further establish saccades, and saccade size, as a key factor in real-world visual processing.
Affiliation(s)
- Jonathan Touryan
- DEVCOM Army Research Laboratory, Aberdeen Proving Ground, MD, USA
5
Malpica S, Martin D, Serrano A, Gutierrez D, Masia B. Task-Dependent Visual Behavior in Immersive Environments: A Comparative Study of Free Exploration, Memory and Visual Search. IEEE Trans Vis Comput Graph 2023; 29:4417-4425. [PMID: 37788210] [DOI: 10.1109/tvcg.2023.3320259] [Indexed: 10/05/2023]
Abstract
Visual behavior depends on both bottom-up mechanisms, in which gaze is driven by the visual conspicuity of the stimuli, and top-down mechanisms, which guide attention towards relevant areas based on the task or goal of the viewer. Although this is well known, visual attention models often focus on bottom-up mechanisms. Existing works have analyzed the effect of high-level cognitive tasks such as memory or visual search on visual behavior; however, they have often done so with different stimuli, methodologies, metrics and participants, which makes drawing conclusions and comparisons between tasks particularly difficult. In this work we present a systematic study of how different cognitive tasks affect visual behavior, using a novel within-subjects design. Participants performed free exploration, memory and visual search tasks in three different scenes while their eye and head movements were recorded. We found significant, consistent differences between tasks in the distributions of fixations, saccades and head movements. Our findings can provide insights for practitioners and content creators designing task-oriented immersive applications.
6
Segraves MA. Using Natural Scenes to Enhance our Understanding of the Cerebral Cortex's Role in Visual Search. Annu Rev Vis Sci 2023; 9:435-454. [PMID: 37164028] [DOI: 10.1146/annurev-vision-100720-124033] [Indexed: 05/12/2023]
Abstract
Using natural scenes is an approach to studying the visual and eye movement systems that approximates how these systems function in everyday life. This review examines results from behavioral and neurophysiological studies using natural scene viewing in humans and monkeys. The use of natural scenes for the study of cerebral cortical activity is relatively new and presents challenges for data analysis. Methods and results from the use of natural scenes for the study of the visual and eye movement cortex are presented, with emphasis on the new insights this method provides beyond what is known about these cortical regions from conventional methods.
Collapse
Affiliation(s)
- Mark A Segraves
- Department of Neurobiology, Northwestern University, Evanston, Illinois, USA
7
Cheng B, Lin E, Wunderlich A, Gramann K, Fabrikant SI. Using spontaneous eye blink-related brain activity to investigate cognitive load during mobile map-assisted navigation. Front Neurosci 2023; 17:1024583. [PMID: 36866330] [PMCID: PMC9971562] [DOI: 10.3389/fnins.2023.1024583] [Received: 08/21/2022] [Accepted: 01/26/2023] [Indexed: 02/16/2023]
Abstract
The continuous assessment of pedestrians' cognitive load during a naturalistic mobile map-assisted navigation task is challenging because of limited experimental control over stimulus presentation, human-map interactions, and other participant responses. To overcome this challenge, the present study takes advantage of navigators' spontaneous eye blinks during navigation to serve as event markers in continuously recorded electroencephalography (EEG) data, allowing cognitive load to be assessed in a mobile map-assisted navigation task. We examined if and how displaying different numbers of landmarks (3 vs. 5 vs. 7) on mobile maps along a given route would influence navigators' cognitive load during navigation in virtual urban environments. Cognitive load was assessed by the peak amplitudes of the blink-related fronto-central N2 and parieto-occipital P3. Our results show increased parieto-occipital P3 amplitude, indicating higher cognitive load, in the 7-landmark condition compared to the 3- and 5-landmark conditions. Our prior research already demonstrated that participants acquire more spatial knowledge in the 5- and 7-landmark conditions than in the 3-landmark condition. Together with the current study, this indicates that showing 5 landmarks, compared to 3 or 7, improved spatial learning without overtaxing cognitive load during navigation in different urban environments. Our findings also indicate a possible cognitive load spillover effect during map-assisted wayfinding, whereby cognitive load during map viewing might have affected cognitive load during goal-directed locomotion in the environment, or vice versa. Our research demonstrates that users' cognitive load and spatial learning should be considered together when designing the displays of future navigation aids, and that navigators' eye blinks can serve as useful event markers for parsing continuous human brain dynamics reflecting cognitive load in naturalistic settings.
Affiliation(s)
- Bingjie Cheng
- Department of Geography and Digital Society Initiative, University of Zurich, Zurich, Switzerland
- Enru Lin
- Department of Geography and Digital Society Initiative, University of Zurich, Zurich, Switzerland
- Anna Wunderlich
- Department of Biopsychology and Neuroergonomics, Technical University of Berlin, Berlin, Germany
- Klaus Gramann
- Department of Biopsychology and Neuroergonomics, Technical University of Berlin, Berlin, Germany
- Sara I. Fabrikant
- Department of Geography and Digital Society Initiative, University of Zurich, Zurich, Switzerland
8
Helbing J, Draschkow D, Võ ML-H. Auxiliary Scene-Context Information Provided by Anchor Objects Guides Attention and Locomotion in Natural Search Behavior. Psychol Sci 2022; 33:1463-1476. [PMID: 35942922] [DOI: 10.1177/09567976221091838] [Indexed: 11/15/2022]
Abstract
Successful adaptive behavior requires efficient attentional and locomotive systems. Previous research has thoroughly investigated how we achieve this efficiency during natural behavior by exploiting prior knowledge related to targets of our actions (e.g., attending to metallic targets when looking for a pot) and to the environmental context (e.g., looking for the pot in the kitchen). Less is known about whether and how individual nontarget components of the environment support natural behavior. In our immersive virtual reality task, 24 adult participants searched for objects in naturalistic scenes in which we manipulated the presence and arrangement of large, static objects that anchor predictions about targets (e.g., the sink provides a prediction for the location of the soap). Our results show that gaze and body movements in this naturalistic setting are strongly guided by these anchors. These findings demonstrate that objects auxiliary to the target are incorporated into the representations guiding attention and locomotion.
Affiliation(s)
- Jason Helbing
- Scene Grammar Lab, Department of Psychology, Goethe University Frankfurt
- Dejan Draschkow
- Brain and Cognition Laboratory, Department of Experimental Psychology, University of Oxford
- Oxford Centre for Human Brain Activity, Wellcome Centre for Integrative Neuroimaging, Department of Psychiatry, University of Oxford
- Melissa L-H Võ
- Scene Grammar Lab, Department of Psychology, Goethe University Frankfurt
9
Ktistakis E, Skaramagkas V, Manousos D, Tachos NS, Tripoliti E, Fotiadis DI, Tsiknakis M. COLET: A dataset for COgnitive workLoad estimation based on eye-tracking. Comput Methods Programs Biomed 2022; 224:106989. [PMID: 35870415] [DOI: 10.1016/j.cmpb.2022.106989] [Received: 04/14/2022] [Revised: 06/02/2022] [Accepted: 06/28/2022] [Indexed: 06/15/2023]
Abstract
BACKGROUND AND OBJECTIVE Cognitive workload is an important component in performance psychology, ergonomics, and human factors. Publicly available datasets are scarce, making it difficult to establish new approaches and comparative studies. In this work, COLET, a dataset for COgnitive workLoad estimation based on Eye-Tracking, is presented. METHODS The eye movements of forty-seven (47) individuals were monitored as they solved puzzles involving visual search activities of varying complexity and duration. Each participant's cognitive workload level was evaluated with the subjective NASA-TLX test, and this score was used as the annotation for the activity. Extensive data analysis was performed to derive eye and gaze features from the low-level recorded eye metrics, and a range of machine learning models were evaluated and tested for estimating cognitive workload level. RESULTS The activities induced four different levels of cognitive workload. Multitasking and time pressure induced higher cognitive workload than single tasking and the absence of time pressure. Multitasking had a significant effect on 17 eye features, while time pressure had a significant effect on 7 eye features. Both binary and multi-class identification attempts were performed with a variety of well-known classifiers, yielding encouraging results for cognitive workload level estimation, with up to 88% correct predictions between low and high cognitive workload. CONCLUSIONS Machine learning analysis demonstrated potential for discriminating cognitive workload levels using only eye-tracking characteristics. The proposed dataset includes a much larger sample size and a wider spectrum of eye and gaze metrics than other similar datasets, allowing examination of their relations with various cognitive states.
Affiliation(s)
- Emmanouil Ktistakis
- Institute of Computer Science, Foundation for Research and Technology Hellas (FORTH), GR-700 13 Heraklion, Greece; Laboratory of Optics and Vision, School of Medicine, University of Crete, GR-710 03 Heraklion, Greece.
- Vasileios Skaramagkas
- Institute of Computer Science, Foundation for Research and Technology Hellas (FORTH), GR-700 13 Heraklion, Greece; Dept. of Electrical and Computer Engineering, Hellenic Mediterranean University, GR-710 04 Heraklion, Crete, Greece
- Dimitris Manousos
- Institute of Computer Science, Foundation for Research and Technology Hellas (FORTH), GR-700 13 Heraklion, Greece
- Nikolaos S Tachos
- Biomedical Research Institute, FORTH, GR-451 10, Ioannina, Greece and the Dept. of Materials Science and Engineering, Unit of Medical Technology and Intelligent Information Systems, University of Ioannina, GR-451 10, Ioannina, Greece
- Evanthia Tripoliti
- Dept. of Materials Science and Engineering, Unit of Medical Technology and Intelligent Information Systems, University of Ioannina, GR-451 10, Ioannina, Greece
- Dimitrios I Fotiadis
- Biomedical Research Institute, FORTH, GR-451 10, Ioannina, Greece and the Dept. of Materials Science and Engineering, Unit of Medical Technology and Intelligent Information Systems, University of Ioannina, GR-451 10, Ioannina, Greece
- Manolis Tsiknakis
- Institute of Computer Science, Foundation for Research and Technology Hellas (FORTH), GR-700 13 Heraklion, Greece; Dept. of Electrical and Computer Engineering, Hellenic Mediterranean University, GR-710 04 Heraklion, Crete, Greece
10
Thurman SM, Cohen Hoffing RA, Madison A, Ries AJ, Gordon SM, Touryan J. "Blue Sky Effect": Contextual Influences on Pupil Size During Naturalistic Visual Search. Front Psychol 2022; 12:748539. [PMID: 34992563] [PMCID: PMC8725886] [DOI: 10.3389/fpsyg.2021.748539] [Received: 07/29/2021] [Accepted: 11/16/2021] [Indexed: 01/28/2023]
Abstract
Pupil size is influenced by cognitive and non-cognitive factors. One of the strongest modulators of pupil size is scene luminance, which complicates studies of cognitive pupillometry in environments with complex patterns of visual stimulation. To help understand how dynamic visual scene statistics influence pupil size during an active visual search task in a visually rich 3D virtual environment (VE), we analyzed the correlation between pupil size and intensity changes of image pixels in the red, green, and blue (RGB) channels within a large window (~14 degrees) surrounding the gaze position over time. Overall, the blue and green channels had a stronger influence on pupil size than the red channel. The correlation maps were not consistent with the hypothesis of a foveal bias for luminance, instead revealing a significant contextual effect whereby pixels above the gaze point in the green/blue channels had a disproportionate impact on pupil size. We termed this differential sensitivity of the pupil to blue light from above the "blue sky effect," and confirmed the finding in a follow-on experiment with a controlled laboratory task. Pupillary constrictions were significantly stronger when blue was presented above fixation (paired with luminance-matched gray below) than when it was presented below fixation. This effect was specific to the blue color channel and this stimulus orientation. These results highlight the differential sensitivity of pupillary responses to scene statistics, relevant for studies or applications that involve complex visual environments, and suggest blue light as a predominant factor influencing pupil size.
Affiliation(s)
- Steven M Thurman
- US DEVCOM Army Research Laboratory, Human Research and Engineering Directorate, Aberdeen Proving Ground, MD, United States
- Russell A Cohen Hoffing
- US DEVCOM Army Research Laboratory, Human Research and Engineering Directorate, Aberdeen Proving Ground, MD, United States
- Anna Madison
- US DEVCOM Army Research Laboratory, Human Research and Engineering Directorate, Aberdeen Proving Ground, MD, United States
- Anthony J Ries
- US DEVCOM Army Research Laboratory, Human Research and Engineering Directorate, Aberdeen Proving Ground, MD, United States
- Jonathan Touryan
- US DEVCOM Army Research Laboratory, Human Research and Engineering Directorate, Aberdeen Proving Ground, MD, United States