1
Doyon JK, Hwang AD, Jung JH. Understanding viewpoint changes in peripheral prisms for field expansion by virtual reality simulation. Biomed Opt Express 2024; 15:1393-1407. [PMID: 38495729; PMCID: PMC10942672; DOI: 10.1364/boe.513758]
Abstract
Prism field expansion is a common treatment for patients with peripheral field loss, shifting images from the blind field into the seeing field. The shifted image originates from a new viewpoint, translated and rotated from the original viewpoint by the prism. To understand such viewpoint changes, we simulated two field expansion methods in virtual reality: 1) angular (i.e., rotational) field expansion and 2) linear field expansion via image crop-and-shift. We demonstrated and analyzed the changes to object locations, sizes, and optic flow patterns introduced by these methods in both static and dynamic conditions; such changes may affect navigation with these field expansion devices.
Affiliation(s)
- Jonathan K. Doyon
- Schepens Eye Research Institute of Massachusetts Eye and Ear, Department of Ophthalmology, Harvard Medical School, 20 Staniford St, Boston, MA 02114, USA
- Alex D. Hwang
- Schepens Eye Research Institute of Massachusetts Eye and Ear, Department of Ophthalmology, Harvard Medical School, 20 Staniford St, Boston, MA 02114, USA
- Jae-Hyun Jung
- Schepens Eye Research Institute of Massachusetts Eye and Ear, Department of Ophthalmology, Harvard Medical School, 20 Staniford St, Boston, MA 02114, USA
2
Hwang AD, Jung J, Bowers A, Peli E. Egocentric Boundaries on Distinguishing Colliding and Non-Colliding Pedestrians while Walking in a Virtual Environment. IS&T Int Symp Electron Imaging 2024; 36:2141-2148. [PMID: 38390289; PMCID: PMC10883473; DOI: 10.2352/ei.2024.36.11.hvei-214]
Abstract
Avoiding person-to-person collisions is critical for patients with visual field loss, and any intervention claiming to improve the safety of such patients should empirically demonstrate its efficacy. To design a VR mobility testing platform presenting multiple pedestrians, a distinction between colliding and non-colliding pedestrians must be clearly defined. We measured nine normally sighted subjects' collision envelopes (CE; an egocentric boundary distinguishing collision from non-collision) and found that the CE changes with the approaching pedestrian's bearing angle and speed. When generating person-to-person collision events for the VR mobility testing platform, non-colliding pedestrians should not invade the CE.
Affiliation(s)
- Alex D Hwang
- Schepens Eye Research Institute of Massachusetts Eye and Ear, Boston, Massachusetts, USA; Harvard Medical School, Cambridge, Massachusetts, USA
- Jaehyun Jung
- Schepens Eye Research Institute of Massachusetts Eye and Ear, Boston, Massachusetts, USA; Harvard Medical School, Cambridge, Massachusetts, USA
- Alex Bowers
- Schepens Eye Research Institute of Massachusetts Eye and Ear, Boston, Massachusetts, USA; Harvard Medical School, Cambridge, Massachusetts, USA
- Eli Peli
- Schepens Eye Research Institute of Massachusetts Eye and Ear, Boston, Massachusetts, USA; Harvard Medical School, Cambridge, Massachusetts, USA
3
Hwang AD, Peli E, Jung JH. Development of Virtual Reality Walking Collision Detection Test on Head-Mounted Display. Proc SPIE Int Soc Opt Eng 2023; 12449:124491J. [PMID: 36970501; PMCID: PMC10037228; DOI: 10.1117/12.2647141]
Abstract
Detecting and avoiding collisions during walking is critical for safe mobility. To determine the effectiveness of clinical interventions, a realistic objective outcome measure is needed. A real-world obstacle course with moving hazards has numerous limitations (e.g., safety concerns about physical collisions, inability to control events, difficulty maintaining event consistency, and limited event randomization). Virtual reality (VR) platforms may overcome such limitations. We developed a VR walking collision detection test using a standalone head-mounted display (HMD, Meta Quest 2) with the Unity 3D engine to enable subjects' physical walking within a VR environment (i.e., a busy shopping mall). The performance measures focus on the detection and avoidance of potential collisions, where a pedestrian may (or may not) walk toward a collision with the subject while various non-colliding pedestrians are presented simultaneously. The physical space required for the system was minimized. During development, we addressed expected and unexpected hurdles, such as mismatched visual perception of VR space, the limited field of view (FOV) afforded by the HMD, design of pedestrian paths, design of the subject task, handling of the subject's response (detection or avoidance behavior), and use of mixed reality (MR) for walking path calibration. We report the initial implementation of the HMD VR walking collision detection and avoidance scenarios, which showed promising potential as clinical outcome measures.
Affiliation(s)
- Alex D Hwang
- Schepens Eye Research Institute of Massachusetts Eye and Ear, Department of Ophthalmology, Harvard Medical School, Boston, MA, USA 02114
- Eli Peli
- Schepens Eye Research Institute of Massachusetts Eye and Ear, Department of Ophthalmology, Harvard Medical School, Boston, MA, USA 02114
- Jae-Hyun Jung
- Schepens Eye Research Institute of Massachusetts Eye and Ear, Department of Ophthalmology, Harvard Medical School, Boston, MA, USA 02114
4
Abstract
The most prominent problem in virtual reality (VR) technology is that users may experience motion sickness-like symptoms when immersed in a VR environment. These symptoms are recognized as visually induced motion sickness (VIMS) or virtual reality motion sickness (VRMS). The objectives of this study were to investigate the association between the electroencephalogram (EEG) and subjectively rated VIMS level (VIMSL) and to find EEG markers for VIMS evaluation. A VR-based vehicle-driving simulator (VDS) was used to induce VIMS symptoms, and a wearable EEG device with four electrodes, the Muse, was used to collect EEG data from subjects. Our results suggest that individual tolerance, susceptibility, and recoverability to VIMS varied widely among subjects. The following markers were shown to differ significantly between no-VIMS and VIMS states (P < 0.05): (1) mean gravity frequency (GF) for theta@FP1, alpha@TP9, alpha@FP2, alpha@TP10, and beta@FP1; (2) standard deviation of GF for alpha@TP9, alpha@FP1, alpha@FP2, alpha@TP10, and alpha@(FP2-FP1); (3) standard deviation of power spectral entropy (PSE) for FP1; (4) mean Kolmogorov complexity (KC) for TP9, FP1, and FP2. These results also demonstrate that it is feasible to perform VIMS evaluation using an EEG device with a small number of electrodes.
Affiliation(s)
- Ran Liu
- College of Computer Science, Chongqing University, Chongqing, China
- Schepens Eye Research Institute, Massachusetts Eye and Ear, Department of Ophthalmology, Harvard Medical School, Boston, MA, USA
- School of Microelectronics and Communication Engineering, Chongqing University, Chongqing, China
- Miao Xu
- College of Information Science and Engineering, Shanxi Agricultural University, Taigu County, Jinzhong City, Shanxi Province, China
- Yanzhen Zhang
- School of Microelectronics and Communication Engineering, Chongqing University, Chongqing, China
- Eli Peli
- Schepens Eye Research Institute, Massachusetts Eye and Ear, Department of Ophthalmology, Harvard Medical School, Boston, MA, USA
- Alex D Hwang
- Schepens Eye Research Institute, Massachusetts Eye and Ear, Department of Ophthalmology, Harvard Medical School, Boston, MA, USA
5
Abstract
We analyze the impact of common stereoscopic 3D (S3D) depth distortion on S3D optic flow in virtual reality (VR) environments. The depth distortion is introduced by mismatches between the image acquisition and display parameters. The results show that such S3D distortions induce large S3D optic flow distortions and may even induce partial or full optic flow reversal within a certain depth range, depending on the viewer's moving speed and the magnitude of the S3D distortion introduced. We hypothesize that this S3D optic flow distortion may be a source of intra-sensory conflict that contributes to visually induced motion sickness (VIMS) in S3D.
Affiliation(s)
- Alex D Hwang
- Schepens Eye Research Institute, Massachusetts Eye and Ear, Department of Ophthalmology, Harvard Medical School, Boston, MA
- Eli Peli
- Schepens Eye Research Institute, Massachusetts Eye and Ear, Department of Ophthalmology, Harvard Medical School, Boston, MA
6
Hwang AD, Tuccar-Burak M, Peli E. Comparison of Pedestrian Detection With and Without Yellow-Lens Glasses During Simulated Night Driving With and Without Headlight Glare. JAMA Ophthalmol 2019; 137:1147-1153. [PMID: 31369054; DOI: 10.1001/jamaophthalmol.2019.2893]
Abstract
Importance Some marketing materials for yellow-lens night-driving glasses claim that they increase nighttime road visibility and reduce oncoming headlight glare (HLG). However, there is no scientific evidence to support these claims. Objective To measure the association between yellow-lens glasses and the detection of pedestrians, with and without oncoming HLG, using a driving simulator equipped with a custom HLG simulator. Design, Setting, and Participants A single-center cohort study was conducted between September 8, 2016, and October 25, 2017, at the Schepens Eye Research Institute. A total of 22 individuals participated in the study, divided into groups: younger individuals responded to a pedestrian wearing a navy blue shirt and, to control for participant age and the interaction of shirt color with the filter, a group of younger and older participants responded to a pedestrian wearing an orange shirt. Exposures Participants drove scripted night-driving scenarios, 3 times with 3 commercially available yellow-lens glasses and once with clear-lens glasses, each with the HLG simulator turned on and off, for a total of 8 conditions per participant. Main Outcomes and Measures Pedestrian detection response time. Results The 22 participants who completed the study included 12 younger (mean [SD] age, 28 [7] years; 6 men) individuals who responded to a pedestrian wearing a dark navy blue shirt, as well as 6 younger (mean [SD] age, 27 [4] years; 4 men) and 4 older (mean [SD] age, 70 [11] years; all men) participants who responded to a pedestrian in an orange shirt. All participants had normal visual acuity (mean [SD], -0.05 [0.06] logMAR). No significant difference in response time with the yellow lenses was found in any experimental condition: younger participants with dark navy blue shirt pedestrians (F1,33 = 0.59; P = .45) and orange shirt pedestrians (F1,15 = 0.13; P = .72), and older participants with orange shirt pedestrians (F1,9 = 0.84; P = .38). Among all participants (n = 22), no significant main effect of yellow lenses was found (F1,63 = 0.64; P = .42). In no condition were response times with the yellow lenses better than with the clear lenses. Significant main effects of HLG were found in the dark navy blue shirt condition for younger participants (F1,33 = 7.34; P < .001) and in the orange shirt condition for older participants (F1,9 = 75.32; P < .001); the difference in response time with vs without HLG was larger for older (1.5 seconds) than younger (0.3 seconds) participants. Conclusions and Relevance Using a driving simulator equipped with an HLG simulator, yellow-lens night-driving glasses did not appear to improve pedestrian detection at night or reduce the negative effects of HLG on pedestrian detection performance. These findings do not appear to support having eye care professionals advise patients to use yellow-lens night-driving glasses.
Affiliation(s)
- Alex D Hwang
- Schepens Eye Research Institute, Massachusetts Eye and Ear, Department of Ophthalmology, Harvard Medical School, Boston, Massachusetts
- Merve Tuccar-Burak
- Schepens Eye Research Institute, Massachusetts Eye and Ear, Department of Ophthalmology, Harvard Medical School, Boston, Massachusetts
- Eli Peli
- Schepens Eye Research Institute, Massachusetts Eye and Ear, Department of Ophthalmology, Harvard Medical School, Boston, Massachusetts
7
Manda S, Castle R, Hwang AD, Peli E. Impact of Headlight Glare on Pedestrian Detection with Unilateral Cataract. Proc Int Driv Symp Hum Factors Driv Assess Train Veh Des 2019; 2019:36-42. [PMID: 31423491; PMCID: PMC6698325]
Abstract
Detecting pedestrians while driving at night is difficult and is further impeded by oncoming headlight glare (HLG). Cataracts increase intraocular light scattering, making the task even more challenging. We used a within-subjects repeated measures design to determine the impact of HLG on driving with a unilateral cataract. Pedestrian detection performance of six young normal vision (NV) subjects was measured with clear-lens glasses and with simulated unilateral cataract (0.8 Bangerter foil) glasses. The subjects drove nighttime scenarios in a driving simulator with and without custom simulated headlight glare. With simulated unilateral cataracts, pedestrian detection rates decreased and response times increased under oncoming HLG. We verified these effects with six patients who had already undergone cataract surgery in one eye and were scheduled for cataract surgery in the other eye, measuring their performance before and after the second surgery. The results were similar to those obtained with the simulated unilateral cataract, confirming that a negative impact of HLG persists with an untreated cataract in one eye.
Affiliation(s)
- Sailaja Manda
- Schepens Eye Research Institute, Massachusetts Eye and Ear, Harvard Medical School, Boston, MA, USA
- Rachel Castle
- Schepens Eye Research Institute, Massachusetts Eye and Ear, Harvard Medical School, Boston, MA, USA
- Alex D Hwang
- Schepens Eye Research Institute, Massachusetts Eye and Ear, Harvard Medical School, Boston, MA, USA
- Eli Peli
- Schepens Eye Research Institute, Massachusetts Eye and Ear, Harvard Medical School, Boston, MA, USA
8
Hwang AD, Tuccar-Burak M, Goldstein R, Peli E. Impact of Oncoming Headlight Glare With Cataracts: A Pilot Study. Front Psychol 2018; 9:164. [PMID: 29559933; PMCID: PMC5845724; DOI: 10.3389/fpsyg.2018.00164]
Abstract
Purpose: Oncoming headlight glare (HLG) reduces the visibility of objects on the road and may affect the safety of nighttime driving. With cataracts, the impact of oncoming HLG is expected to be more severe. We used our custom HLG simulator in a driving simulator to measure the impact of HLG on pedestrian detection by normal vision subjects with simulated mild cataracts and by patients with real cataracts. Methods: Five normal vision subjects drove nighttime scenarios under two HLG conditions (with and without HLG: HLGY and HLGN, respectively) and three vision conditions (plano lens, simulated mild cataract, and optically blurred clip-on). Mild cataract was simulated by applying a 0.8 Bangerter diffusion foil to clip-on plano lenses. The visual acuity with the optically blurred lenses was individually chosen to match the visual acuity with the simulated cataract clip-ons under HLGN. Each nighttime driving scenario contained 24 pedestrian encounters, encompassing four pedestrian types: walking along the left side of the road, walking along the right side of the road, crossing the road from left to right, and crossing the road from right to left. Pedestrian detection performance of five patients with mild real cataracts was measured using the same setup; the cataract patients were tested only in the HLGY and HLGN conditions. Participants' visual acuity and contrast sensitivity were also measured in the simulator with and without stationary HLG. Results: For normal vision subjects, both the presence of oncoming HLG and wearing the simulated cataract clip-on reduced pedestrian detection performance. The subjects performed worst in events where the pedestrian crossed from the left, followed by events where the pedestrian crossed from the right. Significant interactions between the HLG condition and other factors were also found: (1) the impact of oncoming HLG with the simulated cataract clip-on was larger than with the plano lens clip-on; (2) the impact of oncoming HLG was larger with the optically blurred clip-on than with the plano lens clip-on, but smaller than with the simulated cataract clip-on; and (3) the impact was larger for pedestrians crossing from the left than from the right, and for pedestrians walking along the left side of the road than along the right side, suggesting that the pedestrian's proximity to the glare source contributed to the performance reduction. Under HLGN, almost no pedestrians were missed with the plano lens or the simulated cataract clip-on (0% and 0.5%, respectively), but under HLGY, the rate of pedestrian misses increased to 0.5% and 6%, respectively. With the optically blurred clip-on, the percentage of missed pedestrians under HLGN and HLGY did not change much (5% and 6%, respectively). Untimely response rates increased under HLGY with the plano lens and simulated cataract clip-ons, but the increase with the simulated cataract clip-on was significantly larger than with the plano lens clip-on. Contrast sensitivity with the simulated cataract clip-on was significantly degraded under HLGY. Visual acuity with the plano lens clip-on was significantly improved under HLGY, possibly due to pupil miosis. The impact of HLG measured for real cataract patients was similar to the impact on the performance of normal vision subjects with simulated cataract clip-ons. Conclusion: Even with mild (simulated or real) cataracts, a substantial negative effect of oncoming HLG was measurable in the detection of crossing and walking-along pedestrians. The lower pedestrian detection rates and longer response times under HLGY demonstrate a possible risk that oncoming HLG poses to patients driving with cataracts.
Affiliation(s)
- Alex D Hwang
- Schepens Eye Research Institute, Massachusetts Eye and Ear, Department of Ophthalmology, Harvard Medical School, Boston, MA, United States
- Merve Tuccar-Burak
- Schepens Eye Research Institute, Massachusetts Eye and Ear, Department of Ophthalmology, Harvard Medical School, Boston, MA, United States
- Robert Goldstein
- Schepens Eye Research Institute, Massachusetts Eye and Ear, Department of Ophthalmology, Harvard Medical School, Boston, MA, United States
- Eli Peli
- Schepens Eye Research Institute, Massachusetts Eye and Ear, Department of Ophthalmology, Harvard Medical School, Boston, MA, United States
9
Abstract
Apart from the well-known loss of color vision and of foveal acuity that characterizes human rod-mediated vision, it has also been thought that night vision is very slow (taking up to 40 min) to adapt to changes in light levels. Even cone-mediated, daylight vision has been thought to take 2 min to recover from light adaptation. Here, we show that most, though not all, adaptation is rapid, taking less than 0.6 s. Thus, monochrome (black-white-gray) images can be presented at mesopic light levels and be visible within a few tenths of a second, even if the overall light level, or the level of glare (as with passing headlamps while driving), changes abruptly.
Affiliation(s)
- Adam Reeves
- Department of Psychology, Northeastern University, Boston, MA, United States
- Rebecca Grayhem
- John A. Volpe National Transportation Systems Center, Cambridge, MA, United States
- Alex D. Hwang
- Schepens Eye Research Institute, Massachusetts Eye and Ear, Department of Ophthalmology, Harvard Medical School, Boston, MA, United States
10
Hwang AD, Deng H, Gao Z, Peli E. Quantifying Visually Induced Motion Sickness (VIMS) During Stereoscopic 3D Viewing Using Temporal VIMS Rating. J Imaging Sci Technol 2017. [DOI: 10.2352/j.imagingsci.technol.2017.61.6.060405]
12
Abstract
The contrast ratio (CR) has been used to describe a display's performance. However, CR is unbounded and ignores the impact of ambient illumination and the viewer's contrast perception. We propose a new metric for a display's contrast performance based on a modified Weber contrast definition that considers human contrast adaptation and applies to both opaque and see-through displays.
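The abstract does not reproduce the modified definition itself, but its motivation can be sketched numerically: the classic contrast ratio grows without bound as the display's black level approaches zero, while ambient light reflected from (or passing through) the screen adds to both luminances and compresses the contrast actually available to the viewer. The function names and the simple additive ambient term below are illustrative assumptions, not the paper's metric.

```python
def contrast_ratio(l_white, l_black):
    # Classic display contrast ratio: unbounded as l_black -> 0,
    # and blind to ambient illumination.
    return l_white / l_black

def weber_contrast(l_target, l_background):
    # Weber contrast of a target luminance against its background.
    return (l_target - l_background) / l_background

def effective_weber_contrast(l_white, l_black, l_ambient):
    # Illustrative only: ambient light (reflected or see-through)
    # adds to both display luminances, so the contrast reaching the
    # viewer drops as ambient illumination rises.
    return weber_contrast(l_white + l_ambient, l_black + l_ambient)
```

For example, a display with white at 200 cd/m^2 and black at 0.2 cd/m^2 has CR = 1000, but adding 50 cd/m^2 of ambient light collapses its effective Weber contrast from 999 to about 4.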
Affiliation(s)
- Alex D Hwang
- Schepens Eye Research Institute, Massachusetts Eye and Ear, Department of Ophthalmology, Harvard Medical School, Boston, MA
- Eli Peli
- Schepens Eye Research Institute, Massachusetts Eye and Ear, Department of Ophthalmology, Harvard Medical School, Boston, MA
13
Abstract
Contrast sensitivity (CS) quantifies an observer's ability to detect the smallest (threshold) luminance difference between a target and its surround. In clinical settings, printed letter contrast charts are commonly used, and the contrast of the letter stimuli is specified by the Weber contrast definition. These paper-printed charts use negative polarity contrast (NP, dark letters on a bright background) and are not available with positive polarity contrast (PP, bright letters on a dark background), as needed in a number of applications. We implemented a mobile CS-measuring app supporting both NP and PP contrast stimuli; the NP stimuli mimic the paper charts. A novel modified Weber definition was developed to specify the contrast of PP letters. The validity of the app was established by comparison with the paper chart. We found that our app generates more accurate contrast stimuli over a wider range than the paper chart (especially in the critical high-CS, low-contrast range), and we found a clear difference between NP and PP CS measures (CSNP > CSPP) despite the symmetry afforded by the modified Weber contrast definition. Our app provides a convenient way to measure CS in both lighted and dark environments.
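As a rough sketch of the chart convention the app mimics for NP stimuli (the paper's modified PP definition is not reproduced here), Weber contrast of a dark letter and its conversion to the log contrast sensitivity reported on printed charts can be written as:

```python
import math

def weber_contrast_np(l_letter, l_background):
    # Negative-polarity letter: dark letter on a bright background,
    # so the Weber contrast is negative.
    return (l_letter - l_background) / l_background

def log_contrast_sensitivity(threshold_contrast):
    # Contrast sensitivity is the reciprocal of the threshold contrast
    # magnitude, conventionally reported on a log10 scale.
    return math.log10(1.0 / abs(threshold_contrast))
```

A letter at 10% contrast that is just barely readable corresponds to logCS = 1.0; the high-CS, low-contrast end of this scale is exactly where the abstract notes printed charts are least accurate.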
Affiliation(s)
- Alex D Hwang
- Schepens Eye Research Institute - Massachusetts Eye and Ear, Department of Ophthalmology, Harvard Medical School, 20 Staniford Street, Boston, MA 02114, USA
- Eli Peli
- Schepens Eye Research Institute - Massachusetts Eye and Ear, Department of Ophthalmology, Harvard Medical School, 20 Staniford Street, Boston, MA 02114, USA
14
Hwang AD, Peli E. Instability of the perceived world while watching 3D stereoscopic imagery: A likely source of motion sickness symptoms. Iperception 2014; 5:515-35. [PMID: 26034562; PMCID: PMC4441027; DOI: 10.1068/i0647]
Abstract
Watching 3D content on a stereoscopic display may cause various discomforting symptoms, including eye strain, blurred vision, double vision, and motion sickness. Numerous studies have reported motion-sickness-like symptoms during stereoscopic viewing, but no causal linkage between specific aspects of the presentation and the induced discomfort has been explicitly proposed. Here, we describe several ways in which stereoscopic capture, display, and viewing differ from natural viewing, resulting in static and, importantly, dynamic distortions that conflict with the expected stability and rigidity of the real world. This analysis provides a basis for suggested changes to display systems that may alleviate the symptoms, and for future studies to determine the relative contribution of the various effects to the unpleasant symptoms.
Affiliation(s)
- Alex D Hwang
- Department of Ophthalmology, Schepens Eye Research Institute, Massachusetts Eye and Ear, Harvard Medical School, Boston, MA, USA
- Eli Peli
- Department of Ophthalmology, Schepens Eye Research Institute, Massachusetts Eye and Ear, Harvard Medical School, Boston, MA, USA
15
Abstract
We describe the design and construction of a headlight glare simulator to be used with a driving simulator. The system combines a modified programmable off-the-shelf LED display board and a beamsplitter so that the LED lights, representing the headlights of oncoming cars, are superimposed over the driving simulator's image of those headlights. An ideal spatial arrangement of optical components that avoids misalignment of the superimposed images is hard to achieve in practice, and variations inevitably introduce some parallax. Furthermore, the driver's viewing position varies with the driver's height, and seating position preferences exacerbate such misalignment. We reduce the parallax errors using an intuitive calibration procedure (simple drag-and-drop alignment of nine LED positions with calibration dots on the screen). To simulate the dynamics of headlight brightness changes as two vehicles approach each other, LED intensity control algorithms based on both the headlight and LED beam shapes were developed. The simulation errors were estimated and compared to real-world headlight brightness variability.
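The paper's intensity control algorithms are based on measured headlight and LED beam shapes; as a simplified, hypothetical illustration of the core dynamic (not the authors' algorithm, and with made-up parameter values), the illuminance from an oncoming headlight at the driver's eye follows an inverse-square law, so the simulated LED drive level must rise steeply as the gap between vehicles closes:

```python
def glare_illuminance(distance_m, beam_intensity_cd=30000.0, min_distance_m=5.0):
    # Inverse-square law: E = I / d^2, with the distance clamped so the
    # modeled illuminance stays finite as the cars pass each other.
    d = max(distance_m, min_distance_m)
    return beam_intensity_cd / (d * d)

def led_drive_level(distance_m, max_illuminance=1200.0):
    # Normalize to the LED's maximum output as a 0.0-1.0 drive level.
    return min(glare_illuminance(distance_m) / max_illuminance, 1.0)
```

Halving the distance quadruples the modeled illuminance, which is why a static LED brightness cannot reproduce the dynamics of an approaching headlight.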
Affiliation(s)
- Alex D. Hwang
- Schepens Eye Research Institute, Massachusetts Eye and Ear, and Department of Ophthalmology, Harvard Medical School, Boston, MA
- Eli Peli
- Schepens Eye Research Institute, Massachusetts Eye and Ear, and Department of Ophthalmology, Harvard Medical School, Boston, MA
16
Hwang AD, Wang HC, Pomplun M. Semantic guidance of eye movements in real-world scenes. Vision Res 2011; 51:1192-205. [PMID: 21426914; DOI: 10.1016/j.visres.2011.03.010]
Abstract
The perception of objects in our visual world is influenced not only by their low-level visual features, such as shape and color, but also by their high-level features, such as meaning and the semantic relations among them. While it has been shown that low-level features in real-world scenes guide eye movements during scene inspection and search, the influence of semantic similarity among scene objects on eye movements in such situations has not been investigated. Here we study guidance of eye movements by semantic similarity among objects during real-world scene inspection and search. By selecting scenes from the LabelMe object-annotated image database and applying latent semantic analysis (LSA) to the object labels, we generated semantic saliency maps of real-world scenes based on the semantic similarity of scene objects to the currently fixated object or the search target. An ROC analysis of these maps as predictors of subjects' gaze transitions between objects during scene inspection revealed a preference for transitions to objects that were semantically similar to the currently inspected one. Furthermore, during the course of a scene search, subjects' eye movements were progressively guided toward objects that were semantically similar to the search target. These findings demonstrate substantial semantic guidance of eye movements in real-world scenes and show its importance for understanding real-world attentional control.
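A semantic saliency map of the kind described can be sketched as follows, assuming each object label has already been mapped to an LSA vector; the function names and the use of plain cosine similarity are illustrative assumptions, not the paper's exact pipeline.

```python
import math

def cosine_similarity(u, v):
    # Cosine of the angle between two LSA label vectors.
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

def semantic_saliency_map(object_vectors, reference_vector):
    # Score each labeled scene object by its semantic similarity to the
    # reference vector (the currently fixated object or the search target).
    return {label: cosine_similarity(vec, reference_vector)
            for label, vec in object_vectors.items()}
```

Gaze transitions would then be predicted toward the highest-scoring objects in the map.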
Affiliation(s)
- Alex D Hwang
- Department of Computer Science, University of Massachusetts Boston, 100 Morrissey Blvd., Boston, MA 02125-3393, USA.
17
Wang HC, Hwang AD, Pomplun M. Object Frequency and Predictability Effects on Eye Fixation Durations in Real-World Scene Viewing. J Eye Mov Res 2010. [DOI: 10.16910/jemr.3.3.3]
Abstract
During text reading, the durations of eye fixations decrease with greater frequency and predictability of the currently fixated word (Rayner, 1998; 2009). However, it has not been tested whether these results also apply to scene viewing. We computed object frequency from both linguistic analysis and visual scene analysis (LabelMe, Russell et al., 2008), and applied Latent Semantic Analysis (Landauer et al., 1998) to estimate predictability. In a scene-viewing experiment, we found that, for small objects, linguistics-based frequency, but not scene-based frequency, affected first fixation duration, gaze duration, and total time. Both linguistic and scene-based predictability affected total time. As in reading, fixation duration decreased with higher frequency and predictability. For large objects, the direction of the effects was the inverse of that found in reading studies. These results suggest that the recognition of small objects in scene viewing shares some characteristics with the recognition of words in reading.
18
Hwang AD, Higgins EC, Pomplun M. A model of top-down attentional control during visual search in complex scenes. J Vis 2009; 9:25.1-18. [PMID: 19757903; DOI: 10.1167/9.5.25]
Abstract
Recently, there has been great interest among vision researchers in developing computational models that predict the distribution of saccadic endpoints in naturalistic scenes. In many of these studies, subjects are instructed to view scenes without any particular task in mind so that stimulus-driven (bottom-up) processes guide visual attention. However, whenever there is a search task, goal-driven (top-down) processes tend to dominate guidance, as indicated by attention being systematically biased toward image features that resemble those of the search target. In the present study, we devise a top-down model of visual attention during search in complex scenes based on similarity between the target and regions of the search scene. Similarity is defined for several feature dimensions such as orientation or spatial frequency using a histogram-matching technique. The amount of attentional guidance across visual feature dimensions is predicted by a previously introduced informativeness measure. We use eye-movement data gathered from participants' search of a set of naturalistic scenes to evaluate the model. The model is found to predict the distribution of saccadic endpoints in search displays nearly as accurately as do other observers' eye-movement data in the same displays.
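The target-to-region similarity the model computes per feature dimension can be illustrated with histogram intersection, a common histogram-matching measure; this is a sketch under assumptions, and the paper's exact matching technique may differ.

```python
def histogram_similarity(target_hist, region_hist):
    # Histogram intersection: the shared probability mass between the
    # target's and the region's normalized feature histograms, where
    # 1.0 means identical distributions and 0.0 means disjoint ones.
    t_total = float(sum(target_hist))
    r_total = float(sum(region_hist))
    return sum(min(t / t_total, r / r_total)
               for t, r in zip(target_hist, region_hist))
```

Computed over dimensions such as orientation or spatial frequency, such per-dimension scores could then be weighted by each dimension's informativeness to produce the combined guidance map.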
Affiliation(s)
- Alex D Hwang
- Department of Computer Science, University of Massachusetts, Boston, MA, USA.