1
Cao S, Kelly J, Nyugen C, Chow HM, Leonardo B, Sabov A, Ciaramitaro VM. Prior visual experience increases children's use of effective haptic exploration strategies in audio-tactile sound-shape correspondences. J Exp Child Psychol 2024; 241:105856. [PMID: 38306737] [DOI: 10.1016/j.jecp.2023.105856]
Abstract
Sound-shape correspondence refers to the preferential mapping of information across the senses, such as associating a nonsense word like bouba with rounded abstract shapes and kiki with spiky abstract shapes. Here we focused on audio-tactile (AT) sound-shape correspondences between nonsense words and abstract shapes that are felt but not seen. Despite previous research indicating a role for visual experience in establishing AT associations, it remains unclear how visual experience facilitates AT correspondences. Here we investigated one hypothesis: seeing the abstract shapes improves haptic exploration by (a) increasing effective haptic strategies and/or (b) decreasing ineffective haptic strategies. We analyzed five haptic strategies in video recordings of 6- to 8-year-old children obtained in a previous study. We found that the dominant strategy used to explore shapes differed based on visual experience. Effective strategies, which provide information about shape, were dominant in participants with prior visual experience, whereas ineffective strategies, which do not provide information about shape, were dominant in participants without prior visual experience. With prior visual experience, poking, an effective and efficient strategy, was dominant; without prior visual experience, uncategorizable and ineffective strategies were dominant. These findings suggest that prior visual experience of abstract shapes in 6- to 8-year-olds can increase the effectiveness and efficiency of haptic exploration, potentially explaining why prior visual experience can strengthen AT sound-shape correspondences.
Affiliation(s)
- Shibo Cao
- Department of Psychology, University of Massachusetts Boston, Boston, MA 02125, USA
- Julia Kelly
- Department of Psychology, University of Massachusetts Boston, Boston, MA 02125, USA
- Cuong Nyugen
- Department of Psychology, University of Massachusetts Boston, Boston, MA 02125, USA
- Hiu Mei Chow
- Department of Psychology, University of Massachusetts Boston, Boston, MA 02125, USA; Department of Psychology, St. Thomas University, Fredericton, New Brunswick E3B 5G3, Canada
- Brianna Leonardo
- Department of Psychology, University of Massachusetts Boston, Boston, MA 02125, USA
- Aleksandra Sabov
- Department of Psychology, University of Massachusetts Boston, Boston, MA 02125, USA
- Vivian M Ciaramitaro
- Department of Psychology, University of Massachusetts Boston, Boston, MA 02125, USA
2
Kreyenmeier P, Bhuiyan I, Gian M, Chow HM, Spering M. Smooth pursuit inhibition reveals audiovisual enhancement of fast movement control. J Vis 2024; 24:3. [PMID: 38558158] [PMCID: PMC10996987] [DOI: 10.1167/jov.24.4.3]
Abstract
The sudden onset of a visual object or event elicits an inhibition of eye movements at latencies approaching the minimum delay of visuomotor conductance in the brain. Typically, information presented via multiple sensory modalities, such as sound and vision, evokes stronger and more robust responses than unisensory information. Whether and how multisensory information affects ultra-short latency oculomotor inhibition is unknown. In two experiments, we investigate smooth pursuit and saccadic inhibition in response to multisensory distractors. Observers tracked a horizontally moving dot and were interrupted by an unpredictable visual, auditory, or audiovisual distractor. Distractors elicited a transient inhibition of pursuit eye velocity and catch-up saccade rate within ∼100 ms of their onset. Audiovisual distractors evoked stronger oculomotor inhibition than visual- or auditory-only distractors, indicating multisensory response enhancement. Multisensory response enhancement magnitudes were equal to the linear sum of responses to component stimuli. These results demonstrate that multisensory information affects eye movements even at ultra-short latencies, establishing a lower time boundary for multisensory-guided behavior. We conclude that oculomotor circuits must have privileged access to sensory information from multiple modalities, presumably via a fast, subcortical pathway.
Affiliation(s)
- Philipp Kreyenmeier
- Department of Ophthalmology & Visual Sciences, University of British Columbia, Vancouver, British Columbia, Canada
- Graduate Program in Neuroscience, University of British Columbia, Vancouver, British Columbia, Canada
- Ishmam Bhuiyan
- Department of Ophthalmology & Visual Sciences, University of British Columbia, Vancouver, British Columbia, Canada
- Mathew Gian
- Department of Ophthalmology & Visual Sciences, University of British Columbia, Vancouver, British Columbia, Canada
- Hiu Mei Chow
- Department of Ophthalmology & Visual Sciences, University of British Columbia, Vancouver, British Columbia, Canada
- Department of Psychology, St. Thomas University, Fredericton, New Brunswick, Canada
- Miriam Spering
- Department of Ophthalmology & Visual Sciences, University of British Columbia, Vancouver, British Columbia, Canada
- Graduate Program in Neuroscience, University of British Columbia, Vancouver, British Columbia, Canada
- Djavad Mowafaghian Center for Brain Health, University of British Columbia, Vancouver, British Columbia, Canada
- Institute for Computing, Information, and Cognitive Systems, University of British Columbia, Vancouver, British Columbia, Canada
3
Chow HM, Spering M. Eye movements during optic flow perception. Vision Res 2023; 204:108164. [PMID: 36566560] [DOI: 10.1016/j.visres.2022.108164]
Abstract
Optic flow is an important visual cue for human perception and locomotion and naturally triggers eye movements. Here we investigate whether the perception of optic flow direction is limited or enhanced by eye movements. In Exp. 1, 23 human observers localized the focus of expansion (FOE) of an optic flow pattern; in Exp. 2, 18 observers had to detect brief visual changes at the FOE. Both tasks were completed under free-viewing and fixation conditions while eye movements were recorded. Task difficulty was varied by manipulating the coherence of radial motion from the FOE (4%-90%). During free viewing, observers tracked the optic flow pattern with a combination of saccades and smooth eye movements. During fixation, observers nevertheless made small-scale eye movements. Despite differences in spatial scale, eye movements during free viewing and fixation were similarly directed toward the FOE (saccades) and away from the FOE (smooth tracking). Whereas FOE localization sensitivity was not affected by eye movement instructions (Exp. 1), observers' sensitivity to detect brief changes at the FOE was 27% higher (p < .001) during free viewing compared to fixation (Exp. 2). This performance benefit was linked to reduced saccade endpoint errors, indicating the direct beneficial impact of foveating eye movements on performance in a fine-grain perceptual task, but not during coarse perceptual localization.
Affiliation(s)
- Hiu Mei Chow
- Department of Psychology, St. Thomas University, Fredericton, Canada; Department of Ophthalmology and Visual Sciences, University of British Columbia, Vancouver, Canada
- Miriam Spering
- Department of Ophthalmology and Visual Sciences, University of British Columbia, Vancouver, Canada; Djavad Mowafaghian Center for Brain Health, University of British Columbia, Vancouver, Canada; Institute for Computing, Information and Cognitive Systems, University of British Columbia, Vancouver, Canada
4
Tseng CH, Chow HM, Spillmann L, Oxner M, Sakurai K. Body Pitch Together With Translational Body Motion Biases the Subjective Haptic Vertical. Multisens Res 2022; 36:1-29. [PMID: 36731530] [DOI: 10.1163/22134808-bja10086]
Abstract
Accurate perception of verticality is critical for postural maintenance and successful physical interaction with the world. Although previous research has examined the independent influences of body orientation and self-motion under well-controlled laboratory conditions, these factors are constantly changing and interacting in the real world. In this study, we examine the subjective haptic vertical in a real-world scenario. Here, we report a bias of verticality perception in a field experiment on the Hong Kong Peak Tram as participants traveled on a slope ranging from 6° to 26°. Mean subjective haptic vertical (SHV) increased with slope by as much as 15°, regardless of whether the eyes were open (Experiment 1) or closed (Experiment 2). Shifting the body pitch by a fixed degree in an effort to compensate for the mountain slope failed to reduce the verticality bias (Experiment 3). These manipulations separately rule out visual and vestibular inputs about absolute body pitch as contributors to the observed bias. Observations collected on a tram traveling on level ground (Experiment 4A) or in a static dental chair with a range of inclinations similar to those encountered on the mountain tram (Experiment 4B) showed no significant deviation of the subjective vertical from gravity. We conclude that the SHV error is due to a combination of large, dynamic body pitch and translational motion. These observations, made in a real-world scenario, should encourage neuroscientists and aviation experts alike to study perceived verticality under field conditions and raise awareness of dangerous misperceptions of verticality when body pitch and translational self-motion combine.
Affiliation(s)
- Chia-Huei Tseng
- Research Institute of Electrical Communication, Tohoku University, Sendai, 980-8577, Japan
- Hiu Mei Chow
- Department of Psychology, St. Thomas University, Fredericton, E3B 5G3, Canada
- Lothar Spillmann
- Neurology Clinic, University of Freiburg, 79106 Freiburg, Germany
- Matt Oxner
- Wilhelm Wundt Institute for Psychology, University of Leipzig, 04109 Leipzig, Germany
- Kenzo Sakurai
- Department of Human Science, Tohoku Gakuin University, Sendai, 981-3193, Japan
5
Chow HM, Spering M. The influence of eye movements on optic flow perception. J Vis 2022. [DOI: 10.1167/jov.22.14.4300]
Affiliation(s)
- Hiu Mei Chow
- University of British Columbia, Vancouver, Canada
6
Abstract
Collinear search impairment (CSI) is a phenomenon in which a task-irrelevant collinear structure impairs search for a target in a visual display. It has been suggested that CSI is monocular, occurs without participants' conscious awareness, and is possibly processed at an early visual site (e.g. V1). This effect has frequently been compared with a well-documented opposite effect called attentional capture (AC), in which salient, task-irrelevant basic features (e.g. color, orientation) enhance target detection. However, whether this phenomenon can be attributed to non-attentional factors such as collinear facilitation (CF) has not yet been formally tested. Here we used one well-established property of CF, namely that target contrast modulates its effect direction (facilitation vs suppression), to examine whether CSI shares a similar signature profile across contrast levels. In other words, we tested whether the CSI previously observed at supra-threshold contrast was reduced or reversed at near-threshold contrast levels. Our results showed that, regardless of luminance contrast level, participants took longer to search for targets displayed on the salient singleton collinear structure than for those displayed off the structure. This contrast invariance suggests that CSI is unlikely to be subserved exclusively by an early-vision mechanism (e.g. CF).
Affiliation(s)
- Chia-huei Tseng
- Research Institute of Electrical Communication, Tohoku University, Sendai, Japan
- Hiu Mei Chow
- Department of Ophthalmology and Visual Sciences, University of British Columbia, Vancouver, Canada
- Jiayu Liang
- Department of Psychology, University of Hong Kong, Pok Fu Lam, Hong Kong
- Satoshi Shioiri
- Research Institute of Electrical Communication, Tohoku University, Sendai, Japan
- Chien-Chung Chen
- Department of Psychology, National Taiwan University, Taipei, Taiwan
7
Chow HM, Harris DA, Eid S, Ciaramitaro VM. The feeling of "kiki": Comparing developmental changes in sound-shape correspondence for audio-visual and audio-tactile stimuli. J Exp Child Psychol 2021; 209:105167. [PMID: 33915481] [DOI: 10.1016/j.jecp.2021.105167]
Abstract
Sound-shape crossmodal correspondence, the naturally occurring associations between abstract visual shapes and nonsense sounds, is one aspect of multisensory processing that strengthens across early childhood. Little is known regarding whether school-aged children exhibit other variants of sound-shape correspondences such as audio-tactile (AT) associations between tactile shapes and nonsense sounds. Based on previous research in blind individuals suggesting the role of visual experience in establishing sound-shape correspondence, we hypothesized that children would show weaker AT association than adults and that children's AT association would be enhanced with visual experience of the shapes. In Experiment 1, we showed that, when asked to match shapes explored haptically via touch to nonsense words, 6- to 8-year-olds exhibited inconsistent AT associations, whereas older children and adults exhibited the expected AT associations, despite robust audio-visual (AV) associations found across all age groups in a related study. In Experiment 2, we confirmed the role of visual experience in enhancing AT association; here, 6- to 8-year-olds could exhibit the expected AT association if first exposed to the AV condition, whereas adults showed the expected AT association irrespective of whether the AV condition was tested first or second. Our finding suggests that AT sound-shape correspondence is weak early in development relative to AV sound-shape correspondence, paralleling previous findings on the development of other types of multisensory associations. The potential role of visual experience in the development of sound-shape correspondences in other senses is discussed.
Affiliation(s)
- Hiu Mei Chow
- Department of Psychology, Developmental and Brain Sciences, University of Massachusetts Boston, Boston, MA 02125, USA; Department of Ophthalmology and Visual Sciences, University of British Columbia, Vancouver, British Columbia V6T 1Z4, Canada
- Daniel A Harris
- Department of Psychology, Developmental and Brain Sciences, University of Massachusetts Boston, Boston, MA 02125, USA; Division of Epidemiology, Dalla Lana School of Public Health, University of Toronto, Toronto, Ontario M5T 3M7, Canada
- Sandy Eid
- Department of Psychology, Developmental and Brain Sciences, University of Massachusetts Boston, Boston, MA 02125, USA
- Vivian M Ciaramitaro
- Department of Psychology, Developmental and Brain Sciences, University of Massachusetts Boston, Boston, MA 02125, USA
8
Abstract
When we move through our environment, objects in the visual scene create optic flow patterns on the retina. Even though optic flow is ubiquitous in everyday life, it is not well understood how our eyes naturally respond to it. In small groups of human and non-human primates, optic flow triggers intuitive, uninstructed eye movements to the focus of expansion of the pattern (Knöll, Pillow, & Huk, 2018). Here, we investigate whether such intuitive oculomotor responses to optic flow are generalizable to a larger group of human observers and how eye movements are affected by motion signal strength and task instructions. Observers (N = 43) viewed expanding or contracting optic flow constructed by a cloud of moving dots radiating from or converging toward a focus of expansion that could randomly shift. Results show that 84% of observers tracked the focus of expansion with their eyes without being explicitly instructed to track. Intuitive tracking was tuned to motion signal strength: Saccades landed closer to the focus of expansion, and smooth tracking was more accurate when dot contrast, motion coherence, and translational speed were high. Under explicit tracking instruction, the eyes aligned with the focus of expansion more closely than without instruction. Our results highlight the sensitivity of intuitive eye movements as indicators of visual motion processing in dynamic contexts.
Affiliation(s)
- Hiu Mei Chow
- Department of Ophthalmology and Visual Sciences, University of British Columbia, Vancouver, British Columbia, Canada
- Jonas Knöll
- Institute of Animal Welfare and Animal Husbandry, Friedrich-Loeffler-Institut, Celle, Germany
- Matthew Madsen
- Department of Ophthalmology and Visual Sciences, University of British Columbia, Vancouver, British Columbia, Canada
- Miriam Spering
- Department of Ophthalmology and Visual Sciences, University of British Columbia, Vancouver, British Columbia, Canada
- Djavad Mowafaghian Center for Brain Health, University of British Columbia, Vancouver, British Columbia, Canada
- Institute for Computing, Information and Cognitive Systems, University of British Columbia, Vancouver, British Columbia, Canada
9
Chow HM, Knöll J, Madsen M, Spering M. Look where you go: Humans intuitively track heading direction changes with their eyes. J Vis 2020. [DOI: 10.1167/jov.20.11.443]
Affiliation(s)
- Hiu Mei Chow
- University of British Columbia, Vancouver, Canada
- Jonas Knöll
- Institute of Animal Welfare and Animal Husbandry, Friedrich-Loeffler-Institut, Greifswald, Germany
10
Abstract
The brain consistently faces a challenge of whether and how to combine the available information sources to estimate the properties of an object explored by hand. While object perception is an inference process involving multisensory inputs, thermal referral (TR) is an illusion demonstrating how the interaction between thermal and tactile systems can lead to deviations from physical reality-when observers touch three stimulators simultaneously with the middle three fingers of one hand but only the outer two stimulators are heated (or cooled), thermal uniformity is perceived across three fingers. Here, we used TR of warmth to examine the thermal-tactile interaction in object temperature perception. We show that TR is consistent with precision-weighted averaging of thermal sensation across tactile locations. Furthermore, we show that prolonged contact with TR stimulation results in adaptation to the local variations of veridical temperatures instead of the thermal uniformity perceived across three fingers. Our results illuminate the flexibility of processing that underlies thermal-tactile interactions and serve as a basis for thermal display design.
11
Ciaramitaro V, Chow HM, Morina E. Crossmodal correspondences between abstract shapes and nonsense words modulate a neuronal signature of visual shape processing. J Vis 2019. [DOI: 10.1167/19.10.270]
Affiliation(s)
- Hiu Mei Chow
- University of Massachusetts Boston, Psychology Department
- Erinda Morina
- University of Massachusetts Boston, Psychology Department
12
Chow HM, Ciaramitaro V. Musical expertise modulates the cost of crossmodal divided attention between vision and audition in behavior but not in tonic pupil dilation. J Vis 2018. [DOI: 10.1167/18.10.486]
Affiliation(s)
- Hiu Mei Chow
- Psychology Department, University of Massachusetts, Boston
13
Tseng CH, Chow HM, Ma YK, Ding J. Preverbal infants utilize cross-modal semantic congruency in artificial grammar acquisition. Sci Rep 2018; 8:12707. [PMID: 30139964] [PMCID: PMC6107625] [DOI: 10.1038/s41598-018-30927-3]
Abstract
Learning in a multisensory world is challenging, as the information from different sensory dimensions may be inconsistent and confusing. By adulthood, learners optimally integrate bimodal (e.g. audio-visual, AV) stimulation using both low-level (e.g. temporal synchrony) and high-level (e.g. semantic congruency) properties of the stimuli to boost learning outcomes. However, it is unclear how this capacity emerges and develops. To approach this question, we examined whether preverbal infants were capable of utilizing high-level properties in grammar-like rule acquisition. In three experiments, we habituated pre-linguistic infants to an audio-visual (AV) temporal sequence that resembled a grammar-like rule (A-A-B). We varied the cross-modal semantic congruence of the AV stimuli (Exp 1: congruent syllables/faces; Exp 2: incongruent syllables/shapes; Exp 3: incongruent beeps/faces) while all other low-level properties (e.g. temporal synchrony, sensory energy) were held constant. Eight- to ten-month-old infants learned the grammar-like rule only from congruent AV stimulus pairs (Exp 1), not from incongruent AV pairs (Exps 2 and 3). Our results show that, similar to adults, preverbal infants' learning is influenced by a high-level multisensory integration gating system, pointing to a perceptual origin of the bimodal learning advantage that was not previously acknowledged.
Affiliation(s)
- Chia-Huei Tseng
- Research Institute of Electrical Communication, Tohoku University, Sendai, Japan
- Hiu Mei Chow
- Department of Psychology, University of Massachusetts Boston, Boston, USA
- Yuen Ki Ma
- Department of Psychology, The University of Hong Kong, Hong Kong SAR, China
- Jie Ding
- Department of Psychology, The University of Hong Kong, Hong Kong SAR, China
14
Deng X, Cheng C, Chow HM, Ding X. Prefer feeling bad? Subcultural differences in emotional preferences between Han Chinese and Mongolian Chinese. Int J Psychol 2018; 54:333-341. [DOI: 10.1002/ijop.12481]
Affiliation(s)
- Xinmei Deng
- College of Psychology and Sociology, Shenzhen University, Shenzhen, China
- Shenzhen Key Laboratory of Affective and Social Cognitive Science, Shenzhen University, Shenzhen, China
- Chen Cheng
- Department of Psychology, University of Massachusetts Boston, Boston, MA, USA
- Hiu Mei Chow
- Department of Psychology, University of Massachusetts Boston, Boston, MA, USA
- Xuechen Ding
- Department of Psychology, Shanghai Normal University, Shanghai, China
15
Ciaramitaro VM, Chow HM, Eglington LG. Cross-modal attention influences auditory contrast sensitivity: Decreasing visual load improves auditory thresholds for amplitude- and frequency-modulated sounds. J Vis 2017; 17:20. [PMID: 28355632] [DOI: 10.1167/17.3.20]
Abstract
We used a cross-modal dual task to examine how changing visual-task demands influenced auditory processing, namely auditory thresholds for amplitude- and frequency-modulated sounds. Observers had to attend to two consecutive intervals of sounds and report which interval contained the auditory stimulus that was modulated in amplitude (Experiment 1) or frequency (Experiment 2). During auditory-stimulus presentation, observers simultaneously attended to a rapid sequential visual presentation (two consecutive intervals of streams of visual letters) and had to report which interval contained a particular color (low load, demanding less attentional resources) or, in separate blocks of trials, which interval contained more of a target letter (high load, demanding more attentional resources). We hypothesized that if attention is a shared resource across vision and audition, an easier visual task should free up more attentional resources for auditory processing on an unrelated task, hence improving auditory thresholds. Auditory detection thresholds were lower (i.e., auditory sensitivity was improved) for both amplitude- and frequency-modulated sounds when observers engaged in a less demanding (compared to a more demanding) visual task. In accord with previous work, our findings suggest that visual-task demands can influence the processing of auditory information on an unrelated concurrent task, providing support for shared attentional resources. More importantly, our results suggest that attending to information in a different modality, cross-modal attention, can influence basic auditory contrast sensitivity functions, highlighting potential similarities between basic mechanisms for visual and auditory attention.
Affiliation(s)
- Vivian M Ciaramitaro
- Department of Psychology, Developmental and Brain Sciences, University of Massachusetts, Boston, MA, USA
- Hiu Mei Chow
- Department of Psychology, Developmental and Brain Sciences, University of Massachusetts, Boston, MA, USA
- Luke G Eglington
- Department of Psychology, Developmental and Brain Sciences, University of Massachusetts, Boston, MA, USA; Department of Psychological and Brain Sciences, Dartmouth College, Hanover, NH, USA
16
Chow HM, Harris D, Eid S, Ciaramitaro V. Early experience alters the developmental trajectory of visual, auditory and tactile sound-shape correspondences. J Vis 2016. [DOI: 10.1167/16.12.1193]
17
Chow HM, Jingling L, Tseng CH. Eye of origin guides attention away: An ocular singleton column impairs visual search like a collinear column. J Vis 2016; 16:12. [PMID: 26790844] [DOI: 10.1167/16.1.12]
Abstract
Collinearity and eye of origin were recently discovered to guide attention: Target search is impaired if the target overlaps with a collinear structure (Jingling & Tseng, 2013) but enhanced if the target is an ocular singleton (Zhaoping, 2008). Both effects are proposed to occur in V1, and we study their interaction here. In our 9 × 9 search display (Experiment 1), all columns consisted of horizontal bars except for one randomly selected column that contained orthogonal bars (collinear distractor). All columns were presented to one eye except for a randomly selected column that was presented to the other eye (ocular distractor). The target could be located on a distractor column (collinear congruent [CC]/ocular congruent [OC]) or not (collinear incongruent [CI]/ocular incongruent [OI]). We expected to find the best search performance for OC + CI targets and the worst search performance for OI + CC targets. The other combinations would depend on the relative strength of collinearity and ocular information in guiding attention. As expected, we observed collinear impairment, but surprisingly, we did not observe any search advantage for OC targets. Our subsequent experiments confirmed that OC search impairment also occurred when color-defined columns (Experiment 2), ocular singletons (Experiments 4 and 5), and noncollinear columns (Experiment 5) were used instead of collinear columns. However, the ocular effect disappeared when paired with luminance-defined columns (Experiments 3A and 3B). Although our results agree well with earlier findings that eye-of-origin information guides attention, they highlight that our previous understanding of the search advantage for ocular singleton targets might have been oversimplified.
18
Tsui ASM, Ma YK, Ho A, Chow HM, Tseng CH. Bimodal emotion congruency is critical to preverbal infants' abstract rule learning. Dev Sci 2015; 19:382-93. [DOI: 10.1111/desc.12319]
Affiliation(s)
- Angeline Sin Mei Tsui
- Department of Psychology, The University of Hong Kong, China
- Department of Psychology, University of Ottawa, Canada
- Yuen Ki Ma
- Department of Psychology, The University of Hong Kong, China
- Anna Ho
- Department of Psychology, The University of Hong Kong, China
- Hiu Mei Chow
- Department of Psychology, The University of Hong Kong, China
- Department of Psychology, University of Massachusetts Boston, USA
- Chia-huei Tseng
- Department of Psychology, The University of Hong Kong, China
19
Abstract
Visual attention and perceptual grouping both keep us from being overloaded by the vast amount of incoming information, and attentional search is delayed when a target overlaps with a snake-like collinear distractor (Jingling & Tseng, 2013). We assessed whether awareness of the collinear distractor is required for this modulation. We first established that a visible long (9-element), but not a short (3-element), collinear distractor slowed observers' detection of an overlapping target. We then masked part of a long distractor (9 elements) with continuously flashing color patches (6 elements) so that the combined dichoptic percept available to observers' awareness was a short collinear distractor (3 elements). We found that the invisible collinear parts, like visible ones, can form a continuous contour that impairs search, suggesting that conscious awareness is not a prerequisite for contour integration and its interaction with selective attention.
Affiliation(s)
- Hiu Mei Chow
- Department of Psychology, The University of Hong Kong, Hong Kong
- Chia-huei Tseng
- Department of Psychology, The University of Hong Kong, Hong Kong; Department of Psychology, National Taiwan University, Taipei, Taiwan.
20
Abstract
Perceptual grouping plays an indispensable role in figure-ground segregation and attention distribution. For example, a column pops out if it contains element bars orthogonal to uniformly oriented element bars. Jingling and Tseng (2013) reported that contextual grouping in a column matters to visual search behavior: When a column is grouped into a collinear (snakelike) structure, a target positioned on it becomes harder to detect than one on a noncollinear (ladderlike) column. How and where perceptual grouping interferes with selective attention is still largely unknown. This article contributes to this little-studied area by asking whether collinear contour integration interacts with visual search before or after binocular fusion. We first identified that the previously mentioned search impairment occurs with a distractor of five or nine elements, but not one element, in a 9 × 9 search display. To pinpoint the site of this effect, we presented the search display with a short collinear bar (one element) to one eye and the extending collinear bars to the other eye, such that when properly fused, the combined binocular collinear length (nine elements) exceeded the critical length. No collinear search impairment was observed, implying that collinear information before binocular fusion shaped participants' search behavior, although contour extension from the other eye after binocular fusion enhanced the effect of collinearity on attention. Our results suggest that attention interacts with perceptual grouping as early as V1.
Affiliation(s)
- Hiu Mei Chow
- Department of Psychology, University of Hong Kong, Hong Kong
21
Abstract
The perception of verticality is critical for balance control and interaction with the world. But this complex process fails badly under certain circumstances—usually as the result of an illusion. Here, we report on a real-world example of how the brain fails to disregard body position on a moving mountain tram and adopts an inappropriate frame of reference, which prompts passengers to perceive skyscrapers leaning by as much as 30°. To elucidate the sensory origin of this misperception, we conducted field experiments on the moving tram to systematically disentangle the contributions of four sensory systems known to affect verticality perception, namely, vestibular, tactile, proprioceptive, and visual cues. Our results refute the intuitive assumption that the perceived tilt of the buildings is based on visual error signals and demonstrate instead that a unified percept of verticality is a product of the synergistic interaction among multiple sensory systems and the contextual information available in the real world.
Affiliation(s)
- Hiu Mei Chow
- Department of Psychology, The University of Hong Kong
- Lothar Spillmann
- Graduate Institute of Neural and Cognitive Sciences, China Medical University
- Department of Anatomy, University of Freiburg
22
Chow HM, Chiu PH, Tseng CH, Spillmann L. A Multi-Sensory Illusion: Hong Kong Peak Tram Illusion (II) – Subjective Vertical. Iperception 2011. [DOI: 10.1068/ic892] Open
23
Chow HM, Tsui G, Tseng CH. Preverbal Infants Use Object Features and Motion Cues in Social Learning. Iperception 2011. [PMCID: PMC5393798 DOI: 10.1068/ic233] Open
Abstract
Studies have shown that preverbal infants can learn social rules presented in a complex perceptual environment, but little is known about how they do it. We investigated the relative contribution of two perceptual cues in social learning. Three groups of six- to twelve-month-old infants were habituated to repeated events in which two agents helped or hindered a climber by pushing it up or down a hill, and who subsequently laughed (when helped) or cried (when hindered). The three groups then received a dishabituating test stimulus such that for group 1, the climber cried when pushed up the hill and laughed when pushed down; for group 2, the identities of the agents (as defined by geometric shape and color) were reversed; for group 3, the agents kept their identities but reversed their pushing direction. We found that infants looked significantly longer in all three dishabituating conditions. The results from group 1 suggest that infants successfully associated the social events with consequential emotions. The discriminability in groups 2 and 3 suggests that simple motion direction and complex object (agent identity) cues are both effective for emotion-related social learning.
24
Abstract
An experiment to test the discriminability of shape symbols using the shod foot was performed with 38 blind people (aged 23-72 years). Ten shape symbols, each 5 mm thick and fitted onto a 30.5-cm square tile, were presented for subjects to identify using only their feet. Each subject had 20 trials in which to discriminate the symbols. In each trial, a symbol was selected randomly and presented to the subject in a randomized orientation. The subject was instructed to step on the symbol and to identify it using their own method. Time to discriminate a symbol and the accuracy of identification were recorded. Very high accuracy (93% on average) was obtained, comparable to the accuracy of tactile symbol discrimination using the hands. Average time to discriminate a symbol was 16 s with a standard deviation of 12.15 s, indicating high variability in the results. Owing to the high accuracy of identification, foot-discriminable tactile symbols have great potential as landmarks for blind people, and if applied to a tactile guide path they could provide information for orientation and navigation.
Affiliation(s)
- A J Courtney
- Department of Industrial and Manufacturing Systems Engineering, The University of Hong Kong, China.