1. Gabdreshov G, Magzymov D, Yensebayev N. Preliminary investigation of SEZUAL device for basic material identification and simple spatial navigation for blind and visually impaired people. Disabil Rehabil Assist Technol 2024;19:1343-1350. [PMID: 36756982] [DOI: 10.1080/17483107.2023.2176555]
Abstract
PURPOSE: We present a preliminary set of experimental studies demonstrating device-aided echolocation in blind and visually impaired individuals. The proposed device emits a click-like sound into the surrounding space, and the returning sound is perceived by participants to infer the surrounding environment. MATERIALS AND METHODS: Two sets of experiments were set up to evaluate the echolocation abilities of nine blind participants. The first was designed to identify four material types (glass, metal, wood, and ceramics) based on their sound-reflection properties. The second was navigation through a basic maze with the device. RESULTS: The experimental data demonstrate that the proposed device enables active echolocation in blind participants, particularly for material identification and spatial mobility. CONCLUSION: The proposed device can potentially be used to rehabilitate blind and visually impaired individuals in terms of spatial mobility and orientation.
2. Thaler L, Castillo-Serrano JG, Kish D, Norman LJ. Effects of type of emission and masking sound, and their spatial correspondence, on blind and sighted people's ability to echolocate. Neuropsychologia 2024;196:108822. [PMID: 38342179] [DOI: 10.1016/j.neuropsychologia.2024.108822]
Abstract
Ambient sound can mask acoustic signals. The current study addressed how echolocation in people is affected by masking sound, and the role played by type of sound and spatial (i.e. binaural) similarity. We also investigated the role played by blindness and long-term experience with echolocation, by testing echolocation experts, as well as blind and sighted people new to echolocation. Results were obtained in two echolocation tasks where participants listened to binaural recordings of echolocation and masking sounds, and either localized echoes in azimuth or discriminated echo audibility. Echolocation and masking sounds could be either clicks or broadband noise. An adaptive staircase method was used to adjust signal-to-noise ratios (SNRs) based on participants' responses. When target and masker had the same binaural cues (i.e. both were monaural sounds), people performed better (i.e. had lower SNRs) when target and masker used different types of sound (e.g. clicks in a noise masker, or noise in a click masker) than when target and masker used the same type of sound (e.g. clicks in a click masker, or noise in a noise masker). A very different pattern of results was observed when masker and target differed in their binaural cues, in which case people always performed better when clicks were the masker, regardless of the type of emission used. Further, direct comparison between conditions with and without binaural difference revealed binaural release from masking only when clicks were used as emissions and masker, but not otherwise (i.e. when noise was used as masker or emission). This suggests that echolocation with clicks or noise may differ in sensitivity to binaural cues. We observed the same pattern of results for echolocation experts, and for blind and sighted people new to echolocation, suggesting a limited role played by long-term experience or blindness.
In addition to generating novel predictions for future work, the findings also inform instruction in echolocation for people who are blind or sighted.
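The adaptive staircase mentioned in the abstract can be sketched as follows. This is a generic 2-down-1-up rule with a fixed 2 dB step; the study's actual rule, step size, and starting level are not given in the abstract, so those parameters are hypothetical.

```python
def staircase_snrs(responses, start_snr=0.0, step=2.0):
    """Generic 2-down-1-up adaptive staircase over SNR (dB).

    Hypothetical parameters -- the study's exact rule and step size
    are not stated in the abstract. Two consecutive correct responses
    lower the SNR (harder); a single error raises it (easier).
    Returns the SNR track, one entry per trial plus the start value.
    """
    snr = start_snr
    track = [snr]
    correct_run = 0
    for correct in responses:
        if correct:
            correct_run += 1
            if correct_run == 2:       # two in a row: make the task harder
                snr -= step
                correct_run = 0
        else:                          # any error: make the task easier
            snr += step
            correct_run = 0
        track.append(snr)
    return track
```

A 2-down-1-up rule of this kind converges on the SNR at which the listener is correct about 70.7% of the time.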
Affiliation(s)
- L Thaler
- Department of Psychology, Durham University, South Road, Durham, DH1 5AY, UK.
- D Kish
- World Access for the Blind, 1007 Marino Drive, Placentia, CA, 92870, USA
- L J Norman
- Department of Psychology, Durham University, South Road, Durham, DH1 5AY, UK
3. Teng S, Danforth C, Paternoster N, Ezeana M, Puri A. Object recognition via echoes: quantifying the crossmodal transfer of three-dimensional shape information between echolocation, vision, and haptics. Front Neurosci 2024;18:1288635. [PMID: 38440393] [PMCID: PMC10909950] [DOI: 10.3389/fnins.2024.1288635]
Abstract
Active echolocation allows blind individuals to explore their surroundings via self-generated sounds, similarly to dolphins and other echolocating animals. Echolocators emit sounds, such as finger snaps or mouth clicks, and parse the returning echoes for information about their surroundings, including the location, size, and material composition of objects. Because a crucial function of perceiving objects is to enable effective interaction with them, it is important to understand the degree to which three-dimensional shape information extracted from object echoes is useful in the context of other modalities such as haptics or vision. Here, we investigated the resolution of crossmodal transfer of object-level information between acoustic echoes and other senses. First, in a delayed match-to-sample task, blind expert echolocators and sighted control participants inspected common (everyday) and novel target objects using echolocation, then distinguished the target object from a distractor using only haptic information. For blind participants, discrimination accuracy was overall above chance and similar for both common and novel objects, whereas as a group, sighted participants performed above chance for the common, but not novel objects, suggesting that some coarse object information (a) is available to both expert blind and novice sighted echolocators, (b) transfers from auditory to haptic modalities, and (c) may be facilitated by prior object familiarity and/or material differences, particularly for novice echolocators. Next, to estimate an equivalent resolution in visual terms, we briefly presented blurred images of the novel stimuli to sighted participants (N = 22), who then performed the same haptic discrimination task. We found that visuo-haptic discrimination performance approximately matched echo-haptic discrimination for a Gaussian blur kernel σ of ~2.5°. 
In this way, by matching visual and echo-based contributions to object discrimination, we can estimate the quality of echoacoustic information that transfers to other sensory modalities, predict theoretical bounds on perception, and inform the design of assistive techniques and technology available for blind individuals.
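Relating the reported blur kernel σ of ~2.5° to an image-space kernel requires converting visual angle to pixels for a particular viewing geometry. The sketch below does that conversion; the 57 cm viewing distance and 10 px/cm resolution in the usage comment are hypothetical stand-ins, since the paper's display setup is not described here.

```python
import math

def blur_sigma_pixels(sigma_deg, viewing_distance_cm, pixels_per_cm):
    """Convert a Gaussian blur sigma given in degrees of visual angle
    into image pixels for a given viewing geometry."""
    # Chord on the screen subtending sigma_deg at the observer's eye.
    size_cm = 2 * viewing_distance_cm * math.tan(math.radians(sigma_deg) / 2)
    return size_cm * pixels_per_cm

# At a 57 cm viewing distance (where 1 cm on screen subtends roughly
# 1 degree) and a hypothetical 10 px/cm display, sigma = 2.5 degrees
# corresponds to about 25 pixels.
```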
Affiliation(s)
- Santani Teng
- Smith-Kettlewell Eye Research Institute, San Francisco, CA, United States
- Caroline Danforth
- Department of Biology, University of Central Arkansas, Conway, AR, United States
- Department of Psychology, Vanderbilt University, Nashville, TN, United States
- Nickolas Paternoster
- Department of Biology, University of Central Arkansas, Conway, AR, United States
- Department of Psychology, Cornell University, Ithaca, NY, United States
- Michael Ezeana
- Department of Biology, University of Central Arkansas, Conway, AR, United States
- Georgetown University School of Medicine, Washington, DC, United States
- Amrita Puri
- Department of Biology, University of Central Arkansas, Conway, AR, United States
4. Steffens H, Schutte M, Ewert SD. Auditory orientation and distance estimation of sighted humans using virtual echolocation with artificial and self-generated sounds. JASA Express Lett 2022;2:124403. [PMID: 36586958] [DOI: 10.1121/10.0016403]
Abstract
Active echolocation by sighted humans was investigated using both predefined synthetic sounds and self-emitted sounds of the kind habitually used by blind individuals. Using virtual acoustics, distance estimation and directional localization of a wall in different rooms were assessed. A virtual source was attached to either the head or the hand, with realistic or increased source directivity. A control condition was tested with a virtual sound source located at the wall. Untrained echolocation performance comparable to performance in the control condition was achieved at an individual level. On average, however, echolocation performance was considerably lower than in the control condition, although it benefited from increased directivity.
Affiliation(s)
- Henning Steffens
- Medizinische Physik and Cluster of Excellence Hearing4all, Universität Oldenburg, Oldenburg, 26111, Germany
- Michael Schutte
- Medizinische Physik and Cluster of Excellence Hearing4all, Universität Oldenburg, Oldenburg, 26111, Germany
- Stephan D Ewert
- Medizinische Physik and Cluster of Excellence Hearing4all, Universität Oldenburg, Oldenburg, 26111, Germany
5. Kuc R. Brain-inspired sensorimotor echolocation system for confident landmark recognition. J Acoust Soc Am 2022;152:1272. [PMID: 36182295] [DOI: 10.1121/10.0013833]
Abstract
A landmark is a target that is familiar in terms of the echoes it can produce, and is important for echolocation-based navigation by bats, robots, and blind humans. A brain-inspired system (BIS) achieves confident recognition, defined as classification to an arbitrarily small error probability (PE), by employing a voting process over an echo sequence. The BIS contains sensory neurons, implemented as binary single-layer perceptrons trained to classify echo spectrograms with error probability PE, that generate excitatory and inhibitory votes in face neurons until a landmark-specific face neuron achieves recognition by reaching a confidence vote level (CVL). A discrete random-step process models the vote count and shows that the recognition probability can achieve any desired accuracy by decreasing PE or increasing the CVL. A hierarchical approach first classifies surface-reflector and volume-scatterer target categories, then uses that result to classify two subcategories that form four landmarks. The BIS models blind human echolocation to recognize four human-made and foliage landmarks by acquiring suitably sized and dense audible echo sequences. The sensorimotor BIS employs landmark-specific CVL values and a 2.7° view increment to acquire echo sequences that achieve zero-error recognition of each landmark independent of the initial view.
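The vote-counting idea can be illustrated with a small Monte-Carlo sketch: each echo classification is correct with probability 1 − PE, and counts race to the confidence vote level (CVL). This is a deliberate simplification of the paper's excitatory/inhibitory scheme, reduced here to a two-counter race, so treat it as illustrative only.

```python
import random

def recognition_prob(pe, cvl, trials=10000, seed=1):
    """Monte-Carlo estimate of recognition probability for a simplified
    voting race: each echo votes for the true landmark with probability
    1 - pe, otherwise for a competing landmark; recognition occurs when
    either count reaches the confidence vote level (CVL)."""
    rng = random.Random(seed)
    wins = 0
    for _ in range(trials):
        true_count = wrong_count = 0
        while true_count < cvl and wrong_count < cvl:
            if rng.random() < 1 - pe:
                true_count += 1
            else:
                wrong_count += 1
        wins += true_count >= cvl
    return wins / trials
```

Raising the CVL drives the recognition probability toward 1 even with a fixed per-echo error probability, which is the behaviour the discrete random-step model formalizes.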
Affiliation(s)
- Roman Kuc
- Department of Electrical Engineering and Wu Tsai Institute, Yale University, New Haven, Connecticut 06511, USA
6. Bujacz M, Królak A, Górski G, Matysik K, Witek P. Echovis – A collection of human echolocation tests performed by blind and sighted individuals: A pilot study. Br J Vis Impair 2022. [DOI: 10.1177/02646196221116728]
Abstract
The article presents research on the echolocation skills of blind and sighted individuals, gathered to support the development of an echolocation training app. The goal of the research was to determine the influence of environment type, reverberation, and clicking patterns, and the average differences between the two tested groups. Ten blind and ten sighted subjects were tested in various echolocation tasks: stationary and moving, indoor and outdoor, using mechanical clickers and artificially generated clicks. Ten blind children also took part in the static indoor tests. The tests were repeated using binaural recordings and spatially rendered virtual audio. The following parameters and the dependencies between them were analyzed: correctness of obstacle localization, certainty of the answer, type of environment and clicker sound, and frequency and number of clicks. It was found that the number of clicks influenced the correctness and certainty of the answer when determining obstacle direction, but not distance. Better results were obtained in outdoor environments and in an empty room, which implies that reverberation has a positive influence on echolocation. The expected success rates in the tested echolocation tasks provide a comparison of the echolocation abilities of blind and sighted subjects and set a benchmark for future tests.
7. Branstetter BK, Brietenstein R, Goya G, Tormey M, Wu T, Finneran JJ. Spatial acuity of the bottlenose dolphin (Tursiops truncatus) biosonar system with a bat and human comparison. J Acoust Soc Am 2022;151:3847. [PMID: 35778192] [DOI: 10.1121/10.0011676]
Abstract
Horizontal angular resolution was measured in two bottlenose dolphins using a two-alternative forced-choice, biosonar target discrimination paradigm. The task required a stationary dolphin positioned in a hoop to discriminate two physical targets at a range of 4 m. The angle separating the targets was manipulated to estimate an angular discrimination threshold of 1.5°. In a second experiment, a similar two-target biosonar discrimination task was conducted with one free-swimming dolphin, to test whether its emission beam was a critical factor in discriminating the targets. The spatial separation between two targets was manipulated to measure a discrimination threshold of 6.7 cm. There was a relationship between differences in acoustic signals received at each target and the dolphin's performance. The results of the angular resolution experiment were in good agreement with measures of the minimum audible angle of both dolphins and humans and remarkably similar to measures of angular difference discrimination in echolocating dolphins, bats, and humans. The results suggest that horizontal auditory spatial acuity may be a common feature of the mammalian auditory system rather than a specialized feature exclusive to echolocating auditory predators.
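For intuition, the 1.5° angular threshold at the 4 m target range can be converted into a lateral separation (chord length), which lands in the same order of magnitude as the 6.7 cm threshold measured for the free-swimming dolphin. A small sketch of that arithmetic:

```python
import math

def separation_at_range(angle_deg, range_m):
    """Chord length (m) subtending a given angle at a given range."""
    return 2 * range_m * math.sin(math.radians(angle_deg) / 2)

# A 1.5 degree separation at 4 m corresponds to roughly 10.5 cm.
```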
Affiliation(s)
- Brian K Branstetter
- National Marine Mammal Foundation, 2240 Shelter Island Drive, #200, San Diego, California 92106, USA
- Rachel Brietenstein
- National Marine Mammal Foundation, 2240 Shelter Island Drive, #200, San Diego, California 92106, USA
- Gavin Goya
- National Marine Mammal Foundation, 2240 Shelter Island Drive, #200, San Diego, California 92106, USA
- Megan Tormey
- National Marine Mammal Foundation, 2240 Shelter Island Drive, #200, San Diego, California 92106, USA
- Teri Wu
- National Marine Mammal Foundation, 2240 Shelter Island Drive, #200, San Diego, California 92106, USA
- James J Finneran
- United States Navy Marine Mammal Program, Naval Information Warfare Center Pacific, San Diego, California 92152, USA
8. Andrade R, Baker S, Waycott J, Vetere F. A Participatory Design Approach to Creating Echolocation-Enabled Virtual Environments. ACM Trans Access Comput 2022. [DOI: 10.1145/3516448]
Abstract
As virtual environments—in the form of videogames and augmented and virtual reality experiences—become more popular, it is important to ensure that they are accessible to all. Previous research has identified echolocation as a useful interaction approach to enable people with visual impairment to access virtual environments. In this paper, we further investigate the usefulness of echolocation to explore virtual environments. We follow a participatory design approach that comprised a focus group session coupled with two fast prototyping and evaluation iterations. During the focus group session, expert echolocators produced a series of seven design recommendations, of which we implemented and trialed four. Our trials revealed that the use of ambient sounds, the ability to place landmarks, directional control, and the ability to use pre-recorded mouth-clicks produced by expert echolocators improved the overall experience of our participants by facilitating the detection of openings and obstacles. The recommendations presented and evaluated in this paper may help to develop virtual environments that support a broader range of users while recognising the value of the lived experience of people with disability as a source of knowledge.
9. Smarsh GC, Tarnovsky Y, Yovel Y. Hearing, echolocation, and beam steering from day 0 in tongue-clicking bats. Proc Biol Sci 2021;288:20211714. [PMID: 34702074] [PMCID: PMC8548796] [DOI: 10.1098/rspb.2021.1714]
Abstract
Little is known about the ontogeny of lingual echolocation. We examined the echolocation development of Rousettus aegyptiacus, the Egyptian fruit bat, which uses rapid tongue movements to produce hyper-short clicks and steer the beam's direction. We recorded pups from day 0 to day 35 post-birth and assessed their hearing and beam-steering abilities. On day 0, R. aegyptiacus pups emit isolation calls and hyper-short clicks in response to acoustic stimuli, demonstrating hearing. Auditory brainstem response recordings show that pups are sensitive to pure tones in the main hearing range of adult Rousettus and to brief clicks. Newborn pups produced clicks in the adult paired pattern and were able to use their tongues to steer the sonar beam. As they aged, pups produced click pairs faster, converging on adult intervals by the age of first flights (7-8 weeks). In contrast with laryngeal bats, Rousettus echolocation frequency and duration are stable through to day 35 but shift by the time pups begin to fly, possibly owing to tongue-diet maturation effects. Furthermore, frequency and duration shift in the opposite direction from mammalian laryngeal vocalizations. Rousettus lingual echolocation thus appears to be a highly functional sensory system from birth and follows a different ontogeny from that of laryngeal bats.
Affiliation(s)
- Grace C. Smarsh
- School of Zoology, Faculty of Life Sciences, Tel Aviv University, Tel Aviv, IL 6997801, Israel
- Department of Brain Sciences, Weizmann Institute of Science, Rehovot, IL 7610001, Israel
- Yifat Tarnovsky
- School of Zoology, Faculty of Life Sciences, Tel Aviv University, Tel Aviv, IL 6997801, Israel
- School of Neurobiology, Biochemistry, and Biophysics, Faculty of Life Sciences, Tel Aviv University, Tel Aviv, IL 6997801, Israel
- Yossi Yovel
- School of Zoology, Faculty of Life Sciences, Tel Aviv University, Tel Aviv, IL 6997801, Israel
- Sagol School of Neuroscience, Tel Aviv University, Tel Aviv, IL 6997801, Israel
10. Eklund R, Gerdfeldter B, Wiens S. The early but not the late neural correlate of auditory awareness reflects lateralized experiences. Neuropsychologia 2021;158:107910. [PMID: 34090867] [DOI: 10.1016/j.neuropsychologia.2021.107910]
Abstract
Theories disagree as to whether it is the early or the late neural correlate of awareness that plays a critical role in phenomenal awareness. According to recurrent processing theory, early activity in primary sensory areas corresponds closely to phenomenal awareness. In support, research with electroencephalography found that in the visual and somatosensory modalities, an early neural correlate of awareness is contralateral to the perceived side of stimulation. Thus, early activity is sensitive to the perceived side of visual and somatosensory stimulation. Critically, it is unresolved whether this is also true for hearing. In the present study (N = 26 students), Bayesian analyses showed that the early neural correlate of awareness (auditory awareness negativity, AAN) was stronger for contralateral than ipsilateral electrodes, whereas the late correlate of auditory awareness (late positivity, LP) was not lateralized. These findings demonstrate that the early but not the late neural correlate of auditory awareness reflects lateralized experiences. Thus, they imply that the AAN is a more suitable neural correlate of consciousness (NCC) than the LP, because it correlates more closely with lateralized experiences.
Affiliation(s)
- Rasmus Eklund
- Gösta Ekmans Laboratorium, Stockholm University, Sweden.
- Stefan Wiens
- Gösta Ekmans Laboratorium, Stockholm University, Sweden
11. Kritly L, Sluyts Y, Pelegrín-García D, Glorieux C, Rychtáriková M. Discrimination of 2D wall textures by passive echolocation for different reflected-to-direct level difference configurations. PLoS One 2021;16:e0251397. [PMID: 34043655] [PMCID: PMC8158938] [DOI: 10.1371/journal.pone.0251397]
Abstract
In this work, we study people's ability to discriminate between different 2D textures of walls by passive listening to a pre-recorded tongue click in an auralized echolocation scenario. In addition, the impact of artificially enhancing the early-reflection magnitude by 6 dB, and of removing the direct component while equalizing the loudness, was investigated. Listening test results for different textures, ranging from a flat wall to a staircase, were assessed using a two-alternative forced-choice (2AFC) method, in which 14 sighted, untrained participants indicated which 2 of 3 presented stimuli they perceived as equal. Participants' average ability to discriminate between textures was found to be significantly higher for walls at 5 m distance, without overlap between the reflected and direct sound, than for the same walls at 0.8 m distance. Enhancing the reflections, as well as removing the direct sound, was found to be beneficial for differentiating textures. This finding highlights the importance of forward masking in the discrimination process. Overall texture discriminability was found to be larger for walls reflecting with a higher spectral coloration.
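The +6 dB early-reflection manipulation amounts to scaling the reflection's amplitude before mixing it with the direct sound, since a level change in dB corresponds to an amplitude factor of 10^(dB/20). A minimal sketch, with signals as plain sample lists (the study worked with auralized binaural recordings, so this is illustrative only):

```python
def enhance_reflection(direct, reflection, gain_db=6.0):
    """Scale the early-reflection samples by a dB gain before mixing
    them with the direct sound. Inputs are plain lists of samples."""
    scale = 10 ** (gain_db / 20)  # +6 dB is roughly a factor of 2 in amplitude
    return [d + scale * r for d, r in zip(direct, reflection)]
```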
Affiliation(s)
- Léopold Kritly
- Research Department of Architecture—Building and Room Acoustics, Faculty of Architecture, KU Leuven, Brussel, Belgium
- EPF–Graduate School of Engineering, Sceaux, France
- Yannick Sluyts
- Research Department of Architecture—Building and Room Acoustics, Faculty of Architecture, KU Leuven, Brussel, Belgium
- David Pelegrín-García
- ZMB Lab. of Acoustics, Department of Physics and Astronomy, KU Leuven, Heverlee, Belgium
- Christ Glorieux
- ZMB Lab. of Acoustics, Department of Physics and Astronomy, KU Leuven, Heverlee, Belgium
- Monika Rychtáriková
- Research Department of Architecture—Building and Room Acoustics, Faculty of Architecture, KU Leuven, Brussel, Belgium
- Faculty of Civil Engineering, STU Bratislava, Bratislava, Slovakia
12. Tirado C, Gerdfeldter B, Nilsson ME. Individual differences in the ability to access spatial information in lag-clicks. J Acoust Soc Am 2021;149:2963. [PMID: 34241133] [DOI: 10.1121/10.0004821]
Abstract
It may be difficult to determine whether a dichotic lag-click points to the left or right when preceded by a diotic lead-click. Previous research suggests that this loss of spatial information is most prominent at inter-click intervals (ICIs) <10 ms. However, Nilsson, Tirado, and Szychowska [(2019). J. Acoust. Soc. Am. 145, 512-524] found support for loss of spatial information in lag-clicks at much longer ICIs, using a stimulus setup differing from those in previous research. The present study used a similar setup to measure 13 listeners' ability to lateralize (left versus right) and detect (present versus absent) the lag-click in lead-lag click pairs with ICIs of 6-48 ms. The main finding was distinct individual differences in performance. Some listeners could lateralize lag-clicks all the way down to their detection threshold, whereas others had lateralization thresholds substantially higher than their detection thresholds, i.e., they could not lateralize lag-clicks that they could easily detect. Two such listeners trained for 30 days and managed to improve their lateralization thresholds to reach their detection thresholds, but only at longer ICIs (>20 ms), suggesting different mechanisms underlying lag-click lateralization at short versus long ICIs.
Affiliation(s)
- Carlos Tirado
- Gösta Ekman Laboratory, Department of Psychology, Stockholm University, Stockholm, Sweden
- Billy Gerdfeldter
- Gösta Ekman Laboratory, Department of Psychology, Stockholm University, Stockholm, Sweden
- Mats E Nilsson
- Gösta Ekman Laboratory, Department of Psychology, Stockholm University, Stockholm, Sweden
13. Andrade R, Waycott J, Baker S, Vetere F. Echolocation as a Means for People with Visual Impairment (PVI) to Acquire Spatial Knowledge of Virtual Space. ACM Trans Access Comput 2021. [DOI: 10.1145/3448273]
Abstract
In virtual environments, spatial information is communicated visually. This prevents people with visual impairment (PVI) from accessing such spaces. In this article, we investigate whether echolocation could be used as a tool to convey spatial information by answering the following research questions: What features of virtual space can be perceived by PVI through the use of echolocation? How does active echolocation support PVI in acquiring spatial knowledge of a virtual space? And what are PVI’s opinions regarding the use of echolocation to acquire landmark and survey knowledge of virtual space? To answer these questions, we conducted a two-part within-subjects experiment with 12 people who were blind or had a visual impairment and found that size and materials of rooms and 90-degree turns were detectable through echolocation, participants preferred using echoes derived from footsteps rather than from artificial sound pulses, and echolocation supported the acquisition of mental maps of a virtual space. Ultimately, we propose that appropriately designed echolocation in virtual environments improves understanding of spatial information and access to digital games for PVI.
Affiliation(s)
- Ronny Andrade
- The University of Melbourne, Parkville, VIC, Australia
- Jenny Waycott
- The University of Melbourne, Parkville, VIC, Australia
- Steven Baker
- The University of Melbourne, Parkville, VIC, Australia
- Frank Vetere
- The University of Melbourne, Parkville, VIC, Australia
14. Tirado C, Gerdfeldter B, Kärnekull SC, Nilsson ME. Comparing Echo-Detection and Echo-Localization in Sighted Individuals. Perception 2021;50:308-327. [PMID: 33673742] [PMCID: PMC8044610] [DOI: 10.1177/03010066211000617]
Abstract
Echolocation is the ability to gather information from sound reflections. Most previous studies have focused on the ability to detect sound reflections, others on the ability to localize sound reflections, but no previous study has compared the two abilities in the same individuals. Our study compared echo-detection (reflecting object present or not?) and echo-localization (reflecting object to the left or right?) in 10 inexperienced sighted participants across 10 distances (1-4.25 m) to the reflecting object, using an automated system for studying human echolocation. There were substantial individual differences, particularly in the performance on the echo-localization task. However, most participants performed better on the detection than the localization task, in particular at the closest distances (1 and 1.7 m), illustrating that it sometimes may be hard to perceive whether an audible reflection came from the left or right.
15. Castillo-Serrano JG, Norman LJ, Foresteire D, Thaler L. Increased emission intensity can compensate for the presence of noise in human click-based echolocation. Sci Rep 2021;11:1750. [PMID: 33462283] [PMCID: PMC7813859] [DOI: 10.1038/s41598-021-81220-9]
Abstract
Echolocating bats adapt their emissions to succeed in noisy environments. In the present study we investigated if echolocating humans can detect a sound-reflecting surface in the presence of noise and if intensity of echolocation emissions (i.e. clicks) changes in a systematic pattern. We tested people who were blind and had experience in echolocation, as well as blind and sighted people who had no experience in echolocation prior to the study. We used an echo-detection paradigm where participants listened to binaural recordings of echolocation sounds (i.e. they did not make their own click emissions), and where intensity of emissions and echoes changed adaptively based on participant performance (intensity of echoes was yoked to intensity of emissions). We found that emission intensity had to systematically increase to compensate for weaker echoes relative to background noise. In fact, emission intensity increased so that spectral power of echoes exceeded spectral power of noise by 12 dB in 4-kHz and 5-kHz frequency bands. The effects were the same across all participant groups, suggesting that this effect occurs independently of long-time experience with echolocation. Our findings demonstrate for the first time that people can echolocate in the presence of noise and suggest that one potential strategy to deal with noise is to increase emission intensity to maintain signal-to-noise ratio of certain spectral components of the echoes.
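The band-limited comparison described above (echo power versus noise power in the 4 kHz and 5 kHz bands) can be sketched with a plain DFT. In practice numpy's FFT would be the sensible choice; the stdlib version below only keeps the example self-contained, and the band edges and test signal are placeholders rather than the study's actual analysis settings.

```python
import cmath
import math

def band_power_db(samples, fs, f_lo, f_hi):
    """Power (dB) of a signal within [f_lo, f_hi] Hz, computed from a
    direct DFT over only the bins falling inside the band."""
    n = len(samples)
    power = 0.0
    for k in range(n // 2 + 1):
        f = k * fs / n                     # centre frequency of bin k
        if f_lo <= f <= f_hi:
            x = sum(s * cmath.exp(-2j * math.pi * k * i / n)
                    for i, s in enumerate(samples))
            power += abs(x) ** 2
    return 10 * math.log10(power) if power > 0 else float("-inf")
```

Comparing `band_power_db` of the echo and of the background noise over the same band gives the band-limited SNR the adaptive procedure tracked.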
Affiliation(s)
- J G Castillo-Serrano: Department of Psychology, Durham University, Science Site, South Road, Durham, DH1 3LE, UK
- L J Norman: Department of Psychology, Durham University, Science Site, South Road, Durham, DH1 3LE, UK
- D Foresteire: Department of Psychology, Durham University, Science Site, South Road, Durham, DH1 3LE, UK
- L Thaler: Department of Psychology, Durham University, Science Site, South Road, Durham, DH1 3LE, UK
|
16
|
Abstract
Making sense of the world requires perceptual constancy—the stable perception of an object across changes in one’s sensation of it. To investigate whether constancy is intrinsic to perception, we tested whether humans can learn a form of constancy that is unique to a novel sensory skill (here, the perception of objects through click-based echolocation). Participants judged whether two echoes were different either because: (a) the clicks were different, or (b) the objects were different. For differences carried through spectral changes (but not level changes), blind expert echolocators spontaneously showed a high constancy ability (mean d′ = 1.91) compared to sighted and blind people new to echolocation (mean d′ = 0.69). Crucially, sighted controls improved rapidly in this ability through training, suggesting that constancy emerges in a domain with which the perceiver has no prior experience. This provides strong evidence that constancy is intrinsic to human perception. This study shows that people who learn a new skill to sense their environment - here: listening to sound echoes - can correctly represent the physical properties of objects. This result has implications for effectively rehabilitating people with sensory loss.
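The d′ values quoted in this abstract are the standard signal-detection sensitivity index. A minimal sketch of how such a score is computed from hit and false-alarm rates (the example rates are invented, not the paper's data):

```python
from statistics import NormalDist

def d_prime(hit_rate, false_alarm_rate):
    """Sensitivity index d' = z(hit rate) - z(false-alarm rate),
    where z is the inverse of the standard normal CDF."""
    z = NormalDist().inv_cdf
    return z(hit_rate) - z(false_alarm_rate)

# A discriminator with 80% hits and 25% false alarms: d' of about 1.5
score = d_prime(0.80, 0.25)
```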
|
17
|
Kuc R. Artificial neural network classification of foliage targets from spectrograms of sequential echoes using a biomimetic audible sonar. J Acoust Soc Am 2020; 148:3270. [PMID: 33261369 DOI: 10.1121/10.0002651] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Received: 04/27/2020] [Accepted: 10/28/2020] [Indexed: 06/12/2023]
Abstract
Classifying foliage targets using echolocation is important for recognizing landmarks by bats using ultrasonic emissions and blind human echolocators (BEs) using palatal clicks. Previous attempts to classify foliage used ultrasonic frequencies and single sensor (monaural) detection. Motivated by the echolocation capabilities of BEs, a biomimetic sonar emitting audible clicks acquired 5600 binaural echoes from five sequential emissions that probed two foliage targets at aspect angles separated by 18°. Echo spectrograms formed feature vector inputs to artificial neural networks (ANNs) for classifying two targets, Ficus benjamina and Schefflera arboricola, with leaf areas that differ by a factor of four. Classification performances of ANNs without and with hidden layers were analyzed using tenfold cross-validation. Performance improved with input feature size, with binaural echo classification outperforming that using monaural echoes for the same number of emissions and for the same number of echoes. Linear classification accuracy was comparable to that using nonlinear classification with both achieving fewer than 1% errors with binaural spectrogram features from five sequential emissions. This result was better by a factor of 20 compared to previous classification of these targets using only the time envelopes of the same echoes.
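The tenfold cross-validation used above can be sketched with a nearest-centroid (linear) classifier on synthetic two-class features. The data and the classifier here are toy stand-ins for the paper's echo spectrograms and ANNs:

```python
import random

def nearest_centroid_cv(X, y, folds=10):
    """K-fold cross-validated accuracy of a nearest-centroid classifier.
    X: list of feature vectors, y: list of class labels."""
    idx = list(range(len(X)))
    random.Random(0).shuffle(idx)
    correct = 0
    for f in range(folds):
        test = set(idx[f::folds])  # every folds-th shuffled sample held out
        # class centroids from the training portion only
        sums, counts = {}, {}
        for i in idx:
            if i in test:
                continue
            c = y[i]
            counts[c] = counts.get(c, 0) + 1
            s = sums.setdefault(c, [0.0] * len(X[i]))
            for d, v in enumerate(X[i]):
                s[d] += v
        cents = {c: [v / counts[c] for v in s] for c, s in sums.items()}
        # classify each held-out sample by its nearest centroid
        for i in test:
            pred = min(cents, key=lambda c: sum((a - b) ** 2
                                                for a, b in zip(X[i], cents[c])))
            correct += pred == y[i]
    return correct / len(X)

# Two well-separated synthetic "echo feature" classes
rng = random.Random(1)
X = [[rng.gauss(0, 1), rng.gauss(0, 1)] for _ in range(50)] + \
    [[rng.gauss(5, 1), rng.gauss(5, 1)] for _ in range(50)]
y = [0] * 50 + [1] * 50
acc = nearest_centroid_cv(X, y)
```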
Affiliation(s)
- Roman Kuc: Department of Electrical Engineering, Yale University, New Haven, Connecticut 06511, USA
|
18
|
Norman LJ, Thaler L. Stimulus uncertainty affects perception in human echolocation: Timing, level, and spectrum. J Exp Psychol Gen 2020; 149:2314-2331. [PMID: 32324025 PMCID: PMC7727089 DOI: 10.1037/xge0000775] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/08/2022]
Abstract
The human brain may use recent sensory experience to create sensory templates that are then compared to incoming sensory input, that is, "knowing what to listen for." This can lead to greater perceptual sensitivity, as long as the relevant properties of the target stimulus can be reliably estimated from past sensory experiences. Echolocation is an auditory skill probably best understood in bats, but humans can also echolocate. Here we investigated for the first time whether echolocation in humans involves the use of sensory templates derived from recent sensory experiences. Our results showed that when there was certainty in the acoustic properties of the echo relative to the emission, either in temporal onset, spectral content or level, people detected the echo more accurately than when there was uncertainty. In addition, we found that people were more accurate when the emission's spectral content was certain but, surprisingly, not when either its level or temporal onset was certain. Importantly, the lack of an effect of temporal onset of the emission is counter to that found previously for tasks using nonecholocation sounds, suggesting that the underlying mechanisms might be different for echolocation and nonecholocation sounds. Importantly, the effects of stimulus certainty were no different for people with and without experience in echolocation, suggesting that stimulus-specific sensory templates can be used in a skill that people have never used before. From an applied perspective our results suggest that echolocation instruction should encourage users to make clicks that are similar to one another in their spectral content.
|
19
|
Kuc R. Artificial neural network classification of surface reflectors and volume scatterers using sequential echoes acquired with a biomimetic audible sonar. J Acoust Soc Am 2020; 147:2357. [PMID: 32359283 DOI: 10.1121/10.0001083] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 11/18/2019] [Accepted: 03/25/2020] [Indexed: 06/11/2023]
Abstract
This paper investigates classifying two target groups, surface reflectors (SR) and volume scatterers (VS), using echo envelope features. SR targets have convex surface patches that exhibit echo persistence over aspect angle, while VS targets are composed of random range-distributed and oriented reflectors producing echoes that become uncorrelated with small changes in aspect angle. The SR target group contains single-post (P1) and multiple-post (PM) types and the VS group contains Ficus benjamina (F) and Schefflera arboricola (S) foliage types with leaf areas that differ by a factor of 4. A biomimetic sonar emitting audible clicks acquired sequences of up to three binaural echoes from target views separated by 18°. Two artificial neural networks performing linear and nonlinear classification first differentiated SR/VS target groups and then P1/PM and F/S types. Classification performance improved with echo number, from a single monaural echo to three pairs of binaural echoes, demonstrating the benefit of sequential echoes. Linear and nonlinear classification of SR/VS targets achieved a minimum generalization error probability PEG = 0.003. Nonlinear P1/PM classification achieved PEG = 0.009 that was four times smaller than linear classification. Nonlinear F/S classification achieved PEG = 0.220, indicating that envelope features by themselves are inadequate to accurately differentiate foliage targets.
Affiliation(s)
- Roman Kuc: Department of Electrical Engineering, Yale University, New Haven, Connecticut 06511, USA
|
20
|
Shrew twittering call rate is high in novel environments—a lab-study. Mammal Res 2020. [DOI: 10.1007/s13364-020-00488-w] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/24/2022]
|
21
|
Navigation and perception of spatial layout in virtual echo-acoustic space. Cognition 2020; 197:104185. [PMID: 31951856 PMCID: PMC7033557 DOI: 10.1016/j.cognition.2020.104185] [Citation(s) in RCA: 16] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/21/2019] [Revised: 01/03/2020] [Accepted: 01/07/2020] [Indexed: 11/20/2022]
Abstract
Successful navigation involves finding the way, planning routes, and avoiding collisions. Whilst previous research has shown that people can navigate using non-visual cues, it is not clear to what degree learned non-visual navigational abilities generalise to 'new' environments. Furthermore, the ability to successfully avoid collisions has not been investigated separately from the ability to perceive spatial layout or to orient oneself in space. Here, we address these important questions using a virtual echolocation paradigm in sighted people. Fourteen sighted blindfolded participants completed 20 virtual navigation training sessions over the course of 10 weeks. In separate sessions, before and after training, we also tested their ability to perceive the spatial layout of virtual echo-acoustic space. Furthermore, three blind echolocation experts completed the tasks without training, thus validating our virtual echo-acoustic paradigm. We found that over the course of 10 weeks sighted people became better at navigating, i.e. they reduced collisions and time needed to complete the route, and increased success rates. This also generalised to 'new' (i.e. untrained) virtual spaces. In addition, after training, their ability to judge spatial layout was better than before training. The data suggest that participants acquired a 'true' sensory driven navigational ability using echo-acoustics. In addition, we show that people not only developed navigational skills related to avoidance of collisions and finding safe passage, but also processes related to spatial perception and orienting. In sum, our results provide strong support for the idea that navigation is a skill which people can achieve via various modalities, here: echolocation.
|
22
|
Doi H, Sulpizio S, Esposito G, Katou M, Nishina E, Iriguchi M, Honda M, Oohashi T, Bornstein MH, Shinohara K. Inaudible components of the human infant cry influence haemodynamic responses in the breast region of mothers. J Physiol Sci 2019; 69:1085-1096. [PMID: 31786800 PMCID: PMC10717493 DOI: 10.1007/s12576-019-00729-x] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.2] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/25/2019] [Accepted: 11/05/2019] [Indexed: 11/30/2022]
Abstract
Distress vocalizations are fundamental for survival, and both sonic and ultrasonic components of such vocalizations are preserved phylogenetically among many mammals. On this basis, we hypothesized that ultrasonic inaudible components of the acoustic signal might play a heretofore hidden role in humans as well. By investigating the human distress vocalization (infant cry), here we show that, similar to other species, the human infant cry contains ultrasonic components that modulate haemodynamic responses in mothers, without the mother being consciously aware of those modulations. In two studies, we measured the haemodynamic activity in the breasts of mothers while they were exposed to the ultrasonic components of infant cries. Although mothers were not aware of ultrasounds, the presence of the ultrasounds in combination with the audible components increased oxygenated haemoglobin concentration in the mothers' breast region. This modulation was observed only when the body surface was exposed to the ultrasonic components. These findings provide the first evidence indicating that the ultrasonic components of the acoustic signal play a role in human mother-infant interaction.
Affiliation(s)
- Hirokazu Doi: Department of Neurobiology and Behavior, Graduate School of Biomedical Sciences, Nagasaki University, 1-12-4 Sakamoto-cho, Nagasaki, Nagasaki, 852-8523, Japan
- Simone Sulpizio: Faculty of Psychology, Vita-Salute San Raffaele University, Milan, Italy; Centre for Neurolinguistics and Psycholinguistics, Vita-Salute San Raffaele University, Milan, Italy
- Gianluca Esposito: Department of Psychology and Cognitive Science, University of Trento, Trento, Italy; Psychology Program, Nanyang Technological University, Singapore, Singapore
- Emi Nishina: Department of Liberal Arts, The Open University of Japan, Chiba, Japan
- Mayuko Iriguchi: Department of Neurobiology and Behavior, Graduate School of Biomedical Sciences, Nagasaki University, 1-12-4 Sakamoto-cho, Nagasaki, Nagasaki, 852-8523, Japan
- Manabu Honda: Department of Information Medicine, National Center of Neurology and Psychiatry, Tokyo, Japan
- Tsutomu Oohashi: Department of Research and Development, Foundation for Advancement of International Science, Tokyo, Japan
- Marc H Bornstein: Eunice Kennedy Shriver National Institute of Child Health and Human Development, Bethesda, USA; Institute for Fiscal Studies, London, UK
- Kazuyuki Shinohara: Department of Neurobiology and Behavior, Graduate School of Biomedical Sciences, Nagasaki University, 1-12-4 Sakamoto-cho, Nagasaki, Nagasaki, 852-8523, Japan
|
23
|
The Echobot: An automated system for stimulus presentation in studies of human echolocation. PLoS One 2019; 14:e0223327. [PMID: 31584971 PMCID: PMC6777781 DOI: 10.1371/journal.pone.0223327] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.6] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/06/2019] [Accepted: 09/18/2019] [Indexed: 11/19/2022] Open
Abstract
Echolocation is the detection and localization of objects by listening to the sounds they reflect. Early studies of human echolocation used real objects that the experimental leader positioned manually before each experimental trial. The advantage of this procedure is the use of realistic stimuli; the disadvantage is that manually shifting stimuli between trials is very time consuming, making it difficult to use psychophysical methods based on the presentation of hundreds of stimuli. The present study tested a new automated system for stimulus presentation, the Echobot, that overcomes this disadvantage. We tested 15 sighted participants with no prior experience of echolocation on their ability to detect the reflection of a loudspeaker-generated click from a 50 cm circular aluminum disk. The results showed that most participants were able to detect the sound reflections. Performance varied considerably, however, with mean individual thresholds of detection ranging from 1 to 3.2 m distance from the disk. Three participants from the loudspeaker experiment were also tested using self-generated vocalizations. One participant performed better using vocalizations and one much worse than in the loudspeaker experiment, illustrating that performance in echolocation experiments using vocalizations not only measures the ability to detect sound reflections, but also the ability to produce efficient echolocation signals. Overall, the present experiments show that the Echobot may be a useful tool in research on human echolocation.
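Detection thresholds like the 1-3.2 m figures above are typically estimated with an adaptive procedure. The sketch below uses a classic 2-down/1-up staircase on a simulated observer; the staircase rule, the observer model, and all constants are illustrative, not the Echobot paper's actual method:

```python
import math
import random

def two_down_one_up(psychometric, start=1.0, step=0.05, trials=400, seed=0):
    """2-down/1-up adaptive staircase: the tracked stimulus level converges
    on the ~70.7%-correct point of the observer's psychometric function."""
    rng = random.Random(seed)
    level, run = start, 0
    levels = []
    for _ in range(trials):
        correct = rng.random() < psychometric(level)
        if correct:
            run += 1
            if run == 2:       # two correct in a row -> harder (farther target)
                level += step
                run = 0
        else:                  # any error -> easier (closer target)
            level -= step
            run = 0
        levels.append(level)
    return sum(levels[-100:]) / 100.0  # late-trial average as the threshold

# Hypothetical observer: detection falls off with distance, 70.7% near 2.1 m
obs = lambda d: 0.5 + 0.5 / (1.0 + math.exp((d - 2.0) / 0.3))
threshold = two_down_one_up(obs)
```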
|
24
|
Texture Classification Using Spectral Entropy of Acoustic Signal Generated by a Human Echolocator. Entropy 2019. [PMCID: PMC7514294 DOI: 10.3390/e21100963] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.2] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Indexed: 11/17/2022]
Abstract
Human echolocation is a biological process wherein the human emits a punctuated acoustic signal, and the ear analyzes the echo in order to perceive the surroundings. The peculiar acoustic signal is normally produced by clicking inside the mouth. This paper utilized this unique acoustic signal from a human echolocator as a source of transmitted signal in a synthetic human echolocation technique. Thus, the aim of the paper was to extract information from the echo signal and develop a classification scheme to identify signals reflected from different textures at various distances. The scheme was based on spectral entropy extracted from Mel-scale filtering output in the Mel-frequency cepstrum coefficient of a reflected echo signal. The classification process involved data mining, features extraction, clustering, and classifier validation. The reflected echo signals were obtained via an experimental setup resembling a human echolocation scenario, configured for synthetic data collection. Unlike in typical speech signals, extracted entropy from the formant characteristics was likely not visible for the human mouth-click signals. Instead, multiple peak spectral features derived from the synthesis signal of the mouth-click were assumed as the entropy obtained from the Mel-scale filtering output. To realize the classification process, K-means clustering and K-nearest neighbor processes were employed. Moreover, the impacts of sound propagation on the extracted spectral entropy used in the classification outcome were also investigated. The outcomes of the classifier performance herein indicated that spectral entropy is essential for human echolocation.
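The core feature named in this abstract, spectral entropy, can be sketched in a few lines. This computes the normalized Shannon entropy of a power spectrum (or of Mel filter-bank energies); the 8-bin example spectra are invented for illustration:

```python
import math

def spectral_entropy(power_spectrum):
    """Normalized spectral entropy of a power spectrum or filter-bank
    output: near 0 for a single dominant peak, 1 for a flat spectrum."""
    total = sum(power_spectrum)
    probs = [p / total for p in power_spectrum if p > 0]
    h = -sum(p * math.log2(p) for p in probs)
    return h / math.log2(len(power_spectrum))

flat = spectral_entropy([1.0] * 8)                                   # noise-like
peaky = spectral_entropy([8.0, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1])  # tonal
```

Vectors of such entropies per Mel band could then feed the K-means/KNN pipeline the abstract describes.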
|
25
|
Thaler L, De Vos HPJC, Kish D, Antoniou M, Baker CJ, Hornikx MCJ. Human Click-Based Echolocation of Distance: Superfine Acuity and Dynamic Clicking Behaviour. J Assoc Res Otolaryngol 2019; 20:499-510. [PMID: 31286299 PMCID: PMC6797687 DOI: 10.1007/s10162-019-00728-0] [Citation(s) in RCA: 14] [Impact Index Per Article: 2.8] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/08/2018] [Accepted: 06/06/2019] [Indexed: 01/25/2023] Open
Abstract
Some people who are blind have trained themselves in echolocation using mouth clicks. Here, we provide the first report of psychophysical and clicking data during echolocation of distance from a group of 8 blind people with experience in mouth click-based echolocation (daily use for > 3 years). We found that experienced echolocators can detect changes in distance of 3 cm at a reference distance of 50 cm, and a change of 7 cm at a reference distance of 150 cm, regardless of object size (i.e. 28.5 cm vs. 80 cm diameter disk). Participants made mouth clicks that were more intense and they made more clicks for weaker reflectors (i.e. same object at farther distance, or smaller object at same distance), but number and intensity of clicks were adjusted independently from one another. The acuity we found is better than previous estimates based on samples of sighted participants without experience in echolocation or individual experienced participants (i.e. single blind echolocators tested) and highlights adaptation of the perceptual system in blind human echolocators. Further, the dynamic adaptive clicking behaviour we observed suggests that number and intensity of emissions serve separate functions to increase SNR. The data may serve as an inspiration for low-cost (i.e. non-array based) artificial 'cognitive' sonar and radar systems, i.e. signal design, adaptive pulse repetition rate and intensity. It will also be useful for instruction and guidance for new users of echolocation.
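The distance acuities reported above translate into very small round-trip delay differences. A quick check of the arithmetic, assuming a speed of sound of about 343 m/s:

```python
C = 343.0  # speed of sound in air, m/s (room temperature)

def round_trip_delay(distance_m):
    """Time for a click to travel to a reflector and back."""
    return 2.0 * distance_m / C

# 3 cm change at a 50 cm reference distance: ~175 microseconds
dt_50 = round_trip_delay(0.53) - round_trip_delay(0.50)
# 7 cm change at a 150 cm reference distance: ~408 microseconds
dt_150 = round_trip_delay(1.57) - round_trip_delay(1.50)
```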
Affiliation(s)
- Lore Thaler: Department of Psychology, Durham University, Science Site, South Road, Durham, DH1 3LE, UK
- H P J C De Vos: Eindhoven University of Technology, Eindhoven, The Netherlands
- D Kish: World Access for the Blind, Placentia, CA, USA
- M Antoniou: Department of Electronic Electrical and Systems Engineering, University of Birmingham, Birmingham, UK
- C J Baker: Department of Electronic Electrical and Systems Engineering, University of Birmingham, Birmingham, UK
- M C J Hornikx: Eindhoven University of Technology, Eindhoven, The Netherlands
|
26
|
Thaler L, Zhang X, Antoniou M, Kish DC, Cowie D. The flexible action system: Click-based echolocation may replace certain visual functionality for adaptive walking. J Exp Psychol Hum Percept Perform 2019; 46:21-35. [PMID: 31556685 PMCID: PMC6936248 DOI: 10.1037/xhp0000697] [Citation(s) in RCA: 13] [Impact Index Per Article: 2.6] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/08/2022]
Abstract
People use sensory, in particular visual, information to guide actions such as walking around obstacles, grasping or reaching. However, it is presently unclear how malleable the sensorimotor system is. The present study investigated this by measuring how click-based echolocation may be used to avoid obstacles while walking. We tested 7 blind echolocation experts, 14 sighted, and 10 blind echolocation beginners. For comparison, we also tested 10 sighted participants, who used vision. To maximize the relevance of our research for people with vision impairments, we also included a condition where the long cane was used and considered obstacles at different elevations. Motion capture and sound data were acquired simultaneously. We found that echolocation experts walked just as fast as sighted participants using vision, and faster than either sighted or blind echolocation beginners. Walking paths of echolocation experts indicated early and smooth adjustments, similar to those shown by sighted people using vision and different from later and more abrupt adjustments of beginners. Further, for all participants, the use of echolocation significantly decreased collision frequency with obstacles at head, but not ground level. Further analyses showed that participants who made clicks with higher spectral frequency content walked faster, and that for experts higher clicking rates were associated with faster walking. The results highlight that people can use novel sensory information (here, echolocation) to guide actions, demonstrating the action system's ability to adapt to changes in sensory input. They also highlight that regular use of echolocation enhances sensory-motor coordination for walking in blind people. Vision loss has negative consequences for people's mobility. The current report demonstrates that echolocation might replace certain visual functionality for adaptive walking. Importantly, the report also highlights that echolocation and long cane are complementary mobility techniques. The findings have direct relevance for professionals involved in mobility instruction and for people who are blind.
Affiliation(s)
- Xinyu Zhang: School of Information and Electronics, Beijing Institute of Technology
- Michail Antoniou: Department of Electronic Electrical and Systems Engineering, School of Engineering, University of Birmingham
|
27
|
Andreasen A, Geronazzo M, Nilsson NC, Zovnercuka J, Konovalov K, Serafin S. Auditory Feedback for Navigation with Echoes in Virtual Environments: Training Procedure and Orientation Strategies. IEEE Trans Vis Comput Graph 2019; 25:1876-1886. [PMID: 30794514 DOI: 10.1109/tvcg.2019.2898787] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.2] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/09/2023]
Abstract
Being able to hear objects in an environment, for example using echolocation, is a challenging task. The main goal of the current work is to use virtual environments (VEs) to train novice users to navigate using echolocation. Previous studies have shown that musicians are able to differentiate sound pulses from reflections. This paper presents design patterns for VE simulators for both training and testing procedures, while classifying users' navigation strategies in the VE. Moreover, the paper presents features that increase users' performance in VEs. We report the findings of two user studies: a pilot test that helped improve the sonic interaction design, and a primary study exposing participants to a spatial orientation task during four conditions which were early reflections (RF), late reverberation (RV), early reflections-reverberation (RR) and visual stimuli (V). The latter study allowed us to identify navigation strategies among the users. Some users (10/26) reported an ability to create spatial cognitive maps during the test with auditory echoes, which may explain why this group performed better than the remaining participants in the RR condition.
|
28
|
Sumiya M, Ashihara K, Yoshino K, Gogami M, Nagatani Y, Kobayasi KI, Watanabe Y, Hiryu S. Bat-inspired signal design for target discrimination in human echolocation. J Acoust Soc Am 2019; 145:2221. [PMID: 31046316 DOI: 10.1121/1.5097166] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.6] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 07/30/2018] [Accepted: 03/22/2019] [Indexed: 06/09/2023]
Abstract
Echolocating bats exhibit sophisticated sonar behaviors using ultrasounds with actively adjusted acoustic characteristics (e.g., frequency and time-frequency structure) depending on the situation. In this study, the utility of ultrasound in human echolocation was examined. By listening to ultrasonic echoes with a shifted pitch to be audible, the participants (i.e., sighted echolocation novices) could discriminate the three-dimensional (3D) roundness of edge contours. This finding suggests that sounds with suitable wavelengths (i.e., ultrasounds) can provide useful information about 3D shapes. In addition, the shape, texture, and material discrimination experiments were conducted using ultrasonic echoes binaurally measured with a 1/7 scaled miniature dummy head. The acoustic and statistical analyses showed that intensity and timbre cues were useful for shape and texture discriminations, respectively. Furthermore, in the discrimination of objects with various features (e.g., acrylic board and artificial grass), the perceptual distances between objects were more dispersed when frequency-modulated sweep signals were used than when a constant-frequency signal was used. These suggest that suitable signal design, i.e., echolocation sounds employed by bats, allowed echolocation novices to discriminate the 3D shape and texture. This top-down approach using human subjects may be able to efficiently help interpret the sensory perception, "seeing by sound," in bat biosonar.
Affiliation(s)
- Miwa Sumiya: Faculty of Life and Medical Sciences, Doshisha University, Kyotanabe, 610-0394, Japan
- Kaoru Ashihara: Human Informatics Research Institute, National Institute of Advanced Industrial Science and Technology, Tsukuba 305-8568, Japan
- Kazuki Yoshino: Department of Electronic Engineering, Kobe City College of Technology, Kobe, 651-2194, Japan
- Masaki Gogami: Department of Electronic Engineering, Kobe City College of Technology, Kobe, 651-2194, Japan
- Yoshiki Nagatani: Department of Electronic Engineering, Kobe City College of Technology, Kobe, 651-2194, Japan
- Kohta I Kobayasi: Faculty of Life and Medical Sciences, Doshisha University, Kyotanabe, 610-0394, Japan
- Yoshiaki Watanabe: Faculty of Life and Medical Sciences, Doshisha University, Kyotanabe, 610-0394, Japan
- Shizuko Hiryu: Faculty of Life and Medical Sciences, Doshisha University, Kyotanabe, 610-0394, Japan
|
29
|
Kuc R. Generating cognitive maps using echo features from a biomimetic audible sonar. J Acoust Soc Am 2019; 145:2084. [PMID: 31046333 DOI: 10.1121/1.5096534] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.2] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 11/27/2018] [Accepted: 03/08/2019] [Indexed: 06/09/2023]
Abstract
A sonar cognitive map displays target components that are specified by signal features extracted from a single binaural echo pair. A biomimetic audible sonar probes targets configured using posts connected by tangential planes. Echo envelopes are processed to extract values of eight parameters that govern the mapping process. Being tuned to recognize posts and planes, a cognitive map is composed of these two components using the posts' centers and radii as landmarks. A platform with translational and rotational degrees of freedom implements a landmark-centric scanning trajectory whose step size adaptively changes with echo information. The sonar tracks the target surface by maintaining a constant first-echo arrival time and by equalizing binaural echo times to form singular echoes that identify landmarks. The mapping process employs five states from detection to termination that pass through the singular echo state. Separate states process echo interference caused by two posts and echoes from planar surfaces. Sonar scanning stops when the current landmark parameters match those of the first landmark. Two targets configured with three posts and an added plane illustrate the procedure. Cognitive maps exhibit landmark locations that are accurate to ±5% with post radius estimates accurate to ±20%.
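The landmark geometry described above rests on two standard sonar relations: range from round-trip arrival time, and bearing from the binaural arrival-time difference. A minimal sketch (the 0.15 m receiver spacing and the far-field sine model are assumptions, not the paper's parameters):

```python
import math

C = 343.0           # speed of sound, m/s
EAR_SPACING = 0.15  # assumed binaural receiver separation, m

def range_from_echo(t_arrival):
    """Target range from the first-echo round-trip arrival time."""
    return C * t_arrival / 2.0

def bearing_from_itd(itd):
    """Approximate bearing (radians) from the binaural time difference,
    using the far-field model itd = d * sin(theta) / c."""
    return math.asin(max(-1.0, min(1.0, itd * C / EAR_SPACING)))

r = range_from_echo(0.0117)    # ~2 m landmark
theta = bearing_from_itd(0.0)  # equalized binaural times ("singular echo")
```

Equalizing the two arrival times, as the scanning trajectory above does, drives theta to zero so the sonar faces the landmark directly.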
Affiliation(s)
- Roman Kuc: Department of Electrical Engineering, Yale University, New Haven, Connecticut 06511, USA
|
30
|
Nilsson ME, Tirado C, Szychowska M. Psychoacoustic evidence for stronger discrimination suppression of spatial information conveyed by lag-click interaural time than interaural level differences. J Acoust Soc Am 2019; 145:512. [PMID: 30710980 DOI: 10.1121/1.5087707] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 08/08/2018] [Accepted: 12/29/2018] [Indexed: 06/09/2023]
Abstract
Listeners have limited access to spatial information in lagging sound, a phenomenon known as discrimination suppression. It is unclear whether discrimination suppression works differently for interaural time differences (ITDs) and interaural level differences (ILDs). To explore this, three listeners assessed the lateralization (left or right) and detection (present or not) of lag clicks with a large fixed ITD (350 μs) or ILD (10 dB) following a diotic lead click, with inter-click intervals (ICIs) of 0.125-256 ms. Performance was measured on a common scale for both cues: the lag-lead amplitude ratio [dB] at 75% correct answers. The main finding was that the lateralization thresholds, but not detection thresholds, were more strongly elevated for ITD-only than ILD-only clicks at intermediate ICIs (1-8 ms) in which previous research has found the strongest discrimination suppression effects. Altogether, these findings suggest that discrimination suppression involves mechanisms that make spatial information conveyed by lag-click ITDs less accessible to listeners than spatial information conveyed by lag-click ILDs.
Affiliation(s)
- Mats E Nilsson: Gösta Ekman Laboratory, Department of Psychology, Stockholm University, Stockholm, Sweden
- Carlos Tirado: Gösta Ekman Laboratory, Department of Psychology, Stockholm University, Stockholm, Sweden
- Malina Szychowska: Gösta Ekman Laboratory, Department of Psychology, Stockholm University, Stockholm, Sweden
|
31
|
Ton C, Omar A, Szedenko V, Tran VH, Aftab A, Perla F, Bernstein MJ, Yang Y. LIDAR Assist Spatial Sensing for the Visually Impaired and Performance Analysis. IEEE Trans Neural Syst Rehabil Eng 2018; 26:1727-1734. [PMID: 30047892 DOI: 10.1109/tnsre.2018.2859800] [Citation(s) in RCA: 20] [Impact Index Per Article: 3.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/06/2022]
Abstract
Echolocation enables people with impaired or no vision to comprehend surrounding spatial information through reflected sound. However, the technique often requires substantial training, and its accuracy is subject to various conditions. Furthermore, individuals who practice this sensing method must simultaneously generate the sound and process the received audio information. This paper proposes and evaluates a proof-of-concept light detection and ranging (LIDAR) assist spatial sensing (LASS) system, which aims to overcome these restrictions by capturing the spatial layout of the user's surroundings with a LIDAR sensor and translating it into stereo sound of varying pitch: the stereo panning and relative pitch of the sound encode an object's angular orientation and horizontal distance, respectively, granting visually impaired users an enhanced spatial perception of their surroundings and potential obstacles. The work comprised two phases: Phase I engineered the hardware and software of the LASS system, and Phase II focused on a system efficacy study. The study, approved by the Penn State Institutional Review Board, included 18 student volunteers recruited through the Penn State Department of Psychology Subject Pool. The results demonstrate that blindfolded individuals equipped with the LASS system are able to quantitatively identify surrounding obstacles, differentiate their relative distance, and distinguish the angular location of multiple objects with minimal training.
Collapse
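The abstract's encoding (angular orientation rendered as stereo position, horizontal distance as pitch) could be sketched as follows. The frequency range, distance range, log-frequency mapping, and constant-power pan law are all illustrative assumptions; the paper's actual mapping is not given in the abstract.

```python
import math

# Assumed encoding ranges (not stated in the abstract)
F_MIN, F_MAX = 200.0, 2000.0   # pitch range in Hz
D_MIN, D_MAX = 0.5, 10.0       # distance range in metres

def encode(angle_deg, distance_m):
    """Map one LIDAR return to (frequency, left_gain, right_gain).

    Closer objects get a higher pitch on a log-frequency scale; angular
    position is rendered as constant-power stereo panning
    (-90 deg = full left, +90 deg = full right).
    """
    d = min(max(distance_m, D_MIN), D_MAX)
    # nearer -> higher pitch
    frac = 1.0 - (math.log(d) - math.log(D_MIN)) / (math.log(D_MAX) - math.log(D_MIN))
    freq = F_MIN * (F_MAX / F_MIN) ** frac
    # constant-power pan law
    pan = (min(max(angle_deg, -90.0), 90.0) + 90.0) / 180.0  # 0 .. 1
    left = math.cos(pan * math.pi / 2)
    right = math.sin(pan * math.pi / 2)
    return freq, left, right
```

A full system would sweep the LIDAR field of view and synthesize one such tone per detected obstacle.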
|
32
|
Norman LJ, Thaler L. Human Echolocation for Target Detection Is More Accurate With Emissions Containing Higher Spectral Frequencies, and This Is Explained by Echo Intensity. Iperception 2018; 9:2041669518776984. [PMID: 29854377 PMCID: PMC5968665 DOI: 10.1177/2041669518776984] [Citation(s) in RCA: 9] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/16/2017] [Accepted: 04/21/2018] [Indexed: 12/01/2022] Open
Abstract
Humans can learn to use acoustic echoes to detect and classify objects. Echolocators typically use tongue clicks to induce these echoes, and there is some evidence that higher spectral frequency content of an echolocator’s tongue click is associated with better echolocation performance. This may be explained by the intensity of the echoes. The current study tested experimentally (a) if emissions with higher spectral frequencies lead to better performance for target detection, and (b) if this is mediated by echo intensity. Participants listened to sound recordings that contained an emission and sometimes an echo from an object. The peak spectral frequency of the emission was varied between 3.5 and 4.5 kHz. Participants judged whether they heard the object in these recordings and did the same under conditions in which the intensity of the echoes had been digitally equated. Participants performed better using emissions with higher spectral frequencies, but this advantage was eliminated when the intensity of the echoes was equated. These results demonstrate that emissions with higher spectral frequencies can benefit echolocation performance in conditions where they lead to an increase in echo intensity. The findings suggest that people who train to echolocate should be instructed to make emissions (e.g. mouth clicks) with higher spectral frequency content.
Collapse
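The "digitally equated" echo-intensity condition in the abstract amounts to removing overall level as a cue. A minimal stand-in, assuming simple RMS normalization (the study's exact procedure is not described in the abstract):

```python
import numpy as np

def equate_echo_level(echo, target_rms):
    """Scale an echo waveform so its RMS matches `target_rms`.

    Every echo is rescaled to the same RMS level, removing intensity
    as a cue while leaving the echo's spectral shape intact.
    """
    rms = np.sqrt(np.mean(echo ** 2))
    if rms == 0:
        return echo.copy()
    return echo * (target_rms / rms)
```

With all echoes equated this way, any remaining detection advantage for high-frequency emissions would have to come from cues other than echo intensity, which is the contrast the study tested.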
Affiliation(s)
- L J Norman, Department of Psychology, Durham University, Durham, UK
- L Thaler, Department of Psychology, Durham University, Durham, UK
Collapse
|
33
|
Thaler L, De Vos R, Kish D, Antoniou M, Baker C, Hornikx M. Human echolocators adjust loudness and number of clicks for detection of reflectors at various azimuth angles. Proc Biol Sci 2018; 285:20172735. [PMID: 29491173 PMCID: PMC5832709 DOI: 10.1098/rspb.2017.2735] [Citation(s) in RCA: 13] [Impact Index Per Article: 2.2] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/07/2017] [Accepted: 02/06/2018] [Indexed: 11/15/2022] Open
Abstract
Bats are known to adjust their emissions to situational demands. Here we report similar findings for human echolocation. We asked eight blind expert echolocators to detect reflectors positioned at various azimuth angles. The same 17.5 cm diameter circular reflector, placed at 100 cm distance, was detected with 100% accuracy at 0°, 45° or 90° with respect to straight ahead, but performance dropped to approximately 80% at 135° (i.e. somewhat behind) and to chance level (50%) at 180° (i.e. directly behind). This can be explained by poorer target ensonification owing to the beam pattern of human mouth clicks. Importantly, analyses of sound recordings show that echolocators increased the loudness and number of clicks for reflectors at farther angles. Echolocators were able to reliably detect reflectors when level differences between echo and emission were as low as -27 dB, much lower than expected based on previous work. Increasing the intensity and number of clicks improves the signal-to-noise ratio and in this way compensates for weaker target reflections. Our results are, to our knowledge, the first to show that human echolocation experts adjust their emissions to improve sensory sampling. An implication of our findings is that human echolocators accumulate information from multiple samples.
Collapse
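The two quantities at the heart of this abstract, the echo-to-emission level difference (down to -27 dB) and the SNR benefit of emitting more clicks, can be put in code. The averaging-gain formula assumes the noise across clicks is uncorrelated, which is the standard idealization rather than anything the paper states:

```python
import math

def level_difference_db(echo_rms, emission_rms):
    """Echo level relative to the emission, in dB (negative = quieter)."""
    return 20 * math.log10(echo_rms / emission_rms)

def snr_gain_db(n_clicks):
    """SNR improvement from averaging n click-echo samples,
    assuming uncorrelated noise across clicks: 10*log10(n) dB."""
    return 10 * math.log10(n_clicks)
```

Under this idealization, doubling the number of clicks buys about 3 dB of SNR, which is one way to see how extra clicks can compensate for the weaker reflections at far azimuth angles.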
Affiliation(s)
- L Thaler, Department of Psychology, Durham University, Science Site, South Road, Durham DH1 3LE, UK
- R De Vos, Eindhoven University of Technology, 5600 MB Eindhoven, The Netherlands
- D Kish, World Access for the Blind, Placentia 92870, CA, USA
- M Antoniou, Department of Electronic Electrical and Systems Engineering, School of Engineering, University of Birmingham, Edgbaston, Birmingham B15 2TT, UK
- C Baker, Department of Electronic Electrical and Systems Engineering, School of Engineering, University of Birmingham, Edgbaston, Birmingham B15 2TT, UK
- M Hornikx, Eindhoven University of Technology, 5600 MB Eindhoven, The Netherlands
Collapse
|