1
Zakrajsek AD, Foulkes S, Nagel N, Neurohr F, Nauman EA. Biomechanical Considerations of Refreshable Braille and Tactile Graphics Toward Equitable Access: A Review. J Biomech Eng 2024; 146:060907. [PMID: 38421346] [DOI: 10.1115/1.4064964]
Abstract
This review highlights the biomechanical foundations of braille and tactile graphic discrimination within the context of design innovations in information access for the blind and low-vision community. Braille discrimination is a complex and poorly understood process that necessitates the coordination of motor control, mechanotransduction, and cognitive-linguistic processing. Despite substantial technological advances and multiple design attempts over the last fifty years, a low-cost, high-fidelity refreshable braille and tactile graphics display has yet to be delivered. Consequently, the blind and low-vision communities are left with limited options for information access. This is amplified by the rapid adoption of graphical user interfaces for human-computer interaction, a move from which the blind and low-vision community was effectively excluded. Text-to-speech screen readers lack the ability to convey the nuances necessary for science, technology, engineering, arts, and math education and offer limited privacy for the user. Printed braille and tactile graphics are effective modalities but are time- and resource-intensive, difficult to access, and lack real-time rendering. Single- and multiline refreshable braille devices either lack functionality or are extremely cost-prohibitive. Early computational models of mechanotransduction through complex digital skin tissue and the kinematics of the braille reading finger are explored as insight into device design specifications. A use-centered, convergence approach for future designs is discussed in which the design space is defined by both the end-user requirements and the available technology.
Affiliation(s)
- Anne D Zakrajsek
- Department of Biomedical Engineering, University of Cincinnati, 2901 Woodside Drive, Cincinnati, OH 45221
- Samuel Foulkes
- Clovernook Center for the Blind and Visually Impaired, 7000 Hamilton Avenue, Cincinnati, OH 45231
- Nicole Nagel
- School of Biomedical Engineering, Purdue University, 610 Purdue Mall, West Lafayette, IN 47907
- Fred Neurohr
- Cincinnati Children's Hospital Medical Center, 3333 Burnet Avenue, Cincinnati, OH 45229
- Eric A Nauman
- Department of Biomedical Engineering, University of Cincinnati, 2901 Woodside Drive, Cincinnati, OH 45221
2
Tivadar RI, Franceschiello B, Minier A, Murray MM. Learning and navigating digitally rendered haptic spatial layouts. npj Science of Learning 2023; 8:61. [PMID: 38102127] [PMCID: PMC10724186] [DOI: 10.1038/s41539-023-00208-4]
Abstract
Learning spatial layouts and navigating through them rely not simply on sight but rather on multisensory processes, including touch. Digital haptics based on ultrasounds are effective for creating and manipulating mental images of individual objects in sighted and visually impaired participants. Here, we tested whether this extends to scenes and navigation within them. Using only tactile stimuli conveyed via ultrasonic feedback on a digital touchscreen (i.e., a digital interactive map), 25 sighted, blindfolded participants first learned the basic layout of an apartment based on digital haptics only and then one of two trajectories through it. While still blindfolded, participants successfully reconstructed the haptically learned 2D spaces and navigated these spaces. Digital haptics were thus an effective means to learn 2D layouts and to translate them, on the one hand, into 3D reconstructions and, on the other hand, into navigation actions within real spaces. Digital haptics based on ultrasounds represent an alternative learning tool for complex scenes as well as for successful navigation in previously unfamiliar layouts, which can likely be further applied in the rehabilitation of spatial functions and the mitigation of visual impairments.
Affiliation(s)
- Ruxandra I Tivadar
- The Department of Radiology, Lausanne University Hospital and University of Lausanne, Lausanne, Switzerland
- Department of Ophthalmology, Fondation Asile des Aveugles, Lausanne, Switzerland
- Centre for Integrative and Complementary Medicine, Department of Anesthesiology, Lausanne University Hospital and University of Lausanne, Lausanne, Switzerland
- Cognitive Computational Neuroscience Group, Institute for Computer Science, University of Bern, Bern, Switzerland
- The Sense Innovation and Research Center, Lausanne and Sion, Switzerland
- Benedetta Franceschiello
- The Department of Radiology, Lausanne University Hospital and University of Lausanne, Lausanne, Switzerland
- The Sense Innovation and Research Center, Lausanne and Sion, Switzerland
- Institute of Systems Engineering, School of Engineering, University of Applied Sciences Western Switzerland (HES-SO Valais), Sion, Switzerland
- Astrid Minier
- The Department of Radiology, Lausanne University Hospital and University of Lausanne, Lausanne, Switzerland
- Department of Ophthalmology, Fondation Asile des Aveugles, Lausanne, Switzerland
- Micah M Murray
- The Department of Radiology, Lausanne University Hospital and University of Lausanne, Lausanne, Switzerland
- Department of Ophthalmology, Fondation Asile des Aveugles, Lausanne, Switzerland
- The Sense Innovation and Research Center, Lausanne and Sion, Switzerland
3
Trinh V, Manduchi R, Giudice NA. Experimental Evaluation of Multi-scale Tactile Maps Created with SIM, a Web App for Indoor Map Authoring. ACM Transactions on Accessible Computing 2023; 16:1-26. [PMID: 37427355] [PMCID: PMC10327626] [DOI: 10.1145/3590775]
Abstract
In this article, we introduce Semantic Interior Mapology (SIM), a web app that allows anyone to quickly trace the floor plan of a building, generating a vectorized representation that can be automatically converted into a tactile map at the desired scale. The design of SIM is informed by a focus group with seven blind participants. Maps generated by SIM at two different scales were tested in a user study with 10 participants, who were asked to perform a number of tasks designed to ascertain the spatial knowledge acquired through map exploration. These tasks included cross-map pointing, path finding, and determination of turn direction and walker orientation during imagined path traversal. By and large, participants were able to successfully complete the tasks, suggesting that these types of maps could be useful for pre-journey spatial learning.
Affiliation(s)
- Viet Trinh
- University of California, Santa Cruz, CA (U.S.A.); University of Maine, Orono, ME (U.S.A.)
- Roberto Manduchi
- University of California, Santa Cruz, CA (U.S.A.); University of Maine, Orono, ME (U.S.A.)
- Nicholas A Giudice
- University of California, Santa Cruz, CA (U.S.A.); University of Maine, Orono, ME (U.S.A.)
4
Wu CF, Wu HP, Tu YH, Yeh IT, Chang CT. Constituent Elements Affecting the Recognition of Tactile Graphics. Journal of Visual Impairment & Blindness 2022. [DOI: 10.1177/0145482x221092031]
Abstract
Introduction: Many tactile graphics designed for individuals with visual impairments consider single factors. According to the results of our previous study, there may be interactions among scale, representation, and complexity factors. We conducted this integrative study with these three factors. Additionally, for the representation factor, we introduced a new level that mixed the two common levels (line drawing (LD) and texture picture (TP)) into a textured-line drawing (TLD). Methods: We included 18 participants with congenital blindness. They were asked to identify and name tactile graphics. The design of the tactile graphics involved three factors, each at different levels: scale (large, medium, and small), representation (TP, LD, and TLD), and complexity (easy and complex). We recorded identification time and accuracy and conducted a three-way analysis of variance to investigate interactions. Results: The identification time for small-scale graphics was shorter than that for large-scale graphics. The accuracy for small-scale graphics was higher than that for medium-scale graphics. Under the TLD mode, the accuracy for small- and medium-scale graphics was higher than that for large-scale graphics. For medium-scale graphics, TLD performed better than LD. Discussion: Because the sizes of small-scale graphics were similar to those of the actual objects, they were easy to identify. If the TLD mode is used for medium-scale graphics, the components in the operation area can be completely presented, which is helpful for identification. However, if large-scale graphics are used under the TLD mode, the operation area is relatively small and difficult to identify. Implications for Practitioners: It is recommended to present objects at 1:1 under the small scale. Under the medium scale, the operation area can be presented by closed planes, while non-operation areas can be presented using lines. Under the large scale, it is recommended to reduce the size of the graphics to an extent where both hands can be used to explore.
Affiliation(s)
- Chih-Fu Wu
- Department of Industrial Design, Tatung University, Taiwan
- Hsiang-Ping Wu
- Department of Industrial Design, Tatung University, Taiwan
- Department of Product Design, Ming Chuan University, Taiwan
- Yung-Hsiang Tu
- Department of Industrial Design, Tatung University, Taiwan
- I-Ting Yeh
- Department of Industrial Design, Tatung University, Taiwan
- Chin-Te Chang
- Teacher, Taipei School for the Visually Impaired, Taiwan
5
Comparing Map Learning between Touchscreen-Based Visual and Haptic Displays: A Behavioral Evaluation with Blind and Sighted Users. Multimodal Technologies and Interaction 2021. [DOI: 10.3390/mti6010001]
Abstract
The ubiquity of multimodal smart devices affords new opportunities for eyes-free applications for conveying graphical information to both sighted and visually impaired users. Using previously established haptic design guidelines for generic rendering of graphical content on touchscreen interfaces, the current study evaluates the learning and mental representation of digital maps, representing a key real-world translational eyes-free application. Two experiments involving 12 blind participants and 16 sighted participants compared cognitive map development and test performance on a range of spatio-behavioral tasks across three information-matched learning-mode conditions: (1) our prototype vibro-audio map (VAM), (2) traditional hardcopy-tactile maps, and (3) visual maps. Results demonstrated that when perceptual parameters of the stimuli were matched between modalities during haptic and visual map learning, test performance was highly similar (functionally equivalent) between the learning modes and participant groups. These results suggest equivalent cognitive map formation between both blind and sighted users and between maps learned from different sensory inputs, providing compelling evidence supporting the development of amodal spatial representations in the brain. The practical implications of these results include empirical evidence supporting a growing interest in the efficacy of multisensory interfaces as a primary interaction style for people both with and without vision. Findings challenge the long-held assumption that blind people exhibit deficits on global spatial tasks compared to their sighted peers, with results also providing empirical support for the methodological use of sighted participants in studies pertaining to technologies primarily aimed at supporting blind users.
6
Chen D, Liu J, Tian L, Hu X, Song A. Research on the Method of Displaying the Contour Features of Image to the Visually Impaired on the Touch Screen. IEEE Trans Neural Syst Rehabil Eng 2021; 29:2260-2270. [PMID: 34705649] [DOI: 10.1109/tnsre.2021.3123394]
Abstract
Conveying image information to blind or visually impaired (BVI) people is an important means of improving their quality of life. The touch screen devices used daily are potential carriers through which BVI users can perceive image information by touch. However, touch screen devices also have the disadvantages of limited computing power and a lack of rich tactile experience. To help BVI users access images conveniently through the touch screen, we built an image contour display system based on vibrotactile feedback. In this paper, an image smoothing algorithm based on a convolutional neural network that can run quickly on a touch screen device is first used to preprocess the image and improve the effect of contour extraction. Then, based on the haptic physiological characteristics of human beings, this paper proposes a method of using the improved MH-Pen to guide BVI users in perceiving image contours on the touch screen. The paper introduces the extraction and expression methods of image contours in detail and, through two types of user experiments, compares and analyzes the subjects' perception of image contours in the two haptic display modes. The experimental results show that the image smoothing algorithm is useful and necessary for obtaining the main contour of the image and ensuring real-time display of the contour, and that the contour expression method based on motion direction guidance helps the subjects recognize the contour of the image more effectively.
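The abstract above names a pipeline of smoothing followed by contour extraction. The paper's own CNN-based method is not reproduced here, but the core step of turning a grayscale image into a binary contour map can be sketched with a plain Sobel gradient filter; the threshold value below is an arbitrary assumption for illustration.

```python
def sobel_contour(img, threshold=2.0):
    """Return a binary contour mask for a 2D grayscale image (list of lists).

    Interior pixels whose Sobel gradient magnitude exceeds `threshold`
    are marked 1 (contour); everything else, including the 1-pixel
    border, stays 0.
    """
    h, w = len(img), len(img[0])
    mask = [[0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            # 3x3 Sobel kernels for horizontal (gx) and vertical (gy) gradients.
            gx = (img[y-1][x+1] + 2*img[y][x+1] + img[y+1][x+1]
                  - img[y-1][x-1] - 2*img[y][x-1] - img[y+1][x-1])
            gy = (img[y+1][x-1] + 2*img[y+1][x] + img[y+1][x+1]
                  - img[y-1][x-1] - 2*img[y-1][x] - img[y-1][x+1])
            if (gx * gx + gy * gy) ** 0.5 > threshold:
                mask[y][x] = 1
    return mask

# A 6x6 test image: dark left half (0), bright right half (10).
image = [[0, 0, 0, 10, 10, 10] for _ in range(6)]
mask = sobel_contour(image)
# Contour pixels appear along the brightness boundary (columns 2-3)
# and nowhere in the flat regions.
```

A production system would smooth first (as the paper does) so that texture noise does not trigger spurious contour pixels.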
7
Yang W, Huang J, Wang R, Zhang W, Liu H, Xiao J. A Survey on Tactile Displays for Visually Impaired People. IEEE Trans Haptics 2021; 14:712-721. [PMID: 34077370] [DOI: 10.1109/toh.2021.3085915]
Abstract
Traditional paper documents with Braille characters and tangible graphics have obvious shortcomings for disseminating knowledge in the information age, and information accessibility is an urgent challenge for blind individuals. Although many types of tactile displays have been created for different applications, we focus especially on tactile displays for visually impaired people that can dynamically generate tangible graphics and Braille characters, to help blind users obtain information conveniently. In this article, we present the state of the art in graphic tactile displays (GTDs) and refreshable Braille displays (RBDs), then discuss their common kernel technologies involving actuators and latch structures. The article summarizes the performance of typical actuators of tactile displays and analyzes the working principles of several latch structures, systematically summarizing the latch structures of GTDs and RBDs for the first time. Several observations in this article should be useful for developing high-performance tactile displays for visually impaired people.
8
Creem-Regehr SH, Barhorst-Cates EM, Tarampi MR, Rand KM, Legge GE. How can basic research on spatial cognition enhance the visual accessibility of architecture for people with low vision? Cognitive Research: Principles and Implications 2021; 6:3. [PMID: 33411062] [PMCID: PMC7790979] [DOI: 10.1186/s41235-020-00265-y]
Abstract
People with visual impairment often rely on their residual vision when interacting with their spatial environments. The goal of visual accessibility is to design spaces that allow for safe travel for the large and growing population of people who have uncorrectable vision loss, enabling full participation in modern society. This paper defines the functional challenges in perception and spatial cognition with restricted visual information and reviews a body of empirical work on low vision perception of spaces on both local and global navigational scales. We evaluate how the results of this work can provide insights into the complex problem that architects face in the design of visually accessible spaces.
Affiliation(s)
- Margaret R Tarampi
- Department of Psychology, University of Hartford, West Hartford, CT, USA
- Kristina M Rand
- Department of Psychology, University of Utah, Salt Lake City, UT, USA
- Gordon E Legge
- Department of Psychology, University of Minnesota, Minneapolis, MN, USA
9
Anitha M, Kumar VDA, Malathi S, Kumar VDA, Ramakrishnan M, Kumar A, Ali R. A Survey on the Usage of Pattern Recognition and Image Analysis Methods for the Lifestyle Improvement on Low Vision and Visually Impaired People. Pattern Recognition and Image Analysis 2021. [DOI: 10.1134/s105466182101003x]
10
Gorlewicz JL, Tennison JL, Uesbeck PM, Richard ME, Palani HP, Stefik A, Smith DW, Giudice NA. Design Guidelines and Recommendations for Multimodal, Touchscreen-based Graphics. ACM Transactions on Accessible Computing 2020. [DOI: 10.1145/3403933]
Abstract
With content rapidly moving to the electronic space, access to graphics for individuals with visual impairments is a growing concern. Recent research has demonstrated the potential for representing basic graphical content on touchscreens using vibrations and sounds, yet few guidelines or processes exist to guide the design of multimodal, touchscreen-based graphics. In this work, we seek to address this gap by synergizing our collective research efforts over the past eight years and implementing our findings into a compilation of recommendations, which we validate through an iterative design process and user study. We start by reviewing previous work and then collate findings into a set of design guidelines for generating basic elements of touchscreen-based multimodal graphics. We then use these guidelines to generate exemplary graphics in mathematics, specifically bar charts and geometry concepts. We discuss the iterative design process of moving from guidelines to actual graphics and highlight challenges. We then present a formal user study with 22 participants with visual impairments, comparing learning performance using touchscreen-rendered graphics with embossed graphics. We conclude with qualitative feedback from participants on the touchscreen-based approach and offer areas of future investigation as these recommendations are expanded to include more complex graphical concepts.
11
Giudice NA, Guenther BA, Jensen NA, Haase KN. Cognitive Mapping Without Vision: Comparing Wayfinding Performance After Learning From Digital Touchscreen-Based Multimodal Maps vs. Embossed Tactile Overlays. Front Hum Neurosci 2020; 14:87. [PMID: 32256329] [PMCID: PMC7090157] [DOI: 10.3389/fnhum.2020.00087]
Abstract
This article starts by discussing the state of the art in accessible interactive maps for use by blind and visually impaired (BVI) people. It then describes a behavioral experiment investigating the efficacy of a new type of low-cost, touchscreen-based multimodal interface, called a vibro-audio map (VAM), for supporting environmental learning, cognitive map development, and wayfinding behavior on the basis of nonvisual sensing. In the study, eight BVI participants learned two floor-maps of university buildings, one using the VAM and the other using an analogous hardcopy tactile map (HTM) overlaid on the touchscreen. They were asked to freely explore each map, with the task of learning the entire layout and finding three hidden target locations. After meeting a learning criterion, participants performed an environmental transfer test, where they were brought to the corresponding physical layout and were asked to plan/navigate routes between learned target locations from memory, i.e., without access to the map used at learning. The results using Bayesian analyses aimed at assessing equivalence showed highly similar target localization accuracy and route efficiency performance between conditions, suggesting that the VAM supports the same level of environmental learning, cognitive map development, and wayfinding performance as is possible from interactive displays using traditional tactile map overlays. These results demonstrate the efficacy of the VAM for supporting complex spatial tasks without vision using a commercially available, low-cost interface and open the door to a new era of mobile interactive maps for spatial learning and wayfinding by BVI navigators.
Affiliation(s)
- Nicholas A. Giudice
- Spatial Informatics Program: School of Computing and Information Science, The University of Maine, Orono, ME, United States
- Virtual Environments and Multimodal Interaction (VEMI) Laboratory, The University of Maine, Orono, ME, United States
- Department of Psychology, The University of Maine, Orono, ME, United States
- Benjamin A. Guenther
- Virtual Environments and Multimodal Interaction (VEMI) Laboratory, The University of Maine, Orono, ME, United States
- Department of Psychology, The University of Maine, Orono, ME, United States
- Nicholas A. Jensen
- Virtual Environments and Multimodal Interaction (VEMI) Laboratory, The University of Maine, Orono, ME, United States
- Department of Psychology, The University of Maine, Orono, ME, United States
- Kaitlyn N. Haase
- Spatial Informatics Program: School of Computing and Information Science, The University of Maine, Orono, ME, United States
- Virtual Environments and Multimodal Interaction (VEMI) Laboratory, The University of Maine, Orono, ME, United States
12
Murillo-Morales T, Miesenberger K. AUDiaL: A Natural Language Interface to Make Statistical Charts Accessible to Blind Persons. Lecture Notes in Computer Science 2020. [PMCID: PMC7479797] [DOI: 10.1007/978-3-030-58796-3_44]
Abstract
This paper discusses the design and evaluation of AUDiaL (Accessible Universal Diagrams through Language). AUDiaL is a web-based, accessible natural language interface (NLI) prototype that allows blind persons to access statistical charts, such as bar and line charts, by means of free-form analytical and navigational queries expressed in natural language. Initial evaluation shows that NLIs are an innovative, promising approach to the accessibility of knowledge representation graphics since, as opposed to traditional approaches, they require no additional software or hardware and no user training, while allowing users to carry out most tasks commonly supported by data visualization techniques in an efficient, natural manner.
13
Hahn ME, Mueller CM, Gorlewicz JL. The Comprehension of STEM Graphics via a Multisensory Tablet Electronic Device by Students with Visual Impairments. Journal of Visual Impairment & Blindness 2019. [DOI: 10.1177/0145482x19876463]
Abstract
Introduction: The current work probes the effectiveness of multimodal touch screen tablet electronic devices in conveying science, technology, engineering, and mathematics graphics via vibrations and sounds to individuals who are visually impaired (i.e., blind or low vision) and compares it with similar graphics presented in an embossed format. Method: A volunteer sample of 22 participants who are visually impaired, selected from a summer camp and local schools for blind students, was recruited for the current study. Participants were first briefly (∼30 min) trained on how to explore graphics via a multimodal touch screen tablet. They then explored six graphic types (number line, table, pie chart, bar chart, line graph, and map) displayed via embossed paper and tablet. Participants answered three content questions per graphic type following exploration. Results: Participants were only 6% more accurate when answering questions regarding an embossed graphic as opposed to a tablet graphic. A paired-samples t test indicated that this difference was not significant, t(14) = 1.91, p = .07. Follow-up analyses indicated that presentation medium did not interact with graphic type, F(5, 50) = 0.43, p = .83, nor with visual ability, F(1, 13) = 0.00, p = .96. Discussion: The findings demonstrate that multimodal touch screen tablets may be comparable to embossed graphics in conveying iconographic science and mathematics content to individuals with visual impairments, regardless of the severity of impairment. The relative equivalence in response accuracy between mediums was unexpected, given that most students who participated were braille readers and had experience reading embossed graphics, whereas they were introduced to the tablet the day of testing.
Implications for practitioners: This work illustrates that multimodal touch screen tablets may be an effective option for general education teachers or teachers of students with visual impairments to use in their educational practices. Currently, preparation of accessible graphics is time consuming and requires significant preparation, but such tablets provide solutions for offering “real-time” displays of these graphics for presentation in class.
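The paired-samples t statistic reported above, t(14) = 1.91, compares each participant's accuracy on embossed versus tablet graphics. As a rough illustration of how that statistic is computed (using made-up scores, not the study's data, which are not reproduced here):

```python
import math

def paired_t(a, b):
    """Paired-samples t statistic: mean difference over its standard error.

    Returns (t, df), where df = n - 1 for n pairs.
    """
    n = len(a)
    diffs = [x - y for x, y in zip(a, b)]
    mean = sum(diffs) / n
    # Sample variance of the differences (n - 1 in the denominator).
    var = sum((d - mean) ** 2 for d in diffs) / (n - 1)
    se = math.sqrt(var / n)  # standard error of the mean difference
    return mean / se, n - 1

# Hypothetical accuracy scores for five participants on two media.
embossed = [5, 3, 7, 4, 8]
tablet = [4, 4, 5, 4, 5]
t, df = paired_t(embossed, tablet)
# For this toy data, t = sqrt(2) ≈ 1.414 with df = 4.
```

The resulting t would then be compared against the t distribution with the given degrees of freedom to obtain the p value the abstract reports.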
Affiliation(s)
- Corrine M. Mueller
- Vibratory Touchscreen Applications for Learning (ViTAL), St. Louis, MO, USA
- Jenna L. Gorlewicz
- Saint Louis University, MO, USA
- Vibratory Touchscreen Applications for Learning (ViTAL), St. Louis, MO, USA
14
Bao T, Su L, Kinnaird C, Kabeto M, Shull PB, Sienko KH. Vibrotactile display design: Quantifying the importance of age and various factors on reaction times. PLoS One 2019; 14:e0219737. [PMID: 31398207] [PMCID: PMC6688825] [DOI: 10.1371/journal.pone.0219737]
Abstract
Numerous factors affect reaction times to vibrotactile cues. Therefore, it is important to consider the relative magnitudes of these time delays when designing vibrotactile displays for real-time applications. The objectives of this study were to quantify reaction times to typical vibrotactile stimulus parameters through direct comparison within a single experimental setting, and to determine the relative importance of these factors on reaction times. Young (n = 10, 21.9 ± 1.3 yrs) and older adults (n = 13, 69.4 ± 5.0 yrs) performed simple reaction time tasks by responding to vibrotactile stimuli using a thumb trigger while frequency, location, auditory cues, number of tactors in the same location, and tactor type were varied. Participants also performed a secondary task in a subset of the trials. The factors investigated in this study affected reaction times by 20-300 ms (reaction time findings are noted in parentheses) depending on the specific stimulus condition. In general, auditory cues generated by the tactors (<20 ms), vibration frequency (<20 ms), number of tactors in the same location (<30 ms), and tactor type (<50 ms) had relatively small effects on reaction times, while stimulus location (20-120 ms) and secondary cognitive task (>130 ms) had relatively large effects. Factors affected young and older adults' reaction times in a similar manner, but with different magnitudes. These findings can inform the development of vibrotactile displays by enabling designers to directly compare the relative effects of key factors on reaction times.
Affiliation(s)
- Tian Bao
- Dept. of Mechanical Engineering, University of Michigan, Ann Arbor, Michigan, United States of America
- Lydia Su
- Dept. of Mechanical Engineering, University of Michigan, Ann Arbor, Michigan, United States of America
- Catherine Kinnaird
- Dept. of Mechanical Engineering, University of Michigan, Ann Arbor, Michigan, United States of America
- Mohammed Kabeto
- Internal Medicine, University of Michigan, Ann Arbor, Michigan, United States of America
- Peter B. Shull
- State Key Laboratory of Mechanical System and Vibration, Shanghai Jiao Tong University, Shanghai, China
- Kathleen H. Sienko
- Dept. of Mechanical Engineering, University of Michigan, Ann Arbor, Michigan, United States of America
15
Automatic (Tactile) Map Generation—A Systematic Literature Review. ISPRS International Journal of Geo-Information 2019. [DOI: 10.3390/ijgi8070293]
Abstract
This paper presents a systematic literature review that reflects the current state of research in the field of algorithms and models for map generalization, the existing solutions for automatic (tactile) map generation, and good practices for designing spatial databases for the purposes of automatic map development. In total, over 500 primary studies were screened in order to identify the most relevant research on automatic (tactile) map generation from the last decade. The reviewed papers revealed many existing solutions in the field of automatic map production, as well as algorithms (e.g., Douglas–Peucker, Visvalingam–Whyatt) and models (e.g., GAEL, CartACom) for data generalization that might be used to transform traditional spatial data into a haptic form suitable for blind and visually impaired people. However, it turns out that a comprehensive solution for automatic tactile map generation does not exist.
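The Douglas–Peucker algorithm named in this abstract is the classic line-simplification routine used in map generalization: it keeps a polyline's endpoints, finds the intermediate point farthest from the chord between them, and recurses only where that distance exceeds a tolerance. A minimal recursive sketch (the tolerance value and sample polyline are arbitrary examples):

```python
import math

def perpendicular_distance(p, a, b):
    """Distance from point p to the line through a and b."""
    (px, py), (ax, ay), (bx, by) = p, a, b
    dx, dy = bx - ax, by - ay
    if dx == 0 and dy == 0:  # degenerate chord: a == b
        return math.hypot(px - ax, py - ay)
    # |cross product| / chord length = perpendicular distance.
    return abs(dy * px - dx * py + bx * ay - by * ax) / math.hypot(dx, dy)

def douglas_peucker(points, tolerance):
    """Simplify a polyline, keeping points farther than `tolerance`
    from the chord between the current endpoints."""
    if len(points) < 3:
        return list(points)
    a, b = points[0], points[-1]
    dmax, index = 0.0, 0
    for i in range(1, len(points) - 1):
        d = perpendicular_distance(points[i], a, b)
        if d > dmax:
            dmax, index = d, i
    if dmax <= tolerance:
        return [a, b]  # all intermediate points are dropped
    # Split at the farthest point and simplify both halves.
    left = douglas_peucker(points[:index + 1], tolerance)
    right = douglas_peucker(points[index:], tolerance)
    return left[:-1] + right  # avoid duplicating the split point

line = [(0, 0), (1, 0.1), (2, -0.1), (3, 5), (4, 6), (5, 7), (6, 8.1), (7, 9)]
simplified = douglas_peucker(line, tolerance=1.0)
# Near-collinear points are dropped; the corner at (3, 5) survives.
```

For tactile maps the tolerance would be chosen from the minimum feature separation a fingertip can resolve, rather than from visual legibility.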
16
Comparing Haptic Pattern Matching on Tablets and Phones: Large Screens Are Not Necessarily Better. Optom Vis Sci 2018; 95:720-726. [PMID: 30169351] [DOI: 10.1097/opx.0000000000001274]
Abstract
Touchscreen-based, multimodal graphics represent an area of increasing research in digital access for individuals with blindness or visual impairments; yet, little empirical research exists on the effects of screen size on graphical exploration. This work probes if and when more screen area is necessary to support a pattern-matching task.
PURPOSE: Larger touchscreens are thought to have a distinct benefit over smaller touchscreens in the amount of space available to convey graphical information nonvisually. The current study investigates two questions: (1) Do screen size and grid density impact a user's accuracy on pattern-matching tasks? (2) Do screen size and grid density impact a user's time on task?
METHODS: Fourteen blind and visually impaired individuals were given a pattern-matching task to complete on either a 10.5-in tablet or a 5.1-in phone. The patterns consisted of five vibrating targets imposed on sonified grids that varied in density (higher density = more grid squares). At test, participants compared the touchscreen pattern with a group of physical, embossed patterns and selected the matching pattern. Participants were evaluated on time spent exploring the pattern on the device and on their pattern-matching accuracy. Multiple and logistic regressions were performed on the data.
RESULTS: Device size, grid density, and age had no statistically significant effects on the model of pattern-matching accuracy. However, device size, grid density, and age had significant effects on the model for grid exploration. Using the phone, exploring low-density grids, and being older were indicative of faster exploration times.
CONCLUSIONS: A trade-off between time and accuracy exists between devices that appears to be task dependent. Users may find a tablet most useful in situations where the accuracy of graphic interpretation is important and is not limited by time. Smaller screens afforded accuracy comparable to tablets and were faster to explore overall.
17
Brayda L, Leo F, Baccelliere C, Ferrari E, Vigini C. Updated Tactile Feedback with a Pin Array Matrix Helps Blind People to Reduce Self-Location Errors. Micromachines 2018; 9:E351. [PMID: 30424284 PMCID: PMC6082250 DOI: 10.3390/mi9070351]
Abstract
Autonomous navigation in novel environments still represents a challenge for people with visual impairment (VI). Pin array matrices (PAM) are an effective way to display spatial information to VI people in educative/rehabilitative contexts, as they provide high flexibility and versatility. Here, we tested the effectiveness of a PAM with VI participants in an orientation and mobility task. Participants haptically explored a map on the PAM showing a scaled representation of a real room; the map also included a symbol indicating a virtual target position. Then, participants entered the room and attempted to reach the target three times. While a control group only reviewed the same, unchanged map on the PAM between trials, an experimental group received an updated map that additionally represented the position they had previously reached in the room. The experimental group significantly improved across trials, showing both reduced self-location errors and reduced completion times, unlike the control group. We found that learning spatial layouts through updated tactile feedback on programmable displays outperforms conventional procedures using static tactile maps. This could represent a powerful tool for navigation, both in rehabilitation and everyday life contexts, improving spatial abilities and promoting independent living for VI people.
Affiliation(s)
- Luca Brayda, Research Unit of Robotics, Brain and Cognitive Sciences, Fondazione Istituto Italiano di Tecnologia, Genoa 16153, Italy
- Fabrizio Leo, Research Unit of Robotics, Brain and Cognitive Sciences, Fondazione Istituto Italiano di Tecnologia, Genoa 16153, Italy
- Caterina Baccelliere, Research Unit of Robotics, Brain and Cognitive Sciences, Fondazione Istituto Italiano di Tecnologia, Genoa 16153, Italy
- Elisabetta Ferrari, Research Unit of Robotics, Brain and Cognitive Sciences, Fondazione Istituto Italiano di Tecnologia, Genoa 16153, Italy
18
Morash VS, Russomanno A, Gillespie RB, O'Modhrain S. Evaluating Approaches to Rendering Braille Text on a High-Density Pin Display. IEEE Transactions on Haptics 2018; 11:476-481. [PMID: 29035226 DOI: 10.1109/toh.2017.2762666]
Abstract
Refreshable displays for tactile graphics are typically composed of pins with smaller diameters and spacing than standard braille dots. We investigated configurations of high-density pins for forming braille text on such displays, using non-refreshable stimuli produced with a 3D printer. Normal dot braille (diameter 1.5 mm) was compared to high-density dot braille (diameter 0.75 mm), in which each normal dot was rendered either by a single simulated high-density pin or by a cluster of pins configured in a diamond, X, or square; and to "blobs" that could result from covering normal braille and high-density multi-pin configurations with a thin membrane. Twelve blind participants read MNREAD sentences displayed in these conditions. For high-density simulated pins, single pins were read as quickly and easily as normal braille, but diamond, X, and square multi-pin configurations were slower and/or harder to read than normal braille. We therefore conclude that, as long as center-to-center dot spacing and dot placement are maintained, dot diameter may be open to variability for rendering braille on a high-density tactile display.
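The rendering choice this study compares is easy to picture as offset patterns on the dense pin grid: each braille dot center stays fixed (preserving inter-dot spacing), and the dot is realized as one pin or a small cluster around that center. A short Python sketch of one plausible encoding; the exact pin offsets and grid pitch used in the study are not given here, so these patterns are illustrative assumptions:

```python
# Hypothetical pin-offset patterns (in grid units) for rendering one braille
# dot on a high-density pin array. The study compared a single pin against
# diamond, X, and square clusters; these specific offsets are assumptions.
DOT_CONFIGS = {
    "single":  [(0, 0)],
    "diamond": [(0, 0), (0, 1), (1, 0), (0, -1), (-1, 0)],
    "x":       [(0, 0), (1, 1), (1, -1), (-1, 1), (-1, -1)],
    "square":  [(1, 1), (1, -1), (-1, 1), (-1, -1)],   # no center pin
}

def render_dot(center, config):
    """Raised-pin grid coordinates for one braille dot, keeping the dot's
    center fixed so center-to-center spacing is unchanged."""
    cx, cy = center
    return {(cx + dx, cy + dy) for dx, dy in DOT_CONFIGS[config]}

def render_cell(origin, dots, config, pitch=4):
    """Render a six-dot braille cell: `dots` holds positions 1-6 on the
    standard 2x3 layout, with `pitch` grid units between dot centers."""
    ox, oy = origin
    # standard braille numbering: dots 1-3 left column, 4-6 right column
    centers = {
        1: (ox, oy),             4: (ox + pitch, oy),
        2: (ox, oy - pitch),     5: (ox + pitch, oy - pitch),
        3: (ox, oy - 2 * pitch), 6: (ox + pitch, oy - 2 * pitch),
    }
    raised = set()
    for d in dots:
        raised |= render_dot(centers[d], config)
    return raised
```

For example, the letter "b" (dots 1 and 2) in the X configuration raises two disjoint five-pin clusters, while the same cell in the single-pin configuration raises only two pins.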
19
Reichinger A, Carrizosa HG, Wood J, Schröder S, Löw C, Luidolt LR, Schimkowitsch M, Fuhrmann A, Maierhofer S, Purgathofer W. Pictures in Your Mind. ACM Transactions on Accessible Computing 2018. [DOI: 10.1145/3155286]
Abstract
Tactile reliefs offer many benefits over the more classic raised line drawings or tactile diagrams, as depth, 3D shape, and surface textures are directly perceivable. Although often created for blind and visually impaired (BVI) people, a wider range of people may benefit from such multimodal material. However, some reliefs are still difficult to understand without proper guidance or accompanying verbal descriptions, hindering autonomous exploration.
In this work, we present a gesture-controlled interactive audio guide (IAG) based on recent low-cost depth cameras that can be operated directly with the hands on relief surfaces during tactile exploration. The interactively explorable, location-dependent verbal and captioned descriptions promise rapid tactile accessibility to 2.5D spatial information in a home or education setting, to online resources, or as a kiosk installation at public places.
We present a working prototype, discuss design decisions, and report the results of two evaluation studies: the first with 13 BVI test users, and a second follow-up study with 14 test users spanning a wide range of differences and difficulties associated with perception, memory, cognition, and communication. The participant-led research method of this latter study prompted new, significant, and innovative developments.
Affiliation(s)
- Andreas Reichinger, VRVis Zentrum für Virtual Reality und Visualisierung Forschungs-GmbH, Wien, Austria
- Joanna Wood, University of Sussex, Brighton, United Kingdom
20
Sorgini F, Caliò R, Carrozza MC, Oddo CM. Haptic-assistive technologies for audition and vision sensory disabilities. Disabil Rehabil Assist Technol 2017; 13:394-421. [DOI: 10.1080/17483107.2017.1385100]
Affiliation(s)
- Francesca Sorgini, The BioRobotics Institute, Scuola Superiore Sant’Anna, Pontedera, Pisa, Italy
- Renato Caliò, The BioRobotics Institute, Scuola Superiore Sant’Anna, Pontedera, Pisa, Italy
- Calogero Maria Oddo, The BioRobotics Institute, Scuola Superiore Sant’Anna, Pontedera, Pisa, Italy
21
Papadopoulos K, Koustriava E, Koukourikos P. Orientation and mobility aids for individuals with blindness: Verbal description vs. audio-tactile map. Assist Technol 2017; 30:191-200. [PMID: 28471302 DOI: 10.1080/10400435.2017.1307879]
Abstract
Individuals with visual impairment face significant challenges traveling in the physical environment. Independent movement is directly connected to a person's quality of life, and thus orientation and mobility issues are consistently listed among the top priorities of research in the field. The aim of the present research was to examine the accuracy of the cognitive map developed through the use of a verbal description versus the cognitive map developed using an audio-tactile map. Comparing the effectiveness of the two mobility aids in detecting specific points of interest in the physical environment was also an objective of the research. The procedure involved studying a map with each of the two mobility aids, followed by an assessment through transfer to the corresponding physical environment. The results suggest that an individual with visual impairment can acquire and use a functional cognitive map through the use of an audio-tactile map, while relying on a verbal description entails greater difficulty in detecting specific points of interest once in the physical environment.
Affiliation(s)
- Eleni Koustriava, Department of Educational and Social Policy, University of Macedonia, Thessaloniki, Greece
22
Palani HP, Giudice NA. Principles for Designing Large-Format Refreshable Haptic Graphics Using Touchscreen Devices. ACM Transactions on Accessible Computing 2017. [DOI: 10.1145/3035537]
Abstract
Touchscreen devices, such as smartphones and tablets, represent a modern solution for providing graphical access to people with blindness and visual impairment (BVI). However, a significant problem with these solutions is their limited screen real estate, which necessitates panning or zooming operations for accessing large-format graphical materials such as maps. Non-visual interfaces cannot directly employ traditional panning or zooming techniques due to various perceptual and cognitive limitations (e.g., constraints of the haptic field of view and disorientation due to loss of one's reference point after performing these operations). This article describes the development of four novel non-visual panning methods designed from the outset with consideration of these perceptual and cognitive constraints. Two studies evaluated the usability of these panning methods in comparison with a non-panning control condition. Results demonstrated that exploration, learning, and subsequent spatial behaviors were similar between panning and non-panning conditions, with one panning mode, based on a two-finger drag technique, showing the best overall performance. Findings provide compelling evidence that incorporating panning operations on touchscreen devices (the fastest-growing computational platform among the BVI demographic) is a viable, low-cost, and immediate solution for providing BVI people with access to a broad range of large-format digital graphical information.
Affiliation(s)
- Hari Prasath Palani, Spatial Informatics Program, School of Computing and Information Science, and Virtual Environments and Multimodal Interaction (VEMI) Laboratory, The University of Maine, Orono, Maine
- Nicholas A. Giudice, Spatial Informatics Program, School of Computing and Information Science, and Virtual Environments and Multimodal Interaction (VEMI) Laboratory, The University of Maine, Orono, Maine
23
Leo F, Cocchi E, Brayda L. The Effect of Programmable Tactile Displays on Spatial Learning Skills in Children and Adolescents of Different Visual Disability. IEEE Trans Neural Syst Rehabil Eng 2016; 25:861-872. [PMID: 27775905 DOI: 10.1109/tnsre.2016.2619742]
Abstract
Vision loss has severe impacts on physical, social, and emotional well-being. The education of blind children poses challenges, as many school disciplines (e.g., geometry, mathematics) are normally taught by relying heavily on vision. Touch-based assistive technologies are potential tools for providing graphical content to blind users, improving learning possibilities and social inclusion. Raised-line drawings are still the gold standard, but their stimuli cannot be reconfigured or adapted, and the blind person constantly requires assistance. Although much research concerns technological development, little work has addressed the assessment of programmable tactile graphics in educative and rehabilitative contexts. Here we designed, on programmable tactile displays, tests aimed at assessing spatial memory skills and shape recognition abilities. The tests involved a group of blind and a group of low-vision children and adolescents in a four-week longitudinal schedule. After establishing subject-specific difficulty levels, we observed a significant enhancement of performance across sessions for both groups. Learning effects were comparable to those of raised-paper control tests; however, our setup required minimal external assistance. Overall, our results demonstrate that programmable maps are an effective way to display graphical content in educative/rehabilitative contexts. They can be at least as effective as traditional paper tests while providing superior flexibility and versatility.
24
Horton EL, Renganathan R, Toth BN, Cohen AJ, Bajcsy AV, Bateman A, Jennings MC, Khattar A, Kuo RS, Lee FA, Lim MK, Migasiuk LW, Zhang A, Zhao OK, Oliveira MA. A review of principles in design and usability testing of tactile technology for individuals with visual impairments. Assist Technol 2016; 29:28-36. [PMID: 27187665 DOI: 10.1080/10400435.2016.1176083]
Abstract
To lay the groundwork for devising, improving, and implementing new technologies to meet the needs of individuals with visual impairments, a systematic literature review was conducted to: a) describe hardware platforms used in assistive devices, b) identify their various applications, and c) summarize practices in user testing conducted with these devices. A search in relevant EBSCO databases for articles published between 1980 and 2014 with terminology related to visual impairment, technology, and tactile sensory adaptation yielded 62 articles that met the inclusion criteria for final review. It was found that while earlier hardware development focused on pin matrices, the emphasis then shifted toward force feedback haptics and accessible touch screens. The inclusion of interactive and multimodal features has become increasingly prevalent. The quantity and consistency of research on navigation, education, and computer accessibility suggest that these are pertinent areas of need for the visually impaired community. Methodologies for usability testing ranged from case studies to larger cross-sectional studies. Many studies used blindfolded sighted users to draw conclusions about design principles and usability. Altogether, the findings presented in this review provide insight on effective design strategies and user testing methodologies for future research on assistive technology for individuals with visual impairments.
Affiliation(s)
- Emily L Horton, Ramkesh Renganathan, Bryan N Toth, Alexa J Cohen, Andrea V Bajcsy, Amelia Bateman, Mathew C Jennings, Anish Khattar, Ryan S Kuo, Felix A Lee, Meilin K Lim, Laura W Migasiuk, Amy Zhang, Oliver K Zhao, Marcio A Oliveira: Division of Information Technology, University of Maryland, College Park, Maryland, USA