1
Tan HL, Aplin T, McAuliffe T, Gullo H. An exploration of smartphone use by, and support for people with vision impairment: a scoping review. Disabil Rehabil Assist Technol 2024;19:407-432. [PMID: 35776428] [DOI: 10.1080/17483107.2022.2092223]
Abstract
PURPOSE Smartphones have become a core piece of assistive technology (AT) for people with vision impairment (PVI) around the world. This scoping review sought to provide a comprehensive picture of the current evidence base on smartphones for PVI. METHODS Seven electronic databases (CINAHL, Cochrane Library, EMBASE, IEEE Xplore, Scopus, PubMed and Web of Science) were searched for papers published from 2007 to 2021. Peer-reviewed articles published in English were included if they discussed smartphone use by PVI, smartphone technologies designed for PVI, or training and learning support on the use of smartphones. RESULTS There were 16,899 records retrieved and 65 articles were included in this review. The majority (48%) of the papers focussed on developing better interfaces and Apps for PVI. In contrast, there was a paucity of papers (5%) discussing training or learning support for PVI to use smartphones and Apps effectively, even though such support was highlighted as important. Proper training will ensure that PVI can use this everyday technology as an AT to increase participation, enhance independence and improve quality of life overall. CONCLUSIONS The findings highlighted that smartphones and Apps can be used as effective and affordable AT by PVI. The many recent developments and research interest in smartphone technologies can further support their use. However, good training and learning support on the use of smartphones and Apps by PVI is lacking. Future research should focus on the development, provision and evaluation of evidence-based, tailored training and support, especially in low- and middle-income countries.
Implications for rehabilitation
- There is a need for more training and learning support for people with vision impairment (PVI) on the use of smartphones and Apps.
- An individualized and graded approach to training has been recommended for PVI learning to use smartphones.
- When supporting or training people to use smartphones, the person's level of vision impairment, as well as their age, are important considerations.
- Health professionals should be cognizant of the steep learning curve that some PVI may experience when using smartphones and Apps, especially when they switch from a phone with physical buttons to a touchscreen.
- Certain smartphone features are useful for particular vision loss conditions. For example, zoom and magnification are helpful for those with low vision, but speech-based text input and output, and voice commands (e.g., Siri and TalkBack), are useful for those who are blind.
Affiliation(s)
- Hwei Lan Tan
- School of Health and Rehabilitation Sciences, The University of Queensland, Saint Lucia, Australia
- Singapore Institute of Technology, Health and Social Sciences, Singapore, Singapore
- Tammy Aplin
- School of Health and Rehabilitation Sciences, The University of Queensland, Saint Lucia, Australia
- The Prince Charles Hospital, Metro North Hospital and Health Service, Chermside, Australia
- Tomomi McAuliffe
- School of Health and Rehabilitation Sciences, The University of Queensland, Saint Lucia, Australia
- Hannah Gullo
- School of Health and Rehabilitation Sciences, The University of Queensland, Saint Lucia, Australia
2
Zhu HY, Hossain SN, Jin C, Singh AK, Nguyen MTD, Deverell L, Nguyen V, Gates FS, Fernandez IG, Melencio MV, Bell JAR, Lin CT. An investigation into the effectiveness of using acoustic touch to assist people who are blind. PLoS One 2023;18:e0290431. [PMID: 37878584] [PMCID: PMC10599575] [DOI: 10.1371/journal.pone.0290431]
Abstract
Wearable smart glasses are an emerging technology gaining popularity in the assistive technology industry. Such smart glasses aids typically leverage computer vision and other sensory information to translate the wearer's surroundings into computer-synthesized speech. In this work, we explored the potential of a new technique known as "acoustic touch" to provide a wearable spatial audio solution for assisting people who are blind in finding objects. In contrast to traditional systems, this technique uses smart glasses to sonify objects into distinct auditory icons when the object enters the device's field of view. We developed a wearable Foveated Audio Device to study the efficacy and usability of using acoustic touch to search for, memorize, and reach items. Our evaluation study involved 14 participants: 7 blind or low-vision participants and 7 blindfolded sighted participants (as a control group). We compared the wearable device to two idealized conditions: a verbal clock-face description and a sequential audio presentation through external speakers. We found that the wearable device can effectively aid the recognition and reaching of an object. We also observed that the device does not significantly increase the user's cognitive workload. These promising results suggest that acoustic touch can provide a wearable and effective method of sensory augmentation.
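The core acoustic-touch idea, as the abstract describes it, is to sonify an object into a distinct auditory icon only once it enters the device's field of view. A minimal sketch of that triggering-and-panning logic follows; the field-of-view half-angle, the icon names, and the constant-power panning rule are illustrative assumptions, not details of the paper's Foveated Audio Device.

```python
import math

# Sketch only: FOV limit, icon names, and panning rule are assumptions,
# not the authors' implementation.

FOV_HALF_ANGLE = 30.0  # assumed half-angle of the glasses' field of view, degrees

# Hypothetical mapping from recognized object class to a short auditory icon.
AUDITORY_ICONS = {"cup": "clink", "book": "flick", "bottle": "glug"}

def acoustic_touch(obj_class, azimuth_deg):
    """Return (icon, left_gain, right_gain) if the object is inside the
    field of view, else None (silence)."""
    if abs(azimuth_deg) > FOV_HALF_ANGLE:
        return None  # object not yet "touched" by the head direction
    icon = AUDITORY_ICONS.get(obj_class, "tick")  # generic fallback icon
    # Constant-power stereo pan: -FOV..+FOV mapped to full left..full right.
    pan = azimuth_deg / FOV_HALF_ANGLE            # -1.0 .. 1.0
    angle = (pan + 1.0) * math.pi / 4.0           # 0 .. pi/2
    return icon, math.cos(angle), math.sin(angle)
```

Sweeping the head so that an object crosses into the field of view would make `acoustic_touch` start returning its icon, which is what lets the wearer "touch" the scene acoustically.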
Affiliation(s)
- Craig Jin
- University of Sydney, Sydney, Australia
3
Cloutier M, DeLucia PR. Topical Review: Impact of Central Vision Loss on Navigation and Obstacle Avoidance while Walking. Optom Vis Sci 2022;99:890-899. [PMID: 36594757] [PMCID: PMC9813875] [DOI: 10.1097/opx.0000000000001960]
Abstract
SIGNIFICANCE Individuals with central vision loss are at higher risk of injury when walking and thus may limit trips outside the home. Understanding the mobility challenges associated with central vision loss (CVL) can lead to more effective interventions. A systematic literature review focusing on mobility in CVL was conducted. Using the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) method, 2424 articles were identified in 4 databases (PsycINFO, APA PsycArticles, PubMed, and Web of Science). To be included in this review, the study methodology needed to relate to one of the three components of walking: (1) navigation, defined as the ability to reach a target destination; (2) obstacle avoidance, defined as the ability to avoid collisions with obstacles located at various heights and directions; and (3) street crossing, defined as the ability to both navigate a path and avoid collisions in a traffic environment. The methodology also needed to be empirical. Case studies, unstructured observational studies, studies based on self-report, research proposals, and existing systematic reviews were excluded. Titles, abstracts, and full text of identified articles were screened, yielding 26 articles included in the review. Results showed that, in many tasks, individuals with CVL can achieve a level of performance comparable with that of individuals with normal vision. Differences between normal and impaired vision were due to either age or how the groups completed the task. For example, individuals with CVL could cross a street successfully but did so less safely (i.e., with smaller safety margins) than individuals with normal vision. To identify new interventions for CVL, future research should focus on the differences in the mechanisms underlying mobility between individuals with normal and impaired vision rather than solely on performance differences.
Affiliation(s)
- Melissa Cloutier
- Department of Psychological Sciences, Rice University, Houston, Texas
4
Busaeed S, Katib I, Albeshri A, Corchado JM, Yigitcanlar T, Mehmood R. LidSonic V2.0: A LiDAR and Deep-Learning-Based Green Assistive Edge Device to Enhance Mobility for the Visually Impaired. Sensors (Basel) 2022;22:7435. [PMID: 36236546] [PMCID: PMC9570831] [DOI: 10.3390/s22197435]
Abstract
Over a billion people around the world are disabled, among whom 253 million are visually impaired or blind, and this number is increasing greatly due to ageing, chronic diseases, and poor environments and health. Despite many proposals, current devices and systems lack maturity and do not completely fulfill user requirements and satisfaction. Increased research activity in this field is required to encourage the development, commercialization, and widespread acceptance of low-cost and affordable assistive technologies for visual impairment and other disabilities. This paper proposes a novel approach that uses a LiDAR with a servo motor and an ultrasonic sensor to collect data and predict objects using deep learning for environment perception and navigation. We implemented this approach in a pair of smart glasses, called LidSonic V2.0, to enable the identification of obstacles for the visually impaired. The LidSonic system consists of an Arduino Uno edge computing device integrated into the smart glasses and a smartphone app that transmits data via Bluetooth. The Arduino gathers data, operates the sensors on the smart glasses, detects obstacles using simple data processing, and provides buzzer feedback to visually impaired users. The smartphone application collects data from the Arduino, detects and classifies items in the spatial environment, and gives spoken feedback to the user on the detected objects. In comparison to image-processing-based glasses, LidSonic uses far less processing time and energy to classify obstacles, since the simple LiDAR data comprise only a small set of integer measurements. We comprehensively describe the proposed system's hardware and software design, having constructed prototype implementations and tested them in real-world environments. Using the open platforms WEKA and TensorFlow, the entire LidSonic system is built with affordable off-the-shelf sensors and a microcontroller board costing less than USD 80. Essentially, we provide designs of an inexpensive, miniature green device that can be built into, or mounted on, any pair of glasses or even a wheelchair to help the visually impaired. Our approach enables faster inference and decision-making using relatively low energy with smaller data sizes, as well as faster communications for edge, fog, and cloud computing.
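As a rough illustration of why classifying a short sweep of integer distance readings is so much cheaper than image processing, the sketch below runs a nearest-centroid classifier over 45-reading sweeps (the sweep length follows the related LidSonic paper's description of its LiDAR data). The class labels, toy training sweeps, and choice of classifier are assumptions for the example, not the WEKA/TensorFlow models the authors trained.

```python
# Sketch only: nearest-centroid stand-in for the phone-side classifier.

SWEEP_LEN = 45  # one distance reading (cm) per servo step across the sweep

def centroid(sweeps):
    """Element-wise mean of several sweeps -> one prototype sweep."""
    n = len(sweeps)
    return [sum(s[i] for s in sweeps) / n for i in range(SWEEP_LEN)]

def classify(sweep, prototypes):
    """Label of the prototype with the smallest squared distance."""
    def sqdist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(prototypes, key=lambda label: sqdist(sweep, prototypes[label]))

# Toy training data: "wall" = uniformly near, "clear" = uniformly far.
prototypes = {
    "wall": centroid([[50] * SWEEP_LEN, [60] * SWEEP_LEN]),
    "clear": centroid([[300] * SWEEP_LEN, [320] * SWEEP_LEN]),
}
```

Each classification here is a few dozen integer subtractions per class, which is the kind of workload that fits comfortably on a phone or microcontroller, in contrast to running a vision model over camera frames.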
Affiliation(s)
- Sahar Busaeed
- Faculty of Computer and Information Sciences, Imam Mohammad Ibn Saud Islamic University, Riyadh 11564, Saudi Arabia
- Iyad Katib
- Department of Computer Science, Faculty of Computing and Information Technology, King Abdulaziz University, Jeddah 21589, Saudi Arabia
- Aiiad Albeshri
- Department of Computer Science, Faculty of Computing and Information Technology, King Abdulaziz University, Jeddah 21589, Saudi Arabia
- Juan M. Corchado
- Bisite Research Group, University of Salamanca, 37007 Salamanca, Spain
- Air Institute, IoT Digital Innovation Hub, 37188 Salamanca, Spain
- Department of Electronics, Information and Communication, Faculty of Engineering, Osaka Institute of Technology, Osaka 535-8585, Japan
- Tan Yigitcanlar
- School of Architecture and Built Environment, Queensland University of Technology, 2 George Street, Brisbane, QLD 4000, Australia
- Rashid Mehmood
- High Performance Computing Center, King Abdulaziz University, Jeddah 21589, Saudi Arabia
5
Humphreys JD, Sivaprasad S. Living Without a Diagnosis: A Patient's Perspective on Diabetic Macular Ischemia. Ophthalmol Ther 2022;11:1617-1628. [PMID: 35821381] [PMCID: PMC9437185] [DOI: 10.1007/s40123-022-00546-3]
Abstract
Diabetic macular ischemia (DMI) is a common complication of diabetic retinopathy (DR) that can result in progressive and irreversible vision loss. DMI is associated with damage to the vessels that supply blood to the retina and with enlargement of the foveal avascular zone. Currently, there are no approved treatments specifically for DMI. Furthermore, there is limited published information about the prognosis, prevalence or outcomes of DMI, and there is no consensus regarding diagnostic criteria. It is vital to ensure that there is sufficient, accessible and accurate information available to support patients, caregivers and physicians. To lay the foundation for more research into DMI and its impact on patients, we (a patient with DMI and an expert ophthalmologist) have worked together to interweave our personal perspectives and clinical experiences with a review of currently available literature on DMI. The development of a set of confirmed diagnostic criteria for DMI would assist both patients and physicians, allowing patients to access validated information about their condition and supporting the development of clinical trials for treatments of DMI. Training for physicians must continue to emphasise the importance of treating a patient holistically, rather than only treating their symptoms. Most importantly, developing trust and a healthy rapport between a patient and their physician is important in managing health anxiety and ensuring adherence to beneficial treatments or lifestyle adjustments; physicians must cultivate an open and flexible management approach with their patients. Finally, holistic educational programmes for patients, physicians and the general public around DMI and how it can affect daily functioning would facilitate general understanding and disease awareness.
Diabetic macular ischemia (DMI) is a common problem for patients with diabetic retinopathy that can lead to sight loss. There is very little information available about DMI, particularly from a patient's point of view. To address the lack of information about DMI, we (a person with DMI and her eye doctor) have worked together to examine what it is like to live with DMI. It is important to provide clear and accessible information about diseases to patients and carers. The lack of information about DMI may be upsetting for some people and should be addressed with more research. Developing a set of confirmed signs and symptoms for the diagnosis of DMI would allow people to be more confident in the information that they receive about their disease, and would support the development of treatments for DMI. The support of others is central to the wellbeing of people with vision loss. Although people with vision loss may also lose independence, care from loved ones can help to improve quality of life. Most importantly, developing trust between a patient and their doctor is central to managing people's fears about their eyesight and making sure that they follow helpful advice. Doctors must use an open and flexible approach with their patients, providing information in an honest and understandable way.
Affiliation(s)
- Sobha Sivaprasad
- NIHR Moorfields Biomedical Research Centre, Moorfields Eye Hospital, 162 City Rd, London, EC1V 2PD, UK
6
LidSonic for Visually Impaired: Green Machine Learning-Based Assistive Smart Glasses with Smart App and Arduino. Electronics 2022. [DOI: 10.3390/electronics11071076]
Abstract
Smart wearable technologies such as fitness trackers are creating many new opportunities to improve the quality of life for everyone. It is usually impossible for visually impaired people to orientate themselves in large spaces and navigate an unfamiliar area without external assistance. The design space for assistive technologies for the visually impaired is complex, involving many design parameters, including reliability, transparent-object detection, hands-free operation, high-speed real-time operation, low battery usage, low computation and memory requirements, light weight, and price affordability. State-of-the-art devices for the visually impaired lack maturity and do not fully meet user satisfaction, so more effort is required to bring innovation to this field. In this work, we develop a pair of smart glasses called LidSonic that uses machine learning, LiDAR, and ultrasonic sensors to identify obstacles. The LidSonic system comprises an Arduino Uno device located in the smart glasses and a smartphone app that communicates data using Bluetooth. The Arduino collects data, manages the sensors on the smart glasses, detects objects using simple data processing, and provides buzzer warnings to visually impaired users. The smartphone app receives data from the Arduino, detects and identifies objects in the spatial environment, and provides verbal feedback about the object to the user. Compared to image-processing-based glasses, LidSonic requires much less processing time and energy to classify objects using simple LiDAR data containing 45 integer readings. We provide a detailed description of the system hardware and software design, and its evaluation using nine machine learning algorithms. The data for the training and validation of machine learning models are collected from real spatial environments. We developed the complete LidSonic system using off-the-shelf inexpensive sensors and a microcontroller board costing less than USD 80. The intention is to provide a design of an inexpensive, miniature, green device that can be built into, or mounted on, any pair of glasses or even a wheelchair to help the visually impaired. This work is expected to open new directions for smart glasses design using open software tools and off-the-shelf hardware.
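The Arduino-side "simple data processing" with buzzer warnings described above can be pictured as a threshold rule on the closest reading in a sweep. The distance thresholds and beep periods below are invented for illustration; the abstract does not specify them.

```python
# Sketch only: thresholds and beep periods are assumed values,
# not parameters from the LidSonic firmware.

def buzzer_period_ms(readings, near_cm=100, far_cm=250):
    """Map the closest reading in a sweep to a beep period: faster beeps
    for nearer obstacles, silence (None) when the path is clear."""
    closest = min(readings)
    if closest >= far_cm:
        return None            # nothing within warning range: stay silent
    if closest <= near_cm:
        return 100             # very close: rapid beeping
    # Linearly stretch the period from 100 ms (near) to 1000 ms (far).
    frac = (closest - near_cm) / (far_cm - near_cm)
    return int(100 + frac * 900)
```

A rule this simple runs comfortably on an Arduino Uno between sweeps, which is why the heavier object classification can be deferred to the phone over Bluetooth.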
7
Kilian J, Neugebauer A, Scherffig L, Wahl S. The Unfolding Space Glove: A Wearable Spatio-Visual to Haptic Sensory Substitution Device for Blind People. Sensors (Basel) 2022;22:1859. [PMID: 35271009] [PMCID: PMC8914703] [DOI: 10.3390/s22051859]
Abstract
This paper documents the design, implementation and evaluation of the Unfolding Space Glove, an open-source sensory substitution device. It transmits the relative position and distance of nearby objects as vibratory stimuli to the back of the hand, thus enabling blind people to haptically explore the depth of their surrounding space and assisting with navigation tasks such as object recognition and wayfinding. The prototype requires no external hardware, is highly portable, operates in all lighting conditions, and provides continuous and immediate feedback, all while being visually unobtrusive. Both blind (n = 8) and blindfolded sighted participants (n = 6) completed structured training and obstacle courses with both the prototype and a white long cane to allow performance comparisons to be drawn between them. The subjects quickly learned how to use the glove and successfully completed all of the trials, though they remained slower with it than with the cane. Qualitative interviews revealed a high level of usability and user experience. Overall, the results indicate the general processability of spatial information through sensory substitution using haptic, vibrotactile interfaces. Further research would be required to evaluate the prototype's capabilities after extensive training and to derive a fully functional navigation aid from its features.
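The glove's core transformation, as described, is from the depth of the surrounding space to vibratory stimuli on the back of the hand. A minimal sketch of such a depth-to-vibration mapping follows; the 3 x 3 motor grid, the 0.3-3 m sensing range, and the linear intensity law are illustrative assumptions rather than the project's actual parameters.

```python
# Sketch only: grid size, depth range, and intensity law are assumptions,
# not the Unfolding Space Glove's firmware.

NEAR, FAR = 0.3, 3.0  # assumed sensing range in metres

def depth_to_vibration(depth_map, rows=3, cols=3):
    """Downsample a 2-D depth map (metres) to a rows x cols grid of motor
    intensities in 0..1, where nearer objects vibrate more strongly."""
    h, w = len(depth_map), len(depth_map[0])
    grid = []
    for r in range(rows):
        row = []
        for c in range(cols):
            # Nearest (minimum) depth inside this cell of the image.
            cell = [depth_map[y][x]
                    for y in range(r * h // rows, (r + 1) * h // rows)
                    for x in range(c * w // cols, (c + 1) * w // cols)]
            d = min(cell)
            # Clamp to range and invert: NEAR -> 1.0 (strong), FAR -> 0.0.
            d = max(NEAR, min(FAR, d))
            row.append((FAR - d) / (FAR - NEAR))
        grid.append(row)
    return grid
```

Taking the minimum depth per cell makes the nearest obstacle in each region dominate its motor, a conservative choice for a collision-avoidance aid.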
Affiliation(s)
- Jakob Kilian
- Köln International School of Design, TH Köln, 50678 Köln, Germany
- ZEISS Vision Science Laboratory, Eberhard-Karls-University Tübingen, 72076 Tübingen, Germany
- Alexander Neugebauer
- ZEISS Vision Science Laboratory, Eberhard-Karls-University Tübingen, 72076 Tübingen, Germany
- Lasse Scherffig
- Köln International School of Design, TH Köln, 50678 Köln, Germany
- Siegfried Wahl
- ZEISS Vision Science Laboratory, Eberhard-Karls-University Tübingen, 72076 Tübingen, Germany
- Carl Zeiss Vision International GmbH, 73430 Aalen, Germany
- Correspondence: Tel.: +49-7071-29-84512
8
A Survey on Recent Advances in AI and Vision-Based Methods for Helping and Guiding Visually Impaired People. Appl Sci (Basel) 2022. [DOI: 10.3390/app12052308]
Abstract
We present in this paper the state of the art and an analysis of recent research and achievements in the domain of AI-based and vision-based systems for helping blind and visually impaired people (BVIP). We start by highlighting the recent and tremendous importance that AI has acquired following the use of convolutional neural networks (CNN) and their ability to solve image classification tasks efficiently. We also note that BVIP have high expectations of AI-based systems as a possible way to ease the perception of their environment and to improve their everyday life. We then set the scope of our survey: we concentrate our investigations on the use of CNN or related methods in vision-based systems for helping BVIP. We analyze the existing surveys, and we study the current work (a selection of 30 case studies) along several dimensions such as acquired data, learned models, and human-computer interfaces. We compare the different approaches and conclude by analyzing future trends in this domain.
9
Longin L, Deroy O. Augmenting perception: How artificial intelligence transforms sensory substitution. Conscious Cogn 2022;99:103280. [PMID: 35114632] [DOI: 10.1016/j.concog.2022.103280]
Abstract
What happens when artificial sensors are coupled with the human senses? Using technology to extend the senses is an old human dream, on which sensory substitution and other augmentation technologies have already delivered. Laser tactile canes, corneal implants and magnetic belts can correct or extend what individuals could otherwise perceive. Here we show how accommodating intelligent sensory augmentation devices not only improves on, but also changes, the way we think about and classify earlier sensory augmentation devices. We review the benefits in terms of signal processing and show why non-linear transformation is more than a mere improvement over classical linear transformation.
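The contrast between classical linear transformation and intelligent non-linear transformation can be made concrete with two toy distance-to-stimulus mappings. Both functions, and their constants, are illustrative choices rather than models of any particular device.

```python
import math

# Sketch only: the logistic curve and its constants are illustrative,
# not taken from any device discussed in the paper.

def linear_intensity(distance, max_range=5.0):
    """Classical substitution: stimulus falls off linearly with distance."""
    d = max(0.0, min(max_range, distance))
    return 1.0 - d / max_range

def nonlinear_intensity(distance, threshold=1.0, steepness=4.0):
    """'Intelligent' mapping: a logistic curve that saturates for very
    near obstacles and suppresses far, low-relevance ones."""
    return 1.0 / (1.0 + math.exp(steepness * (distance - threshold)))
```

The logistic mapping spends most of its output range on near, behaviourally relevant distances instead of spreading it evenly, which is one signal-processing sense in which a non-linear code can do more than a linear one.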
Affiliation(s)
- Louis Longin
- Faculty of Philosophy, Philosophy of Science and the Study of Religion, LMU-Munich, Geschwister-Scholl-Platz 1, 80359 Munich, Germany
- Ophelia Deroy
- Faculty of Philosophy, Philosophy of Science and the Study of Religion, LMU-Munich, Geschwister-Scholl-Platz 1, 80359 Munich, Germany
- Munich Center for Neurosciences-Brain & Mind, Großhaderner Str. 2, 82152 Planegg-Martinsried, Germany
- Institute of Philosophy, School of Advanced Study, University of London, London WC1E 7HU, United Kingdom
10
Analysis and Validation of Cross-Modal Generative Adversarial Network for Sensory Substitution. Int J Environ Res Public Health 2021;18:6216. [PMID: 34201269] [PMCID: PMC8228544] [DOI: 10.3390/ijerph18126216]
Abstract
Visual-auditory sensory substitution has demonstrated great potential to help visually impaired and blind people to recognize objects and to perform basic navigational tasks. However, the high latency between visual information acquisition and auditory transduction may contribute to the lack of successful adoption of such aid technologies in the blind community; thus far, substitution methods have remained laboratory-scale research or pilot demonstrations. This high latency for data conversion leads to challenges in perceiving fast-moving objects or rapid environmental changes. To reduce this latency, prior analysis of auditory sensitivity is necessary. However, existing auditory sensitivity analyses are subjective because they were conducted using human behavioral analysis. Therefore, in this study, we propose a cross-modal generative adversarial network-based evaluation method to find an optimal auditory sensitivity that reduces transmission latency in visual-auditory sensory substitution, which is related to the perception of visual information. We further conducted a human-based assessment to evaluate the effectiveness of the proposed model-based analysis in behavioral experiments. We conducted experiments with three participant groups: sighted users (SU), congenitally blind (CB) and late-blind (LB) individuals. Experimental results from the proposed model showed that the temporal length of the auditory signal for sensory substitution could be reduced by 50%. This result indicates the possibility of improving the performance of the conventional vOICe method by up to a factor of two. We confirmed through behavioral experiments that our model's results are consistent with human assessment. Analyzing auditory sensitivity with deep learning models thus has the potential to improve the efficiency of sensory substitution.
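For context on what is being shortened: a vOICe-style code scans an image column by column over a fixed sweep time, with pixel row mapped to frequency and brightness to amplitude, so halving the sweep time is what a 50% reduction in the auditory signal's temporal length amounts to. The frequency range and event representation in this sketch are assumed for illustration and are not the study's implementation.

```python
# Sketch only: F_LOW/F_HIGH and the event format are assumptions,
# not parameters of the vOICe system or of the study above.

F_LOW, F_HIGH = 500.0, 5000.0  # assumed frequency range, Hz

def image_to_soundscape(image, sweep_seconds=1.0):
    """Return a list of (time_offset, [(freq, amplitude), ...]) events:
    one event per column, scanned left to right across the sweep."""
    n_rows, n_cols = len(image), len(image[0])
    dt = sweep_seconds / n_cols
    events = []
    for col in range(n_cols):
        partials = []
        for row in range(n_rows):
            b = image[row][col]            # brightness 0..1
            if b > 0:
                # Top row -> highest pitch, bottom row -> lowest.
                frac = 1.0 - row / (n_rows - 1) if n_rows > 1 else 1.0
                partials.append((F_LOW + frac * (F_HIGH - F_LOW), b))
        events.append((col * dt, partials))
    return events
```

Passing `sweep_seconds=0.5` instead of `1.0` halves every time offset, i.e., the whole frame reaches the ear in half the time, which is the latency gain the model-based sensitivity analysis is trying to license.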