1
Bilal Salih HE, Takeda K, Kobayashi H, Kakizawa T, Kawamoto M, Zempo K. Use of Auditory Cues and Other Strategies as Sources of Spatial Information for People with Visual Impairment When Navigating Unfamiliar Environments. INTERNATIONAL JOURNAL OF ENVIRONMENTAL RESEARCH AND PUBLIC HEALTH 2022; 19:3151. [PMID: 35328840] [PMCID: PMC8955554] [DOI: 10.3390/ijerph19063151]
Abstract
This paper explores the strategies that people with visual impairment use to obtain information in unfamiliar environments. It also aims to determine how the natural sounds that often exist in the environment, and the auditory cues installed in various facilities as sources of guidance, are prioritized and selected in different countries. The aim was to evaluate how users who are visually impaired utilize natural sounds and auditory cues during mobility. The data were collected by interviewing 60 individuals with visual impairments, who offered their insights into the ways they use auditory cues. The data revealed a clear contrast between those who use trains and those who use other transportation systems, both in the methods used to obtain information at unfamiliar locations and in the desire for auditory cues to be installed in different locations. The participants demonstrated a consensus on the need for devices that provide on-demand, minimal auditory feedback. The paper discusses the suggestions offered by the interviewees and details their hopes for adjusted auditory cues. The study argues that auditory cues have high potential for improving the quality of life of people who are visually impaired by increasing their mobility range and level of independence. Additionally, the study emphasizes the importance of a standardized design for auditory cues, a change desired by the interviewees. Standardization is expected to boost the efficiency of auditory cues in providing accurate information and assistance to individuals with visual impairment regardless of their geographical location. Regarding implications for practitioners, the study presents the need to design systems that provide minimal audio feedback to reduce the masking of natural sounds. The design of new auditory cues should utilize the imagination skills that people with visual impairment already possess. For example, the pitch of a sound should change to indicate the direction of escalators and elevators and to distinguish the locations of male and female toilets.
Affiliation(s)
- Hisham E. Bilal Salih
- Graduate School of Comprehensive Human Sciences, University of Tsukuba, Tsukuba 305-8572, Japan
- Kazunori Takeda
- Faculty of Human Sciences, University of Tsukuba, Tsukuba 305-8572, Japan
- Hideyuki Kobayashi
- Faculty of Human Sciences, University of Tsukuba, Tsukuba 305-8572, Japan
- Toshibumi Kakizawa
- Faculty of Human Sciences, University of Tsukuba, Tsukuba 305-8572, Japan
- Masayuki Kawamoto
- Headquarters for International Industry-University Collaboration, University of Tsukuba, Tsukuba 305-8550, Japan
- Keiichi Zempo
- Faculty of Engineering, Information and Systems, University of Tsukuba, Tsukuba 305-8573, Japan
2
Thirumalaiah G, Immanuel Alex Pandian S. An optimized complex motion prediction approach based on a video synopsis. INTERNATIONAL JOURNAL OF INTELLIGENT UNMANNED SYSTEMS 2021. [DOI: 10.1108/ijius-08-2021-0090]
Abstract
Purpose
Space-time variant algorithms do not give good results in practical scenarios; as the number of tubes increases, these techniques fail to produce results. It is also challenging to reduce the energy of the output synopsis videos. In this paper, a new optimized technique is implemented that models and covers every frame in the output video.
Design/methodology/approach
In video synopsis, condensing a video to produce a low-frame-rate (FR) video using its spatial and temporal coefficients is vital in complex environments. Maintaining a database is also feasible but consumes space. Many algorithms have been proposed in recent years.
Findings
The main advantage of the proposed technique is that the output frames are selected according to user definitions and can be stored on low-intensity communication systems. The technique also gives the user full control to select the desired tubes and thereby stop the selection criterion for the output video, which suits the user's knowledge and creates a nonoverlapping, tube-oriented synopsis that provides an excellent visual experience.
Research limitations/implications
Four test videos with complex environments (high-density objects) are utilized, and the results show that the proposed technique performs better than other existing techniques.
Originality/value
The proposed method provides a unique technique in video synopsis for compressing the data without loss.
3
Wu X, Hu R, Bao Y. A regression approach to zebra crossing detection based on convolutional neural networks. IET CYBER-SYSTEMS AND ROBOTICS 2021. [DOI: 10.1049/csy2.12006]
Affiliation(s)
- Xue‐Hua Wu
- School of Electrical Engineering, Southeast University, Nanjing, China
- Renjie Hu
- School of Electrical Engineering, Southeast University, Nanjing, China
- Yu‐Qing Bao
- School of Electrical and Automation Engineering, Nanjing Normal University, Nanjing, China
4
Budrionis A, Plikynas D, Daniušis P, Indrulionis A. Smartphone-based computer vision travelling aids for blind and visually impaired individuals: A systematic review. Assist Technol 2020; 34:178-194. [PMID: 32207640] [DOI: 10.1080/10400435.2020.1743381]
Abstract
Given the growth in the numbers of visually impaired (VI) people in low-income countries, the development of affordable electronic travel aid (ETA) systems employing devices, sensors, and apps embedded in ordinary smartphones becomes a potentially cost-effective and reasonable all-in-one solution of utmost importance for the VI. This paper offers an overview of recent ETA research prototypes that employ smartphones for assisted orientation and navigation in indoor and outdoor spaces by providing additional information about the surrounding objects. Scientific achievements in the field were systematically reviewed using PRISMA methodology. Comparative meta-analysis showed how various smartphone-based ETA prototypes could assist with better orientation, navigation, and wayfinding in indoor and outdoor environments. The analysis found limited interest among researchers in combining haptic interfaces and computer vision capabilities in smartphone-based ETAs for the blind, few attempts to employ novel state-of-the-art computer vision methods based on deep neural networks, and no evaluations of existing off-the-shelf navigation solutions. These results were contrasted with findings from a survey of blind expert users on their problems in navigating in indoor and outdoor environments. This revealed a major mismatch between user needs and academic development in the field.
Affiliation(s)
- Andrius Budrionis
- Department of Business Technologies and Entrepreneurship, Vilnius Gediminas Technical University, Vilnius, Lithuania; Norwegian Centre for E-health Research, University Hospital of North Norway, Tromsø, Norway
- Darius Plikynas
- Department of Business Technologies and Entrepreneurship, Vilnius Gediminas Technical University, Vilnius, Lithuania
- Povilas Daniušis
- Department of Business Technologies and Entrepreneurship, Vilnius Gediminas Technical University, Vilnius, Lithuania
- Audrius Indrulionis
- Department of Business Technologies and Entrepreneurship, Vilnius Gediminas Technical University, Vilnius, Lithuania
5
Rehabilitation Engineering: A perspective on the past 40-years and thoughts for the future. Med Eng Phys 2019; 72:3-12. [DOI: 10.1016/j.medengphy.2019.08.011]
6
Nakamura D, Takizawa H, Aoyagi M, Ezaki N, Mizuno S. Smartphone-Based Escalator Recognition for the Visually Impaired. SENSORS 2017; 17:1057. [PMID: 28481270] [PMCID: PMC5469662] [DOI: 10.3390/s17051057]
Abstract
It is difficult for visually impaired individuals to recognize escalators in everyday environments. If the individuals ride on escalators in the wrong direction, they will stumble on the steps. This paper proposes a novel method to assist visually impaired individuals in finding available escalators by the use of smartphone cameras. Escalators are recognized by analyzing optical flows in video frames captured by the cameras, and auditory feedback is provided to the individuals. The proposed method was implemented on an Android smartphone and applied to actual escalator scenes. The experimental results demonstrate that the proposed method is promising for helping visually impaired individuals use escalators.
Affiliation(s)
- Daiki Nakamura
- Department of Computer Science, University of Tsukuba, 1-1-1 Tennodai, Tsukuba 305-8573, Japan
- Hotaka Takizawa
- Department of Computer Science, University of Tsukuba, 1-1-1 Tennodai, Tsukuba 305-8573, Japan
- Mayumi Aoyagi
- Aichi University of Education, 1 Hirosawa, Igaya, Kariya 448-8542, Japan
- Nobuo Ezaki
- Toba National College of Maritime Technology, 1-1 Ikegami, Toba 517-8501, Japan
- Shinji Mizuno
- Aichi Institute of Technology, 1247 Yachigusa, Yakusa, Toyota 470-0392, Japan
7
Wang S, Yang X, Tian Y. Detecting Signage and Doors for Blind Navigation and Wayfinding. NETWORK MODELING AND ANALYSIS IN HEALTH INFORMATICS AND BIOINFORMATICS 2013; 2:81-93. [PMID: 23914345] [PMCID: PMC3728285] [DOI: 10.1007/s13721-013-0027-9]
Abstract
Signage plays a very important role in finding destinations in navigation and wayfinding applications. In this paper, we propose a novel framework to detect doors and signage to help blind people access unfamiliar indoor environments. In order to eliminate interfering information and improve the accuracy of signage detection, we first extract the attended areas by using a saliency map. The signage is then detected in the attended areas by using bipartite graph matching. The proposed method can handle the detection of multiple signs. Furthermore, in order to provide more information for blind users to access the area associated with the detected signage, we develop a robust method to detect doors based on a geometric door-frame model that is independent of door appearance. Experimental results on our collected datasets of indoor signage and doors demonstrate the effectiveness and efficiency of the proposed method.
8
Tian Y, Yang X, Yi C, Arditi A. Toward a Computer Vision-based Wayfinding Aid for Blind Persons to Access Unfamiliar Indoor Environments. MACHINE VISION AND APPLICATIONS 2013; 24:521-535. [PMID: 23630409] [PMCID: PMC3636776] [DOI: 10.1007/s00138-012-0431-7]
Abstract
Independent travel is a well known challenge for blind and visually impaired persons. In this paper, we propose a proof-of-concept computer vision-based wayfinding aid for blind people to independently access unfamiliar indoor environments. In order to find different rooms (e.g. an office, a lab, or a bathroom) and other building amenities (e.g. an exit or an elevator), we incorporate object detection with text recognition. First we develop a robust and efficient algorithm to detect doors, elevators, and cabinets based on their general geometric shape, by combining edges and corners. The algorithm is general enough to handle large intra-class variations of objects with different appearances among different indoor environments, as well as small inter-class differences between different objects such as doors and door-like cabinets. Next, in order to distinguish intra-class objects (e.g. an office door from a bathroom door), we extract and recognize text information associated with the detected objects. For text recognition, we first extract text regions from signs with multiple colors and possibly complex backgrounds, and then apply character localization and topological analysis to filter out background interference. The extracted text is recognized using off-the-shelf optical character recognition (OCR) software products. The object type, orientation, location, and text information are presented to the blind traveler as speech.
Affiliation(s)
- YingLi Tian
- Electrical Engineering Department, The City College and Graduate Center, City University of New York, New York, NY 10031
- Xiaodong Yang
- Electrical Engineering Department, The City College and Graduate Center, City University of New York, New York, NY 10031
- Chucai Yi
- The Graduate Center, City University of New York, New York, NY 10036
9
Coughlan JM, Shen H. Crosswatch: a System for Providing Guidance to Visually Impaired Travelers at Traffic Intersections. JOURNAL OF ASSISTIVE TECHNOLOGIES 2013; 7. [PMID: 24353745] [DOI: 10.1108/17549451311328808]
Abstract
Purpose
This paper describes recent progress on the "Crosswatch" project, a smartphone-based system developed for providing guidance to blind and visually impaired travelers at traffic intersections. Building on past work on Crosswatch functionality to help the user achieve proper alignment with the crosswalk and read the status of walk lights to know when it is time to cross, we outline the directions Crosswatch is now taking to help realize its potential for becoming a practical system: namely, augmenting computer vision with other information sources, including geographic information systems (GIS) and sensor data, and inferring the user's location much more precisely than is possible through GPS alone, to provide a much larger range of information about traffic intersections to the pedestrian.
Design/methodology/approach
The paper summarizes past progress on Crosswatch and describes details about the development of new Crosswatch functionalities. One such functionality, which is required for determination of the user's precise location, is studied in detail, including the design of a suitable user interface to support this functionality and preliminary tests of this interface with visually impaired volunteer subjects.
Findings
The results of the tests of the new Crosswatch functionality demonstrate that the functionality is feasible in that it is usable by visually impaired persons.
Research limitations/implications
While the tests that were conducted of the new Crosswatch functionality are preliminary, the results of the tests have suggested several possible improvements, to be explored in the future.
Practical implications
The results described in this paper suggest that the necessary technologies used by the Crosswatch system are rapidly maturing, implying that the system has an excellent chance of becoming practical in the near future.
Originality/value
The paper addresses an innovative solution to a key problem faced by blind and visually impaired travelers, which has the potential to greatly improve independent travel for these individuals.
Affiliation(s)
- Huiying Shen
- The Smith-Kettlewell Eye Research Institute, San Francisco, CA
10
Asad M, Ikram W. Smartphone based guidance system for visually impaired person. 2012 3RD INTERNATIONAL CONFERENCE ON IMAGE PROCESSING THEORY, TOOLS AND APPLICATIONS (IPTA) 2012. [DOI: 10.1109/ipta.2012.6469553]
11
Abstract
Computer vision holds great promise for helping persons with blindness or visual impairments (VI) to interpret and explore the visual world. To this end, it is worthwhile to assess the situation critically by understanding the actual needs of the VI population and which of these needs might be addressed by computer vision. This article reviews the types of assistive technology application areas that have already been developed for VI, and the possible roles that computer vision can play in facilitating these applications. We discuss how appropriate user interfaces are designed to translate the output of computer vision algorithms into information that the user can quickly and safely act upon, and how system-level characteristics affect the overall usability of an assistive technology. Finally, we conclude by highlighting a few novel and intriguing areas of application of computer vision to assistive technology.
Affiliation(s)
- Roberto Manduchi
- Department of Computer Engineering, University of California, Santa Cruz, Santa Cruz, CA 95064
- James Coughlan
- The Smith-Kettlewell Eye Research Institute, 2318 Fillmore Street, San Francisco, CA 94115
12
Abstract
Crossing an urban traffic intersection is one of the most dangerous activities of a blind or visually impaired person's travel. Building on past work by the authors on the issue of proper alignment with the crosswalk, this paper addresses the complementary issue of knowing when it is time to cross. We describe a prototype portable system that alerts the user in real time once the Walk light is illuminated. The system runs as a software application on an off-the-shelf Nokia N95 mobile phone, using computer vision algorithms to analyze video acquired by the built-in camera to determine in real time if a Walk light is currently visible. Once a Walk light is detected, an audio tone is sounded to alert the user. Experiments with a blind volunteer subject at urban traffic intersections demonstrate proof of concept of the system, which successfully alerted the subject when the Walk light appeared.
13
Shen H, Chan KY, Coughlan J, Brabyn J. A mobile phone system to find crosswalks for visually impaired pedestrians. TECHNOLOGY AND DISABILITY 2008; 20:217-224. [PMID: 20411035] [PMCID: PMC2856957]
Abstract
Urban intersections are the most dangerous parts of a blind or visually impaired pedestrian's travel. A prerequisite for safely crossing an intersection is entering the crosswalk in the right direction and avoiding the danger of straying outside the crosswalk. This paper presents a proof of concept system that seeks to provide such alignment information. The system consists of a standard mobile phone with built-in camera that uses computer vision algorithms to detect any crosswalk visible in the camera's field of view; audio feedback from the phone then helps the user align him/herself to it. Our prototype implementation on a Nokia mobile phone runs in about one second per image, and is intended for eventual use in a mobile phone system that will aid blind and visually impaired pedestrians in navigating traffic intersections.
Affiliation(s)
- Huiying Shen
- The Smith-Kettlewell Eye Research Institute, San Francisco, CA, USA
- Kee-Yip Chan
- Department of Computer Engineering, University of California, Santa Cruz, CA, USA
- James Coughlan
- The Smith-Kettlewell Eye Research Institute, San Francisco, CA, USA
- John Brabyn
- The Smith-Kettlewell Eye Research Institute, San Francisco, CA, USA
14
Ivanchenko V, Coughlan J, Shen H. Detecting and Locating Crosswalks using a Camera Phone. PROCEEDINGS. IEEE COMPUTER SOCIETY CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION 2008; 2008:4563143. [PMID: 20502533] [DOI: 10.1109/cvprw.2008.4563143]
Abstract
Urban intersections are the most dangerous parts of a blind or visually impaired person's travel. To address this problem, this paper describes the novel "Crosswatch" system, which uses computer vision to provide information about the location and orientation of crosswalks to a blind or visually impaired pedestrian holding a camera cell phone. A prototype of the system runs on an off-the-shelf Nokia N95 camera phone in real time, which automatically takes a few images per second, analyzes each image in a fraction of a second and sounds an audio tone when it detects a crosswalk. Real-time performance on the cell phone, whose computational resources are limited compared to the type of desktop platform usually used in computer vision, is made possible by coding in Symbian C++. Tests with blind subjects demonstrate the feasibility of the system.