1
Cheng MJ, Rohan EMF, Rai BB, Sabeti F, Maddess T, Lane J. The experience of visual art for people living with mild-to-moderate vision loss. Arts Health 2024; 16:147-166. [PMID: 37012640] [DOI: 10.1080/17533015.2023.2192741]
Abstract
BACKGROUND: Visual art can enhance wellbeing and quality of life; however, the experience of visual art for people with mild-to-moderate vision loss has not been examined.
METHODS: Eight participants (6 female, 2 male; mean age = 81 years, SD = 7.9, range 70-91 years; 4 with mild and 4 with moderate vision loss based on binocular visual acuity) completed a mixed-methods study comprising a semi-structured interview on visual art experience, an eye examination, and questionnaires about visual functioning and quality of life.
RESULTS: Various themes were identified: visual perception of art (e.g. altered colours and visual distortions), viewing conditions, elements of art, personal preference, deriving meaning, appreciation of art, impact of impaired visual perception, and social aspects of art.
CONCLUSIONS: The overall experience of art is influenced by how an individual sees, perceives, and makes meaning from art. Even mild vision loss can impair this experience and impact emotional and social wellbeing.
Affiliation(s)
- Meredith J Cheng
- Australian National University Medical School, College of Health and Medicine, Canberra, ACT, Australia
- Emilie M F Rohan
- Eccles Institute for Neuroscience, The John Curtin School of Medical Research, The Australian National University, Canberra, ACT, Australia
- Bhim B Rai
- Eccles Institute for Neuroscience, The John Curtin School of Medical Research, The Australian National University, Canberra, ACT, Australia
- Faran Sabeti
- Eccles Institute for Neuroscience, The John Curtin School of Medical Research, The Australian National University, Canberra, ACT, Australia
- Discipline of Optometry, Faculty of Health, University of Canberra, ACT, Australia
- Ted Maddess
- Eccles Institute for Neuroscience, The John Curtin School of Medical Research, The Australian National University, Canberra, ACT, Australia
- Jo Lane
- National Centre for Epidemiology and Population Health, College of Health and Medicine, Australian National University, Canberra, ACT, Australia
2
Jiang P, Kent C, Rossiter J. Towards sensory substitution and augmentation: Mapping visual distance to audio and tactile frequency. PLoS One 2024; 19:e0299213. [PMID: 38530828] [DOI: 10.1371/journal.pone.0299213]
Abstract
Multimodal perception is the predominant means by which individuals experience and interact with the world. However, sensory dysfunction or loss can significantly impede this process. In such cases, cross-modality research offers valuable insight into how we can compensate for these sensory deficits through sensory substitution. Although sight and hearing are both used to estimate the distance to an object (e.g., by visual size and sound volume), and the perception of distance is an important element in navigation and guidance, distance is not widely studied in cross-modal research. We investigate the relationship between audio and vibrotactile frequencies (in the ranges 47-2,764 Hz and 10-99 Hz, respectively) and distances uniformly distributed in the range 1-12 m. In our experiments, participants mapped a distance (represented by an image of a model at that distance) to a frequency by adjusting a virtual tuning knob. The results revealed that the majority (more than 76%) of participants demonstrated a strong negative monotonic relationship between frequency and distance, in both the vibrotactile domain (represented by a natural log function) and the auditory domain (represented by an exponential function). However, a subgroup of participants showed the opposite, a positive linear relationship between frequency and distance. This strong cross-modal sensory correlation could contribute to the development of assistive robotic technologies and devices that augment human perception. This work provides a foundation for future assisted human-robot interaction (HRI) applications where a mapping between distance and frequency is needed, for example for people with vision or hearing loss, drivers with loss of focus or response delay, doctors undertaking teleoperation surgery, and users in augmented reality (AR) or virtual reality (VR) environments.
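The abstract reports a decreasing natural-log mapping for the vibrotactile channel and a decreasing exponential mapping for the auditory channel. A minimal sketch of such mappings is given below; the functional forms follow the abstract, but the constants are simply fitted to the reported endpoint ranges (1-12 m, 10-99 Hz, 47-2,764 Hz) and are not the parameters estimated in the study.

```python
import math

# Hypothetical parameterisation: the study reports only the functional
# forms (log for tactile, exponential for audio) and the endpoint ranges,
# so the constants here are fitted to those endpoints, not taken from it.
D_MIN, D_MAX = 1.0, 12.0          # distance range (m)
TACT_MIN, TACT_MAX = 10.0, 99.0   # vibrotactile frequency range (Hz)
AUD_MIN, AUD_MAX = 47.0, 2764.0   # audio frequency range (Hz)

def distance_to_tactile_hz(d: float) -> float:
    """Decreasing natural-log mapping: far objects -> low frequency."""
    d = min(max(d, D_MIN), D_MAX)
    b = (TACT_MAX - TACT_MIN) / math.log(D_MAX / D_MIN)
    return TACT_MAX - b * math.log(d / D_MIN)

def distance_to_audio_hz(d: float) -> float:
    """Decreasing exponential mapping: far objects -> low frequency."""
    d = min(max(d, D_MIN), D_MAX)
    k = math.log(AUD_MAX / AUD_MIN) / (D_MAX - D_MIN)
    return AUD_MAX * math.exp(-k * (d - D_MIN))
```

With this parameterisation an object at 1 m maps to the top of each frequency range (99 Hz tactile, 2,764 Hz audio), an object at 12 m to the bottom (10 Hz, 47 Hz), and frequency falls monotonically in between.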
Affiliation(s)
- Pingping Jiang
- Department of Engineering Mathematics, University of Bristol, Bristol, United Kingdom
- SoftLab, Bristol Robotics Laboratory, Bristol, United Kingdom
- Christopher Kent
- School of Psychological Science, University of Bristol, Bristol, United Kingdom
- Jonathan Rossiter
- Department of Engineering Mathematics, University of Bristol, Bristol, United Kingdom
- SoftLab, Bristol Robotics Laboratory, Bristol, United Kingdom
3
Sensory Perception Mechanism for Preparing the Combinations of Stimuli Operation in the Architectural Experience. Sustainability 2022. [DOI: 10.3390/su14137885]
Abstract
Sensory stimuli in an architectural space play an important role in human perception of the indoor environment, whether the stimuli are static or dynamic, isolated or combined. By enhancing particular sensory stimuli, the overall perception of an architectural space can be improved, especially in an intelligent architectural space. Few studies, however, have reported on the sensory perception mechanism underlying the operation of sensory stimuli in the architectural experience. In this research, a wooden micro building was prepared to study participants' levels of sensitivity to various sensory stimuli within the same and across different sensory domains. Participants' visual, auditory, olfactory, tactile, and kinaesthetic perceptions were analysed statistically in terms of sensitivity level. Building on this, the effect of a single dynamic sensory stimulus (a dynamically coloured light) on participants' perception was studied in a paper architectural model from two aspects: preference and emotion. The dynamically coloured light was analysed statistically in terms of preference level. The study showed significant differences among participants' levels of sensitivity to the different sensory domains and to the different sensory stimuli. In particular, sensitivity to the colour of a space was the highest of all stimuli. As a single changing sensory stimulus, a dynamically coloured light can lead to significant mood fluctuations and changes in preference level; yellow was the favourite colour of light. This study is expected to provide a theoretical foundation relating to sensory choice, sensory perception enhancement, and the combination forms of sensory perceptions. Based on this foundation, perception design using overlapping multi-sensory stimuli and single dynamic stimuli can be conducted to improve the quality of the indoor environment of conventional and intelligent multi-sensory architecture.
4
AI Ekphrasis: Multi-Modal Learning with Foundation Models for Fine-Grained Poetry Retrieval. Electronics 2022. [DOI: 10.3390/electronics11081275]
Abstract
Artificial intelligence research in natural language processing struggles, in the context of poetry, with the recognition of holistic content such as poetic symbolism, metaphor, and other fine-grained attributes. Given these challenges, multi-modal image-poetry reasoning and retrieval remain largely unexplored. Our recent accessibility study indicates that poetry is an effective medium for conveying visual artwork attributes, improving artwork appreciation for people with visual impairments. We therefore introduce a deep learning approach for the automatic retrieval of poetry suited to an input image. The state-of-the-art CLIP model matches multi-modal visual and text features using cosine similarity; however, it lacks shared cross-modality attention features for modelling fine-grained relationships. The approach proposed in this work takes advantage of CLIP's strong pre-training and overcomes this limitation by introducing shared attention parameters that better model the fine-grained relationship between the two modalities. We test and compare our approach on the expertly annotated MultiM-Poem dataset, considered the largest public image-poetry pair dataset for English poetry. The proposed approach addresses image-based attribute recognition and automatic retrieval of fine-grained poetic verses. The test results show that the shared attention parameters improve fine-grained attribute recognition, and that the proposed approach is a significant step towards automatic multi-modal retrieval for improved artwork appreciation by people with visual impairments.
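The cosine-similarity matching step that CLIP provides, and which the proposed shared-attention parameters refine, can be illustrated with a minimal retrieval sketch. The embeddings below are random placeholders standing in for CLIP image and poem features; the shared cross-modality attention itself is not modelled here.

```python
import numpy as np

# Illustrative sketch of cosine-similarity retrieval. The vectors are
# random stand-ins for CLIP image/text embeddings, not real features.
def cosine_rank(image_vec: np.ndarray, poem_vecs: np.ndarray) -> np.ndarray:
    """Return poem indices sorted from best to worst cosine match."""
    img = image_vec / np.linalg.norm(image_vec)
    poems = poem_vecs / np.linalg.norm(poem_vecs, axis=1, keepdims=True)
    sims = poems @ img            # cosine similarity per poem
    return np.argsort(-sims)      # indices in descending similarity

rng = np.random.default_rng(0)
image = rng.normal(size=128)      # placeholder image embedding
poems = rng.normal(size=(5, 128)) # placeholder poem embeddings
ranking = cosine_rank(image, poems)
```

Because the vectors are L2-normalised first, the dot product equals the cosine similarity, so the returned index order is the retrieval ranking from best to worst match.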
5
Abstract
The development of assistive technologies is improving independent access to visual artworks for blind and visually impaired people through non-visual channels. Current single-modality tactile and auditory approaches to communicating color content must trade off a broad color palette against ease of learning, and suffer from limited expressiveness. In this work, we propose a multi-sensory color code system that uses sound and scent to represent colors: melodies express each color's hue, and scents express the saturated, light, and dark dimensions of each hue. In collaboration with eighteen participants, we evaluated the color identification rate achieved with the multi-sensory approach. Seven participants (39%) improved their identification rate, five (28%) remained the same, and six (33%) performed worse compared with an audio-only color code alternative. The participants then evaluated and compared, using the System Usability Scale, a color content exploration prototype that uses the proposed color code against a tactile graphic equivalent. For a visual artwork color exploration task, the prototype integrating the multi-sensory color code received a score of 78.61, while the tactile graphics equivalent received 61.53. User feedback indicates that the multi-sensory color code system improved participants' convenience and confidence.
6
ColorPoetry: Multi-Sensory Experience of Color with Poetry in Visual Arts Appreciation of Persons with Visual Impairment. Electronics 2021. [DOI: 10.3390/electronics10091064]
Abstract
Visually impaired visitors experience many limitations when visiting museum exhibits, such as a lack of cognitive and sensory access to exhibits or replicas. Contemporary art is evolving towards appreciation beyond simply looking at works, and the development of various sensory technologies has had a great influence on culture and art. Thus, opportunities for people with visual impairments to appreciate visual artworks through senses such as hearing, touch, and smell are expanding. However, it is uncommon to provide a multi-sensory interactive interface for color recognition that integrates patterns, sounds, temperature, and scents. This paper attempts to convey color cognition to the visually impaired by taking advantage of multisensory color coding. In our previous work, musical melodies with different combinations of pitch, timbre, velocity, and tempo were used to distinguish vivid (i.e., saturated), light, and dark colors. However, it was rather difficult to distinguish among warm/cool/light/dark colors using sound cues alone. Therefore, in this paper, we aim to build a multisensory color-coding system combining sound and poetry, such that poetry can represent additional color dimensions, including warm and cool variants of red, orange, yellow, green, blue, and purple. To do this, we first performed an implicit association test to identify the most suitable poem among the candidates for representing colors in an artwork, by finding the common semantic directivity between a given candidate poem (with voice modulation) and the artwork in terms of the light/dark/warm/cool dimensions. Finally, we conducted a system usability test on the proposed color-coding system, confirming that poetry is an effective supplement for distinguishing between vivid, light, and dark colors across color appearance dimensions such as warm and cool. The user experience score from 15 college students was 75.1%, comparable with the 74.1% received by the color-music coding system, demonstrating usability.
7
Abstract
Contemporary art is evolving beyond simply looking at works, and the development of various sensory technologies has had a great influence on culture and art. Accordingly, opportunities for the visually impaired to appreciate visual artworks through other senses, such as hearing and touch, are expanding. However, insufficient sound expressiveness and a lack of portability limit understandability and accessibility. This paper presents a color and depth coding scheme for the visually impaired based on alternative sensory modalities: hearing (encoding the color and depth information with 3D sounds of audio description) and touch (used to trigger interface information such as color and depth). The proposed color-coding scheme represents light, saturated, and dark colors for red, orange, yellow, yellow-green, green, blue-green, blue, and purple. The proposed system can be used both on mobile platforms and with 2.5D (relief) models.
8
ColorWatch: Color Perceptual Spatial Tactile Interface for People with Visual Impairments. Electronics 2021. [DOI: 10.3390/electronics10050596]
Abstract
Tactile perception enables people with visual impairments (PVI) to engage with artworks and real-life objects at a deeper level of abstraction. The development of tactile and multi-sensory assistive technologies has expanded their opportunities to appreciate visual arts. We have developed a tactile interface based on a concept design that considers PVI tactile actuation, color perception, and learnability. The proposed interface automatically translates reference colors into spatial tactile patterns. A range of achromatic colors and six prominent basic colors, each with three levels of chroma and value, are considered for the cross-modal association. In addition, an analog tactile color watch design is proposed. This scheme enables PVI to explore the color of an artwork or real-life object by identifying reference colors through a color sensor and translating them to the tactile interface. Color identification tests using this scheme on the developed prototype exhibit good recognition accuracy, and the workload assessment and usability evaluation with PVI demonstrate promising results, suggesting that the proposed scheme is appropriate for tactile color exploration.
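The reference-color lookup at the heart of such an interface can be sketched as a nearest-hue classifier feeding a tactile pattern table. The six hue anchors and the achromatic thresholds below are illustrative assumptions, not the paper's actual encoding, which also distinguishes three levels of chroma and value per hue.

```python
import colorsys

# Hypothetical "sensor reading -> basic color" step. The hue anchors and
# thresholds are placeholders, not ColorWatch's actual encoding.
BASIC_HUES = {                       # hue angle (degrees) -> color name
    0: "red", 30: "orange", 60: "yellow",
    120: "green", 240: "blue", 280: "purple",
}

def classify_rgb(r: int, g: int, b: int) -> str:
    """Map an RGB sensor reading to the nearest basic color name,
    treating low-saturation readings as achromatic."""
    h, l, s = colorsys.rgb_to_hls(r / 255, g / 255, b / 255)
    if s < 0.15:                     # achromatic range
        return "white" if l > 0.85 else "black" if l < 0.15 else "grey"
    deg = h * 360
    # circular hue distance to each anchor
    nearest = min(BASIC_HUES,
                  key=lambda a: min(abs(deg - a), 360 - abs(deg - a)))
    return BASIC_HUES[nearest]
```

A full implementation would pass the returned color name on to the watch's spatial tactile pattern table; this sketch stops at classification.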