1
Zhang S, Song J. An empirical investigation into the preferences of the elderly for user interface design in personal electronic health record systems. Front Digit Health 2024; 5:1289904. [PMID: 38348367] [PMCID: PMC10859482] [DOI: 10.3389/fdgth.2023.1289904]
Abstract
Background: With the continuous advancement of digital technologies, electronic Personal Health Records (ePHR) offer end users greater control over, and more convenient access to, their health data. Although ePHR are seen as innovative tools in medical services that support patient-centered care and disease prevention, many system interfaces are designed with younger users in mind, and studies of elderly users remain scarce. Our objective was to uncover the preferences of the elderly for an ideal ePHR system interface.
Materials and methods: Drawing on a literature review, we identified six interface attributes. Using conjoint analysis, we combined different attribute levels into 16 representative design scenarios based on an orthogonal design and invited 187 elderly participants to evaluate them. Data were analyzed with SPSS 26.0.
Results: Among the ePHR interface design attributes, the elderly prioritized color, followed by notification method. Designs with contrasting color schemes, skeuomorphic styling, icon-centric menu navigation with segmented layouts, and voice notifications for incoming messages were the most preferred interface design choices.
Discussion: This research elucidates the interface design elements the elderly consider ideal for ePHR, offering a valuable reference for age-friendly design of ePHR systems. Implementing these insights can help promote mobile health services among the elderly and enhance their experience of health-management interfaces, in turn fostering the widespread adoption of mobile health service technologies and advancing the development of a healthy aging society.
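The conjoint-analysis procedure summarized above (attribute levels combined into orthogonal profiles, rated by participants, then decomposed into part-worth utilities and relative importances) can be sketched as follows. This is a minimal illustration with made-up attributes and simulated ratings, not the authors' six-attribute SPSS 26.0 analysis:

```python
import numpy as np

# Minimal sketch of conjoint part-worth estimation with simulated data.
rng = np.random.default_rng(0)

# Two hypothetical binary attributes (say, color scheme and notification
# method); a real study would use an orthogonal design over all attributes.
profiles = np.array([[c, n] for c in (0, 1) for n in (0, 1)] * 4)  # 16 profiles

# Simulated ratings in which "color" matters more than "notification".
ratings = (3.0 + 1.2 * profiles[:, 0] + 0.5 * profiles[:, 1]
           + rng.normal(0.0, 0.1, len(profiles)))

# Part-worths fall out of an ordinary least-squares fit on dummy codes.
X = np.column_stack([np.ones(len(profiles)), profiles])
beta, *_ = np.linalg.lstsq(X, ratings, rcond=None)
part_worths = beta[1:]

# Relative importance: each attribute's share of the total utility range.
importance = np.abs(part_worths) / np.abs(part_worths).sum()
print(importance)  # the "color" attribute dominates in this simulation
```

The same decomposition generalizes to any number of attributes and levels; only the dummy coding of `X` changes.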
Affiliation(s)
- Jisung Song
- Graduate School of Communication Design, Hanyang University, Ansan, Republic of Korea
2
Takashima A, Carota F, Schoots V, Redmann A, Jehee J, Indefrey P. Tomatoes Are Red: The Perception of Achromatic Objects Elicits Retrieval of Associated Color Knowledge. J Cogn Neurosci 2024; 36:24-45. [PMID: 37847811] [DOI: 10.1162/jocn_a_02068]
Abstract
When preparing to name an object, semantic knowledge about the object and its attributes is activated, including perceptual properties. It is unclear, however, whether semantic attribute activation contributes to lexical access or is a consequence of activating a concept irrespective of whether that concept is to be named. In this study, we measured neural responses using fMRI while participants named objects that are typically green or red, presented as black line drawings. Furthermore, participants underwent two other tasks with the same objects, color naming and semantic judgment, to see if the activation pattern we observe during picture naming is (a) similar to that of a task that requires accessing the color attribute and (b) distinct from that of a task that requires accessing the concept but not its name or color. We used representational similarity analysis to detect brain areas that show similar patterns within the same color category, but different patterns across the two color categories. In all three tasks, activation in the bilateral fusiform gyri ("Human V4") correlated with a representational model encoding the red-green distinction weighted by the importance of the color feature for the different objects. This result suggests that when seeing objects whose color attribute is highly diagnostic, color knowledge about the objects is retrieved irrespective of whether the color or the object itself has to be named.
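The representational similarity analysis logic in this abstract (similar activation patterns within a color category, dissimilar patterns across categories, compared against a model dissimilarity matrix) can be illustrated with a toy sketch. The simulated "voxel" patterns and the red/green model RDM below are assumptions for illustration, not the authors' fMRI pipeline:

```python
import numpy as np

# Toy representational similarity analysis (RSA) on simulated patterns.
rng = np.random.default_rng(1)

labels = np.array([0, 0, 0, 0, 1, 1, 1, 1])        # 4 "red", 4 "green" objects
cat_patterns = rng.normal(0.0, 1.0, (2, 50))       # one shared pattern per category
patterns = 1.5 * cat_patterns[labels] + rng.normal(0.0, 1.0, (8, 50))

# Neural RDM: 1 - correlation between item patterns.
neural_rdm = 1.0 - np.corrcoef(patterns)

# Model RDM encoding the red-green distinction: dissimilar across categories.
model_rdm = (labels[:, None] != labels[None, :]).astype(float)

# Correlate the two RDMs over unique item pairs (upper triangle only).
iu = np.triu_indices(len(labels), k=1)
r = np.corrcoef(neural_rdm[iu], model_rdm[iu])[0, 1]
print(round(float(r), 2))  # strong positive correlation for these patterns
```

A searchlight version repeats this correlation for patterns drawn from each local neighborhood of voxels, which is how category-sensitive regions such as "Human V4" are localized.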
Affiliation(s)
- Atsuko Takashima
- Donders Institute for Brain, Cognition and Behaviour, Radboud University, Nijmegen, The Netherlands
- Francesca Carota
- Donders Institute for Brain, Cognition and Behaviour, Radboud University, Nijmegen, The Netherlands
- Max Planck Institute for Psycholinguistics, Nijmegen, The Netherlands
- Vincent Schoots
- Donders Institute for Brain, Cognition and Behaviour, Radboud University, Nijmegen, The Netherlands
- Heinrich Heine University Düsseldorf, Germany
- Janneke Jehee
- Donders Institute for Brain, Cognition and Behaviour, Radboud University, Nijmegen, The Netherlands
- Peter Indefrey
- Donders Institute for Brain, Cognition and Behaviour, Radboud University, Nijmegen, The Netherlands
- Max Planck Institute for Psycholinguistics, Nijmegen, The Netherlands
- Heinrich Heine University Düsseldorf, Germany
3
Stanikunas R, Soliunas A, Bliumas R, Jocbalyte K, Novickovas A. Differences in color fading and recovery under sustained fixation. J Opt Soc Am A Opt Image Sci Vis 2023; 40:A33-A39. [PMID: 37133000] [DOI: 10.1364/josaa.476533]
Abstract
More than two centuries ago, in 1804, the Swiss philosopher I. P. V. Troxler reported that fixated images fade away during normal vision. Since then, the phenomenon now known as Troxler fading has been the subject of intensive research: many researchers have sought to explain why images fade and under what conditions they are restored. Here, we investigated the dynamics of color-stimulus fading and recovery under sustained eye fixation. The objective of the experiments was to determine which colors fade and recover faster under isoluminant conditions. The stimuli were eight blurred color rings extending to 13° in size: four unique colors (red, yellow, green, and blue) and four intermediate colors (magenta, cyan, yellow-green, and orange). Stimuli were displayed on a computer monitor against a gray background to which they were isoluminant. Each presentation lasted 2 min, during which subjects were required to look at a fixation point in the middle of the ring and suppress eye movements. Their task was to report moments of change in stimulus visibility using four stages of stimulus completeness. We found that all investigated colors underwent fading and recovery cycles during the 2 min of observation. The data suggest that magenta and cyan fade faster and undergo more recovery cycles, whereas longer-wavelength colors fade more slowly.
4
Anderson MD, Elder JH, Graf EW, Adams WJ. The time-course of real-world scene perception: Spatial and semantic processing. iScience 2022; 25:105633. [PMID: 36505927] [PMCID: PMC9732406] [DOI: 10.1016/j.isci.2022.105633]
Abstract
Real-world scene perception unfolds remarkably quickly, yet the underlying visual processes are poorly understood. Space-centered theory maintains that a scene's spatial structure (e.g., openness, mean depth) can be rapidly recovered from low-level image statistics. In turn, the statistical relationship between a scene's spatial properties and semantic content allows for semantic identity to be inferred from its layout. We tested this theory by investigating (1) the temporal dynamics of spatial and semantic perception in real-world scenes, and (2) dependencies between spatial and semantic judgments. Participants viewed backward-masked images for 13.3 to 106.7 ms, and identified the semantic (e.g., beach, road) or spatial structure (e.g., open, closed-off) category. We found no temporal precedence of spatial discrimination relative to semantic discrimination. Computational analyses further suggest that, instead of using spatial layout to infer semantic categories, humans exploit semantic information to discriminate spatial structure categories. These findings challenge traditional 'bottom-up' views of scene perception.
Affiliation(s)
- Matt D. Anderson
- Centre for Perception and Cognition, Psychology, University of Southampton, Southampton, UK (corresponding author)
- James H. Elder
- Centre for Vision Research, Department of Psychology, Department of Electrical Engineering and Computer Science, York University, Toronto, Canada
- Erich W. Graf
- Centre for Perception and Cognition, Psychology, University of Southampton, Southampton, UK
- Wendy J. Adams
- Centre for Perception and Cognition, Psychology, University of Southampton, Southampton, UK
5
Introduction to Artificial Intelligence in Medicine. Artif Intell Med 2022. [DOI: 10.1007/978-3-030-64573-1_27]
6
Palazzo S, Spampinato C, Kavasidis I, Giordano D, Schmidt J, Shah M. Decoding Brain Representations by Multimodal Learning of Neural Activity and Visual Features. IEEE Trans Pattern Anal Mach Intell 2021; 43:3833-3849. [PMID: 32750768] [DOI: 10.1109/tpami.2020.2995909]
Abstract
This work presents a novel method of exploring human brain-visual representations, with a view towards replicating these processes in machines. The core idea is to learn plausible computational and biological representations by correlating human neural activity and natural images. Thus, we first propose a model, EEG-ChannelNet, to learn a brain manifold for EEG classification. After verifying that visual information can be extracted from EEG data, we introduce a multimodal approach that uses deep image and EEG encoders, trained in a siamese configuration, for learning a joint manifold that maximizes a compatibility measure between visual features and brain representations. We then carry out image classification and saliency detection on the learned manifold. Performance analyses show that our approach satisfactorily decodes visual information from neural signals. This, in turn, can be used to effectively supervise the training of deep learning models, as demonstrated by the high performance of image classification and saliency detection on out-of-training classes. The obtained results show that the learned brain-visual features lead to improved performance and simultaneously bring deep models more in line with cognitive neuroscience work related to visual perception and attention.
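The core mechanism described above, a joint embedding trained so that a compatibility measure is higher for matched image/EEG pairs than for mismatched ones, can be sketched in miniature. The embeddings below are simulated vectors, not outputs of EEG-ChannelNet or any trained encoder:

```python
import numpy as np

# Sketch of a siamese-style compatibility measure on simulated embeddings.
rng = np.random.default_rng(2)

def compatibility(img_emb, eeg_emb):
    """Cosine similarity between an image embedding and an EEG embedding."""
    a = img_emb / np.linalg.norm(img_emb)
    b = eeg_emb / np.linalg.norm(eeg_emb)
    return float(a @ b)

img = rng.normal(size=128)                      # hypothetical image embedding
eeg_matched = img + 0.3 * rng.normal(size=128)  # EEG from the same stimulus
eeg_mismatched = rng.normal(size=128)           # EEG from an unrelated stimulus

# Training would minimize a margin-ranking loss so matched pairs score
# higher than mismatched pairs by at least `margin`.
margin = 0.2
loss = max(0.0, margin
           - compatibility(img, eeg_matched)
           + compatibility(img, eeg_mismatched))
print(compatibility(img, eeg_matched) > compatibility(img, eeg_mismatched))
```

In the full method, both encoders are deep networks and the loss is backpropagated through them; the toy version only shows why the compatibility score separates matched from mismatched pairs.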
7
Harris E, Mihai D, Hare J. How Convolutional Neural Network Architecture Biases Learned Opponency and Color Tuning. Neural Comput 2021; 33:858-898. [PMID: 33400902] [DOI: 10.1162/neco_a_01356]
Abstract
Recent work suggests that changing convolutional neural network (CNN) architecture by introducing a bottleneck in the second layer can yield changes in learned function. To understand this relationship fully requires a way of quantitatively comparing trained networks. The fields of electrophysiology and psychophysics have developed a wealth of methods for characterizing visual systems that permit such comparisons. Inspired by these methods, we propose an approach to obtaining spatial and color tuning curves for convolutional neurons that can be used to classify cells in terms of their spatial and color opponency. We perform these classifications for a range of CNNs with different depths and bottleneck widths. Our key finding is that networks with a bottleneck show a strong functional organization: almost all cells in the bottleneck layer become both spatially and color opponent, and cells in the layer following the bottleneck become nonopponent. The color tuning data can further be used to form a rich understanding of how a network encodes color. As a concrete demonstration, we show that shallower networks without a bottleneck learn a complex nonlinear color system, whereas deeper networks with tight bottlenecks learn a simple channel-opponent code in the bottleneck layer. We develop a method of obtaining a hue sensitivity curve for a trained CNN that enables high-level insights that complement the low-level findings from the color tuning data. We go on to train a series of networks under different conditions to ascertain the robustness of the discussed results. Ultimately, our methods and findings align with prior work, strengthening our ability to interpret trained CNNs and furthering our understanding of the connection between architecture and learned representation. Trained models and code for all experiments are available at https://github.com/ecs-vlc/opponency.
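The notion of a color tuning curve used above (probe a unit with stimuli varying along one dimension, record its response, read off the preferred value) can be sketched with a single hand-built unit. The hue parameterization and the red-green opponent weights are illustrative assumptions, not the paper's models or code:

```python
import numpy as np

# Sketch of a color tuning curve for one linear unit.
def hue_to_rgb(h):
    """Crude hue-to-RGB mapping on [0, 2*pi) via phase-shifted cosines."""
    return 0.5 * np.array([np.cos(h),
                           np.cos(h - 2 * np.pi / 3),
                           np.cos(h - 4 * np.pi / 3)]) + 0.5

weights = np.array([1.0, -1.0, 0.0])  # +R, -G: a red-green opponent unit

hues = np.linspace(0.0, 2 * np.pi, 360, endpoint=False)
responses = np.array([weights @ hue_to_rgb(h) for h in hues])

# Analytically, this unit's response is -(sqrt(3)/2) * sin(h - pi/3), so its
# tuning curve peaks at h = 11*pi/6, on the opposite side of the hue circle
# from green.
preferred_hue = float(hues[np.argmax(responses)])
print(round(preferred_hue, 3))
```

Applied to a trained CNN, the same sweep is run through the network and the response of each convolutional unit is recorded, yielding one tuning curve per cell from which opponency can be classified.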
Affiliation(s)
- Ethan Harris
- Vision Learning and Control, Electronics and Computer Science, University of Southampton, Southampton SO17 1BJ, U.K.
- Daniela Mihai
- Vision Learning and Control, Electronics and Computer Science, University of Southampton, Southampton SO17 1BJ, U.K.
- Jonathon Hare
- Vision Learning and Control, Electronics and Computer Science, University of Southampton, Southampton SO17 1BJ, U.K.
8
ter Haar Romeny BM. Introduction to Artificial Intelligence in Medicine. Artif Intell Med 2021. [DOI: 10.1007/978-3-030-58080-3_27-1]
9
The Influence of Object-Color Knowledge on Emerging Object Representations in the Brain. J Neurosci 2020; 40:6779-6789. [PMID: 32703903] [DOI: 10.1523/jneurosci.0158-20.2020]
Abstract
The ability to rapidly and accurately recognize complex objects is a crucial function of the human visual system. To recognize an object, we need to bind incoming visual features, such as color and form, together into cohesive neural representations and integrate these with our preexisting knowledge about the world. For some objects, typical color is a central feature for recognition; for example, a banana is typically yellow. Here, we applied multivariate pattern analysis on time-resolved neuroimaging (MEG) data to examine how object-color knowledge affects emerging object representations over time. Our results from 20 participants (11 female) show that the typicality of object-color combinations influences object representations, although not at the initial stages of object and color processing. We find evidence that color decoding peaks later for atypical object-color combinations compared with typical object-color combinations, illustrating the interplay between processing incoming object features and stored object knowledge. Together, these results provide new insights into the integration of incoming visual information with existing conceptual object knowledge.
SIGNIFICANCE STATEMENT: To recognize objects, we have to be able to bind object features, such as color and shape, into one coherent representation and compare it with stored object knowledge. The MEG data presented here provide novel insights about the integration of incoming visual information with our knowledge about the world. Using color as a model to understand the interaction between seeing and knowing, we show that there is a unique pattern of brain activity for congruently colored objects (e.g., a yellow banana) relative to incongruently colored objects (e.g., a red banana). This effect of object-color knowledge only occurs after single object features are processed, demonstrating that conceptual knowledge is accessed relatively late in the visual processing hierarchy.
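Time-resolved decoding of the kind used in this study (train a classifier at every time point and track when information becomes decodable) can be sketched on simulated sensor data. The nearest-class-mean decoder and the injected latency below are generic illustrative assumptions, not the authors' MEG pipeline:

```python
import numpy as np

# Sketch of time-resolved decoding on simulated trials x sensors x time data.
rng = np.random.default_rng(3)

n_trials, n_sensors, n_times = 80, 30, 40
y = np.repeat([0, 1], n_trials // 2)

# Inject a class-specific sensor pattern whose strength follows a Gaussian
# time course centered on time point 25.
time_course = np.exp(-0.5 * ((np.arange(n_times) - 25) / 4.0) ** 2)
pattern = rng.normal(0.0, 1.0, n_sensors)
X = rng.normal(0.0, 1.0, (n_trials, n_sensors, n_times))
X += 2.0 * y[:, None, None] * pattern[None, :, None] * time_course[None, None, :]

train = np.arange(0, n_trials, 2)  # even trials train, odd trials test
test = np.arange(1, n_trials, 2)

acc = np.empty(n_times)
for t in range(n_times):
    m0 = X[train][y[train] == 0, :, t].mean(axis=0)  # class means at time t
    m1 = X[train][y[train] == 1, :, t].mean(axis=0)
    d0 = np.linalg.norm(X[test][:, :, t] - m0, axis=1)
    d1 = np.linalg.norm(X[test][:, :, t] - m1, axis=1)
    acc[t] = ((d1 < d0).astype(int) == y[test]).mean()

peak = int(np.argmax(acc))  # lands near the injected latency
print(peak)
```

Comparing such peak latencies between conditions (e.g., typical versus atypical object-color pairings) is the logic behind the "color decoding peaks later" result.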
10
Marić M, Domijan D. A neurodynamic model of the interaction between color perception and color memory. Neural Netw 2020; 129:222-248. [PMID: 32615406] [DOI: 10.1016/j.neunet.2020.06.008]
Abstract
The memory color effect and the Spanish castle illusion have been taken as evidence of the cognitive penetrability of vision. In the same manner, the successful decoding of color-related brain signals in functional neuroimaging studies suggests the retrieval of memory colors associated with a perceived gray object. Here, we offer an alternative account of these findings based on the design principles of adaptive resonance theory (ART). In ART, conscious perception is a consequence of a resonant state. Resonance emerges in a recurrent cortical circuit when a bottom-up spatial pattern agrees with the top-down expectation. When they do not agree, a special control mechanism is activated that resets the network and clears the erroneous expectation, thus allowing the bottom-up activity to always dominate in perception. We developed a color ART circuit and evaluated its behavior in computer simulations. The model helps to explain how traces of erroneous expectations about incoming color are eventually removed from color perception, although their transient effect may be visible in behavioral responses or in brain imaging. Our results suggest that the color ART circuit, as a predictive computational system, is almost never penetrable, because it is equipped with computational mechanisms designed to constrain the impact of top-down predictions on ongoing perceptual processing.
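The match-versus-reset mechanism at the heart of ART can be illustrated with a deliberately tiny rule: resonance occurs when the top-down expectation sufficiently overlaps the bottom-up input; otherwise the network resets and clears the expectation. The vectors and vigilance value are illustrative assumptions, far simpler than the authors' color ART circuit:

```python
import numpy as np

# Toy ART-style match/reset rule on three "color channel" activities.
def art_match(bottom_up, expectation, vigilance=0.8):
    """Return ('resonance', overlap) or ('reset', overlap)."""
    overlap = np.minimum(bottom_up, expectation).sum() / bottom_up.sum()
    return ("resonance" if overlap >= vigilance else "reset", float(overlap))

gray_input = np.array([0.5, 0.5, 0.5])   # achromatic bottom-up signal
memory_red = np.array([0.9, 0.2, 0.1])   # top-down "memory color" expectation

state, overlap = art_match(gray_input, memory_red)
print(state, round(overlap, 3))  # the mismatched expectation triggers a reset

# A matching expectation, by contrast, yields resonance.
state_match, _ = art_match(gray_input, gray_input)
print(state_match)
```

The reset branch is what lets bottom-up activity dominate: an erroneous memory-color expectation is cleared rather than being blended into the percept, even though its transient activity could still leave a decodable trace.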
11
Oregi I, Del Ser J, Pérez A, Lozano JA. Robust image classification against adversarial attacks using elastic similarity measures between edge count sequences. Neural Netw 2020; 128:61-72. [PMID: 32442627] [DOI: 10.1016/j.neunet.2020.04.030]
Abstract
Due to their unprecedented capacity to learn patterns from raw data, deep neural networks have become the de facto modeling choice to address complex machine learning tasks. However, recent works have emphasized the vulnerability of deep neural networks when being fed with intelligently manipulated adversarial data instances tailored to confuse the model. In order to overcome this issue, a major effort has been made to find methods capable of making deep learning models robust against adversarial inputs. This work presents a new perspective for improving the robustness of deep neural networks in image classification. In computer vision scenarios, adversarial images are crafted by manipulating legitimate inputs so that the target classifier is eventually fooled, but the manipulation is not visually distinguishable by an external observer. The reason for the imperceptibility of the attack is that the human visual system fails to detect minor variations in color space, but excels at detecting anomalies in geometric shapes. We capitalize on this fact by extracting color gradient features from input images at multiple sensitivity levels to detect possible manipulations. We resort to a deep neural classifier to predict the category of unseen images, whereas a discrimination model analyzes the extracted color gradient features with time series techniques to determine the legitimacy of input images. The performance of our method is assessed over experiments comprising state-of-the-art techniques for crafting adversarial attacks. Results corroborate the increased robustness of the classifier when using our discrimination module, yielding drastically reduced success rates of adversarial attacks that operate on the whole image rather than on localized regions or around the existing shapes of the image. Future research is outlined towards improving the detection accuracy of the proposed method for more general attack strategies.
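The multi-level color-gradient idea above can be sketched as an edge-count sequence: count strong color gradients at several sensitivity thresholds and compare a clean image against a slightly perturbed one. The thresholds, images, and perturbation below are illustrative assumptions, not the authors' exact detector or attack model:

```python
import numpy as np

# Sketch of color-gradient edge-count features at multiple thresholds.
rng = np.random.default_rng(4)

def edge_count_sequence(img, thresholds=(0.02, 0.05, 0.1)):
    """Per-threshold count of pixels whose color-gradient magnitude exceeds it."""
    gx = np.diff(img, axis=1)[:-1]                   # horizontal differences
    gy = np.diff(img, axis=0)[:, :-1]                # vertical differences
    mag = np.sqrt((gx ** 2 + gy ** 2).sum(axis=-1))  # pooled over RGB channels
    return np.array([(mag > t).sum() for t in thresholds])

clean = np.zeros((32, 32, 3))
clean[8:24, 8:24] = 0.8   # one sharp square: few, strong edges

# A small additive perturbation is nearly invisible but litters the image
# with weak gradients, which the most sensitive threshold picks up.
adversarial = clean + rng.uniform(-0.03, 0.03, clean.shape)

print(edge_count_sequence(clean), edge_count_sequence(adversarial))
```

A discrimination model can then compare such count sequences (e.g., with elastic similarity measures over the threshold axis) to flag inputs whose gradient statistics deviate from legitimate images.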
Affiliation(s)
- Izaskun Oregi
- TECNALIA, Basque Research and Technology Alliance (BRTA), 48160 Derio, Spain
- Javier Del Ser
- TECNALIA, Basque Research and Technology Alliance (BRTA), 48160 Derio, Spain; Department of Communications Engineering, University of the Basque Country (UPV/EHU), 48013 Bilbao, Spain; Basque Center for Applied Mathematics (BCAM), 48009 Bilbao, Spain
- Aritz Pérez
- Basque Center for Applied Mathematics (BCAM), 48009 Bilbao, Spain
- José A Lozano
- Basque Center for Applied Mathematics (BCAM), 48009 Bilbao, Spain; Department of Computer Science and Artificial Intelligence, University of the Basque Country (UPV/EHU), 20018 Donostia/San-Sebastián, Spain
12
Weldon KB, Woolgar A, Rich AN, Williams MA. Late disruption of central visual field disrupts peripheral perception of form and color. PLoS One 2020; 15:e0219725. [PMID: 31999697] [PMCID: PMC6991998] [DOI: 10.1371/journal.pone.0219725]
Abstract
Evidence from neuroimaging and brain stimulation studies suggests that visual information about objects in the periphery is fed back to foveal retinotopic cortex in a separate representation that is essential for peripheral perception. The characteristics of this phenomenon have important theoretical implications for the role fovea-specific feedback might play in perception. In this work, we employed a recently developed behavioral paradigm to explore whether late disruption to central visual space impaired perception of color. In the first experiment, participants performed a shape discrimination task on colored novel objects in the periphery while fixating centrally. Consistent with the results from previous work, a visual distractor presented at fixation ~100 ms after presentation of the peripheral stimuli impaired sensitivity to differences in peripheral shapes more than a visual distractor presented at other stimulus onset asynchronies. In a second experiment, participants performed a color discrimination task on the same colored objects. In a third experiment, we further tested for this foveal distractor effect with stimuli restricted to a low-level feature by using homogeneous color patches. These two latter experiments resulted in a similar pattern of behavior: a central distractor presented at the critical stimulus onset asynchrony impaired sensitivity to peripheral color differences, but, importantly, the magnitude of the effect was stronger when peripheral objects contained complex shape information. These results show a behavioral effect consistent with disrupting feedback to the fovea, in line with the foveal feedback suggested by previous neuroimaging studies.
Affiliation(s)
- Kimberly B. Weldon
- Department of Psychiatry and Behavioral Sciences, University of Minnesota, Minneapolis, MN, United States of America
- Perception in Action Research Centre (PARC), Department of Cognitive Science, Faculty of Human Sciences, Macquarie University, Sydney, NSW, Australia
- ARC Centre of Excellence in Cognition and its Disorders, Macquarie University, Sydney, NSW, Australia
- Alexandra Woolgar
- Perception in Action Research Centre (PARC), Department of Cognitive Science, Faculty of Human Sciences, Macquarie University, Sydney, NSW, Australia
- ARC Centre of Excellence in Cognition and its Disorders, Macquarie University, Sydney, NSW, Australia
- Medical Research Council (UK), Cognition and Brain Sciences Unit, University of Cambridge, Cambridge, England, United Kingdom
- Anina N. Rich
- Perception in Action Research Centre (PARC), Department of Cognitive Science, Faculty of Human Sciences, Macquarie University, Sydney, NSW, Australia
- ARC Centre of Excellence in Cognition and its Disorders, Macquarie University, Sydney, NSW, Australia
- Mark A. Williams
- Perception in Action Research Centre (PARC), Department of Cognitive Science, Faculty of Human Sciences, Macquarie University, Sydney, NSW, Australia
- ARC Centre of Excellence in Cognition and its Disorders, Macquarie University, Sydney, NSW, Australia
13
14
Pinotsis DA, Siegel M, Miller EK. Sensory processing and categorization in cortical and deep neural networks. Neuroimage 2019; 202:116118. [PMID: 31445126] [PMCID: PMC6819254] [DOI: 10.1016/j.neuroimage.2019.116118]
Abstract
Many recent advances in artificial intelligence (AI) are rooted in visual neuroscience. However, ideas from more complicated paradigms like decision-making are less used. Although automated decision-making systems are ubiquitous (driverless cars, pilot support systems, medical diagnosis algorithms, etc.), achieving human-level performance in decision-making tasks is still a challenge. At the same time, these tasks that are hard for AI are easy for humans. Thus, understanding human brain dynamics during these decision-making tasks and modeling them using deep neural networks could improve AI performance. Here we modelled some of the complex neural interactions during a sensorimotor decision-making task. We investigated how brain dynamics flexibly represented and distinguished between sensory processing and categorization in two sensory domains: motion direction and color. We used two different approaches for understanding neural representations. We compared brain responses to (1) the geometry of a sensory or category domain (domain selectivity) and (2) predictions from deep neural networks (computation selectivity). Both approaches gave us similar results, confirming the validity of our analyses. Using the first approach, we found that neural representations changed depending on context. We then trained deep recurrent neural networks to perform the same tasks as the animals. Using the second approach, we found that computations in different brain areas also changed flexibly depending on context. Color computations appeared to rely more on sensory processing, while motion computations relied more on abstract categories. Overall, our results shed light on the biological basis of categorization and on differences in selectivity and computations across brain areas. They also suggest a way to study sensory and categorical representations in the brain: compare brain responses to both a behavioral model and a deep neural network and test whether they give similar results.
Affiliation(s)
- Dimitris A Pinotsis
- Centre for Mathematical Neuroscience and Psychology and Department of Psychology, City, University of London, London, EC1V 0HB, United Kingdom; The Picower Institute for Learning & Memory and Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology, Cambridge, MA, 02139, USA
- Markus Siegel
- Center for Integrative Neuroscience and MEG Center, University of Tübingen, 72076 Tübingen, Germany
- Earl K Miller
- The Picower Institute for Learning & Memory and Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology, Cambridge, MA, 02139, USA
15
Teichmann L, Grootswagers T, Carlson TA, Rich AN. Seeing versus knowing: The temporal dynamics of real and implied colour processing in the human brain. Neuroimage 2019; 200:373-381. [DOI: 10.1016/j.neuroimage.2019.06.062]
16
Akhand O, Galetta MS, Cobbs L, Hasanaj L, Webb N, Drattell J, Amorapanth P, Rizzo JR, Nolan R, Serrano L, Rucker JC, Cardone D, Jordan BD, Silverio A, Galetta SL, Balcer LJ. The new Mobile Universal Lexicon Evaluation System (MULES): A test of rapid picture naming for concussion sized for the sidelines. J Neurol Sci 2018; 387:199-204. [PMID: 29571863] [PMCID: PMC6022286] [DOI: 10.1016/j.jns.2018.02.031]
Abstract
OBJECTIVE Measures of rapid automatized naming (RAN) have been used for over 50 years to capture vision-based aspects of cognition. The Mobile Universal Lexicon Evaluation System (MULES) is a test of rapid picture naming under investigation for detection of concussion and other neurological disorders. MULES was designed as a series of 54 grouped color photographs (fruits, random objects, animals) that integrates saccades, color perception and contextual object identification. Recent changes to the MULES test have been made to improve ease of use on the athletic sidelines. Originally an 11 × 17-inch single-sided paper, the test has been reduced to a laminated 8.5 × 11-inch double-sided version. We identified performance changes associated with transition to the new, MULES, now sized for the sidelines, and examined MULES on the sideline for sports-related concussion. METHODS We administered the new laminated MULES to a group of adult office volunteers as well as youth and collegiate athletes during pre-season baseline testing. Athletes with concussion underwent sideline testing after injury. Time scores for the new laminated MULES were compared to those for the larger version (big MULES). RESULTS Among 501 athletes and office volunteers (age 16 ± 7 years, range 6-59, 29% female), average test times at baseline were 44.4 ± 14.4 s for the new laminated MULES (n = 196) and 46.5 ± 16.3 s for big MULES (n = 248). Both versions were completed by 57 participants, with excellent agreement (p < 0.001, linear regression, accounting for age). Age was a predictor of test times for both MULES versions, with longer times noted for younger participants (p < 0.001). Among 6 athletes with concussion thus far during the fall sports season (median age 15 years, range 11-21) all showed worsening of MULES scores from pre-season baseline (median 4.0 s, range 2.1-16.4). 
CONCLUSION The MULES test has been converted to an 11 × 8.5-inch laminated version, with excellent agreement between versions across age groups. Feasibly administered at pre-season and in an office setting, the MULES test shows preliminary evidence of capacity to identify athletes with sports-related concussion.
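The agreement analysis above (new laminated-MULES times regressed on big-MULES times while accounting for age) can be sketched with ordinary least squares. This is a minimal illustration on synthetic data; the generated times, coefficients, and noise levels are hypothetical stand-ins, not the study's measurements:

```python
import numpy as np

def version_agreement(big_times, laminated_times, ages):
    """OLS fit of laminated ~ intercept + big + age; returns coefficients and R^2."""
    X = np.column_stack([np.ones(len(big_times)), big_times, ages])
    beta, *_ = np.linalg.lstsq(X, laminated_times, rcond=None)
    resid = laminated_times - X @ beta
    ss_res = float(resid @ resid)
    ss_tot = float(np.sum((laminated_times - laminated_times.mean()) ** 2))
    return beta, 1.0 - ss_res / ss_tot

# Synthetic cohort of 57 participants who took both versions (hypothetical):
# younger participants are slower, and the two versions track each other closely.
rng = np.random.default_rng(0)
ages = rng.uniform(6, 59, size=57)
big = 80.0 - 0.7 * ages + rng.normal(scale=4.0, size=57)
laminated = 0.9 * big - 2.0 + rng.normal(scale=3.0, size=57)

beta, r2 = version_agreement(big, laminated, ages)  # high R^2 = good agreement
```

A high R² here plays the role of the "excellent agreement" the abstract reports; the actual study additionally tested the regression coefficient for significance.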
Affiliation(s)
- Omar Akhand
- Department of Neurology, New York University School of Medicine, New York, NY, USA.
- Matthew S Galetta
- Department of Neurology, New York University School of Medicine, New York, NY, USA.
- Lucy Cobbs
- Department of Neurology, New York University School of Medicine, New York, NY, USA.
- Lisena Hasanaj
- Department of Neurology, New York University School of Medicine, New York, NY, USA.
- Nikki Webb
- Department of Emergency Medicine, New York University School of Medicine, New York, NY, USA.
- Julia Drattell
- Department of Recreation and Athletics, New York University, New York, NY, USA.
- Prin Amorapanth
- Department of Physical Medicine and Rehabilitation, New York University School of Medicine, New York, NY, USA.
- John-Ross Rizzo
- Department of Neurology, New York University School of Medicine, New York, NY, USA; Department of Physical Medicine and Rehabilitation, New York University School of Medicine, New York, NY, USA.
- Rachel Nolan
- Department of Neurology, New York University School of Medicine, New York, NY, USA.
- Liliana Serrano
- Department of Neurology, New York University School of Medicine, New York, NY, USA.
- Janet C Rucker
- Department of Neurology, New York University School of Medicine, New York, NY, USA; Department of Ophthalmology, New York University School of Medicine, New York, NY, USA.
- Dennis Cardone
- Department of Emergency Medicine, New York University School of Medicine, New York, NY, USA.
- Arlene Silverio
- Department of Orthopaedics, Primary Care Sports Medicine, New York University School of Medicine, New York, NY, USA.
- Steven L Galetta
- Department of Neurology, New York University School of Medicine, New York, NY, USA; Department of Ophthalmology, New York University School of Medicine, New York, NY, USA.
- Laura J Balcer
- Department of Neurology, New York University School of Medicine, New York, NY, USA; Department of Population Health, New York University School of Medicine, New York, NY, USA; Department of Ophthalmology, New York University School of Medicine, New York, NY, USA.
17
Abstract
Color is special among basic visual features in that it can form a defining part of objects that are engrained in our memory. Whereas most neuroimaging research on human color vision has focused on responses related to external stimulation, the present study investigated how sensory-driven color vision is linked to subjective color perception induced by object imagery. We recorded fMRI activity in male and female volunteers during viewing of abstract color stimuli that were red, green, or yellow in half of the runs. In the other half we asked them to produce mental images of colored, meaningful objects (such as tomato, grapes, banana) corresponding to the same three color categories. Although physically presented color could be decoded from all retinotopically mapped visual areas, only hV4 allowed predicting colors of imagined objects when classifiers were trained on responses to physical colors. Importantly, only neural signal in hV4 was predictive of behavioral performance in the color judgment task on a trial-by-trial basis. The commonality between neural representations of sensory-driven and imagined object color and the behavioral link to neural representations in hV4 identifies area hV4 as a perceptual hub linking externally triggered color vision with color in self-generated object imagery. SIGNIFICANCE STATEMENT Humans experience color not only when visually exploring the outside world, but also in the absence of visual input, for example when remembering, dreaming, and during imagery. It is not known where neural codes for sensory-driven and internally generated hue converge. In the current study we evoked matching subjective color percepts, one driven by physically presented color stimuli, the other by internally generated color imagery. This allowed us to identify area hV4 as the only site where neural codes of corresponding subjective color perception converged regardless of its origin. Color codes in hV4 also predicted behavioral performance in an imagery task, suggesting it forms a perceptual hub for color perception.
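The cross-decoding logic described above (train a classifier on responses to physically presented colors, then test it on imagery trials) can be illustrated with a minimal nearest-centroid decoder on synthetic "voxel" data. Everything here is a hypothetical simplification: the voxel patterns are random prototypes, and the nearest-centroid rule stands in for the paper's actual classifier:

```python
import numpy as np

rng = np.random.default_rng(1)
n_voxels = 50
# Hypothetical hue-selective response patterns for red / green / yellow.
prototypes = rng.normal(size=(3, n_voxels))

def simulate(labels, noise):
    """Synthetic per-trial voxel responses: class prototype plus Gaussian noise."""
    return prototypes[labels] + rng.normal(scale=noise, size=(len(labels), n_voxels))

train_y = np.repeat([0, 1, 2], 40)   # trials with physically presented color
test_y = np.repeat([0, 1, 2], 20)    # imagery trials (noisier responses)

# "Train" on physical-color responses: one mean pattern (centroid) per hue.
train_X = simulate(train_y, noise=1.0)
centroids = np.stack([train_X[train_y == c].mean(axis=0) for c in range(3)])

# Cross-decode: label each imagery trial by its nearest centroid.
test_X = simulate(test_y, noise=2.0)
dists = ((test_X[:, None, :] - centroids[None]) ** 2).sum(axis=2)
acc = float((dists.argmin(axis=1) == test_y).mean())  # chance level is 1/3
```

Accuracy above the 1/3 chance level is what "predicting colors of imagined objects" means operationally; in the study, only hV4 yielded such above-chance cross-decoding.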
18
Cobbs L, Hasanaj L, Amorapanth P, Rizzo JR, Nolan R, Serrano L, Raynowska J, Rucker JC, Jordan BD, Galetta SL, Balcer LJ. Mobile Universal Lexicon Evaluation System (MULES) test: A new measure of rapid picture naming for concussion. J Neurol Sci 2016; 372:393-398. [PMID: 27856005 DOI: 10.1016/j.jns.2016.10.044] [Citation(s) in RCA: 17] [Impact Index Per Article: 2.1] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/28/2016] [Revised: 09/25/2016] [Accepted: 10/26/2016] [Indexed: 10/20/2022]
Abstract
OBJECTIVE This study introduces a rapid picture naming test, the Mobile Universal Lexicon Evaluation System (MULES), as a novel, vision-based performance measure for concussion screening. The MULES is a visual-verbal task that includes 54 original photographs of fruits, objects and animals. We piloted MULES in a cohort of volunteers to determine feasibility, ranges of picture naming responses, and the relation of MULES time scores to those of the King-Devick (K-D) test, a rapid number naming test. METHODS A convenience sample (n=20, age 34±10) underwent MULES and K-D (spiral-bound and iPad versions). Administration order was randomized; MULES tests were audio-recorded to provide objective data on temporal variability and ranges of picture naming responses. RESULTS Scores for the best of two trials for all tests were 40-50s; the average time required to name each MULES picture (0.72±0.14s) was greater than that needed for each K-D number (spiral: 0.33±0.05s, iPad: 0.36±0.06s, 120 numbers; p<0.0001, paired t-test). MULES scores showed the greatest degree of improvement between trials (9.4±4.8s, p<0.0001 for trials 1 vs. 2), compared to K-D (spiral 1.5±3.3s, iPad 1.8±3.4s). Shorter MULES times demonstrated a moderate and significant correlation with shorter iPad but not spiral K-D times (r=0.49, p=0.03). CONCLUSION The MULES test is a rapid picture naming task that may engage more extensive neural systems than more commonly used rapid number naming tasks. Rapid picture naming may require additional processing devoted to color perception, object identification, and categorization. Both tests rely on initiation and sequencing of saccadic eye movements.
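The correlation reported above between MULES and iPad K-D times (r=0.49) is a Pearson coefficient. A minimal sketch of that statistic, applied here only to hypothetical time scores:

```python
import numpy as np

def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length samples."""
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    xc, yc = x - x.mean(), y - y.mean()
    return float(xc @ yc / np.sqrt((xc @ xc) * (yc @ yc)))

# Hypothetical paired time scores (seconds); perfectly linear pairs give r = 1.
r = pearson_r([42.0, 45.5, 48.0, 51.5], [17.0, 18.4, 19.4, 20.8])
```

Values near ±1 indicate a strong linear relationship; the study's r=0.49 sits in the moderate range, and its p-value would come from a separate significance test (e.g. a t-test on r with n-2 degrees of freedom).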
Affiliation(s)
- Lucy Cobbs
- Department of Neurology, New York University School of Medicine, New York, NY, USA.
- Lisena Hasanaj
- Department of Neurology, New York University School of Medicine, New York, NY, USA.
- Prin Amorapanth
- Department of Physical Medicine and Rehabilitation, New York University School of Medicine, New York, NY, USA.
- John-Ross Rizzo
- Department of Physical Medicine and Rehabilitation, New York University School of Medicine, New York, NY, USA.
- Rachel Nolan
- Department of Neurology, New York University School of Medicine, New York, NY, USA.
- Liliana Serrano
- Department of Neurology, New York University School of Medicine, New York, NY, USA.
- Jenelle Raynowska
- Department of Neurology, New York University School of Medicine, New York, NY, USA.
- Janet C Rucker
- Department of Neurology, New York University School of Medicine, New York, NY, USA; Department of Ophthalmology, New York University School of Medicine, New York, NY, USA.
- Steven L Galetta
- Department of Neurology, New York University School of Medicine, New York, NY, USA; Department of Ophthalmology, New York University School of Medicine, New York, NY, USA.
- Laura J Balcer
- Department of Neurology, New York University School of Medicine, New York, NY, USA; Department of Population Health, New York University School of Medicine, New York, NY, USA; Department of Ophthalmology, New York University School of Medicine, New York, NY, USA.