1
Smithson CJR, Chow JK, Chang TY, Gauthier I. Measuring object recognition ability: Reliability, validity, and the aggregate z-score approach. Behav Res Methods 2024; 56:6598-6612. PMID: 38438656. DOI: 10.3758/s13428-024-02372-w.
Abstract
Measurement of domain-general object recognition ability (o) requires minimization of domain-specific variance. One approach is to model o as a latent variable explaining performance on a battery of tests which differ in task demands and stimuli; however, time and sample requirements may be prohibitive. Alternatively, an aggregate measure of o can be obtained by averaging z-scores across tests. Using data from Sunday et al. (Journal of Experimental Psychology: General, 151, 676-694, 2022), we demonstrated that aggregate scores from just two such object recognition tests provide a good approximation (r = .79) of factor scores calculated from a model using a much larger set of tests. Some test combinations produced correlations of up to r = .87 with factor scores. We then revised these tests to reduce testing time, and developed an odd one out task, using a unique object category on nearly every trial, to increase task and stimuli diversity. To validate our measures, 163 participants completed the object recognition tests on two occasions, one month apart. Providing the first evidence that o is stable over time, our short aggregate o measure demonstrated good test-retest reliability (r = .77). The stability of o could not be completely accounted for by intelligence, perceptual speed, and early visual ability. Structural equation modeling suggested that our tests load significantly onto the same latent variable, and revealed that as a latent variable, o is highly stable (r = .93). Aggregation is an efficient method for estimating o, allowing investigation of individual differences in object recognition ability to be more accessible in future studies.
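The aggregate z-score approach described in this abstract reduces to a few lines of computation: standardize each test's scores, then average across tests per participant. A minimal sketch (the test names and scores below are hypothetical illustrations, not data from the study):

```python
import numpy as np

def aggregate_o_score(scores_by_test):
    """Average each participant's z-scores across object recognition tests.

    scores_by_test: dict mapping test name -> raw scores, one per
    participant (same participant order in every test).
    """
    z_per_test = []
    for scores in scores_by_test.values():
        scores = np.asarray(scores, dtype=float)
        # Standardize within each test so tests with different scales
        # contribute equally to the aggregate.
        z = (scores - scores.mean()) / scores.std(ddof=1)
        z_per_test.append(z)
    # The aggregate o estimate is the mean z-score across tests.
    return np.mean(z_per_test, axis=0)

# Hypothetical scores from two object recognition tests (5 participants)
scores = {
    "matching_test": [55, 70, 62, 80, 48],
    "odd_one_out":   [30, 42, 35, 50, 28],
}
o_estimates = aggregate_o_score(scores)
```

Because each test is z-scored before averaging, the aggregate is centered at zero within the sample, and a test scored in percent correct carries no more weight than one scored in d'.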
Affiliation(s)
- Jason K Chow
- Department of Psychology, Vanderbilt University, Nashville, TN, USA
- Ting-Yun Chang
- Department of Psychology, Vanderbilt University, Nashville, TN, USA
- State Key Laboratory of Cognitive Neuroscience and Learning & IDG/McGovern Institute for Brain Research, Beijing Normal University, Beijing, China
- Isabel Gauthier
- Department of Psychology, Vanderbilt University, Nashville, TN, USA
2
Chow JK, Palmeri TJ, Gauthier I. Distinct but related abilities for visual and haptic object recognition. Psychon Bull Rev 2024. PMID: 38381302. DOI: 10.3758/s13423-024-02471-x.
Abstract
People vary in their ability to recognize objects visually. Individual differences in matching and recognizing objects visually are supported by a domain-general ability capturing common variance across different tasks (e.g., Richler et al., Psychological Review, 126, 226-251, 2019). Behavioral (e.g., Cooke et al., Neuropsychologia, 45, 484-495, 2007) and neural evidence (e.g., Amedi, Cerebral Cortex, 12, 1202-1212, 2002) suggest overlapping mechanisms in the processing of visual and haptic information in the service of object recognition, but it is unclear whether such group-average results generalize to individual differences. Psychometrically validated measures are required, but these have been lacking in the haptic modality. We investigate whether object recognition ability is specific to vision or extends to haptics using psychometric measures we have developed. We use multiple visual and haptic tests with different objects and different formats to measure domain-general visual and haptic abilities and to test for relations across them. We measured object recognition abilities using two visual tests and four haptic tests (two each for two kinds of haptic exploration) in 97 participants. Partial correlation and confirmatory factor analyses converge to support the existence of a domain-general haptic object recognition ability that is moderately correlated with domain-general visual object recognition ability. Visual and haptic abilities share about 25% of their variance, supporting the existence of a multisensory domain-general ability while leaving a substantial amount of residual variance for modality-specific abilities. These results extend our understanding of the structure of object recognition abilities: while some mechanisms may generalize across categories, tasks, and modalities, others remain distinct between modalities.
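Partial correlation, one of the two analyses named in this abstract, asks whether two measures remain correlated after control variables are regressed out of both. A minimal sketch of the computation (the variable names and simulated data are illustrative, not the study's):

```python
import numpy as np

def partial_corr(x, y, controls):
    """Correlation between x and y after linearly regressing out controls."""
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    # Design matrix: intercept plus each control variable
    C = np.column_stack([np.ones(len(x))]
                        + [np.asarray(c, dtype=float) for c in controls])
    # Residualize x and y on the controls via least squares
    rx = x - C @ np.linalg.lstsq(C, x, rcond=None)[0]
    ry = y - C @ np.linalg.lstsq(C, y, rcond=None)[0]
    return float(np.corrcoef(rx, ry)[0, 1])

# Illustrative data: two test scores that correlate only through a
# shared third variable (e.g., a general cognitive factor g)
rng = np.random.default_rng(0)
g = rng.normal(size=300)
visual = g + rng.normal(size=300)
haptic = g + rng.normal(size=300)

raw_r = float(np.corrcoef(visual, haptic)[0, 1])   # sizable raw correlation
partial_r = partial_corr(visual, haptic, [g])      # shrinks once g is removed
```

In this simulated case the partial correlation collapses toward zero because the shared variance is entirely control-driven; a correlation that survives partialling, as reported in the abstract, indicates shared variance beyond the controls.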
Affiliation(s)
- Jason K Chow
- Department of Psychology, Vanderbilt University, 111 21st Avenue South, Nashville, TN, 37240, USA
- Thomas J Palmeri
- Department of Psychology, Vanderbilt University, 111 21st Avenue South, Nashville, TN, 37240, USA
- Isabel Gauthier
- Department of Psychology, Vanderbilt University, 111 21st Avenue South, Nashville, TN, 37240, USA
3
Sun J, Gauthier I. Does food recognition depend on color? Psychon Bull Rev 2023; 30:2219-2229. PMID: 37231176. DOI: 10.3758/s13423-023-02298-y.
Abstract
Color is considered important in food perception, but its role in food-specific visual mechanisms is unclear. We explore this question in North American adults. We build on work revealing contributions from domain-general and domain-specific abilities in food recognition, and a negative correlation between the domain-specific component and food neophobia (FN, an aversion to novel food). In Study 1, participants performed two food-recognition tests, one in color and one in grayscale. Removing color reduced performance, but food recognition was predicted by domain-general and domain-specific abilities, and FN negatively correlated with food recognition. In Study 2, we removed color from both food tests. Food recognition was still predicted by domain-general and food-specific abilities, but with no relation between food-specific ability and FN. In Study 3, color-blind men reported lower FN than men with normal color perception. These results suggest two separate food-specific recognition mechanisms, only one of which is dependent on color.
Affiliation(s)
- Jisoo Sun
- Department of Psychology, Vanderbilt University, 111 21st Avenue South, Nashville, TN, 37240, USA
- Isabel Gauthier
- Department of Psychology, Vanderbilt University, 111 21st Avenue South, Nashville, TN, 37240, USA
4
Delavari P, Ozturan G, Yuan L, Yilmaz Ö, Oruc I. Artificial intelligence, explainability, and the scientific method: A proof-of-concept study on novel retinal biomarker discovery. PNAS Nexus 2023; 2:pgad290. PMID: 37746328. PMCID: PMC10517742. DOI: 10.1093/pnasnexus/pgad290.
Abstract
We present a structured approach to combine explainability of artificial intelligence (AI) with the scientific method for scientific discovery. We demonstrate the utility of this approach in a proof-of-concept study where we uncover biomarkers from a convolutional neural network (CNN) model trained to classify patient sex in retinal images. This is a trait that is not currently recognized by diagnosticians in retinal images, yet one successfully classified by CNNs. Our methodology consists of four phases. In Phase 1, CNN development, we train a visual geometry group (VGG) model to recognize patient sex in retinal images. In Phase 2, Inspiration, we review visualizations obtained from post hoc interpretability tools to make observations and articulate exploratory hypotheses; here, we listed 14 exploratory hypotheses regarding retinal sex differences. In Phase 3, Exploration, we test all exploratory hypotheses on an independent dataset. Out of the 14 exploratory hypotheses, nine revealed significant differences. In Phase 4, Verification, we re-tested the nine flagged hypotheses on a new dataset. Five were verified, revealing (i) significantly greater length, (ii) more nodes, and (iii) more branches of retinal vasculature, (iv) greater retinal area covered by the vessels in the superior temporal quadrant, and (v) a darker peripapillary region in male eyes. Finally, we trained a group of ophthalmologists (N = 26) to recognize the novel retinal features for sex classification. While their pretraining performance was not different from chance level or from the performance of a nonexpert group (N = 31), after training their performance increased significantly (p < 0.001, d = 2.63). These findings showcase the potential for retinal biomarker discovery through CNN applications, with the added utility of empowering medical practitioners with new diagnostic capabilities to enhance their clinical toolkit.
Affiliation(s)
- Parsa Delavari
- Ophthalmology and Visual Sciences, University of British Columbia, Vancouver, V5Z 0A6 BC, Canada
- Neuroscience, University of British Columbia, Djavad Mowafaghian Centre for Brain Health, Vancouver, V6T 1Z3 BC, Canada
- Gulcenur Ozturan
- Ophthalmology and Visual Sciences, University of British Columbia, Vancouver, V5Z 0A6 BC, Canada
- Lei Yuan
- Ophthalmology and Visual Sciences, University of British Columbia, Vancouver, V5Z 0A6 BC, Canada
- Özgür Yilmaz
- Mathematics, University of British Columbia, Vancouver, V6T 1Z2 BC, Canada
- Ipek Oruc
- Ophthalmology and Visual Sciences, University of British Columbia, Vancouver, V5Z 0A6 BC, Canada
- Neuroscience, University of British Columbia, Djavad Mowafaghian Centre for Brain Health, Vancouver, V6T 1Z3 BC, Canada
5
Chow JK, Palmeri TJ, Pluck G, Gauthier I. Evidence for an amodal domain-general object recognition ability. Cognition 2023; 238:105542. PMID: 37419065. DOI: 10.1016/j.cognition.2023.105542.
Abstract
A general object recognition ability predicts performance across a variety of high-level visual tests, categories, and performance in haptic recognition. Does this ability extend to auditory recognition? Vision and haptics tap into similar representations of shape and texture. In contrast, features of auditory perception like pitch, timbre, or loudness do not readily translate into shape percepts related to edges, surfaces, or spatial arrangement of parts. We find that an auditory object recognition ability correlates highly with a visual object recognition ability after controlling for general intelligence, perceptual speed, low-level visual ability, and memory ability. Auditory object recognition was a stronger predictor of visual object recognition than all control measures across two experiments, even though those control variables were also tested visually. These results point towards a single high-level ability used in both vision and audition. Much work highlights how the integration of visual and auditory information is important in specific domains (e.g., speech, music), with evidence for some overlap of visual and auditory neural representations. Our results are the first to reveal a domain-general ability, o, that predicts object recognition performance in both visual and auditory tests. Because o is domain-general, it reveals mechanisms that apply across a wide range of situations, independent of experience and knowledge. As o is distinct from general intelligence, it is well positioned to potentially add predictive validity when explaining individual differences in a variety of tasks, above and beyond measures of common cognitive abilities like general intelligence and working memory.
Affiliation(s)
- Jason K Chow
- Department of Psychology, Vanderbilt University, USA
- Graham Pluck
- Faculty of Psychology, Chulalongkorn University, Thailand
6
Cooper D, Wiggins MW, Main LC, Wills JA, Doyle T. Cue utilisation is partially related to performance on an urban operations course but not experience. Appl Ergon 2023; 110:104024. PMID: 37080083. DOI: 10.1016/j.apergo.2023.104024.
Abstract
INTRODUCTION Decision making in the use of force relies on accurate cue identification to inform an appropriate response. This research was designed to test the relationship between cue utilisation and performance prior to, and following, participation in an urban operations course (UOC). METHODS A total of 37 participants were assessed on cue utilisation measures, course outcome, and between-group changes following course participation. RESULTS A significant main effect was evident for cue utilisation and point of administration (p = 0.005), but not for training group (p = 0.54), nor for the interaction between group and point of administration (p = 0.410). No main effect was evident between groups and training outcome (p = 0.11). However, there was a main effect for point of administration (p = 0.02) and for training outcome and point of administration (p = 0.02). CONCLUSION Although cue utilisation is an essential component of perception-action tasks, cues may be specific to the relevant training environment, with limited transfer to the operational context.
Affiliation(s)
- Luana C Main
- Deakin University, Institute for Physical Activity & Nutrition (IPAN), Geelong, VIC, Australia
- Tim Doyle
- Macquarie University, Sydney, NSW, Australia
7
McGugin RW, Sunday MA, Gauthier I. The neural correlates of domain-general visual ability. Cereb Cortex 2023; 33:4280-4292. PMID: 36045003. DOI: 10.1093/cercor/bhac342.
Abstract
People vary in their general ability to compare, identify, and remember objects. Research using latent variable modeling identifies a domain-general visual recognition ability (called o) that reflects correlations among different visual tasks and categories. We measure associations between a psychometrically sensitive measure of o and a neurometrically sensitive measure of visual sensitivity to shape. We report evidence for distributed neural correlates of o using functional and anatomical regions of interest (ROIs) as well as whole-brain analyses. Neural selectivity to shape is associated with o in several regions of the ventral pathway, as well as additional foci in parietal and premotor cortex. Multivariate analyses suggest the distributed effects in ventral cortex reflect a common mechanism. The network of brain areas where neural selectivity predicts o is similar to that evoked by the most informative features for object recognition in prior work, showing convergence of two different approaches on identifying areas that support the best object recognition performance. Because o predicts performance across many visual tasks for both novel and familiar objects, we propose that o could predict the magnitude of neural changes in task-relevant areas following experience with a specific task and object category.
Affiliation(s)
- Rankin W McGugin
- Department of Psychology, Vanderbilt University, 301 Wilson Hall, Nashville, TN 37240, United States
- Mackenzie A Sunday
- Department of Psychology, Vanderbilt University, 301 Wilson Hall, Nashville, TN 37240, United States
- Isabel Gauthier
- Department of Psychology, Vanderbilt University, 301 Wilson Hall, Nashville, TN 37240, United States
8
Smithson CJR, Eichbaum QG, Gauthier I. Object recognition ability predicts category learning with medical images. Cogn Res Princ Implic 2023; 8:9. PMID: 36720722. PMCID: PMC9889590. DOI: 10.1186/s41235-022-00456-9.
Abstract
We investigated the relationship between category learning and domain-general object recognition ability (o). We assessed this relationship in a radiological context, using a category learning test in which participants judged whether white blood cells were cancerous. In study 1, Bayesian evidence negated a relationship between o and category learning. This lack of correlation occurred despite high reliability in all measurements. However, participants only received feedback on the first 10 of 60 trials. In study 2, we assigned participants to one of two conditions: feedback on only the first 10 trials, or on all 60 trials of the category learning test. We found strong Bayesian evidence for a correlation between o and categorisation accuracy in the full-feedback condition, but not when feedback was limited to early trials. Moderate Bayesian evidence supported a difference between these correlations. Without feedback, participants may stick to simple rules they formulate at the start of category learning, when trials are easier. Feedback may encourage participants to abandon less effective rules and switch to exemplar learning. This work provides the first evidence relating o to a specific learning mechanism, suggesting this ability is more dependent upon exemplar learning mechanisms than rule abstraction. Object-recognition ability could complement other sources of individual differences when predicting accuracy of medical image interpretation.
Affiliation(s)
- Conor J R Smithson
- Department of Psychology, Vanderbilt University, PMB 407817, 2301 Vanderbilt Place, Nashville, TN, 37240-7817, USA
- Quentin G Eichbaum
- Department of Pathology, Microbiology and Immunology, Vanderbilt University, Nashville, USA
- Vanderbilt Pathology Education Research Group, Nashville, USA
- Isabel Gauthier
- Department of Psychology, Vanderbilt University, PMB 407817, 2301 Vanderbilt Place, Nashville, TN, 37240-7817, USA
9
Wang Z, Manassi M, Ren Z, Ghirardo C, Canas-Bajo T, Murai Y, Zhou M, Whitney D. Idiosyncratic biases in the perception of medical images. Front Psychol 2022; 13:1049831. PMID: 36600706. PMCID: PMC9806180. DOI: 10.3389/fpsyg.2022.1049831.
Abstract
Introduction: Radiologists routinely make life-altering decisions. Optimizing these decisions has been an important goal for many years and has prompted a great deal of research on the basic perceptual mechanisms that underlie radiologists' decisions. Previous studies have found that there are substantial individual differences in radiologists' diagnostic performance (e.g., sensitivity) due to experience, training, or search strategies. In addition to variations in sensitivity, however, another possibility is that radiologists might have perceptual biases: systematic misperceptions of visual stimuli. Although a great deal of research has investigated radiologist sensitivity, very little has explored the presence of, or individual differences in, perceptual biases. Methods: Here, we test whether radiologists have perceptual biases using controlled artificial images and realistic medical images generated by Generative Adversarial Networks (GANs). In Experiment 1, observers adjusted the appearance of simulated tumors to match previously shown targets. In Experiment 2, observers were shown a mix of real and GAN-generated CT lesion images and rated the realness of each image. Results: We show that every tested radiologist was characterized by unique and systematic perceptual biases; these perceptual biases cannot be explained simply by attentional differences, and they can be observed across different imaging modalities and task settings, suggesting that idiosyncratic biases in medical image perception may be widespread. Discussion: Characterizing and understanding these biases could be important for many practical settings, such as training, pairing readers, and career selection for radiologists. These results may have consequential implications for many other fields as well, where individual observers are the linchpins for life-altering perceptual decisions.
Affiliation(s)
- Zixuan Wang
- Department of Psychology, University of California, Berkeley, Berkeley, CA, United States
- Mauro Manassi
- School of Psychology, University of Aberdeen, King’s College, Aberdeen, United Kingdom
- Zhihang Ren
- Department of Psychology, University of California, Berkeley, Berkeley, CA, United States
- Vision Science Group, University of California, Berkeley, Berkeley, CA, United States
- Cristina Ghirardo
- Department of Psychology, University of California, Berkeley, Berkeley, CA, United States
- Teresa Canas-Bajo
- Department of Psychology, University of California, Berkeley, Berkeley, CA, United States
- Vision Science Group, University of California, Berkeley, Berkeley, CA, United States
- Yuki Murai
- Center for Information and Neural Networks, National Institute of Information and Communications Technology, Koganei, Japan
- Min Zhou
- Department of Pediatrics, The First People's Hospital of Shuangliu District, Chengdu, Sichuan, China
- David Whitney
- Department of Psychology, University of California, Berkeley, Berkeley, CA, United States
- Vision Science Group, University of California, Berkeley, Berkeley, CA, United States
- Helen Wills Neuroscience Institute, University of California, Berkeley, Berkeley, CA, United States
10
Gauthier I, Fiestan G. Food neophobia predicts visual ability in the recognition of prepared food, beyond domain-general factors. Food Qual Prefer 2022. DOI: 10.1016/j.foodqual.2022.104702.
11
Abstract
Visual comparison-comparing visual stimuli (e.g., fingerprints) side by side and determining whether they originate from the same or different source (i.e., "match")-is a complex discrimination task involving many cognitive and perceptual processes. Despite the real-world consequences of this task, which is often conducted by forensic scientists, little is understood about the psychological processes underpinning this ability. There are substantial individual differences in visual comparison accuracy amongst both professionals and novices. The source of this variation is unknown, but may reflect a domain-general and naturally varying perceptual ability. Here, we investigate this by comparing individual differences (N = 248 across two studies) in four visual comparison domains: faces, fingerprints, firearms, and artificial prints. Accuracy on all comparison tasks was significantly correlated and accounted for a substantial portion of variance (e.g., 42% in Exp. 1) in performance across all tasks. Importantly, this relationship cannot be attributed to participants' intrinsic motivation or skill in other visual-perceptual tasks (visual search and visual statistical learning). This paper provides novel evidence of a reliable, domain-general visual comparison ability.
12
Carrigan AJ, Charlton A, Wiggins MW, Georgiou A, Palmeri T, Curby KM. Cue utilisation reduces the impact of response bias in histopathology. Appl Ergon 2022; 98:103590. PMID: 34598079. DOI: 10.1016/j.apergo.2021.103590.
Abstract
Histopathologists make diagnostic decisions that are thought to be based on pattern recognition, likely informed by cue-based associations formed in memory, a process known as cue utilisation. Typically, the cases presented to the histopathologist have already been classified as 'abnormal' by clinical examination and/or other diagnostic tests. This results in a high disease prevalence, the potential for 'abnormality priming', and a response bias leading to false positives on normal cases. This study investigated whether higher cue utilisation is associated with a reduction in positive response bias in the diagnostic decisions of histopathologists. Data were collected from eighty-two histopathologists who completed a series of demographic and experience-related questions and the histopathology edition of the Expert Intensive Skills Evaluation 2.0 (EXPERTise 2.0) to establish behavioural indicators of context-related cue utilisation. They also completed a separate diagnostic task comprising breast histopathology images, in which the frequency of abnormality was manipulated to create a high disease prevalence context for diagnostic decisions relating to normal tissue. Participants were assigned to higher or lower cue utilisation groups based on their performance on EXPERTise 2.0. When the effects of experience were controlled, higher cue utilisation was specifically associated with greater accuracy in classifying normal images, recording a lower positive response bias. This study suggests that cue utilisation may play a protective role against response biases in histopathology settings.
Affiliation(s)
- A J Carrigan
- Department of Psychology, Macquarie University, Sydney, Australia; Centre for Elite Performance, Expertise & Training, Macquarie University, Sydney, Australia
- A Charlton
- Department of Histopathology, Auckland City Hospital, and Department of Molecular Medicine and Pathology, University of Auckland, New Zealand
- M W Wiggins
- Department of Psychology, Macquarie University, Sydney, Australia; Centre for Elite Performance, Expertise & Training, Macquarie University, Sydney, Australia
- A Georgiou
- Centre for Health Systems and Safety Research, Macquarie University, Sydney, Australia
- T Palmeri
- Department of Psychology, Vanderbilt University, Nashville, United States
- K M Curby
- Department of Psychology, Macquarie University, Sydney, Australia; Centre for Elite Performance, Expertise & Training, Macquarie University, Sydney, Australia
13
Carrigan AJ, Stoodley P, Ng K, Moerel D, Wiggins MW. Static versus dynamic medical images: The role of cue utilization in diagnostic performance. Appl Cogn Psychol 2021. DOI: 10.1002/acp.3861.
Affiliation(s)
- Ann J. Carrigan
- Centre for Elite Performance, Expertise and Training, Macquarie University, Sydney, New South Wales, Australia
- Perception in Action Research Centre, Macquarie University, Sydney, New South Wales, Australia
- Department of Psychology, Macquarie University, Sydney, New South Wales, Australia
- Paul Stoodley
- School of Medicine, Western Sydney University, Sydney, New South Wales, Australia
- Westmead Private Cardiology, Westmead, New South Wales, Australia
- Kenny Ng
- Cardiology Department, Royal North Shore Hospital, Sydney, New South Wales, Australia
- Denise Moerel
- Perception in Action Research Centre, Macquarie University, Sydney, New South Wales, Australia
- Department of Cognitive Science, Macquarie University, Sydney, New South Wales, Australia
- Mark W. Wiggins
- Centre for Elite Performance, Expertise and Training, Macquarie University, Sydney, New South Wales, Australia
- Department of Psychology, Macquarie University, Sydney, New South Wales, Australia
14
Haptic object recognition based on shape relates to visual object recognition ability. Psychol Res 2021; 86:1262-1273. PMID: 34355269. PMCID: PMC8341045. DOI: 10.1007/s00426-021-01560-z.
Abstract
Visual object recognition depends in large part on a domain-general ability (Richler et al. Psychol Rev 126(2): 226–251, 2019). Given evidence pointing towards shared mechanisms for object perception across vision and touch, we ask whether individual differences in haptic and visual object recognition are related. We use existing validated visual tests to estimate visual object recognition ability and relate it to performance on two novel tests of haptic object recognition ability (n = 66). One test includes complex objects that participants chose to explore with a hand grasp. The other test uses a simpler stimulus set that participants chose to explore with just their fingertips. Only performance on the haptic test with complex stimuli correlated with visual object recognition ability, suggesting a shared source of variance across task structures, stimuli, and modalities. A follow-up study using a visual version of the haptic test with simple stimuli shows a correlation with the original visual tests, suggesting that the limited complexity of the stimuli did not limit correlation with visual object recognition ability. Instead, we propose that the manner of exploration may be a critical factor in whether a haptic test relates to visual object recognition ability. Our results suggest a perceptual ability that spans at least vision and touch; however, it may not be recruited during fingertip exploration alone.
15
Domain-specific and domain-general contributions to reading musical notation. Atten Percept Psychophys 2021; 83:2983-2994. PMID: 34341940. DOI: 10.3758/s13414-021-02349-3.
Abstract
Musical practice may benefit not only domain-specific abilities, such as pitch discrimination and music performance, but also domain-general abilities, like executive functioning and memory. Behavioral and neural changes in visual processing have been associated with music-reading experience. However, it is still unclear whether there is a domain-specific visual ability to process musical notation. This study investigates the specificity of the visual skills relevant to simple decisions about musical notation. Ninety-six participants varying in music-reading experience answered a short survey to quantify experience with musical notation and completed a test battery that assessed musical notation reading fluency and accuracy at the level of individual notes and note sequences. To characterize how this ability may relate to domain-general abilities, we also estimated general intelligence (as measured with Raven's Progressive Matrices) and general object-recognition ability (as measured by a recently proposed construct, o). We obtained reliable measurements on our various tasks and found evidence for a domain-specific ability for the perception of musical notation. This music-reading ability and domain-general abilities contributed differently to performance on specific tasks, depending on the level of experience reading music.
|
16
|
Carrigan AJ, Magnussen J, Georgiou A, Curby KM, Palmeri TJ, Wiggins MW. Differentiating Experience From Cue Utilization in Radiological Assessments. Hum Factors 2021; 63:635-646. [PMID: 32150500 DOI: 10.1177/0018720820902576]
Abstract
OBJECTIVE This research was designed to examine the contribution of self-reported experience and cue utilization to diagnostic accuracy in the context of radiology. BACKGROUND Within radiology, it is unclear how task-related experience contributes to the acquisition of associations between features and events in memory, or cues, and how these cues contribute to diagnostic performance. METHOD Data were collected from 18 trainees and 41 radiologists. The participants completed a radiology edition of the established cue utilization assessment tool EXPERTise 2.0, which provides a measure of cue utilization based on performance on a number of domain-specific tasks. The participants also completed a separate image interpretation task as an independent measure of diagnostic performance. RESULTS Consistent with previous research, a k-means cluster analysis using the data from EXPERTise 2.0 delineated two groups, the pattern of centroids of which reflected higher and lower cue utilization. Controlling for years of experience, participants with higher cue utilization were more accurate on the image interpretation task than participants who demonstrated relatively lower cue utilization (p = .01). CONCLUSION This study provides support for the role of cue utilization in assessments of radiology images among qualified radiologists. Importantly, it also demonstrates that cue utilization and self-reported years of experience as a radiologist make independent contributions to performance on the radiological diagnostic task. APPLICATION Task-related experience, including training, needs to be structured to ensure that learners have the opportunity to acquire feature-event relationships and internalize these associations in the form of cues in memory.
Affiliation(s)
- Kim M Curby
- Macquarie University, Sydney, Australia
|
17
|
Robson SG, Tangen JM, Searston RA. The effect of expertise, target usefulness and image structure on visual search. Cogn Res Princ Implic 2021; 6:16. [PMID: 33709197 PMCID: PMC7977019 DOI: 10.1186/s41235-021-00282-5]
Abstract
Experts outperform novices on many cognitive and perceptual tasks. Extensive training has tuned experts to the most relevant information in their specific domain, allowing them to make decisions quickly and accurately. We compared a group of fingerprint examiners to a group of novices on their ability to search for information in fingerprints across two experiments: one where participants searched for target features within a single fingerprint, and another where they searched for points of difference between two fingerprints. In both experiments, we also varied how useful the target feature was and whether participants searched for these targets in a typical fingerprint or one that had been scrambled. Experts more efficiently located targets when searching for them in intact but not scrambled fingerprints. In Experiment 1, we also found that experts more efficiently located target features classified as more useful compared to novices, but this expert-novice difference was not present when the target feature was classified as less useful. The usefulness of the target may therefore have influenced the search strategies that participants used, and the visual search advantages that experts display appear to depend on their vast experience with visual regularity in fingerprints. These results align with a domain-specific account of expertise and suggest that perceptual training ought to involve learning to attend to task-critical features.
Affiliation(s)
- Samuel G Robson
- School of Psychology, The University of Queensland, St Lucia, 4072, QLD, Australia
- Jason M Tangen
- School of Psychology, The University of Queensland, St Lucia, 4072, QLD, Australia
- Rachel A Searston
- School of Psychology, The University of Adelaide, Adelaide, 5005, SA, Australia
|
18
|
Carrigan AJ, Stoodley P, Fernandez F, Sunday MA, Wiggins MW. Individual differences in echocardiography: Visual object recognition ability predicts cue utilization. Appl Cogn Psychol 2020. [DOI: 10.1002/acp.3711]
Affiliation(s)
- Ann J. Carrigan
- Centre for Elite Performance, Expertise and Training, Macquarie University, Sydney, NSW, Australia
- Perception in Action Research Centre, Macquarie University, Sydney, NSW, Australia
- Department of Psychology, Macquarie University, Sydney, NSW, Australia
- Paul Stoodley
- School of Medicine, Western Sydney University, Sydney, NSW, Australia
- Westmead Private Cardiology, Westmead, NSW, Australia
- Mark W. Wiggins
- Centre for Elite Performance, Expertise and Training, Macquarie University, Sydney, NSW, Australia
- Department of Psychology, Macquarie University, Sydney, NSW, Australia
|
19
|
What can an echocardiographer see in briefly presented stimuli? Perceptual expertise in dynamic search. Cogn Res Princ Implic 2020; 5:30. [PMID: 32696181 PMCID: PMC7374494 DOI: 10.1186/s41235-020-00232-7]
Abstract
Background Experts in medical image perception are able to detect abnormalities rapidly from medical images. This ability is likely due to enhanced pattern recognition on a global scale. However, the bulk of research in this domain has focused on static rather than dynamic images, so it remains unclear what level of information can be extracted from dynamic displays. This study was designed to examine the visual capabilities of echocardiographers, practitioners who provide information regarding cardiac integrity and functionality. In three experiments, echocardiographers and naïve participants completed an abnormality detection task comprising movies presented at a range of durations, half of which were abnormal. This was followed by an abnormality categorization task. Results Across all durations, performance was high for detection but lower for categorization, indicating that categorization was the more challenging task. Not surprisingly, echocardiographers outperformed naïve participants. Conclusions Together, these results suggest that echocardiographers have a finely tuned capability for detecting cardiac dysfunction, and that a great deal of visual information can be extracted during a global assessment, within a brief glance. No relationship was evident between experience and performance, which suggests that other factors, such as individual differences, need to be considered in future studies.
|
20
|
Wild MG, Bachorowski JA. Lay Beliefs About Interaction Quality: An Expertise Perspective on Individual Differences in Interpersonal Emotion Ability. Front Psychol 2020; 11:277. [PMID: 32158414 PMCID: PMC7052128 DOI: 10.3389/fpsyg.2020.00277]
Abstract
Social interactions have long been a source of lay beliefs about the ways in which psychological constructs operate. Some of the most enduring psychological constructs to become common lay beliefs originated from research focused on social-emotional processes. "Emotional intelligence" and "social intelligence" are now mainstream notions, stemming from their appealing nature and depiction in popular media. However, empirical attempts at quantifying the quality of social interactions have not been nearly as successful as measures of individual differences such as social skills, theory of mind, or social/emotional intelligence. The subjective, lay ratings of the quality of interactions by naïve observers are nonetheless consistent both within and between observers. The goal of this paper is to describe recent empirical work surrounding lay beliefs about social interaction quality and ways in which those beliefs can be quantified. We will then argue that these lay impressions formed about the quality of an interaction, perhaps via affect induction, are consistent with an expertise framework. Affect induction, beginning in infancy and occurring over time, creates instances in memory that accumulate and are ultimately measurable as social-emotional expertise (SEE). The ways in which our lay beliefs about social interaction quality fit the definition of expertise, or the automatic, holistic processing of relevant stimuli, will be discussed. We will then describe the promise of future work in this area, with a focus on a) continued delineation of the thoughts, behaviors, and timing of behaviors that lead to high-quality social interactions; and b) the viability of expertise as the conceptual model for individual differences in social-emotional ability.
Affiliation(s)
- Marcus G. Wild
- Department of Psychology, Vanderbilt University, Nashville, TN, United States
- Jo-Anne Bachorowski
- Department of Psychology, Vanderbilt University, Nashville, TN, United States
|
21
|
Exploring the effect of context and expertise on attention: is attention shifted by information in medical images? Atten Percept Psychophys 2019; 81:1283-1296. [PMID: 30825115 PMCID: PMC6647457 DOI: 10.3758/s13414-019-01695-7]
Abstract
Radiologists make critical decisions based on searching and interpreting medical images. The probability of a lung nodule differs across anatomical regions within the chest, raising the possibility that radiologists might have a prior expectation that creates an attentional bias. The development of expertise is also thought to cause "tuning" to relevant features, allowing radiologists to become faster and more accurate at detecting potential masses within their domain of expertise. Here, we tested both radiologists and control participants with a novel attentional-cueing paradigm to investigate whether the deployment of attention was affected (1) by a context that might invoke prior knowledge for experts, (2) by a nodule localized on the same side as, or the opposite side from, a subsequent target, and (3) by inversion of the nodule-present chest radiographs, to assess the orientation specificity of any effects. The participants also performed a nodule detection task to verify that our presentation duration was sufficient to extract diagnostic information. We saw no evidence of priors triggered by a normal chest radiograph cue affecting attention. When the cue was an upright abnormal chest radiograph, radiologists were faster when the lateralised nodule and the subsequent target appeared at the same rather than at opposite locations, suggesting that attention was captured by the nodule. The opposite pattern was present for inverted images. We saw no evidence of cueing for control participants in any condition, which suggests that radiologists are indeed more sensitive to visual features that are not perceived as salient by naïve observers.
|