1. Vurgun U, Ji Y, Papafragou A. Aspectual Processing Shifts Visual Event Apprehension. Cogn Sci 2024;48:e13476. [PMID: 38923020] [DOI: 10.1111/cogs.13476]
Abstract
What is the relationship between language and event cognition? Past work has suggested that linguistic/aspectual distinctions encoding the internal temporal profile of events map onto nonlinguistic event representations. Here, we use a novel visual detection task to directly test the hypothesis that processing telic versus atelic sentences (e.g., "Ebony folded a napkin in 10 seconds" vs. "Ebony did some folding for 10 seconds") can influence whether the very same visual event is processed as containing distinct temporal stages including a well-defined endpoint or lacking such structure, respectively. In two experiments, we show that processing (a)telicity in language shifts how people later construe the temporal structure of identical visual stimuli. We conclude that event construals are malleable representations that can align with the linguistic framing of events.
Affiliation(s)
- Yue Ji
- School of Foreign Languages, Beijing Institute of Technology

2. Raykov PP, Varga D, Bird CM. False memories for ending of events. J Exp Psychol Gen 2023;152:3459-3475. [PMID: 37650821] [PMCID: PMC10694998] [DOI: 10.1037/xge0001462]
Abstract
Memories are not perfect recordings of the past and can be subject to systematic biases. Memory distortions are often caused by our experience of what typically happens in a given situation. However, it is unclear whether memory for events is biased by the knowledge that events usually have a predictable structure (a beginning, middle, and an end). Using video clips of everyday situations, we tested how interrupting events at unexpected time points affects memory of how those events ended. In four free recall experiments (1, 2, 4, and 5), we found that interrupting clips just before a salient piece of action was completed resulted in the false recall of details about how the clip might have ended. We refer to this as "event extension." On the other hand, interrupting clips just after one scene had ended and a new scene started resulted in omissions of details about the true ending of the clip (Experiments 4 and 5). We found that these effects were present, albeit attenuated, when testing memory shortly after watching the video clips compared to a week later (Experiments 5a and 5b). The event extension effect was not present when memory was tested with a recognition paradigm (Experiment 3). Overall, we conclude that when people watch videos that violate their expectations of typical event structure, they show a bias to later recall the videos as if they had ended at a predictable event boundary, exhibiting event extension or the omission of details depending on where the original video was interrupted.
Affiliation(s)
- Petar P Raykov
- Sussex Neuroscience, School of Psychology, University of Sussex
- Dominika Varga
- Sussex Neuroscience, School of Psychology, University of Sussex
- Chris M Bird
- Sussex Neuroscience, School of Psychology, University of Sussex

3. Kominsky JF, Baker L, Keil FC, Strickland B. Causality and continuity close the gaps in event representations. Mem Cognit 2021;49:518-531. [PMID: 33025571] [PMCID: PMC8021615] [DOI: 10.3758/s13421-020-01102-9]
Abstract
Imagine you see a video of someone pulling back their leg to kick a soccer ball, and then a soccer ball soaring toward a goal. You would likely infer that these scenes are two parts of the same event, and this inference would likely cause you to remember having seen the moment the person kicked the soccer ball, even if that information was never actually presented (Strickland & Keil, 2011, Cognition, 121[3], 409-415). What cues trigger people to "fill in" causal events from incomplete information? Is it due to the experience they have had with soccer balls being kicked toward goals? Is it the visual similarity of the object in both halves of the video? Or is it the mere spatiotemporal continuity of the event? In three experiments, we tested these different potential mechanisms underlying the "filling-in" effect. Experiment 1 showed that filling in occurs equally in familiar and unfamiliar contexts, indicating that familiarity with specific event schemas is unnecessary to trigger false memory. Experiment 2 showed that the visible continuation of a launched object's trajectory is all that is required to trigger filling in, regardless of other occurrences in the second half of the scene. Finally, Experiment 3, using naturalistic videos, found that this filling-in effect is disrupted more strongly when the object's trajectory is discontinuous in space/time than when the object undergoes a noticeable transformation. Together, these findings indicate that the spontaneous formation of causal event representations is driven by object representation systems that prioritize spatiotemporal information over other object features.
Affiliation(s)
- Jonathan F Kominsky
- Department of Psychology, Rutgers University, 101 Warren St. Rm. 301, Newark, NJ, 07102, USA.
- Brent Strickland
- Ecole Normale Superieure & Institut Jean Nicod, 29 rue d'Ulm, 75005, Paris, France.
- School of Collective Intelligence, UM6P, Ben Guerir, Morocco.

4. Hafri A, Firestone C. The Perception of Relations. Trends Cogn Sci 2021;25:475-492. [PMID: 33812770] [DOI: 10.1016/j.tics.2021.01.006]
Abstract
The world contains not only objects and features (red apples, glass bowls, wooden tables), but also relations holding between them (apples contained in bowls, bowls supported by tables). Representations of these relations are often developmentally precocious and linguistically privileged; but how does the mind extract them in the first place? Although relations themselves cast no light onto our eyes, a growing body of work suggests that even very sophisticated relations display key signatures of automatic visual processing. Across physical, eventive, and social domains, relations such as support, fit, cause, chase, and even socially interact are extracted rapidly, are impossible to ignore, and influence other perceptual processes. Sophisticated and structured relations are not only judged and understood, but also seen - revealing surprisingly rich content in visual perception itself.
Affiliation(s)
- Alon Hafri
- Department of Psychological and Brain Sciences, Johns Hopkins University, Baltimore, MD 21218, USA; Department of Cognitive Science, Johns Hopkins University, Baltimore, MD 21218, USA.
- Chaz Firestone
- Department of Psychological and Brain Sciences, Johns Hopkins University, Baltimore, MD 21218, USA; Department of Cognitive Science, Johns Hopkins University, Baltimore, MD 21218, USA; Department of Philosophy, Johns Hopkins University, Baltimore, MD 21218, USA.

5. Ji Y, Papafragou A. Is there an end in sight? Viewers' sensitivity to abstract event structure. Cognition 2020;197:104197. [DOI: 10.1016/j.cognition.2020.104197]

6. Cross-linguistic frequency and the learnability of semantics: Artificial language learning studies of evidentiality. Cognition 2020;197:104194. [PMID: 31986353] [DOI: 10.1016/j.cognition.2020.104194]
Abstract
It is often assumed that cross-linguistically more prevalent distinctions are easier to learn (Typological Prevalence Hypothesis; TPH). Prior work supports this idea in phonology, morphology and syntax but has not addressed semantics. Using Artificial Language Learning experiments with adults, we test predictions made by the TPH about the relative learnability of semantic distinctions in the domain of evidentiality, i.e., the linguistic encoding of information source. As the TPH predicted, when exposed to miniature evidential morphological systems, adult speakers of English whose language does not encode evidentiality grammatically learned the typologically most prevalent system (marking indirect, reportative information) better compared to less-attested systems (Experiments 1-2). Similar patterns were observed when non-linguistic symbols were used to encode evidential distinctions (Experiment 3). Our data support the conjecture that some semantic distinctions are marked preferentially and acquired more easily compared to others in both language and other symbolic systems.

7.
Abstract
Events make up much of our lived experience, and the perceptual mechanisms that represent events in experience have pervasive effects on action control, language use, and remembering. Event representations in both perception and memory have rich internal structure and connections one to another, and both are heavily informed by knowledge accumulated from previous experiences. Event perception and memory have been identified with specific computational and neural mechanisms, which show protracted development in childhood and are affected by language use, expertise, and brain disorders and injuries. Current theoretical approaches focus on the mechanisms by which events are segmented from ongoing experience, and emphasize the common coding of events for perception, action, and memory. Abetted by developments in eye-tracking, neuroimaging, and computer science, research on event perception and memory is moving from small-scale laboratory analogs to the complexity of events in the wild.
Affiliation(s)
- Jeffrey M Zacks
- Department of Psychological and Brain Sciences, Washington University in St. Louis, St. Louis, Missouri 63130, USA

8. Peng Y, Ichien N, Lu H. Causal actions enhance perception of continuous body movements. Cognition 2020;194:104060. [DOI: 10.1016/j.cognition.2019.104060]

9. Papenmeier F, Brockhoff A, Huff M. Filling the gap despite full attention: the role of fast backward inferences for event completion. Cogn Res Princ Implic 2019;4:3. [PMID: 30693396] [PMCID: PMC6352563] [DOI: 10.1186/s41235-018-0151-2]
Abstract
The comprehension of dynamic naturalistic events poses at least two challenges to the cognitive system: filtering relevant information with attention and dealing with information that was missing or missed. With four experiments, we studied the completion of missing information despite full attention. Participants watched short soccer video clips and we informed participants that we removed a critical moment of ball contact in half of the clips. We asked participants to detect whether these moments of ball contact were present or absent. In Experiment 1, participants gave their detection responses either directly during an event or delayed after an event. Although participants directed their full attention toward the critical contact moment, they were more likely to indicate seeing the missing ball contact if it was followed by a causally matching scene than if it was followed by an unrelated scene, both for the immediate and delayed responses. Thus, event completion occurs quickly. In Experiment 2, only a causally matching scene but neither a white mask nor an irrelevant scene caused the completion of missing information. This indicates that the completion of missing information is caused by backward inferences rather than predictive perception. In Experiment 3, we showed that event completion occurs directly during a trial and does not depend on expectations built up after seeing the same causality condition multiple times. In Experiment 4, we linked our findings to event cognition by asking participants to perform a natural segmentation task. We conclude that observers complete missing information during coherent events based on a fast backward inference mechanism even when directing their attention toward the missing information.
Affiliation(s)
- Frank Papenmeier
- Department of Psychology, University of Tübingen, Schleichstr. 4, 72076, Tübingen, Germany.
- Alisa Brockhoff
- Department of Psychology, University of Tübingen, Schleichstr. 4, 72076, Tübingen, Germany
- Markus Huff
- Department of Research Infrastructures, German Institute for Adult Education, Heinemannstraße 12-14, 53175, Bonn, Germany

10. Watching diagnoses develop: Eye movements reveal symptom processing during diagnostic reasoning. Psychon Bull Rev 2018;24:1398-1412. [PMID: 28444634] [DOI: 10.3758/s13423-017-1294-8]
Abstract
Finding a probable explanation for observed symptoms is a highly complex task that draws on information retrieval from memory. Recent research suggests that observed symptoms are interpreted in a way that maximizes coherence for a single likely explanation. This becomes particularly clear if symptom sequences support more than one explanation. However, there are no existing process data available that allow coherence maximization to be traced in ambiguous diagnostic situations, where critical information has to be retrieved from memory. In this experiment, we applied memory indexing, an eye-tracking method that affords rich time-course information concerning memory-based cognitive processing during higher order thinking, to reveal symptom processing and the preferred interpretation of symptom sequences. Participants first learned information about causes and symptoms presented in spatial frames. Gaze allocation to emptied spatial frames during symptom processing and during the diagnostic response reflected the subjective status of hypotheses held in memory and the preferred interpretation of ambiguous symptoms. Memory indexing traced how the diagnostic decision developed and revealed instances of hypothesis change and biases in symptom processing. Memory indexing thus provided direct online evidence for coherence maximization in processing ambiguous information.

11. Cohn N, Paczynski M, Kutas M. Not so secret agents: Event-related potentials to semantic roles in visual event comprehension. Brain Cogn 2017;119:1-9. [PMID: 28898720] [DOI: 10.1016/j.bandc.2017.09.001]
Abstract
Research across domains has suggested that agents, the doers of actions, have a processing advantage over patients, the receivers of actions. We hypothesized that agents as "event builders" for discrete actions (e.g., throwing a ball, punching) build on cues embedded in their preparatory postures (e.g., reaching back an arm to throw or punch) that lead to (predictable) culminating actions, and that these cues afford frontloading of event structure processing. To test this hypothesis, we compared event-related brain potentials (ERPs) to averbal comic panels depicting preparatory agents (e.g., reaching back an arm to punch) that cued specific actions with those to non-preparatory agents (e.g., arm to the side) and patients that did not cue any specific actions. We also compared subsequent completed action panels (e.g., agent punching patient) across conditions, where we expected an inverse pattern of ERPs indexing the differential costs of processing completed actions as a function of preparatory cues. Preparatory agents evoked a greater frontal positivity (600-900 ms) relative to non-preparatory agents and patients, while subsequent completed action panels following non-preparatory agents elicited a smaller frontal positivity (600-900 ms). These results suggest that preparatory (vs. non-preparatory) postures may differentially impact the processing of agents and subsequent actions in real time.
Affiliation(s)
- Neil Cohn
- Department of Cognitive Science, University of California, San Diego, La Jolla, CA, USA; Tilburg Center for Cognition and Communication, Tilburg University, Tilburg, The Netherlands.
- Martin Paczynski
- Wright State Research Institute, Wright State University, Dayton, OH, USA
- Marta Kutas
- Department of Cognitive Science, University of California, San Diego, La Jolla, CA, USA

12. Huff M, Papenmeier F. Event perception: From event boundaries to ongoing events. J Appl Res Mem Cogn 2017. [DOI: 10.1016/j.jarmac.2017.01.003]

13. Neural Representations of Observed Actions Generalize across Static and Dynamic Visual Input. J Neurosci 2017;37:3056-3071. [PMID: 28209734] [DOI: 10.1523/jneurosci.2496-16.2017]
Abstract
People interact with entities in the environment in distinct and categorizable ways (e.g., kicking is making contact with foot). We can recognize these action categories across variations in actors, objects, and settings; moreover, we can recognize them from both dynamic and static visual input. However, the neural systems that support action recognition across these perceptual differences are unclear. Here, we used multivoxel pattern analysis of fMRI data to identify brain regions that support visual action categorization in a format-independent way. Human participants were scanned while viewing eight categories of interactions (e.g., pulling) depicted in two visual formats: (1) visually controlled videos of two interacting actors and (2) visually varied photographs selected from the internet involving different actors, objects, and settings. Action category was decodable across visual formats in bilateral inferior parietal, bilateral occipitotemporal, left premotor, and left middle frontal cortex. In most of these regions, the representational similarity of action categories was consistent across subjects and visual formats, a property that can contribute to a common understanding of actions among individuals. These results suggest that the identified brain regions support action category codes that are important for action recognition and action understanding.
SIGNIFICANCE STATEMENT: Humans tend to interpret the observed actions of others in terms of categories that are invariant to incidental features: whether a girl pushes a boy or a button, and whether we see it in real time or in a single snapshot, it is still pushing. Here, we investigated the brain systems that facilitate the visual recognition of these action categories across such differences. Using fMRI, we identified several areas of parietal, occipitotemporal, and frontal cortex that exhibit action category codes that are similar across viewing of dynamic videos and still photographs. Our results provide strong evidence for the involvement of these brain regions in recognizing the way that people interact physically with objects and other people.

14. Mind the gap: Temporal discontinuities in observed activity streams influence perceived duration of actions. Psychon Bull Rev 2017;24:1627-1635. [PMID: 28194722] [DOI: 10.3758/s13423-017-1239-2]
Abstract
In everyday life, when observing activities taking place in our environment, we often shift our attention among several activities and therefore perceive each activity sequence piecemeal, with temporal gaps in between. Two studies examined whether the length of these gaps influences the processing of the observed activities. Experiment 1 presented film clips depicting activities that were interrupted by either short or long gaps and asked participants to estimate how long the target action presented at the end of the clip would normally take if it were to take place in reality. Using the same activities, Experiment 2 asked participants to judge the duration of the presentation of this target action; that is, how long the target action was presented. Results showed that following long gaps instead of short gaps, target actions are estimated to take longer in reality (Experiment 1), but the depictions themselves are estimated to be shorter (Experiment 2). Following long gaps, target actions seem to be processed pars pro toto as placeholders for longer segments in the stream of events, whereas the depictions themselves appear to be shorter. Results suggest that long gaps lengthen the perceived duration of an event in our cognitive representation and also seem to influence our perception of the duration of the presentation itself.

15. Brockhoff A, Huff M, Maurer A, Papenmeier F. Seeing the unseen? Illusory causal filling in FIFA referees, players, and novices. Cogn Res Princ Implic 2017;1:7. [PMID: 28180158] [PMCID: PMC5256435] [DOI: 10.1186/s41235-016-0008-5]
Abstract
Humans often falsely report having seen a causal link between two dynamic scenes if the second scene depicts a valid logical consequence of the initial scene. As an example, a video clip shows someone kicking a ball, including the ball flying. Even if the video clip omitted the moment of contact (i.e., the causal link), participants falsely report having seen this moment. In the current study, we explored the interplay of cognitive-perceptual expertise and event perception by measuring the false-alarm rates of three groups with differing interests in football (soccer in North America): novices, players, and FIFA referees. We used the event-completion paradigm with video footage of a real football match, presenting either complete clips or incomplete clips (i.e., with the contact moment omitted). Either a causally linked scene or an incoherent scene followed a cut in the incomplete videos. Causally linked scenes induced false recognitions in all three groups: although the ball contact moment was not presented, participants indicated that they had seen the contact as frequently when it was absent as in the complete condition. In a second experiment, we asked the novices to detect the ball contact moment when it was either visible or not and when it was either followed by a causally or non-causally linked scene. Here, instead of presenting pictures of the clip, the participants were given a two-alternative forced-choice task: "Yes, contact was visible", or "No, contact was not visible". The results of Experiment 1 indicate that conceptual interpretations of simple events are independent of expertise: there were no top-down effects on perception. Participants in Experiment 2 detected the ball contact moment correctly significantly more often in the non-causal than in the causal conditions, indicating that the effect observed in Experiment 1 was not due to a possibly influential design (e.g., inducing a false memory for the presented pictures). The theoretical as well as the practical implications are discussed.
Affiliation(s)
- Alisa Brockhoff
- Department of Psychology, Eberhard Karls Universität Tübingen, Schleichstr. 4, Tübingen, 72076 Germany
- Markus Huff
- Department of Psychology, Eberhard Karls Universität Tübingen, Schleichstr. 4, Tübingen, 72076 Germany
- Annika Maurer
- Department of Psychology, Eberhard Karls Universität Tübingen, Schleichstr. 4, Tübingen, 72076 Germany
- Frank Papenmeier
- Department of Psychology, Eberhard Karls Universität Tübingen, Schleichstr. 4, Tübingen, 72076 Germany

17. Ildirar S, Schwan S. First-time viewers' comprehension of films: bridging shot transitions. Br J Psychol 2014;106:133-51. [PMID: 24654735] [DOI: 10.1111/bjop.12069]
Abstract
Which perceptual and cognitive prerequisites must be met in order to comprehend a film is still an unresolved and controversial issue. To gain some insight into this issue, our field experiment investigates how first-time adult viewers extract and integrate meaningful information across film cuts. Three major types of commonalities between adjacent shots were differentiated, which may help first-time viewers bridge the shots: pictorial, causal, and conceptual. Twenty first-time, 20 low-experienced and 20 high-experienced viewers from Turkey were shown a set of short film clips containing these three kinds of commonalities. The film clips also conformed to the principles of continuity editing. Analyses of viewers' spontaneous interpretations show that first-time viewers are indeed able to notice basic pictorial (object identity), causal (chains of activity), as well as conceptual (links between gaze direction and object attention) commonalities between shots, owing to their close relationship with everyday perception and cognition. However, first-time viewers' comprehension of the commonalities is to a large degree fragile, indicating the lack of a basic notion of what constitutes a film.

18. Grotzer TA, Kamarainen AM, Tutwiler MS, Metcalf S, Dede C. Learning to Reason about Ecosystems Dynamics over Time: The Challenges of an Event-Based Causal Focus. Bioscience 2013. [DOI: 10.1525/bio.2013.63.4.9]