1
Mazumdar B, De la Mora N, Roberts T, Swiderski A, Kapantzoglou M, Fergadiotis G. Response Latencies During Confrontation Picture Naming in Aphasia: Are Proxy Measurements Sufficient? J Speech Lang Hear Res 2024; 67:1548-1557. [PMID: 38557214] [PMCID: PMC11087083] [DOI: 10.1044/2024_jslhr-23-00452] [Received: 07/31/2023] [Revised: 12/13/2023] [Accepted: 01/31/2024]
Abstract
PURPOSE Anomia, or word-finding difficulty, is a prevalent and persistent feature of aphasia, a neurogenic language disorder affecting millions of people in the United States. Anomia assessments are essential for measuring performance and monitoring outcomes in clinical settings. This study aims to evaluate the reliability of response time (RT) annotation based on spectrograms and assess the predictive utility of proxy RTs collected during computerized naming tests. METHOD Archival data from 10 people with aphasia were used. Trained research assistants phonemically transcribed participants' responses, and RTs were generated from the onset of the picture stimulus to the initial phoneme of the first complete attempt. RTs were measured in two ways: hand-generated RTs (from spectrograms) and proxy RTs (automatically extracted online). Interrater agreement was evaluated based on intraclass correlation coefficients and generalizability theory tools including variance partitioning and the φ-coefficient. The predictive utility of proxy RTs was evaluated within a linear mixed-effects framework. RESULTS RT annotation reliability showed near-perfect agreement across research assistants (φ-coefficient = .93), and the variance accounted for by raters was negligible. Furthermore, proxy RTs significantly and strongly predicted hand-annotated RTs (R2 ≈ 0.82), suggesting their utility as an alternative measure. CONCLUSIONS The study confirms the reliability of RT annotation and demonstrates the predictive utility of proxy RTs in estimating RTs during computerized naming tests. Incorporating proxy RTs can enhance clinical assessments, providing additional information for cognitive measurement. Further research with larger samples and exploring the impact of using proxy RTs in different psychometric models could optimize clinical protocols and improve communication interventions for individuals with aphasia.
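The R2 reported above for proxy RTs predicting hand-annotated RTs can be illustrated with a minimal sketch. This is not the authors' analysis (they worked within a linear mixed-effects framework); it is a plain least-squares fit over invented millisecond values, showing only how such an R2 for proxy-versus-hand RTs is computed.

```python
# Illustrative sketch only: ordinary least squares predicting hand-annotated
# RTs from proxy RTs, with R^2 as the proportion of variance explained.
# All RT values below are invented for illustration.
from statistics import mean

def fit_r2(proxy, hand):
    """Fit hand ~ proxy by least squares; return (slope, intercept, R^2)."""
    mx, my = mean(proxy), mean(hand)
    sxx = sum((x - mx) ** 2 for x in proxy)
    sxy = sum((x - mx) * (y - my) for x, y in zip(proxy, hand))
    slope = sxy / sxx
    intercept = my - slope * mx
    ss_res = sum((y - (slope * x + intercept)) ** 2
                 for x, y in zip(proxy, hand))
    ss_tot = sum((y - my) ** 2 for y in hand)
    return slope, intercept, 1 - ss_res / ss_tot

# Hypothetical RTs in milliseconds (proxy = automatic, hand = spectrogram)
proxy = [1200, 1850, 2400, 3100, 4000]
hand = [1150, 1900, 2300, 3200, 3900]
slope, intercept, r2 = fit_r2(proxy, hand)
```

With real data, per-participant random effects would be added; that is what distinguishes the mixed-effects model used in the study from this pooled fit.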
Affiliation(s)
- Barnali Mazumdar
- Department of Communication Sciences & Disorders, Louisiana State University, Baton Rouge
- Nora De la Mora
- Department of Speech & Hearing Sciences, Portland State University, OR
- Teresa Roberts
- Department of Speech & Hearing Sciences, Portland State University, OR
- Alexander Swiderski
- Department of Communication Science and Disorders, University of Pittsburgh, PA
2
Morkovina O, Manukyan P, Sharapkova A. Picture naming test through the prism of cognitive neuroscience and linguistics: adapting the test for cerebellar tumor survivors - or pouring new wine in old sacks? Front Psychol 2024; 15:1332391. [PMID: 38566942] [PMCID: PMC10985186] [DOI: 10.3389/fpsyg.2024.1332391] [Received: 11/02/2023] [Accepted: 02/20/2024]
Abstract
A picture naming test (PNT) has long been regarded as an integral part of neuropsychological assessment. In current research and clinical practice, it serves a variety of purposes. PNTs are used to assess the severity of speech impairment in aphasia, monitor possible cognitive decline in aging patients with or without age-related neurodegenerative disorders, track language development in children and map eloquent brain areas to be spared during surgery. In research settings, picture naming tests provide an insight into the process of lexical retrieval in monolingual and bilingual speakers. However, while numerous advances have occurred in linguistics and neuroscience since the classic, most widespread PNTs were developed, few of them have found their way into test design. Consequently, despite the popularity of PNTs in clinical and research practice, their relevance and objectivity remain questionable. The present study provides an overview of literature where relevant criticisms and concerns have been expressed over the recent decades. It aims to determine whether there is a significant gap between conventional test design and the current understanding of the mechanisms underlying lexical retrieval by focusing on the parameters that have been experimentally proven to influence picture naming. We discuss here the implications of these findings for improving and facilitating test design within the picture naming paradigm. Subsequently, we highlight the importance of designing specialized tests with a particular target group in mind, so that test variables could be selected for cerebellar tumor survivors.
Affiliation(s)
- Olga Morkovina
- Laboratory of Diagnostics and Advancing Cognitive Functions, Research Institute for Brain Development and Peak Performance, RUDN University, Moscow, Russia
- Department of English, Faculty of Computational Mathematics and Cybernetics, Lomonosov Moscow State University, Moscow, Russia
- Piruza Manukyan
- Laboratory of Diagnostics and Advancing Cognitive Functions, Research Institute for Brain Development and Peak Performance, RUDN University, Moscow, Russia
- Anastasia Sharapkova
- Laboratory of Diagnostics and Advancing Cognitive Functions, Research Institute for Brain Development and Peak Performance, RUDN University, Moscow, Russia
- Department of English Linguistics, Faculty of Philology, Lomonosov Moscow State University, Moscow, Russia
3
Fergadiotis G, Casilio M, Dickey MW, Steel S, Nicholson H, Fleegle M, Swiderski A, Hula WD. Item Response Theory Modeling of the Verb Naming Test. J Speech Lang Hear Res 2023; 66:1718-1739. [PMID: 37000934] [PMCID: PMC10457085] [DOI: 10.1044/2023_jslhr-22-00458] [Received: 08/05/2022] [Revised: 12/19/2022] [Accepted: 01/23/2023]
Abstract
PURPOSE Item response theory (IRT) is a modern psychometric framework with several advantageous properties as compared with classical test theory. IRT has been successfully used to model performance on anomia tests in individuals with aphasia; however, all efforts to date have focused on noun production accuracy. The purpose of this study is to evaluate whether the Verb Naming Test (VNT), a prominent test of action naming, can be successfully modeled under IRT and evaluate its reliability. METHOD We used responses on the VNT from 107 individuals with chronic aphasia from AphasiaBank. Unidimensionality and local independence, two assumptions prerequisite to IRT modeling, were evaluated using factor analysis and Yen's Q3 statistic (Yen, 1984), respectively. The assumption of equal discrimination among test items was evaluated statistically via nested model comparisons and practically by using correlations of resulting IRT-derived scores. Finally, internal consistency, marginal and empirical reliability, and conditional reliability were evaluated. RESULTS The VNT was found to be sufficiently unidimensional with the majority of item pairs demonstrating adequate local independence. An IRT model in which item discriminations are constrained to be equal demonstrated fit equivalent to a model in which unique discrimination parameters were estimated for each item. All forms of reliability were strong across the majority of IRT ability estimates. CONCLUSIONS Modeling the VNT using IRT is feasible, yielding ability estimates that are both informative and reliable. Future efforts are needed to quantify the validity of the VNT under IRT and determine the extent to which it measures the same construct as other anomia tests. SUPPLEMENTAL MATERIAL https://doi.org/10.23641/asha.22329235.
Affiliation(s)
- Marianne Casilio
- Department of Hearing and Speech Sciences, Vanderbilt University Medical Center, Nashville, TN
- Michael Walsh Dickey
- Department of Communication Science and Disorders, University of Pittsburgh, PA
- VA Pittsburgh Healthcare System, PA
- Stacey Steel
- Department of Speech & Hearing Sciences, Portland State University, OR
- Hannele Nicholson
- U.S. Department of Veterans Affairs, VA Minneapolis Healthcare System, MN
- Mikala Fleegle
- Department of Speech & Hearing Sciences, Portland State University, OR
- Alexander Swiderski
- Department of Communication Science and Disorders, University of Pittsburgh, PA
- VA Pittsburgh Healthcare System, PA
- William D Hula
- Department of Communication Science and Disorders, University of Pittsburgh, PA
- VA Pittsburgh Healthcare System, PA
4
Swiderski AM, Hula WD, Fergadiotis G. Accuracy of Naming Error Profiles Elicited From Adaptive Short Forms of the Philadelphia Naming Test. J Speech Lang Hear Res 2023; 66:1351-1364. [PMID: 37014997] [PMCID: PMC10187961] [DOI: 10.1044/2023_jslhr-22-00439] [Received: 07/26/2022] [Revised: 12/05/2022] [Accepted: 01/03/2023]
Abstract
PURPOSE The purpose of this study was to evaluate whether a short-form computerized adaptive testing (CAT) version of the Philadelphia Naming Test (PNT) provides error profiles and model-based estimates of semantic and phonological processing that agree with the full test. METHOD Twenty-four persons with aphasia took the PNT-CAT and the full version of the PNT (hereinafter referred to as the "full PNT") at least 2 weeks apart. The PNT-CAT proceeded in two stages: (a) the PNT-CAT30, in which 30 items were selected to match the evolving ability estimate with the goal of producing a 50% error rate, and (b) the PNT-CAT60, in which an additional 30 items were selected to produce a 75% error rate. Agreement was evaluated in terms of the root-mean-square deviation of the response-type proportions and, for individual response types, in terms of agreement coefficients and bias. We also evaluated agreement and bias for estimates of semantic and phonological processing derived from the semantic-phonological interactive two-step model (SP model) of word production. RESULTS The results suggested that agreement was poorest for semantic, formal, mixed, and unrelated errors, all of which were underestimated by the short forms. Better agreement was observed for correct and nonword responses. SP model weights estimated by the short forms demonstrated no substantial bias but generally inadequate agreement with the full PNT, which itself showed acceptable test-retest reliability for SP model weights and all response types except for formal errors. DISCUSSION Results suggest that the PNT-CAT30 and the PNT-CAT60 are generally inadequate for generating naming error profiles or model-derived estimates of semantic and phonological processing ability. Post hoc analyses suggested that increasing the number of stimuli available in the CAT item bank may improve the utility of adaptive short forms for generating error profiles, but the underlying theory also suggests that there are limitations to this approach based on a unidimensional measurement model. SUPPLEMENTAL MATERIAL https://doi.org/10.23641/asha.22320814.
Affiliation(s)
- Alexander M. Swiderski
- Center for the Neural Basis of Cognition, Carnegie Mellon University–University of Pittsburgh, PA
- VA Pittsburgh Healthcare System, PA
- Department of Communication Science and Disorders, School of Health and Rehabilitation Sciences, University of Pittsburgh, PA
- William D. Hula
- VA Pittsburgh Healthcare System, PA
- Department of Communication Science and Disorders, School of Health and Rehabilitation Sciences, University of Pittsburgh, PA
5
Casilio M, Fergadiotis G, Salem AC, Gale RC, McKinney-Bock K, Bedrick S. ParAlg: A Paraphasia Algorithm for Multinomial Classification of Picture Naming Errors. J Speech Lang Hear Res 2023; 66:966-986. [PMID: 36791263] [PMCID: PMC10461785] [DOI: 10.1044/2022_jslhr-22-00255] [Received: 05/06/2022] [Revised: 10/05/2022] [Accepted: 11/21/2022]
Abstract
PURPOSE A preliminary version of a paraphasia classification algorithm (henceforth called ParAlg) has previously been shown to be a viable method for coding picture naming errors. The purpose of this study is to present an updated version of ParAlg, which uses multinomial classification, and comprehensively evaluate its performance when using two different forms of transcribed input. METHOD A subset of 11,999 archival responses produced on the Philadelphia Naming Test were classified into six cardinal paraphasia types using ParAlg under two transcription configurations: (a) using phonemic transcriptions for responses exclusively (phonemic-only) and (b) using phonemic transcriptions for nonlexical responses and orthographic transcriptions for lexical responses (orthographic-lexical). Agreement was quantified by comparing ParAlg-generated paraphasia codes between configurations and relative to human-annotated codes using four metrics (positive predictive value, sensitivity, specificity, and F1 score). An item-level qualitative analysis of misclassifications under the best performing configuration was also completed to identify the source and nature of coding discrepancies. RESULTS Agreement between ParAlg-generated and human-annotated codes was high, although the orthographic-lexical configuration outperformed phonemic-only (weighted-average F1 scores of .78 and .87, respectively). A qualitative analysis of the orthographic-lexical configuration revealed a mix of human- and ParAlg-related misclassifications, the former of which were related primarily to phonological similarity judgments whereas the latter were due to semantic similarity assignment. CONCLUSIONS ParAlg is an accurate and efficient alternative to manual scoring of paraphasias, particularly when lexical responses are orthographically transcribed. With further development, it has the potential to be a useful software application for anomia assessment. SUPPLEMENTAL MATERIAL https://doi.org/10.23641/asha.22087763.
6
Fergadiotis G, Casilio M, Hula WD, Swiderski A. Computer Adaptive Testing for the Assessment of Anomia Severity. Semin Speech Lang 2021; 42:180-191. [PMID: 34261162] [DOI: 10.1055/s-0041-1727252]
Abstract
Anomia assessment is a fundamental component of clinical practice and research inquiries involving individuals with aphasia, and confrontation naming tasks are among the most commonly used tools for quantifying anomia severity. While currently available confrontation naming tests possess many ideal properties, they are ultimately limited by the overarching psychometric framework they were developed within. Here, we discuss the challenges inherent to confrontation naming tests and present a modern alternative to test development called item response theory (IRT). Key concepts of IRT approaches are reviewed in relation to their relevance to aphasiology, highlighting the ability of IRT to create flexible and efficient tests that yield precise measurements of anomia severity. Empirical evidence from our research group on the application of IRT methods to a commonly used confrontation naming test is discussed, along with future avenues for test development.
Affiliation(s)
- Marianne Casilio
- Hearing and Speech Sciences, Vanderbilt University, Nashville, Tennessee
- William D Hula
- VA Pittsburgh Healthcare System, University of Pittsburgh, Pittsburgh, Pennsylvania
- Alexander Swiderski
- Department of Communication Sciences and Disorders, University of Pittsburgh, Pittsburgh, Pennsylvania
7
Hula WD, Doyle PJ. The Aphasia Communication Outcome Measure: Motivation, Development, Validity Evidence, and Interpretation of Change Scores. Semin Speech Lang 2021; 42:211-224. [PMID: 34261164] [DOI: 10.1055/s-0041-1730906]
Abstract
The Aphasia Communication Outcome Measure (ACOM) is a patient-reported measure of communicative functioning developed for persons with stroke-induced aphasia. It was motivated by the desire to include the perspective of persons with aphasia in the measurement of treatment outcomes and to apply newly accessible psychometric tools to improve the quality and usefulness of available outcome measures for aphasia. The ACOM was developed within an item response theory framework, and the validity of the score estimates it provides is supported by evidence based on its content, internal structure, relationships with other variables, stability over time, and responsiveness to treatment. This article summarizes the background and motivation for the ACOM, the steps in its initial development, evidence supporting its validity as a measure of patient-reported communication functioning, and current recommendations for interpreting change scores.
Affiliation(s)
- William D Hula
- Geriatric Research Education and Clinical Center, VA Pittsburgh Healthcare System, Pittsburgh, Pennsylvania
- Department of Communication Science and Disorders, University of Pittsburgh, Pittsburgh, Pennsylvania
- Patrick J Doyle
- Geriatric Research Education and Clinical Center, VA Pittsburgh Healthcare System, Pittsburgh, Pennsylvania
8
Dresang HC, Warren T, Hula WD, Dickey MW. Rational Adaptation in Using Conceptual Versus Lexical Information in Adults With Aphasia. Front Psychol 2021; 12:589930. [PMID: 33584469] [PMCID: PMC7876333] [DOI: 10.3389/fpsyg.2021.589930] [Received: 07/31/2020] [Accepted: 01/05/2021]
Abstract
The information theoretic principle of rational adaptation predicts that individuals with aphasia adapt to their language impairments by relying more heavily on comparatively unimpaired non-linguistic knowledge to communicate. This prediction was examined by assessing the extent to which adults with chronic aphasia due to left-hemisphere stroke rely more on conceptual rather than lexical information during verb retrieval, as compared to age-matched neurotypical controls. A primed verb naming task examined the degree of facilitation each participant group received from either conceptual event-related or lexical collocate cues, compared to unrelated baseline cues. The results provide evidence that adults with aphasia received amplified facilitation from conceptual cues compared to controls, whereas healthy controls received greater facilitation from lexical cues. This indicates that adaptation to alternative and relatively unimpaired information may facilitate successful word retrieval in aphasia. Implications for models of rational adaptation and clinical neurorehabilitation are discussed.
Affiliation(s)
- Haley C. Dresang
- Department of Communication Science and Disorders, University of Pittsburgh, Pittsburgh, PA, United States
- Center for the Neural Basis of Cognition, Carnegie Mellon University, Pittsburgh, PA, United States
- VA Pittsburgh Healthcare System, Pittsburgh, PA, United States
- Tessa Warren
- Department of Psychology, University of Pittsburgh, Pittsburgh, PA, United States
- Learning Research and Development Center, University of Pittsburgh, Pittsburgh, PA, United States
- William D. Hula
- Department of Communication Science and Disorders, University of Pittsburgh, Pittsburgh, PA, United States
- VA Pittsburgh Healthcare System, Pittsburgh, PA, United States
- Michael Walsh Dickey
- Department of Communication Science and Disorders, University of Pittsburgh, Pittsburgh, PA, United States
- Center for the Neural Basis of Cognition, Carnegie Mellon University, Pittsburgh, PA, United States
- VA Pittsburgh Healthcare System, Pittsburgh, PA, United States
9
Evans WS, Hula WD, Quique Y, Starns JJ. How Much Time Do People With Aphasia Need to Respond During Picture Naming? Estimating Optimal Response Time Cutoffs Using a Multinomial Ex-Gaussian Approach. J Speech Lang Hear Res 2020; 63:599-614. [PMID: 32073336] [DOI: 10.1044/2019_jslhr-19-00255]
Abstract
Purpose Aphasia is a language disorder caused by acquired brain injury, which generally involves difficulty naming objects. Naming ability is assessed by measuring picture naming, and models of naming performance have mostly focused on accuracy and excluded valuable response time (RT) information. Previous approaches have therefore ignored the issue of processing efficiency, defined here in terms of optimal RT cutoff, that is, the shortest deadline at which individual people with aphasia produce their best possible naming accuracy performance. The goals of this study were therefore to (a) develop a novel model of aphasia picture naming that could accurately account for RT distributions across response types; (b) use this model to estimate the optimal RT cutoff for individual people with aphasia; and (c) explore the relationships between optimal RT cutoff, accuracy, naming ability, and aphasia severity. Method A total of 4,021 naming trials across 10 people with aphasia were scored for accuracy and RT onset. Data were fit using a novel ex-Gaussian multinomial RT model, which was then used to characterize individual optimal RT cutoffs. Results Overall, the model fitted the empirical data well and provided reliable individual estimates of optimal RT cutoff in picture naming. Optimal cutoffs ranged between approximately 5 and 10 s, which has important implications for assessment and treatment. There was no direct relationship between aphasia severity, naming RT, and optimal RT cutoff. Conclusion The multinomial ex-Gaussian modeling approach appears to be a promising and straightforward way to estimate optimal RT cutoffs in picture naming in aphasia. Limitations and future directions are discussed.
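The distributional idea behind the abstract above can be sketched briefly: an ex-Gaussian random variable is the sum of a normal and an exponential component, a common description of right-skewed RT distributions, and a response deadline captures whatever probability mass falls at or below it. This is not the authors' multinomial model; the parameter values and the simulation are invented purely for illustration.

```python
# Illustrative sketch only: simulate ex-Gaussian "correct response" RTs and
# ask what fraction of eventual correct responses a given deadline captures.
# Parameters (mu, sigma, tau, in seconds) are invented for illustration.
import random

def ex_gaussian(mu, sigma, tau, rng):
    """One ex-Gaussian draw: Normal(mu, sigma) + Exponential(mean tau)."""
    return rng.gauss(mu, sigma) + rng.expovariate(1.0 / tau)

def captured_by_deadline(deadline_s, n=20000, mu=1.5, sigma=0.4, tau=1.2,
                         seed=1):
    """Proportion of simulated RTs (seconds) at or below the deadline."""
    rng = random.Random(seed)
    rts = [ex_gaussian(mu, sigma, tau, rng) for _ in range(n)]
    return sum(rt <= deadline_s for rt in rts) / n

p5 = captured_by_deadline(5.0)    # mass captured by a 5-second deadline
p10 = captured_by_deadline(10.0)  # mass captured by a 10-second deadline
```

With parameters like these, a 10-second deadline captures nearly all of the distribution while a 5-second deadline leaves a visible tail, which is the sense in which an "optimal cutoff" between roughly 5 and 10 seconds can be read off a fitted distribution.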
Affiliation(s)
- William S Evans
- Geriatric Research Education and Clinical Center, VA Healthcare System, Pittsburgh, PA
- Department of Communication Sciences and Disorders, University of Pittsburgh, PA
- William D Hula
- Geriatric Research Education and Clinical Center, VA Healthcare System, Pittsburgh, PA
- Department of Communication Sciences and Disorders, University of Pittsburgh, PA
- Yina Quique
- Geriatric Research Education and Clinical Center, VA Healthcare System, Pittsburgh, PA
- Department of Communication Sciences and Disorders, University of Pittsburgh, PA
10
Hula WD, Fergadiotis G, Swiderski AM, Silkes JP, Kellough S. Empirical Evaluation of Computer-Adaptive Alternate Short Forms for the Assessment of Anomia Severity. J Speech Lang Hear Res 2020; 63:163-172. [PMID: 31851861] [PMCID: PMC7213484] [DOI: 10.1044/2019_jslhr-l-19-0213] [Received: 05/10/2019] [Revised: 08/12/2019] [Accepted: 08/22/2019]
Abstract
Purpose The purpose of this study was to verify the equivalence of 2 alternate test forms with nonoverlapping content generated by an item response theory (IRT)-based computer-adaptive test (CAT). The Philadelphia Naming Test (PNT; Roach, Schwartz, Martin, Grewal, & Brecher, 1996) was utilized as an item bank in a prospective, independent sample of persons with aphasia. Method Two alternate CAT short forms of the PNT were administered to a sample of 25 persons with aphasia who were at least 6 months postonset and received no treatment for 2 weeks before or during the study. The 1st session included administration of a 30-item PNT-CAT, and the 2nd session, conducted approximately 2 weeks later, included a variable-length PNT-CAT that excluded items administered in the 1st session and terminated when the modeled precision of the ability estimate was equal to or greater than the value obtained in the 1st session. The ability estimates were analyzed in a Bayesian framework. Results The 2 test versions correlated highly (r = .89) and obtained means and standard deviations that were not credibly different from one another. The correlation and error variance between the 2 test versions were well predicted by the IRT measurement model. Discussion The results suggest that IRT-based CAT alternate forms may be productively used in the assessment of anomia. IRT methods offer advantages for the efficient and sensitive measurement of change over time. Future work should consider the potential impact of differential item functioning due to person factors and intervention-specific effects, as well as expanding the item bank to maximize the clinical utility of the test. Supplemental Material https://doi.org/10.23641/asha.11368040.
Affiliation(s)
- William D. Hula
- Geriatric Research Education and Clinical Center, VA Pittsburgh Healthcare System, PA
- Department of Communication Science and Disorders, University of Pittsburgh, PA
- Alexander M. Swiderski
- Department of Communication Science and Disorders, University of Pittsburgh, PA
- Research and Development Service, VA Pittsburgh Healthcare System, PA
- JoAnn P. Silkes
- Department of Speech and Hearing Sciences, University of Washington, Seattle
- Stacey Kellough
- Research and Development Service, VA Pittsburgh Healthcare System, PA