1
Abstract
This study examined speech intelligibility and preferences for omnidirectional and directional microphone hearing aid processing across a range of signal-to-noise ratios (SNRs). A primary motivation for the study was to determine whether SNR might be used to represent distance between talker and listener in automatic directionality algorithms based on scene analysis. Participants were current hearing aid users who either had experience with omnidirectional microphone hearing aids only or with manually switchable omnidirectional/directional hearing aids. Using IEEE/Harvard sentences from a front loudspeaker and speech-shaped noise from three loudspeakers located behind and to the sides of the listener, the directional advantage (DA) was obtained at 11 SNRs ranging from -15 dB to +15 dB in 3 dB steps. Preferences for the two microphone modes at each of the 11 SNRs were also obtained using concatenated IEEE sentences presented in the speech-shaped noise. Results revealed that a DA was observed across a broad range of SNRs, although directional processing provided the greatest benefit within a narrower range of SNRs. Mean data suggested that microphone preferences were determined largely by the DA, such that the greater the benefit to speech intelligibility provided by the directional microphones, the more likely the listeners were to prefer that processing mode. However, inspection of the individual data revealed that highly predictive relationships did not exist for most individual participants. Few preferences for omnidirectional processing were observed. Overall, the results did not support the use of SNR to estimate the effects of distance between talker and listener in automatic directionality algorithms.
2
Abstract
The bacteriophage population is vast, dynamic, old, and genetically diverse. The genomics of phages that infect bacterial hosts in the phylum Actinobacteria show them to not only be diverse but also pervasively mosaic, and replete with genes of unknown function. To further explore this broad group of bacteriophages, we describe here the isolation and genomic characterization of 116 phages that infect Microbacterium spp. Most of the phages are lytic, and can be grouped into twelve clusters according to their overall relatedness; seven of the phages are singletons with no close relatives. Genome sizes vary from 17.3 kbp to 97.7 kbp, and their G+C% content ranges from 51.4% to 71.4%, compared to ~67% for their Microbacterium hosts. The phages were isolated on five different Microbacterium species, but typically do not efficiently infect strains beyond the one on which they were isolated. These Microbacterium phages contain many novel features, including very large viral genes (13.5 kbp) and unusual fusions of structural proteins, including a fusion of VIP2 toxin and a MuF-like protein into a single gene. These phages and their genetic components such as integration systems, recombineering tools, and phage-mediated delivery systems, will be useful resources for advancing Microbacterium genetics.
3
Differential susceptibility of bone marrow-derived dendritic cells and macrophages to productive infection with Listeria monocytogenes. Cell Microbiol 2007; 9:1397-411. [PMID: 17250592 DOI: 10.1111/j.1462-5822.2006.00880.x]
Abstract
Dendritic cells (DC) are required for the immune response against Listeria monocytogenes and are permissive for infection in vivo and in vitro. However, it is unclear if DC provide a desirable intracellular niche for bacterial growth. To address this issue, we have compared the behaviour of L. monocytogenes in murine bone marrow-derived DC and macrophages (BMM). Similar to BMM, bacteria escaped to the cytosol in DC, replicated, and spread to adjacent cells. However, DC infection was less robust in terms of intracellular doubling time and total increase in bacterial numbers. Immunofluorescence analysis using a strain of L. monocytogenes that expresses green fluorescent protein upon bacterial entry into the cytosol suggested that a subpopulation of DC restricted bacteria to vacuoles, a finding that was confirmed by electron microscopy. In unstimulated DC cultures, L. monocytogenes replicated preferentially in phenotypically immature cells. Furthermore, DC that were induced to mature prior to infection were poor hosts for bacterial growth. We conclude that DC provide a suboptimal niche for L. monocytogenes growth, and this is at least in part a function of the DC maturation state. Therefore, the generation of an effective T cell response may be a net effect of both productive and non-productive infection of DC.
4
Abstract
OBJECTIVE This study sought to describe the consonant information provided by amplification and by speechreading, and the extent to which such information might be complementary when a hearing aid user can see the talker's face. DESIGN Participants were 25 adults with acquired sensorineural hearing losses who wore the GN ReSound BT2 Personal Hearing System binaurally. Consonant recognition was assessed under four test conditions, each presented at an input level of 50 dB SPL: unaided listening without speechreading (baseline), aided listening without speechreading, unaided listening with speechreading, and aided listening with speechreading. Confusion matrices were generated for each of the four conditions to determine overall percent correct for each of 14 consonants, and information transmitted for place of articulation, manner of articulation, and voicing features. RESULTS Both amplification and speechreading provided a significant improvement in consonant recognition from the baseline condition. Speechreading provided primarily place-of-articulation information, whereas amplification provided information about place and manner of articulation, as well as some voicing information. CONCLUSIONS Both amplification and speechreading provided place-of-articulation cues. The manner-of-articulation and voicing cues provided by amplification, therefore, were generally complementary to speechreading. It appears that the synergistic effect of combining the two sources of information can be optimized by amplification parameters that provide good audibility in the low-to-mid frequencies.
5
|
The effect of speechreading on masked detection thresholds for filtered speech. The Journal of the Acoustical Society of America 2001; 109:2272-2275. [PMID: 11386581 DOI: 10.1121/1.1362687]
6
Hearing aid benefit in patients with high-frequency hearing loss. J Am Acad Audiol 2000; 11:429-37. [PMID: 11012238]
Abstract
Patients with hearing loss limited to frequencies above 2 kHz are often considered borderline candidates for hearing aids. In this study, we used the Profile of Hearing Aid Benefit to assess 134 patients' perceived benefit with a variety of linear hearing aids, some more capable than others at achieving prescribed frequency gain targets. We also sought to explore various audiologic and subject factors that might have led patients to report different degrees of success or failure with their hearing aids. Results demonstrate that subjects with hearing loss limited to frequencies above 2 kHz benefit significantly from amplification. However, the amount of benefit reported is mostly unrelated to the hearing aid gain and frequency response. Of numerous audiologic and demographic factors explored in the present study, the number of hours of hearing aid use per day turned out to be the most important single factor that was significantly related to the amount of reported hearing aid benefit. However, the predictive value of knowing how many hours per day subjects wore their aids, or any other combination of factors explored, was quite limited and only accounted for a small amount of the variability observed in user benefit.
7
The use of visible speech cues for improving auditory detection of spoken sentences. The Journal of the Acoustical Society of America 2000; 108:1197-1208. [PMID: 11008820 DOI: 10.1121/1.1288668]
Abstract
Classic accounts of the benefits of speechreading to speech recognition treat auditory and visual channels as independent sources of information that are integrated fairly early in the speech perception process. The primary question addressed in this study was whether visible movements of the speech articulators could be used to improve the detection of speech in noise, thus demonstrating an influence of speechreading on the ability to detect, rather than recognize, speech. In the first experiment, ten normal-hearing subjects detected the presence of three known spoken sentences in noise under three conditions: auditory-only (A), auditory plus speechreading with a visually matched sentence (AV(M)), and auditory plus speechreading with a visually unmatched sentence (AV(UM)). When the speechread sentence matched the target sentence, average detection thresholds improved by about 1.6 dB relative to the auditory condition. However, the amount of threshold reduction varied significantly for the three target sentences (from 0.8 to 2.2 dB). There was no difference in detection thresholds between the AV(UM) condition and the A condition. In a second experiment, the effects of visually matched orthographic stimuli on detection thresholds were examined for the same three target sentences in six subjects who participated in the earlier experiment. When the orthographic stimuli were presented just prior to each trial, average detection thresholds improved by about 0.5 dB relative to the A condition. However, unlike the AV(M) condition, the detection improvement due to orthography was not dependent on the target sentence.
Analyses of correlations between area of mouth opening and acoustic envelopes derived from selected spectral regions of each sentence (corresponding to the wide-band speech, and first, second, and third formant regions) suggested that AV(M) threshold reduction may be determined by the degree of auditory-visual temporal coherence, especially between the area of lip opening and the envelope derived from mid- to high-frequency acoustic energy. Taken together, the data (for these sentences at least) suggest that visual cues derived from the dynamic movements of the face during speech production interact with time-aligned auditory cues to enhance sensitivity in auditory detection. The amount of visual influence depends in part on the degree of correlation between acoustic envelopes and visible movement of the articulators.
8
Abstract
Dopamine (DA), while an essential neurotransmitter, is also a known neurotoxin that potentially plays an etiologic role in several neurodegenerative diseases. DA metabolism and oxidation readily produce reactive oxygen species (ROS) and DA can also be oxidized to a reactive quinone via spontaneous, enzyme-catalyzed or metal-enhanced reactions. A number of these reactions are cytotoxic, yet the precise mechanisms by which DA leads to cell death remain unknown. In this study, the neuroblastoma cell line, SK-N-SH, was utilized to examine DA toxicity under varying oxidant states. Cells pretreated with the glutathione (GSH)-depleting compound, L-buthionine sulfoximine (L-BSO), exhibited enhanced sensitivity to DA compared to controls (non-GSH-depleted cells). Furthermore, in cells pretreated with L-BSO, the addition of ascorbate (250 microM) afforded significant protection against DA-induced toxicity, while pyruvate (500 microM) had no protective effect. To further characterize the possibility that DA is associated with oxidative stress, additional studies were carried out with manganese (30 microM) as a pro-oxidant. Manganese and DA (200 microM), although not cytotoxic when individually administered to SK-N-SH cells, had a synergistic action on cytotoxicity. Finally, morphological and molecular markers of programmed cell death (apoptosis) were observed in cells treated with DA and L-BSO. These markers included membrane blebbing and internucleosomal DNA fragmentation. These results suggest that DA toxicity is tightly linked to intracellular oxidant/antioxidant levels, and that environmental factors, such as excessive Mn exposure, may modulate cellular sensitivity to DA.
9
The recognition of isolated words and words in sentences: individual variability in the use of sentence context. The Journal of the Acoustical Society of America 2000; 107:1000-1011. [PMID: 10687709 DOI: 10.1121/1.428280]
Abstract
Estimates of the ability to make use of sentence context in 34 postlingually hearing-impaired (HI) individuals were obtained using formulas developed by Boothroyd and Nittrouer [Boothroyd and Nittrouer, J. Acoust. Soc. Am. 84, 101-114 (1988)] which relate scores for isolated words to words in meaningful sentences. Sentence materials were constructed by concatenating digitized productions of isolated words to ensure physical equivalence among the test items in the two conditions. Isolated words and words in sentences were tested at three levels of intelligibility (targeting 29%, 50%, and 79% correct). Thus, for each subject, three estimates of context ability, or k factors, were obtained. In addition, auditory, visual, and auditory-visual sentence recognition was evaluated using natural productions of sentence materials. Two main questions were addressed: (1) Is context ability constant for speech materials produced with different degrees of clarity? and (2) What are the relations between individual estimates of k and sentence recognition as a function of presentation modality? Results showed that estimates of k were not constant across different levels of intelligibility: k was greater for the more degraded condition relative to conditions of higher word intelligibility. Estimates of k also were influenced strongly by the test order of isolated words and words in sentences. That is, prior exposure to words in sentences improved later recognition of the same words when presented in isolation (and vice versa), even though the 1500 key words comprising the test materials were presented under degraded (filtered) conditions without feedback. The impact of this order effect was to reduce individual estimates of k for subjects exposed to sentence materials first and to increase estimates of k for subjects exposed to isolated words first.
Finally, significant relationships were found between individual k scores and sentence recognition scores in all three presentation modalities, suggesting that k is a useful measure of individual differences in the ability to use sentence context.
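The k factor referenced in this abstract comes from the Boothroyd-Nittrouer relation linking the probability of recognizing a word in a meaningful sentence, p_s, to the probability of recognizing the same word in isolation, p_i: p_s = 1 - (1 - p_i)^k. A minimal sketch of how k can be solved for from a pair of observed scores (the study's actual fitting procedure across three intelligibility levels may differ):

```python
import math

def k_factor(p_isolated, p_sentence):
    """Estimate the Boothroyd-Nittrouer context factor k from the
    proportion of words recognized in isolation (p_isolated) and in
    meaningful sentences (p_sentence), by inverting
        p_sentence = 1 - (1 - p_isolated) ** k
    to obtain k = log(1 - p_sentence) / log(1 - p_isolated)."""
    if not (0.0 < p_isolated < 1.0 and 0.0 < p_sentence < 1.0):
        raise ValueError("scores must be strictly between 0 and 1")
    return math.log(1.0 - p_sentence) / math.log(1.0 - p_isolated)

# Example: 50% of words correct in isolation and 75% correct in sentences
# yields k = log(0.25) / log(0.5) = 2.0, i.e., sentence context acts like
# two statistically independent chances at each word.
print(k_factor(0.50, 0.75))  # → 2.0
```

Larger k means greater reliance on (or benefit from) sentence context; k = 1 means context provides no help at all.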
10
Measures of auditory-visual integration in nonsense syllables and sentences. The Journal of the Acoustical Society of America 1998; 104:2438-2450. [PMID: 10491705 DOI: 10.1121/1.423751]
Abstract
For all but the most profoundly hearing-impaired (HI) individuals, auditory-visual (AV) speech has been shown consistently to afford more accurate recognition than auditory (A) or visual (V) speech. However, the amount of AV benefit achieved (i.e., the superiority of AV performance in relation to unimodal performance) can differ widely across HI individuals. To begin to explain these individual differences, several factors need to be considered. The most obvious of these are deficient A and V speech recognition skills. However, large differences in individuals' AV recognition scores persist even when unimodal skill levels are taken into account. These remaining differences might be attributable to differing efficiency in the operation of a perceptual process that integrates A and V speech information. There is at present no accepted measure of the putative integration process. In this study, several possible integration measures are compared using both congruent and discrepant AV nonsense syllable and sentence recognition tasks. Correlations were tested among the integration measures, and between each integration measure and independent measures of AV benefit for nonsense syllables and sentences in noise. Integration measures derived from tests using nonsense syllables were significantly correlated with each other; on these measures, HI subjects show generally high levels of integration ability. Integration measures derived from sentence recognition tests were also significantly correlated with each other, but were not significantly correlated with the measures derived from nonsense syllable tests. Similarly, the measures of AV benefit based on nonsense syllable recognition tests were found not to be significantly correlated with the benefit measures based on tests involving sentence materials. 
Finally, there were significant correlations between AV integration and benefit measures derived from the same class of speech materials, but nonsignificant correlations between integration and benefit measures derived from different classes of materials. These results suggest that the perceptual processes underlying AV benefit and the integration of A and V speech information might not operate in the same way on nonsense syllable and sentence input.
11
Modulation rate detection and discrimination by normal-hearing and hearing-impaired listeners. The Journal of the Acoustical Society of America 1998; 104:1051-1060. [PMID: 9714924 DOI: 10.1121/1.423323]
Abstract
Modulation detection and modulation rate discrimination thresholds were obtained at three different modulation rates (fm = 80, 160, and 320 Hz) and for three different ranges of modulation depths (m): full (100%), mid (70%-80%), and low (40%-60%) with both normal-hearing (NH) and hearing-impaired (HI) subjects. The results showed that modulation detection thresholds increased with modulation rate, but significantly more so for HI than for NH subjects. Similarly, rate discrimination thresholds (delta r) increased with increases in fm and decreases in modulation depth. When compared to NH subjects, rate discrimination thresholds for HI subjects were significantly worse for all rates and for all depths. At the fastest modulation rate with less than 100% modulation depth, most HI subjects could not discriminate any change in rate. When valid thresholds for rate discrimination were obtained for HI subjects, they ranged from 2.5 semitones (delta r = 12.7 Hz, fm = 80 Hz, m = 100%) to 8.7 semitones (delta r = 214.5 Hz, fm = 320 Hz, m = 100%). In contrast, average rate discrimination thresholds for NH subjects ranged from 0.9 semitones (delta r = 4.2 Hz, fm = 80 Hz, m = 100%) to 4.7 semitones (delta r = 103.5 Hz, fm = 320 Hz, m = 60%). Some of the differences in temporal processing between NH and HI subjects, especially those related to modulation detection, may be accounted for by differences in signal audibility, especially for high-frequency portions of the modulated noise. However, in many cases, HI subjects encountered great difficulty discriminating a change in modulation rate even though the modulation components of the standard and test stimuli were detectable.
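The abstract reports each rate-discrimination threshold both as a change in hertz (delta r) and in semitones. A sketch of the standard hertz-to-semitone conversion, which reproduces the reported values to within rounding (assumed here; the authors' exact computation is not stated):

```python
import math

def rate_change_in_semitones(fm, delta_r):
    """Express a just-discriminable increase in modulation rate
    (delta_r, in Hz) above a base rate fm (in Hz) in semitones,
    where one semitone is a frequency ratio of 2**(1/12):
        semitones = 12 * log2((fm + delta_r) / fm)"""
    return 12.0 * math.log2((fm + delta_r) / fm)

# Reported NH threshold at fm = 80 Hz, m = 100%: delta_r = 4.2 Hz,
# which the abstract gives as 0.9 semitones.
print(round(rate_change_in_semitones(80.0, 4.2), 1))  # → 0.9
```

The same formula applied to the HI threshold at fm = 80 Hz (delta_r = 12.7 Hz) gives roughly 2.5 semitones, matching the reported value.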
12
Auditory-visual speech recognition by hearing-impaired subjects: consonant recognition, sentence recognition, and auditory-visual integration. The Journal of the Acoustical Society of America 1998; 103:2677-2690. [PMID: 9604361 DOI: 10.1121/1.422788]
Abstract
Factors leading to variability in auditory-visual (AV) speech recognition include the subject's ability to extract auditory (A) and visual (V) signal-related cues, the integration of A and V cues, and the use of phonological, syntactic, and semantic context. In this study, measures of A, V, and AV recognition of medial consonants in isolated nonsense syllables and of words in sentences were obtained in a group of 29 hearing-impaired subjects. The test materials were presented in a background of speech-shaped noise at 0-dB signal-to-noise ratio. Most subjects achieved substantial AV benefit for both sets of materials relative to A-alone recognition performance. However, there was considerable variability in AV speech recognition both in terms of the overall recognition score achieved and in the amount of audiovisual gain. To account for this variability, consonant confusions were analyzed in terms of phonetic features to determine the degree of redundancy between A and V sources of information. In addition, a measure of integration ability was derived for each subject using recently developed models of AV integration. The results indicated that (1) AV feature reception was determined primarily by visual place cues and auditory voicing + manner cues, (2) the ability to integrate A and V consonant cues varied significantly across subjects, with better integrators achieving more AV benefit, and (3) significant intra-modality correlations were found between consonant measures and sentence measures, with AV consonant scores accounting for approximately 54% of the variability observed for AV sentence recognition. Integration modeling results suggested that speechreading and AV integration training could be useful for some individuals, potentially providing as much as 26% improvement in AV consonant recognition.
13
Evaluating the articulation index for auditory-visual consonant recognition. The Journal of the Acoustical Society of America 1996; 100:2415-2424. [PMID: 8865647 DOI: 10.1121/1.417950]
Abstract
Adequacy of the ANSI standard for calculating the articulation index (AI) [ANSI S3.5-1969 (R1986)] was evaluated by measuring auditory (A), visual (V), and auditory-visual (AV) consonant recognition under a variety of bandpass-filtered speech conditions. Contrary to ANSI predictions, filter conditions having the same auditory AI did not necessarily result in the same auditory-visual AI. Low-frequency bands of speech tended to provide more benefit to AV consonant recognition than high-frequency bands. Analyses of the auditory error patterns produced by the different filter conditions showed a strong negative correlation between the degree of A and V redundancy and the amount of benefit obtained when A and V cues were combined. These data indicate that the ANSI auditory-visual AI procedure is inadequate for predicting AV consonant recognition performance under conditions of severe spectral shaping.
14
Abstract
Prosodic speech cues for rhythm, stress, and intonation are related primarily to variations in intensity, duration, and fundamental frequency. Because these cues make use of temporal properties of the speech waveform they are likely to be represented broadly across the speech spectrum. In order to determine the relative importance of different frequency regions for the recognition of prosodic cues, identification of four prosodic features, syllable number, syllabic stress, sentence intonation, and phrase boundary location, was evaluated under six filter conditions spanning the range from 200-6100 Hz. Each filter condition had equal articulation index (AI) weights (AI = 0.01; p(C) for isolated words approximately equal to 0.40). Results obtained with normally hearing subjects showed that there was an interaction between filter condition and the identification of specific prosodic features. For example, information from high-frequency regions of speech was particularly useful in the identification of syllable number and stress, whereas information from low-frequency regions was helpful in identifying intonation patterns. In spite of these spectral differences, overall listeners performed remarkably well in identifying prosodic patterns, although individual differences were apparent. For some subjects, equivalent levels of performance across the six filter conditions were achieved. These results are discussed in relation to auditory and auditory-visual speech recognition.
15
Complementation of M gene mutants of vesicular stomatitis virus by plasmid-derived M protein converts spherical extracellular particles into native bullet shapes. Virology 1996; 217:76-87. [PMID: 8599238 DOI: 10.1006/viro.1996.0095]
Abstract
The matrix (M) protein of vesicular stomatitis virus (VSV) binds the nucleocapsid to the cytoplasmic surface of the host plasma membrane during virus assembly by budding. It also condenses the nucleocapsid into a tightly coiled nucleocapsid-M protein complex that appears to give the virion its bullet-like shape. As described here, temperature-sensitive (ts) M mutants produced two classes of membrane-containing extracellular particles at the nonpermissive temperature. These could be distinguished by sedimentation in sucrose gradients and by electron microscopy. One class contained nucleocapsids and envelope glycoprotein, but very little M protein. The other class was devoid of nucleocapsids. Most of these particles were spherical or pleiomorphic in shape as determined by electron microscopy. Expression of wild-type (wt) M protein from plasmid DNA using the vaccinia/T7 virus system did not enhance the incorporation of nucleocapsids into extracellular particles from cells coinfected with the ts M mutants but did enhance the incorporation of M protein into these particles. Electron microscopy showed that wt M protein served to impart the bullet-like shape typical of VSV virions to what would otherwise be spherical or pleiomorphic virus-like particles. These data suggest that there are two distinct processes in VSV envelope biogenesis. One process involves envelopment of the nucleocapsid and can be accomplished by the ts M mutants at the nonpermissive temperature, albeit at a low level compared to wt VSV. The other process involves conversion of virion components into the bullet-like shape and requires a function provided by wt M protein.
16
Procoagulant activity after exposure of monocyte-derived macrophages to minimally oxidized low density lipoprotein. Co-localization of tissue factor antigen and nascent fibrin fibers at the cell surface. The American Journal of Pathology 1995; 147:1029-40. [PMID: 7573348 PMCID: PMC1870998]
Abstract
The role of tissue factor (TF) as an initiator of the thrombotic complications secondary to atherosclerosis has been acknowledged, and in situ expression of TF activity by monocyte-derived macrophages and lesion-associated macrophage foam cells has been documented. Macrophages express TF activity upon exposure in vitro to either oxidized low density lipoprotein (Ox-LDL) or endotoxin (lipopolysaccharide). This activity has been associated with membrane vesicles that apparently are shed after procoagulant expression. The present study, based upon the correlative use of an enzyme-linked coagulant assay and three-dimensional, multi-antigen immunogold electron microscopy, reports the ultrastructural localization of TF antigen and spatially correlates TF with Ox-LDL binding and the presence of nascent fibrin polymers on the plasma membrane of cultured macrophages. Pigeon monocyte/macrophages, after a 4-hour induction with lipopolysaccharide (2 micrograms/ml) or minimally oxidized LDL (50 micrograms/ml; thiobarbituric acid reducing substance, 5 to 8 nmol/mg protein), were incubated for 40 minutes in a Tris-buffered medium containing factors VII, V, X, II, and I before either assaying for coagulant activity or processing for gold-colloid cytochemistry. TF activity, as measured by enzyme-linked coagulant assay, peaked 6 hours after agonist exposure, with lipopolysaccharide and Ox-LDL giving, respectively, 115- and 60-fold stimulation as compared with control. This activity corresponded to the elaboration of membrane ruffles and microvilli on the cell surfaces. Through correlative immunogold cytochemistry (15-nm-diameter colloid) and gold-ligand cytochemistry (30-nm-diameter colloid), TF antigen (83%) and Ox-LDL (78%) were primarily associated with the membrane ruffles and microvilli.
Multi-antigen immunogold cytochemistry, when used in conjunction with ligand-gold cytochemistry, documented co-localization of Ox-LDL (22-nm gold), TF antigen (15-nm gold) and a delicate three-dimensional network of short fibrin fibers that were decorated in a linear fashion with the immunogold probes (30-nm gold). These results provide evidence that TF antigen is located at selected regions on the cell surfaces. Furthermore, these same regions provide binding sites for agonist uptake and organization sites for fibrin polymerization. Hypothetically, the localized membrane regions could be shed from the cell surface as a means for regulating coagulation potential.
17
Auditory supplements to speechreading: combining amplitude envelope cues from different spectral regions of speech. The Journal of the Acoustical Society of America 1994; 95:1065-1073. [PMID: 8132900 DOI: 10.1121/1.408468]
Abstract
Many listeners with severe-to-profound hearing losses perceive only a narrow range of low-frequency sounds and must rely on speechreading to supplement the impoverished auditory signal in speech recognition. Previous research with normal-hearing subjects [Grant et al., J. Exp. Psychol. 43A, 621-645 (1991)] demonstrated that speechreading was significantly improved when supplemented by amplitude-envelope cues that were extracted from various spectral regions of speech and presented as amplitude modulations of carriers with frequencies at or below the speech band from which the envelope was derived. This experiment assessed the benefit to speechreading provided by pairs of such envelope cues presented simultaneously. In general, greater improvements in speechreading scores were observed for pairs than for single envelopes when the carrier signals were chosen appropriately. However, when pairs of envelope signals were transposed to low frequencies, the benefit to speechreading was no better than the most effective single-band envelope signal tested, or for a low-pass-filtered speech signal with the same overall bandwidth. Suggestions for improving the efficacy of frequency-lowered envelope cues for hearing-impaired listeners are discussed.
|
18
|
Single band amplitude envelope cues as an aid to speechreading. THE QUARTERLY JOURNAL OF EXPERIMENTAL PSYCHOLOGY. A, HUMAN EXPERIMENTAL PSYCHOLOGY 1991; 43:621-645. [PMID: 1775660 DOI: 10.1080/14640749108400990]
Abstract
Amplitude envelopes derived from speech have been shown to facilitate speechreading to varying degrees, depending on how the envelope signals were extracted and presented and on the amount of training given to the subjects. In this study, three parameters related to envelope extraction and presentation were examined using both easy and difficult sentence materials: (1) the bandwidth and centre frequency of the filtered speech signal used to obtain the envelope; (2) the bandwidth of the envelope signal determined by the low-pass filter cutoff frequency used to "smooth" the envelope fluctuations; and (3) the carrier signal used to convey the envelope cues. Results for normal-hearing subjects following a brief visual and auditory-visual familiarization/training period showed that (1) the envelope derived from wideband speech does not provide the greatest benefit to speechreading when compared to envelopes derived from selected octave bands of speech; (2) as the bandwidth centred around the carrier frequency increased from 12.5 to 1600 Hz, auditory-visual (AV) performance obtained with difficult sentence materials improved, especially for envelopes derived from high-frequency speech energy; (3) envelope bandwidths below 25 Hz resulted in AV scores that were sometimes equal to or worse than speechreading alone; (4) for each filtering condition tested, there was at least one bandwidth and carrier condition that produced AV scores significantly greater than speechreading alone; (5) low-frequency carriers were better than high-frequency or wideband carriers for envelopes derived from an octave band of speech centred at 500 Hz; and (6) low-frequency carriers were worse than high-frequency or wideband carriers for envelopes derived from an octave band centred at 3150 Hz.
These results suggest that amplitude envelope cues can provide a substantial benefit to speechreading for both easy and difficult sentence materials, but that frequency transposition of these signals to regions remote from their "natural" spectral locations may result in reduced performance.
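The three-stage envelope processing enumerated in the abstract (band selection, envelope smoothing, carrier choice) can be sketched in a few lines. This is a minimal illustration, not the authors' exact signal chain: the filter orders, the octave-band edges around 3150 Hz, and the 500-Hz carrier are assumptions chosen for the example.

```python
import numpy as np
from scipy.signal import butter, sosfilt, sosfiltfilt

def envelope_cue(speech, fs, band=(2227.0, 4455.0), env_cutoff=50.0,
                 carrier_hz=500.0):
    """Derive an amplitude-envelope cue from one speech band and impose it
    on a tone carrier, following the three parameters in the abstract:
    (1) bandpass-filter the speech to isolate the analysis band,
    (2) rectify and low-pass filter ("smooth") to obtain the envelope,
    (3) amplitude-modulate a sinusoidal carrier with that envelope."""
    sos_bp = butter(4, band, btype="bandpass", fs=fs, output="sos")
    band_sig = sosfilt(sos_bp, speech)
    rectified = np.abs(band_sig)  # full-wave rectification
    sos_lp = butter(4, env_cutoff, btype="lowpass", fs=fs, output="sos")
    env = np.clip(sosfiltfilt(sos_lp, rectified), 0.0, None)
    t = np.arange(len(speech)) / fs
    return env * np.sin(2 * np.pi * carrier_hz * t), env
```

Lowering `env_cutoff` below about 25 Hz removes the faster fluctuations the study found useful, while moving `carrier_hz` far below the analysis band corresponds to the transposition conditions that degraded performance.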
|
19
|
Evaluating the articulation index for auditory-visual input. THE JOURNAL OF THE ACOUSTICAL SOCIETY OF AMERICA 1991; 89:2952-2960. [PMID: 1918633 DOI: 10.1121/1.400733]
Abstract
An investigation of the auditory-visual (AV) articulation index (AI) correction procedure outlined in the ANSI standard [ANSI S3.5-1969 (R1986)] was made by evaluating auditory (A), visual (V), and auditory-visual sentence identification for both wideband speech degraded by additive noise and a variety of bandpass-filtered speech conditions presented in quiet and in noise. When the data for each of the different listening conditions were averaged across talkers and subjects, the procedure outlined in the standard was fairly well supported, although deviations from the predicted AV score were noted for individual subjects as well as individual talkers. For filtered speech signals with AIA less than 0.25, there was a tendency for the standard to underpredict AV scores. Conversely, for signals with AIA greater than 0.25, the standard consistently overpredicted AV scores. Additionally, synergistic effects, where the AIA obtained from the combination of different bandpass-filtered conditions was greater than the sum of the individual AIA's, were observed for all nonadjacent filter-band combinations (e.g., the addition of a low-pass band with a 630-Hz cutoff and a high-pass band with a 3150-Hz cutoff). These latter deviations from the standard violate the basic assumption of additivity stated by Articulation Theory, but are consistent with earlier reports by Pollack [I. Pollack, J. Acoust. Soc. Am. 20, 259-266 (1948)], Licklider [J. C. R. Licklider, Psychology: A Study of a Science, Vol. 1, edited by S. Koch (McGraw-Hill, New York, 1959), pp. 41-144], and Kryter [K. D. Kryter, J. Acoust. Soc. Am. 32, 547-556 (1960)].
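The additivity assumption that the abstract reports being violated amounts to a one-line calculation: under Articulation Theory, the AI of a set of independent bands is the sum of the per-band AIs. A minimal sketch follows; the numeric values in the example are hypothetical, not taken from the study.

```python
def additive_ai(band_ais):
    """Articulation Theory's additivity assumption: the AI of a set of
    independent bands is the sum of the per-band AIs, capped at 1.0."""
    return min(sum(band_ais), 1.0)

def synergy(measured_combined_ai, band_ais):
    """Positive when the measured combined-band AI exceeds the additive
    prediction -- the deviation the abstract reports for nonadjacent
    band combinations (e.g., a low-pass plus a high-pass band)."""
    return measured_combined_ai - additive_ai(band_ais)

# Hypothetical illustration: two nonadjacent bands with AIs of 0.15 and
# 0.20 predict a combined AI of 0.35 under additivity; a measured
# combined AI of 0.45 would indicate a synergy of about +0.10.
prediction = additive_ai([0.15, 0.20])
excess = synergy(0.45, [0.15, 0.20])
```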
|
20
|
Abstract
Listeners' accuracy in discriminating one temporal pattern from another was measured in three psychophysical experiments. When the standard pattern consisted of equally timed (isochronic) brief tones, whose interonset intervals (IOIs) were 50, 100, or 200 msec, the accuracy in detecting an asynchrony or deviation of one tone in the sequence was about as would be predicted from older research on the discrimination of single time intervals (6%-8% at an IOI of 200 msec, 11%-12% at an IOI of 100 msec, and almost 20% at an IOI of 50 msec). In a series of 6 or 10 tones, this accuracy was independent of position of delay for IOIs of 100 and 200 msec. At 50 msec, however, accuracy depended on position, being worst in initial positions and best in final positions. When one tone in a series of six has a frequency different from the others, there is some evidence (at IOI = 200 msec) that interval discrimination is relatively poorer for the tone with the different frequency. Similarly, even if all tones have the same frequency but one interval in the series is made twice as long as the others, temporal discrimination is poorer for the tones bordering the longer interval, although this result is dependent on tempo or IOI. Results with these temporally more complex patterns may be interpreted in part by applying the relative Weber ratio to the intervals before and after the delayed tone. Alternatively, these experiments may show the influence of accent on the temporal discrimination of individual tones.
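The Weber-ratio account sketched in the abstract reduces to simple arithmetic: the just-detectable displacement of a tone is a proportion of the interonset interval, and that proportion grows at fast tempi. A minimal sketch using the approximate fractions quoted above (which are illustrative readings of the abstract, not exact study values):

```python
def asynchrony_threshold_ms(ioi_ms, weber_fraction):
    """Just-detectable displacement of one tone in an isochronic
    sequence, expressed as a Weber fraction of the interonset interval."""
    return weber_fraction * ioi_ms

# Approximate fractions reported: ~7% at 200-ms IOI, ~11.5% at 100 ms,
# ~20% at 50 ms. Note the fraction itself is not constant: in absolute
# milliseconds the thresholds converge as the tempo increases.
thresholds = {ioi: asynchrony_threshold_ms(ioi, w)
              for ioi, w in [(200, 0.07), (100, 0.115), (50, 0.20)]}
```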
|
21
|
Identification of intonation contours by normally hearing and profoundly hearing-impaired listeners. THE JOURNAL OF THE ACOUSTICAL SOCIETY OF AMERICA 1987; 82:1172-1178. [PMID: 3680776 DOI: 10.1121/1.395253]
Abstract
Fundamental frequency (F0) information extracted from low-pass-filtered speech and aurally presented as frequency-modulated sinusoids can greatly improve speechreading performance [Grant et al., J. Acoust. Soc. Am. 77, 671-677 (1985)]. To use this source of information, listeners must be able to detect the presence or absence of F0 (i.e., voicing), discriminate changes in frequency, and make judgments about the linguistic meaning of perceived variations in F0. In the present study, normally hearing and hearing-impaired subjects were required to locate the stressed peak of an intonation contour according to the extent of frequency transition at the primary peak. The results showed that listeners with profound hearing impairments required frequency transitions that were 1.5-6 times greater than those required by normally hearing subjects. These results were consistent with the subjects' identification performance for intonation and stress patterns in natural speech, and suggest that natural variations in F0 may be too small for some impaired listeners to perceive and follow accurately.
|
22
|
Abstract
Variations in voice fundamental frequency were extracted from naturally produced speech samples and transmitted to an electrocutaneous display consisting of 10 electrodes arranged in a linear array along the forearm. Changes in fundamental frequency were encoded as changes in stimulus location. Speechreading performance, with and without the electrocutaneous aid, was examined for both sentence and connected-discourse materials with one profoundly hearing-impaired listener and one normally hearing listener whose hearing was masked. After 20 hours of speechreading training with a connected-discourse tracking procedure, both subjects showed higher tracking rates and faster learning rates with the aid than without it. Results from closed-set sentence-identification tests showed that patterns of intonation, stress, and phrase structure, which are not easily speechread, are readily available with the aid.
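The place-coding scheme described above (F0 encoded as stimulus location along a 10-electrode forearm array) can be sketched as a simple mapping function. The abstract does not specify the actual mapping; the logarithmic spacing and the 80-320 Hz F0 range here are assumptions for illustration only.

```python
import math

def f0_to_electrode(f0_hz, f_lo=80.0, f_hi=320.0, n_electrodes=10):
    """Map voice F0 to one of n electrode positions on a linear array,
    log-spaced so equal musical intervals move equal distances along the
    forearm. Returns None for unvoiced frames (no detected F0)."""
    if f0_hz is None or f0_hz <= 0:
        return None
    f0_hz = min(max(f0_hz, f_lo), f_hi)          # clamp to display range
    frac = math.log(f0_hz / f_lo) / math.log(f_hi / f_lo)
    return min(int(frac * n_electrodes), n_electrodes - 1)
```

Under this sketch a rising intonation contour sweeps the stimulus up the array, which is the kind of pattern the closed-set sentence tests above found to be readily conveyed.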
|
23
|
Perceived magnitude of multiple electrocutaneous pulses. PERCEPTION & PSYCHOPHYSICS 1980; 28:255-262. [PMID: 7433004 DOI: 10.3758/bf03204383]
|