1. Keintz CK, Bunton K, Hoit JD. Influence of visual information on the intelligibility of dysarthric speech. American Journal of Speech-Language Pathology 2007; 16:222-34. [PMID: 17666548; DOI: 10.1044/1058-0360(2007/027)]
Abstract
PURPOSE To examine the influence of visual information on speech intelligibility for a group of speakers with dysarthria associated with Parkinson's disease. METHOD Eight speakers with Parkinson's disease and dysarthria were recorded while they read sentences. Speakers performed a concurrent manual task to facilitate typical speech production. Twenty listeners (10 experienced and 10 inexperienced) transcribed sentences while watching and listening to videotapes of the speakers (auditory-visual mode) and while only listening to the speakers (auditory-only mode). RESULTS Significant main effects were found for both presentation mode and speaker. Auditory-visual scores were significantly higher than auditory-only scores for the 3 speakers with the lowest intelligibility scores. No significant difference was found between the 2 listener groups. CONCLUSIONS The findings suggest that clinicians should consider both auditory-visual and auditory-only intelligibility measures in speakers with Parkinson's disease to determine the most effective strategies aimed at evaluation and treatment of speech intelligibility decrements.
2. Hall DA, Fussell C, Summerfield AQ. Reading fluent speech from talking faces: typical brain networks and individual differences. J Cogn Neurosci 2005; 17:939-53. [PMID: 15969911; DOI: 10.1162/0898929054021175]
Abstract
Listeners are able to extract important linguistic information by viewing the talker's face, a process known as "speechreading." Previous studies of speechreading presented small closed sets of simple words, and their results indicate that visual speech processing engages a wide network of brain regions in the temporal, frontal, and parietal lobes that are likely to underlie multiple stages of the receptive language system. The present study further explored this network in a large group of subjects by presenting naturally spoken sentences which tap the richer complexities of visual speech processing. Four different baselines (blank screen, static face, nonlinguistic facial gurning, and auditory speech) enabled us to determine the hierarchy of neural processing involved in speechreading and to test the claim that visual input reliably accesses sound-based representations in the auditory cortex. In contrast to passively viewing a blank screen, the static-face condition evoked activation bilaterally across the border of the fusiform gyrus and cerebellum, and in the medial superior frontal gyrus and left precentral gyrus (p < .05, whole brain corrected). With the static face as baseline, the gurning face evoked bilateral activation in the motion-sensitive region of the occipital cortex, whereas visual speech additionally engaged the middle temporal gyrus, inferior and middle frontal gyri, and the inferior parietal lobe, particularly in the left hemisphere. These latter regions are implicated in lexical stages of spoken language processing. Although auditory speech generated extensive bilateral activation across both superior and middle temporal gyri, the group-averaged pattern of speechreading activation failed to include any auditory regions along the superior temporal gyrus, suggesting that fluent visual speech does not always involve sound-based coding of the visual input.
An important finding from the individual subject analyses was that activation in the superior temporal gyrus did reach significance (p < .001, small-volume corrected) for a subset of the group. Moreover, the extent of the left-sided superior temporal gyrus activity was strongly correlated with speechreading performance. Skilled speechreading was also associated with activations and deactivations in other brain regions, suggesting that individual differences reflect the efficiency of a circuit linking sensory, perceptual, memory, cognitive, and linguistic processes rather than the operation of a single component process.
3. Criteria of candidacy for unilateral cochlear implantation in postlingually deafened adults III: prospective evaluation of an actuarial approach to defining a criterion. Ear Hear 2005; 25:361-74. [PMID: 15292776; DOI: 10.1097/01.aud.0000134551.13162.88]
Abstract
OBJECTIVE Outcomes from unilateral cochlear implantation in postlingually deafened adults are variable and difficult to predict precisely from data gathered before surgery. The objective was to derive and validate a method for specifying criteria of candidacy for implantation that takes this variability into account. DESIGN Accuracy of identifying words in prerecorded sentences without lipreading was measured in 480 users of unilateral multichannel cochlear implants. These patients had all scored zero before surgery on prerecorded open-set tests of word recognition in sentences with acoustic hearing aids. Statistical models were derived that calculated the odds that a patient would score higher with an implant than a criterion score, given knowledge of the duration of profound deafness in the implanted ear. The accuracy of the models was evaluated prospectively with two new groups of patients who scored between 1% and approximately 50% correct before surgery in one or both ears with acoustic hearing aids. Group I (N=53) was implanted in an ear that scored zero. Group II (N=31) was implanted in an ear that scored above zero. Benefits from implantation, measured as changes in word recognition performance and in health utility, were compared with the odds calculated by the statistical models. RESULTS The preferred model was based on data from 376 subjects. It made accurate predictions of the proportion of patients in group I, and, disregarding minor exceptions, accurate predictions of the proportion of patients in group II, who improved on their preoperative word recognition score. Benefit from implantation was low for patients implanted with odds less favorable than 4:1 (4 chances out of 5). CONCLUSIONS Adoption of odds of 4:1 as the criterion of candidacy for unilateral cochlear implantation would be likely to maintain acceptable benefit and cost-effectiveness while being explicit and informative for patients, clinicians, and commissioners of health care.
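The actuarial decision rule in this abstract can be sketched in code. The logistic model below is purely illustrative — the intercept and slope are invented placeholders, not the fitted values from the study; only the 4:1 odds criterion and the role of duration of profound deafness come from the abstract.

```python
import math

def odds_of_benefit(years_profoundly_deaf, intercept=3.0, slope=-0.08):
    """Illustrative logistic model: odds that a patient scores higher
    with an implant than the preoperative criterion score, given the
    duration of profound deafness in the to-be-implanted ear.
    Coefficients are placeholders, not the study's fitted values."""
    return math.exp(intercept + slope * years_profoundly_deaf)

def meets_criterion(years_profoundly_deaf, threshold=4.0):
    # Odds of 4:1 correspond to a 4-in-5 (80%) chance of benefit.
    return odds_of_benefit(years_profoundly_deaf) >= threshold

# Shorter durations of profound deafness give more favorable odds:
print(meets_criterion(5), meets_criterion(35))
```

The point of the actuarial framing is that candidacy becomes an explicit, auditable threshold on predicted odds rather than a clinician's ad hoc judgment.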
4. Criteria of candidacy for unilateral cochlear implantation in postlingually deafened adults I: theory and measures of effectiveness. Ear Hear 2005; 25:310-35. [PMID: 15292774; DOI: 10.1097/01.aud.0000134549.48718.53]
Abstract
OBJECTIVES The objectives of this study were to distinguish the equivalent-effectiveness, health-economic, and actuarial approaches to specifying criteria of candidacy for medical interventions; to apply the equivalent-effectiveness approach to unilateral cochlear implantation for postlingually deafened adults; and to determine whether the criterion should take age at implantation and duration of profound deafness into account. DESIGN The study was designed as a prospective cohort study in 13 hospitals with four groups of severely-profoundly hearing-impaired subjects distinguished by their preoperative ability to identify words in sentences when aided acoustically. The groups represent a progressive relaxation of criteria of candidacy: Group I (N=134) scored 0% correct without lipreading and did not improve their lipreading score significantly when aided; group II (N=93) scored 0% without lipreading but did improve their lipreading score significantly when aided; group III (N=53) scored 0% without lipreading when the to-be-implanted ear was aided but between 1% and approximately 50% when the other ear was aided. Group IV (N=31) scored between 1% and approximately 50% without lipreading when the to-be-implanted ear was aided. Measures of speech intelligibility, health utility, and otologically relevant quality of life were obtained before surgery and 9 mo after surgery from each subject. Measures of effectiveness were calculated as the difference between 9-mo and preoperative scores. RESULTS Effectiveness differed only slightly between groups. Effectiveness was not strongly associated with age at the time of implantation. Greater effectiveness was associated with implantation in the ear with the shorter duration of profound deafness. Cochlear implantation was least effective when the preoperative audiological status of the better-hearing ear was good and the duration of profound deafness of the implanted ear was long. 
As a result, effectiveness was not significant for the subsets of groups III and IV, who were given implants in ears that had been profoundly deaf for more than 30 yr. CONCLUSIONS The effectiveness of cochlear implantation differs little between groups of candidates who score zero with acoustic hearing aids before surgery and groups who score up to approximately 50% correct, thereby justifying a relaxation of the criterion of candidacy to embrace some members of the latter groups. The criterion should be based not only on preoperative speech intelligibility but also on duration of profound deafness in the to-be-implanted ear.
5. Bernstein LE, Demorest ME, Tucker PE. Speech perception without hearing. Perception & Psychophysics 2000; 62:233-52. [PMID: 10723205; DOI: 10.3758/bf03205546]
Abstract
In this study of visual phonetic speech perception without accompanying auditory speech stimuli, adults with normal hearing (NH; n = 96) and with severely to profoundly impaired hearing (IH; n = 72) identified consonant-vowel (CV) nonsense syllables and words in isolation and in sentences. The measures of phonetic perception were the proportion of phonemes correct and the proportion of transmitted feature information for CVs, the proportion of phonemes correct for words, and the proportion of phonemes correct and the amount of phoneme substitution entropy for sentences. The results demonstrated greater sensitivity to phonetic information in the IH group. Transmitted feature information was related to isolated word scores for the IH group, but not for the NH group. Phoneme errors in sentences were more systematic in the IH than in the NH group. Individual differences in phonetic perception for CVs were more highly associated with word and sentence performance for the IH than for the NH group. The results suggest that the necessity to perceive speech without hearing can be associated with enhanced visual phonetic perception in some individuals.
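The "phoneme substitution entropy" measure used above can be illustrated with a small sketch. The exact scoring in the study may differ; this shows the standard idea of taking the Shannon entropy of the response distribution over substitution errors, so that systematic (predictable) errors yield lower entropy than scattered ones. The confusion counts below are invented for illustration.

```python
import math
from collections import defaultdict

def substitution_entropy(confusions):
    """confusions: dict mapping (stimulus, response) -> count.
    Returns the mean entropy in bits of the response distribution
    over substitution errors, averaged across stimulus phonemes."""
    by_stimulus = defaultdict(dict)
    for (stim, resp), n in confusions.items():
        if stim != resp:            # substitutions only, skip correct responses
            by_stimulus[stim][resp] = n
    entropies = []
    for resp_counts in by_stimulus.values():
        total = sum(resp_counts.values())
        h = -sum((n / total) * math.log2(n / total)
                 for n in resp_counts.values())
        entropies.append(h)
    return sum(entropies) / len(entropies) if entropies else 0.0

# Systematic errors: /p/ is always misheard as /b/ -> zero entropy.
systematic = {("p", "b"): 10, ("p", "p"): 5}
# Scattered errors: /p/ confused with several phonemes -> higher entropy.
scattered = {("p", "b"): 4, ("p", "m"): 3, ("p", "f"): 3}
print(substitution_entropy(systematic), substitution_entropy(scattered))
```

Under this measure, the finding that the hearing-impaired group's errors were "more systematic" corresponds to lower substitution entropy.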
Affiliation: L. E. Bernstein, Department of Communication Neuroscience, House Ear Institute, Los Angeles, California 90057, USA.

6. Demorest ME, Bernstein LE, DeHaven GP. Generalizability of speechreading performance on nonsense syllables, words, and sentences: subjects with normal hearing. Journal of Speech and Hearing Research 1996; 39:697-713. [PMID: 8844551; DOI: 10.1044/jshr.3904.697]
Abstract
Ninety-six adults with normal hearing viewed three types of recorded speechreading materials (consonant-vowel nonsense syllables, isolated words, and sentences) on 2 days. Responses to nonsense syllables were scored for syllables correct and syllable groups correct; responses to words and sentences were scored in terms of words correct, phonemes correct, and an estimate of visual distance between the stimulus and the response. Generalizability analysis was used to quantify sources of variability in performance. Subjects and test items were important sources of variability for all three types of materials; effects of talker and day of testing varied but were comparatively small. For each type of material, alternative models of test construction and test-score interpretation were evaluated through estimation of generalizability coefficients as a function of test length. Performance on nonsense syllables correlated about .50 with both word and sentence measures, whereas correlations between words and sentences typically exceeded .80.
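The idea of estimating reliability "as a function of test length" can be sketched with the Spearman-Brown prophecy formula, used here as a simple stand-in; the study's generalizability-theory models partition more sources of variance (subjects, items, talkers, days) than this one-parameter projection does, and the single-item reliability below is an invented example value.

```python
def projected_reliability(rho_one_item, k):
    """Spearman-Brown projection: reliability of a test of k items,
    given the reliability of a single item."""
    return k * rho_one_item / (1 + (k - 1) * rho_one_item)

# Lengthening a test raises reliability, with diminishing returns:
for k in (1, 10, 30, 60):
    print(k, round(projected_reliability(0.10, k), 3))
```

This is why test-construction decisions trade off test length against the generalizability coefficient a score interpretation requires.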
Affiliation: M. E. Demorest, Department of Psychology, University of Maryland, Baltimore County 21228-5398, USA.

7. Gray RF, Wareing MJ, Court I. Cochlear implant results: a comparison of live-voice and videotaped tests. Laryngoscope 1995; 105:1001-4. [PMID: 7666710; DOI: 10.1288/00005537-199509000-00022]
Abstract
The Cambridge Cochlear Implant Programme has so far implanted the Ineraid multichannel cochlear implant in 16 profoundly deaf adult patients; there has been a 9-month or longer follow-up period with these patients. We have evaluated these patients by open-set Bamford-Kowal-Bench (BKB) Standard Sentence List testing in two different delivery strategies, live-speaker testing by the same speaker and high-resolution videotaped testing. The performance in lip reading both before and 9 months after implantation has been tested, as well as performance with the implant alone and with the implant in conjunction with lip reading at the 9-month stage. We have compared the performance in these two delivery strategies and have found a significantly better performance in the live-speaker tests that is attributable to slower and perhaps more sympathetic delivery. We have also found evidence of a ceiling effect in the performance of the implant with lip reading in the live-speaker mode and, of greater importance, a floor effect in the performance of the implant alone with the videotaped test. These results and the implications for a complementary role of these two test-delivery modes are discussed.
Affiliation: R. F. Gray, Department of Otolaryngology, Addenbrooke's Hospital, Cambridge, United Kingdom.

8. Bench J, Daly N, Doyle J, Lind C. Choosing talkers for the BKB/A Speechreading Test: a procedure with observations on talker age and gender. British Journal of Audiology 1995; 29:172-87. [PMID: 8574203; DOI: 10.3109/03005369509086594]
Abstract
A procedure is described for choosing talkers for the BKB/A (BKB/Australian version) Speechreading Test. The main aims were: to select several talkers from a pool of potential talkers, to avoid adventitiously choosing a markedly atypical single talker; to assess speechreading as a general skill rather than as talker-specific; and to select talkers who were acceptable to speechreaders, relatively easy to speechread, and comparable in their speechreadability. Because of the number of variables involved and the demanding nature of the task for speechreaders, a three-stage selection procedure was adopted. In the resulting 21-list BKB/A Speechreading Test, the 16 sentences in each list are spoken four apiece by four talkers, chosen as follows. In Stage 1, 16 talkers (four of each age/gender set: older men, older women, younger men, younger women) were selected from an original pool of 40 (10 of each set), via rankings made by eight hearing-impaired judges with speechreading experience. In Stage 2, the final four talkers (one of each set) were selected from the 16 via the speechreading scores of further hearing-impaired subjects with speechreading experience. In Stage 3, the order of talker appearance within lists (in random order versus over blocks of four consecutive sentences) was determined. This three-stage approach to talker selection identified differences between talker candidates within sets, except for younger men, and suggested that, overall, younger women were the easiest to speechread. The discussion addresses the merits and disadvantages of this approach to talker selection, and suggests some reasons for the documented differences in speechreadability among talkers of different age and gender.
Affiliation: J. Bench, School of Communication Disorders, La Trobe University, Victoria, Australia.

9. Foster JR, Summerfield AQ, Marshall DH, Palmer L, Ball V, Rosen S. Lip-reading the BKB sentence lists: corrections for list and practice effects. British Journal of Audiology 1993; 27:233-46. [PMID: 8312846; DOI: 10.3109/03005369309076700]
Abstract
Two groups of 21 adult subjects with normal hearing viewed the video recordings of the Bamford-Kowal-Bench standard sentence lists issued by the EPI Group in 1986. Each subject viewed all of the 21 lists and attempted to write down the words contained in each sentence. One group lip-read the lists with no sound (the LR:alone condition). The other group also heard a sequence of acoustic pulses which were synchronized to the moments when the talker's vocal folds closed (the LR&Lx condition). Performance was assessed both by loose (KW(L)) and by tight (KW(T)) keyword scoring methods. Both scoring methods produced the same pattern of results: performance was better in the LR&Lx condition; performance in both conditions improved linearly with the logarithm of the list presentation order number; subjects who produced higher overall scores also improved more with experience of the lists. The data were described well by a logistic regression model which provided a formula which can be used to compensate for practice effects and for differences in difficulty between lists. Two simpler, but less accurate, methods for compensating for variation in inter-list difficulty are also described. A figure is provided which can be used to assess the significance of the difference between a pair of scores obtained from a single subject in any pair of presentation conditions.
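The kind of correction the logistic regression model supports can be sketched as follows: map a score to log-odds, subtract a practice effect that is linear in the logarithm of presentation order and a per-list difficulty offset, and map back to a proportion. The coefficients below are illustrative placeholders, not the fitted values published with the model.

```python
import math

def logit(p):
    return math.log(p / (1 - p))

def inv_logit(x):
    return 1.0 / (1.0 + math.exp(-x))

def corrected_score(p_observed, order, list_offset, practice_slope=0.4):
    """Remove practice and list-difficulty effects from an observed
    keyword proportion, referencing everything to order == 1 on a
    list of average difficulty. Coefficients are illustrative."""
    adjusted = (logit(p_observed)
                - practice_slope * math.log(order)  # practice effect
                - list_offset)                      # list difficulty
    return inv_logit(adjusted)

# A score obtained late in the session on an easy list is revised down:
print(corrected_score(0.70, order=20, list_offset=0.3))
```

Corrections of this shape are what make scores from different lists and different points in a session comparable on a single scale.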
Affiliation: J. R. Foster, MRC Institute of Hearing Research, University of Nottingham, UK.

10. MacLeod A, Summerfield Q. A procedure for measuring auditory and audio-visual speech-reception thresholds for sentences in noise: rationale, evaluation, and recommendations for use. British Journal of Audiology 1990; 24:29-43. [PMID: 2317599; DOI: 10.3109/03005369009077840]
Abstract
The strategy for measuring speech-reception thresholds for sentences in noise advocated by Plomp and Mimpen (Audiology, 18, 43-52, 1979) was modified to create a reliable test for measuring the difficulty which listeners have in speech reception, both auditorily and audio-visually. The test materials consist of 10 lists of 15 short sentences of homogeneous intelligibility when presented acoustically, and of different, but still homogeneous, intelligibility when presented audio-visually, in white noise. Homogeneity was achieved by applying phonetic and linguistic principles at the stage of compilation, followed by pilot testing and balancing of properties. To run the test, lists are presented at signal-to-noise ratios (SNRs) determined by an up-down psychophysical rule so as to estimate auditory and audio-visual speech-reception thresholds, defined as the SNRs at which the three content words in each sentence are identified correctly on 50% of trials. These thresholds provide measures of a subject's speech-reception abilities. The difference between them provides a measure of the benefit received from vision. It is shown that this measure is closely related to the accuracy with which subjects lip-read words in sentences with no acoustical information. In data from normally hearing adults, the standard deviations (s.d.s) of estimates of auditory speech reception threshold in noise (SRTN), audio-visual SRTN, and visual benefit are 1.2, 2.0, and 2.3 dB, respectively. Graphs are provided with which to estimate the trade-off between reliability and the number of lists presented, and to assess the significance of deviant scores from individual subjects.
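The adaptive rule can be sketched with a simulated listener. Everything numeric here — step size, trial count, the simulated psychometric function — is an illustrative assumption; only the up-down logic and the 50%-correct definition of the speech-reception threshold (SRT) come from the abstract.

```python
import math
import random

def run_updown(trial_fn, start_snr=0.0, step=2.0, n_trials=15):
    """trial_fn(snr) -> True if all three key words were correct.
    SNR goes down after a correct trial and up after an error; the
    mean SNR of the later trials estimates the 50%-correct SRT."""
    snr, track = start_snr, []
    for _ in range(n_trials):
        track.append(snr)
        snr += -step if trial_fn(snr) else step
    tail = track[5:]                 # discard the initial descent
    return sum(tail) / len(tail)

def simulated_listener(snr, true_srt=-8.0, slope=0.5):
    # Assumed psychometric function: probability of a correct trial
    # rises with SNR and is 50% at true_srt.
    p_correct = 1.0 / (1.0 + math.exp(-slope * (snr - true_srt)))
    return random.random() < p_correct

random.seed(1)
print(run_updown(simulated_listener))   # converges near the true SRT
```

Tracking the 50% point adaptively is what frees the measure from the floor and ceiling effects of fixed-level percent-correct scoring.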
Affiliation: A. MacLeod, MRC Institute of Hearing Research, University Park, Nottingham.

11. Dodd B, Plant G, Gregory M. Teaching lip-reading: the efficacy of lessons on video. British Journal of Audiology 1989; 23:229-38. [PMID: 2790308; DOI: 10.3109/03005368909076504]
Abstract
Studies of the efficacy of filmed and videotaped materials for lip-reading self instruction have provided encouraging results. The recent increase in private ownership of VCRs allows widespread home use of video lessons for improving lip-reading skills. Such an approach is particularly useful for hearing-impaired adults who have no access to lip-reading classes. To fill this need in Australia, a 3-hour video cassette of nine lip-reading lessons was produced. The video lessons were tested over a period of 5 weeks. The study showed a significant improvement in the lip-reading skill of students who studied the video cassette compared to a control group who did not. The extent of improvement did not differ for students who studied the video in a class, at home, or as supplementary teaching material. While the age and sex of the subjects did not influence improvement of lip-reading skills, the study showed greater improvement for the relatively poorer lip-readers. More detailed testing of one group of students showed generalization of lip-reading skills to unfamiliar speakers and materials.
Affiliation: B. Dodd, Speech Hearing and Language Research Centre, Macquarie University, Sydney, NSW, Australia.

12. Lalande NM, Lafleur G, Lacouture YS. Développement d'une épreuve franco-québécoise de lecture labiale [Development of a French-Québécois lipreading test]. Int J Audiol 1989. [DOI: 10.3109/00206098909081612]
13. Day GA, Browning GG, Gatehouse S. An audiovisual test of hearing disability using free-field sentences in noise. British Journal of Audiology 1988; 22:179-82. [PMID: 3167256; DOI: 10.3109/03005368809076450]
Abstract
An audiovisual test, using BKB sentences in noise, has been developed to assess hearing disability, unaided and aided with a hearing aid(s), in severely hearing-impaired individuals. After a single practice list, no significant further increases in performance were detected. The test is reproducible within and between test sessions.
Affiliation: G. A. Day, MRC Institute of Hearing Research (Scottish Section), Royal Infirmary, Glasgow, Scotland.

14. MacLeod A, Summerfield Q. Quantifying the contribution of vision to speech perception in noise. British Journal of Audiology 1987; 21:131-41. [PMID: 3594015; DOI: 10.3109/03005368709077786]
Abstract
The intelligibility of sentences presented in noise improves when the listener can view the talker's face. Our aims were to quantify this benefit, and to relate it to individual differences among subjects in lipreading ability and among sentences in lipreading difficulty. Auditory and audiovisual speech-reception thresholds (SRTs) were measured in 20 listeners with normal hearing. Sixty sentences, selected to range in the difficulty with which they could be lipread (with vision alone) from easy to hard, were presented for identification in white noise. Using the ascending method of limits, the SRT was defined as the lowest signal-to-noise ratio at which all three 'key words' in each sentence could be identified correctly. Measured as the difference in dB between auditory-alone and audiovisual SRTs, 'audiovisual benefit' averaged 11 dB, ranging from 6 to 15 dB among subjects, and from 3 to 22 dB among sentences. As predicted, audiovisual benefit is a measure of lipreading ability. It was highly correlated with visual-alone performance (n = 20, r = 0.86, P < 0.01). Likewise, those sentences which were easiest to lipread gave a higher measure of benefit from vision in audiovisual conditions than did sentences that were hard to lipread (n = 60, r = 0.92, P < 0.01). The results establish the basis of an efficient test of speech-reception disability in which measures are freed from the floor and ceiling effects encountered when percentage correct is used as the dependent variable.
15. Rosen S, Ball V. Speech perception with the Vienna extra-cochlear single-channel implant: a comparison of two approaches to speech coding. British Journal of Audiology 1986; 20:61-83. [PMID: 3754170; DOI: 10.3109/03005368609078999]
Abstract
Although it is generally accepted that single-channel electrical stimulation can significantly improve a deafened patient's speech perceptual ability, there is still much controversy surrounding the choice of speech processing schemes. We have compared, in the same patients, two different approaches: (1) The speech pattern extraction technique of the EPI group, London (Fourcin et al., British Journal of Audiology, 1979,13,85-107) in which voice fundamental frequency is extracted and presented in an appropriate way, and (2) The analogue 'whole speech' approach of Hochmair and Hochmair-Desoyer (Annals of the New York Academy of Sciences, 1983, 405, 268-279) of Vienna, in which the microphone-sensed acoustic signal is frequency-equalized and amplitude-compressed before being presented to the electrode. With the 'whole-speech' coding scheme (which they used daily), all three patients showed an improvement in lipreading when they used the device. No patient was able to understand speech without lipreading. Reasonable ability to distinguish voicing contrasts and voice pitch contours was displayed. One patient was able to detect and make appropriate use of the presence of voiceless frication in certain situations. Little sensitivity to spectral features in natural speech was noted, although two patients could detect changes in the frequency of the first formant of synthesised vowels. Presentation of the fundamental frequency only generally led to improved perception of features associated with it (voicing and intonation). Only one patient consistently showed any advantage (and that not in all tests) of coding more than the fundamental alone.