ORIGINAL ARTICLE
Year : 2021  |  Volume : 35  |  Issue : 1  |  Page : 16-21

Consonant Recognition Using Coarticulatory Cues in Individuals with Normal Hearing and Sensorineural Hearing Loss


1 Department of Audiology and Speech-Language Pathology, Baby Memorial College of Allied Medical Sciences, Kozhikode, Kerala, India
2 Department of Audiology, All India Institute of Speech and Hearing, Manasagangothri, Mysuru, Karnataka, India

Date of Submission01-Apr-2021
Date of Decision21-Apr-2021
Date of Acceptance02-May-2021
Date of Web Publication28-Jun-2021

Correspondence Address:
Dhanya Mohan
Department of Audiology and Speech-Language Pathology, Baby Memorial College of Allied Medical Sciences, Baby Memorial Hospital Campus, Kozhikode - 673 016, Kerala
India

Source of Support: None, Conflict of Interest: None


DOI: 10.4103/jisha.jisha_9_21

  Abstract 


Background and Objectives: The study investigated the role of coarticulatory cues in the perception of consonants in Malayalam and the temporal window over which these cues are useful. It also compared normal-hearing individuals and individuals with sensorineural hearing loss (SNHL) for their ability to utilize coarticulatory cues for the perception of consonants. Methods: The study used a quasi-experimental, posttest-only mixed research design. Fifteen normal-hearing individuals and 15 individuals with SNHL, all native speakers of Malayalam, participated in the study. The stimuli included consonant-vowel syllables in their original and truncated forms. The forward-gating method was used to generate the truncated tokens. The participants were assessed for consonant recognition in closed-set conditions. Results: There was a significant difference in the temporal window of utility of coarticulatory cues across consonants and between the two groups of participants. Conclusions: In normal-hearing individuals, coarticulatory cues are useful for the recognition of stop consonants, nasals, and fricatives, with the maximum temporal window of utility seen for nasals. However, individuals with SNHL fail to utilize the available coarticulatory cues to recognize the consonants.

Keywords: Consonant recognition, gating, Malayalam, temporal window of coarticulatory cue


How to cite this article:
Mohan D, Maruthy S. Consonant Recognition Using Coarticulatory Cues in Individuals with Normal Hearing and Sensorineural Hearing Loss. J Indian Speech Language Hearing Assoc 2021;35:16-21

How to cite this URL:
Mohan D, Maruthy S. Consonant Recognition Using Coarticulatory Cues in Individuals with Normal Hearing and Sensorineural Hearing Loss. J Indian Speech Language Hearing Assoc [serial online] 2021 [cited 2021 Aug 3];35:16-21. Available from: https://www.jisha.org/text.asp?2021/35/1/16/319604




Introduction


Speech is the primary mode of communication in human beings. Irrespective of the language, the basic units of speech remain the consonants and vowels, which are coarticulated in various combinations to produce words and sentences.[1] Vowels carry the power of speech, while consonants contribute primarily to speech intelligibility.[2] Although consonants and vowels are distinct in their acoustic features,[3] when coarticulated with each other, the distinction is reduced. This is due to the spread of the acoustic features of one phoneme to adjacent phonemes during coarticulation.[4] Such coarticulation-induced changes in speech acoustics are known to influence speech perception.[5]

Earlier studies have demonstrated the coarticulatory effect of consonant on vowel,[6],[7],[8] vowel on consonant,[9],[10] vowel on vowel,[11],[12] as well as consonant on consonant.[13],[14] These coarticulatory effects are shown to differ across languages.[15],[16],[17],[18],[19],[20] For example, Dubno and Levitt[19] found the lowest recognition of stop consonants in the context of /u/ and the highest in the context of /a/ in English, whereas Singh and Black[20] found the highest recognition in the context of /i/ compared to /a/.

Phonemes can be identified even after their primary cues are removed, based only on the coarticulatory cues present in adjacent phonemes.[16],[21] This suggests that when primary cues are unavailable, listeners rely on the coarticulated cues for the perception of phonemes. Such dependency may be useful while perceiving speech in the presence of background noise, and coarticulatory cues may be imperative to cope with everyday listening challenges.

Individuals with sensorineural hearing loss (SNHL) are known to have reduced speech perception both in quiet[22] and in noisy situations.[23] The degree of impairment in speech perception is related to the degree, configuration, and duration of hearing loss.[24],[25] In general, the higher the degree of hearing loss, the greater the impact on speech perception,[26] owing to limited access to the acoustic cues of speech and distortion of those cues resulting from a decline in spectrotemporal processing abilities.[24],[25] Studies have shown that individuals with hearing impairment exhibit deficits in consonant recognition,[24] which contribute to problems in speech perception. In general, the perception of speech sounds that are brief, of low intensity, and of high frequency is most susceptible to hearing loss.[22],[23],[24],[25],[26]

Individuals with hearing loss are known to have impaired spectral[27],[28] as well as temporal resolution,[29],[30] with spectral resolution affected more than temporal resolution.[31],[32] As a result, they exhibit different acoustic cue weighting during the perception of speech compared to normal-hearing individuals.[33] Hedrick and Younger[34] compared normal-hearing listeners and listeners with hearing loss for their cue weighting in the perception of the place of articulation of fricatives. Listeners with hearing loss used spectral cues less efficiently and weighted the relative amplitude cue more heavily than the spectral cue. Similar inferences were drawn from the perception of stop consonants by Nelson, Nittrouer, and Norton.[33] These findings suggest that the perception of phonemes based on coarticulatory cues by individuals with hearing loss is likely to differ from that of their normal-hearing peers.

Hence, if the primary cue of the consonant is removed and listeners are presented only with the coarticulatory cue, the limited spectral and temporal resolution of individuals with hearing loss is likely to pose a challenge to consonant recognition. One would expect individuals with hearing loss to have reduced consonant recognition when perception is based only on coarticulatory cues; however, this notion needs to be scientifically studied. Zeng and Turner[35] reported that individuals with hearing loss rely mainly on primary cues and could not efficiently use the dynamic formant transition for consonant identification. Smits[21] studied the perception of stops, fricatives, and nasal consonants based on their coarticulatory cues, using the gating paradigm to identify the location and spread of coarticulatory features, and found that the spread of features was highly variable across consonants. This indicates that the role of coarticulatory cues derived from one class of phonemes cannot be generalized to other classes. Hence, the present study compared individuals with hearing loss and normal-hearing listeners for their consonant recognition abilities when provided with only the coarticulatory cues. Gating is a method for studying how continuously evolving acoustic information is perceptually evaluated at different time points.[21] Listeners hear truncated portions of speech signals, and the procedure thereby allows an assessment of the acoustic information available in different portions of the signal. Using this paradigm, an attempt was made to identify the time boundary up to which the two groups can utilize coarticulatory cues for the identification of consonants. This is the first study in which the role of coarticulatory cues in consonant identification is compared between individuals with hearing loss and normal-hearing listeners using a gating paradigm. The findings would shed light on the dynamics of speech perception in SNHL and, in turn, help plan new strategies to enhance speech perception in this population.


Methods


Participants

Thirty adults in the age range of 18 to 52 years (mean age: 35.15 years) participated in the study. Fifteen of them (5 males and 10 females) had normal hearing sensitivity (hearing thresholds ≤15 dB HL at octave frequencies between 250 and 8000 Hz) in both ears (normal hearing group). They had speech identification scores of 90% or more for a phonetically balanced word list[36] in quiet and 60% or more in the presence of speech noise at 0 dB SNR. They had type "A" tympanograms and acoustic reflexes present, indicating normal middle ear functioning.[37] Auditory brainstem responses and transient otoacoustic emissions were normal in all participants of the normal hearing group. The remaining 15 participants (9 males and 6 females) had bilateral SNHL of moderate degree (SNHL group). Their average hearing threshold for 0.5, 1, 2, and 4 kHz pure tones was 47.5 dB HL (SD = 12.2). The configuration of hearing loss was either flat or gradually sloping. They had postlingual onset of hearing loss, and their speech identification scores were proportional to the degree of hearing loss. None of them had tinnitus, ear pain, ear discharge, or giddiness. They had type "A" tympanograms in both ears. The auditory brainstem responses recorded in them showed no evidence of retrocochlear pathology, suggesting a cochlear origin of the hearing loss.

All the participants had normal speech-language abilities, as screened informally by a qualified speech-language pathologist. They were native speakers of Malayalam and hailed from the south Malabar region of Kerala. Informed consent was obtained from all participants before they took part in the study, and the method was approved by the institutional ethics committee.[38]

Test stimuli

The study assessed the participants' recognition of original and truncated consonant-vowel (CV) syllables. It aimed to probe consonant recognition based on coarticulatory cues across three manners of articulation, with two places of articulation within each manner. The vowel was /a/, while the consonants were /p/ (unvoiced bilabial plosive), /t/ (unvoiced retroflex plosive), /s/ (unvoiced alveolar fricative), /ʃ/ (unvoiced palatal fricative), /m/ (bilabial nasal), and /n/ (dental nasal). The syllables belonged to the phonetic inventory of Malayalam but were nonmeaningful. They were uttered by a male native speaker of Malayalam who was a professional orator and had normal hearing and speech-language abilities. He was instructed to utter them individually, clearly, and in a neutral tone. A word reference was given to the speaker before the recording of each syllable. Each syllable was uttered five times to allow selection of the best sample. The utterances were recorded using Adobe Audition software version 3 (Adobe Systems Incorporated, San Jose, CA, USA). The samples were inspected for clarity, both by listening and by examining the spectrogram, to choose the best one. The best samples of each syllable were concatenated and RMS normalized. These syllables were operationally termed the "original" tokens.

The original tokens were then truncated at predefined time points to generate the truncated tokens for the study. PRAAT software (version 6.1.40) was used for editing the stimuli. Using the forward-gating procedure, gates were placed every 10 ms from the onset of the stimulus. The number of gates depended on the duration of the consonant, and gates were marked up to the beginning of the steady state of the vowel. Specifically, there were 5 gates for /pa/ and /ta/, 16 gates for /sa/, 17 gates for /ʃa/, 10 gates for /ma/, and 18 gates for /na/. Utmost care was taken to truncate each syllable at the nearest zero crossing. The original syllables were then successively truncated, yielding as many truncated tokens as there were gates; in each truncated token, the portion of the signal prior to the gate was removed. This resulted in 6 original and 114 truncated tokens. [Figure 1] shows the truncation points on the waveform of the syllable /pa/.
Figure 1: The truncation points depicted in the waveform of the syllable /pa/.

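As an illustration of the forward-gating truncation described above, the sketch below generates successively truncated tokens from a recorded syllable. This is not the authors' PRAAT procedure; the Python soundfile library, the file names, and the zero-crossing search window are assumptions used only for illustration.

```python
import soundfile as sf  # assumed WAV I/O library; any reader/writer would do


def nearest_zero_crossing(signal, index, search=80):
    """Snap a sample index to the nearest zero crossing within +/- `search` samples."""
    lo, hi = max(1, index - search), min(len(signal) - 1, index + search)
    candidates = [i for i in range(lo, hi)
                  if (signal[i - 1] <= 0 < signal[i]) or (signal[i - 1] >= 0 > signal[i])]
    return min(candidates, key=lambda i: abs(i - index)) if candidates else index


def forward_gate(wav_path, n_gates, gate_ms=10.0):
    """Generate truncated tokens: token k removes everything before the k-th 10-ms gate."""
    signal, fs = sf.read(wav_path)            # assumes a mono recording
    gate_len = int(round(gate_ms * fs / 1000.0))
    tokens = []
    for k in range(1, n_gates + 1):
        cut = nearest_zero_crossing(signal, k * gate_len)
        tokens.append(signal[cut:])           # portion of the signal prior to the gate is removed
    return tokens, fs


# Hypothetical usage: 5 gates for /pa/, written out as separate truncated tokens
tokens, fs = forward_gate("pa_original.wav", n_gates=5)
for k, tok in enumerate(tokens, start=1):
    sf.write(f"pa_trunc_{k * 10}ms.wav", tok, fs)
```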


The primary interest of the study was to assess the temporal window up to which coarticulatory cues support recognition of the consonants. Therefore, only the gating conditions without the primary cues of the consonants were compared. In stop consonants, the gating conditions after the end of the release burst were considered; in nasals, the gating conditions after the end of the nasal murmur were considered; and in fricatives, the gating conditions after the end of the frication noise were considered for analysis. [Figure 2] shows the gates considered accordingly, with the primary cues removed. There were 4 such gates for /pa/ and /ta/, 6 gates for /sa/ and /ʃa/, 5 gates for /ma/, and 8 gates for /na/.
Figure 2: Representative illustration showing the 6 gates in the syllable /sa/ used to generate truncated tokens for studying coarticulatory perception.



Test procedure

All the audiological tests were carried out in a sound-treated two-room suite in which the ambient noise was within permissible levels (ANSI S3.1-1999). The participants were tested for their recognition of the original and truncated tokens. Each token was presented five times in random order, resulting in a total of 600 presentations (30 original and 570 truncated tokens). The stimuli were presented at each participant's most comfortable level through Paradigm software (version 2.5.0.68) and delivered through Sennheiser HD 449 circumaural headphones. A forced-choice recognition task was used: for stop consonants, participants had to choose among /p/, /t/, and /a/; for fricatives, among /s/, /ʃ/, and /a/; and for nasals, among /m/, /n/, and /a/. The possible response options (consonants) were displayed using a customized graphic user interface prepared in Paradigm. The participants were instructed to listen carefully and click on the consonant heard. The minimum interstimulus interval was 3 seconds, and the software was scripted such that the next stimulus was not presented until the participant chose a response. A score of "1" was assigned for every correct response and "0" for every wrong response. The responses were automatically saved by the Paradigm software. The total recognition score of each participant for each stimulus token was converted to a percentage correct score, and the group data were subjected to statistical analysis.
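Because each token was presented five times and every response was logged as 1 (correct) or 0 (wrong), the percentage correct score per token reduces to a simple aggregation. A minimal sketch follows, assuming the logged responses are exported to a CSV with hypothetical column names (subject, token, score); the actual Paradigm output format is not specified in the article.

```python
import pandas as pd

# Hypothetical export of the automatically saved responses: one row per presentation
responses = pd.read_csv("responses.csv")          # columns: subject, token, score (0 or 1)

# Mean of the five repetitions per token, expressed as percentage correct
percent_correct = (responses.groupby(["subject", "token"])["score"]
                   .mean()
                   .mul(100)
                   .rename("percent_correct")
                   .reset_index())
print(percent_correct.head())
```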

The recognition scores of each participant across these gates were subjected to logistic regression analysis using R software version 4.0.3. The best-fit regression line was derived from the analysis, and the truncation duration at which a 50% score would be obtained was extrapolated. The truncation duration in the present study was defined with reference to the first gate considered for analysis, which contains only the coarticulatory cue (C1, C2, C3, C4, C5, and C6, as shown in [Figure 2]), and was expressed in milliseconds. There was no primary cue after this first gate, and the assumption was that the coarticulatory cue begins from this time point. Accordingly, +10 ms means that the target score was obtained even after an additional truncation of 10 ms, and −10 ms means that the target score was achieved only when 10 ms of the primary cue was presented.
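The analysis itself was carried out in R, as stated above. The sketch below merely illustrates the same idea in Python with statsmodels: fit a logistic function to correct/incorrect responses against truncation duration and solve for the 50% point. The data here are made up purely for illustration, and the column names are assumptions.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Hypothetical per-trial data for one participant and one consonant:
# 'truncation_ms' is duration relative to the first coarticulation-only gate,
# 'correct' is 1 for a correct recognition and 0 otherwise.
df = pd.DataFrame({
    "truncation_ms": np.repeat([-10, 0, 10, 20, 30, 40], 5),
    "correct":       [1]*5 + [1]*5 + [1, 1, 1, 1, 0] + [1, 1, 0, 0, 0] + [0]*5 + [0]*5,
})

X = sm.add_constant(df["truncation_ms"])
fit = sm.Logit(df["correct"], X).fit(disp=0)

# Fitted model: logit(p) = b0 + b1 * t, so p = 0.5 where b0 + b1 * t = 0.
b0, b1 = fit.params["const"], fit.params["truncation_ms"]
t50 = -b0 / b1
print(f"Estimated truncation duration for 50% correct: {t50:.1f} ms")
```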


Results


[Figure 3] shows the mean and standard deviation of the percentage correct scores of the two groups of participants for the six consonants in the different gating conditions. In the figure, "0" on the X-axis indicates no truncation and represents the original token. The subsequent numbers indicate the truncated tokens, in which the length of truncation equals the token number × 10 ms; token 1 refers to a token in which the first 10 ms was truncated, while token 5 refers to one in which the first 50 ms was truncated. The figure shows that recognition scores decreased systematically with increasing truncation. The decrease in scores started with less truncation in stop consonants compared to nasals and fricatives. There were also differences in the mean recognition scores between the normal hearing and SNHL groups.
Figure 3: Mean and standard deviation of percentage correct scores of the normal hearing and SNHL groups for the six consonants (/p/, /t/, /m/, /n/, /s/, and /ʃ/) in different gating conditions.



[Figure 4] shows the mean and standard deviation of the truncation duration required to obtain 50% correct scores for the six consonants in the normal hearing and SNHL groups. The mean truncation duration required to obtain 50% correct scores was longer in the normal hearing group than in the SNHL group for all six consonants. The mean differences between the groups were larger for /ta/, /ma/, and /na/ than for /pa/, /sa/, and /ʃa/.
Figure 4: Estimated truncation duration to obtain 50% scores in the two groups for the six consonants



The estimated mean truncation duration at which the participants obtained 50% recognition scores was compared across consonants using repeated-measures ANOVA, with group as a between-subject factor. The results showed a significant main effect of consonant (F[5,24] = 6.950, P < 0.001, ηp² = 0.45) as well as of group (F[1,28] = 157.57, P < 0.001, ηp² = 0.88). There was a significant interaction between consonant and group (F[5,28] = 9.922, P < 0.001, ηp² = 0.52). Owing to this significant interaction, the effect was tested across consonants separately in each group and between the two groups separately for each consonant.

One-way repeated-measures ANOVA comparing the estimated truncation duration (resulting in a 50% recognition score) across the six consonants showed a significant main effect of consonant in the normal hearing group (F[5,10] = 15.72, P < 0.001, ηp² = 0.48) but not in the SNHL group (F[5,10] = 2.01, P = 0.161, ηp² = 0.09). In the normal hearing group, subsequent Bonferroni multiple comparisons showed that the mean truncation duration was significantly higher for /ta/, /ma/, and /na/ than for /pa/, /sa/, and /ʃa/ (P < 0.001). There was no significant difference among /pa/, /sa/, and /ʃa/ (P > 0.05), and no significant difference among /ta/, /ma/, and /na/.

Comparison between the two groups for each of the six consonants using independent t-tests showed that the truncation duration (resulting in a 50% recognition score) was significantly higher in the normal hearing group than in the SNHL group for /pa/ (t = 3.42, P < 0.005), /ta/ (t = 7.09, P < 0.005), /ma/ (t = 8.34, P < 0.005), /na/ (t = 6.26, P < 0.005), and /ʃa/ (t = 2.99, P < 0.005), but not for /sa/ (t = 1.44, P = 0.159). The truncation duration was close to zero in the SNHL group irrespective of the consonant, whereas in the normal hearing group it varied across consonants. The maximum group difference was seen for /na/.
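For readers wishing to reproduce this style of analysis, a minimal sketch of a mixed-design repeated-measures ANOVA and follow-up comparisons is shown below. The pingouin library, the CSV file name, and the column names are assumptions for illustration only; the article's own analysis was run with the software reported in the Methods.

```python
import pandas as pd
import pingouin as pg  # assumed statistics library with a mixed-ANOVA routine

# Hypothetical long-format table: one estimated 50% truncation duration (ms)
# per participant per consonant, with 'group' as the between-subject factor.
df = pd.read_csv("truncation_50pct.csv")   # columns: subject, group, consonant, duration_ms

aov = pg.mixed_anova(data=df, dv="duration_ms",
                     within="consonant", subject="subject",
                     between="group", effsize="np2")
print(aov)

# Follow-up comparisons, mirroring the article: consonants within each group
# (Bonferroni-corrected) and groups within each consonant.
posthoc = pg.pairwise_tests(data=df, dv="duration_ms",
                            within="consonant", subject="subject",
                            between="group", padjust="bonf")
print(posthoc)
```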


Discussion


The study probed the ability of normal-hearing individuals to utilize coarticulatory cues for the perception of consonants and tracked the temporal window up to which coarticulatory cues are useful for the recognition of different consonants. Stop consonants, fricatives, and nasals were studied. Comparison across gates revealed that consonant recognition scores decreased as a function of truncation of the primary cues of the consonants. However, even after the primary cues were removed, the listeners could recognize the consonants, suggesting that acoustic cues to the consonants are available in the following vowel, that is, that coarticulatory cues are present. The paradigm used to tap the role of coarticulatory cues in the recognition of consonants was similar to that used by Smits[21] and Wagner,[16] and the pattern of reduction in recognition scores observed in the present study is similar to that reported in their studies.[16],[21]

Consonants are generally weaker in amplitude than vowels,[1] due to which they are likely to be masked in the presence of background noise. Vowels, being higher in amplitude, have a lower probability of being masked, and therefore the coarticulatory cues in the adjacent vowel are likely to aid speech perception in noise. In the present study, there was evidence for the presence of coarticulatory cues for all the consonants, although the temporal window over which the coarticulatory cues were useful varied across consonants. Smits[21] had also shown evidence for the presence of coarticulatory cues in stop consonants, fricatives, and nasals, similar to the current findings. It is important to note that none of the consonants could be recognized with 100% accuracy based only on the coarticulatory cues. Therefore, one can infer that coarticulatory cues are not a substitute for the primary cue but provide redundant information for the recognition of consonants.

On comparing the perception of the various consonants based on their coarticulatory cues, the temporal window of coarticulatory spread was found to be maximum for nasals, followed by stops and fricatives. This suggests that nasal consonants would be more immune to background noise than stop consonants and fricatives; however, this notion needs to be experimentally validated. Although the temporal window of coarticulatory spread is greater for nasals, their recognition in background noise is poorer than that of fricatives,[39],[40] owing to the low-frequency spectra of nasals, which are easily masked by background noise. This suggests that the truncation of primary cues used here may not fully mimic the distortion of those cues that occurs in noisy backgrounds. The transition from a nasal consonant to the following vowel requires movement of the velum, which is slower than movements of the tongue and lips; this could be the reason for the greater spread of coarticulatory cues in nasal consonants. It is important to note that the extent of coarticulatory spread in this study is derived from recognition scores: any spread so derived indicates the perceptually useful coarticulatory cues and not the extent of spread of the acoustic cues themselves. The recognition of fricatives based on coarticulatory cues was very poor compared to stops and nasals, indicating that either the coarticulatory cues of fricatives are not available in the following vowel or they are not useful for the perception of fricatives. Smits[21] found that the temporal spread of coarticulatory cues extended up to 60 ms for stop consonants and up to 30 ms for nasals, whereas in the current study the spread was greater for nasals than for stop consonants. A possible reason for this difference is the language under study: while Smits[21] used syllables from the phonetic inventory of Dutch, the current study used syllables from the phonetic inventory of Malayalam, and Narne et al.[41] found greater perceptual weightage for low frequencies in Malayalam compared to English.

The study also probed the ability of individuals with SNHL to utilize coarticulatory cues for the recognition of consonants. Individuals with SNHL could not utilize coarticulatory cues as effectively as normal-hearing individuals when the primary cues were truncated, suggesting that they rely mainly on primary cues for the perception of consonants. Based on the findings in normal-hearing individuals, it was inferred that coarticulatory cues would be useful for speech perception in noise. If this notion is true, individuals with SNHL are likely to have greater difficulty perceiving speech in noisy backgrounds than normal-hearing listeners, owing to their inability to utilize the available coarticulatory cues. Further, unlike in normal-hearing individuals, there were no significant differences in the estimated temporal window of coarticulatory cues across consonants in individuals with SNHL. These findings strengthen the inference that individuals with SNHL are not able to utilize coarticulatory cues for the recognition of consonants. In both groups, the stimuli were presented at the most comfortable levels; therefore, differences in recognition scores cannot be attributed to differences in the audibility of the stimuli. One can speculate that poorer spectral and temporal resolution is the reason for their inability to utilize coarticulatory cues.[42]

The present study probed the utility of coarticulatory cues for the recognition of consonants in the CV context, which taps carryover coarticulatory cues. Future studies can probe the ability to utilize anticipatory coarticulatory cues using the gating paradigm. The vowel context may also influence the coarticulatory spread observed across consonants, and future studies can examine such vowel context effects.


Conclusions


Based on the current findings, it can be concluded that coarticulatory cues help in the perception of consonants in Malayalam in normal-hearing individuals, with the maximum advantage seen for nasal consonants. However, unlike normal-hearing individuals, individuals with SNHL fail to effectively utilize coarticulatory cues.

Acknowledgments

We wish to thank our Director, All India Institute of Speech and Hearing, for allowing us to conduct the study. We extend our sincere thanks to all our participants for their patient cooperation.

Financial support and sponsorship

Nil.

Conflicts of interest

There are no conflicts of interest.



 
References

1. Ladefoged P, Disner SF. Vowels and Consonants. Oxford: Blackwell; 2001. p. 1-91.
2. Fogerty D, Kewley-Port D, Humes LE. The relative importance of consonant and vowel segments to the recognition of words and sentences: Effects of age and hearing loss. J Acoust Soc Am 2012;132:1667-78.
3. Stevens KN. Toward a model for lexical access based on acoustic landmarks and distinctive features. J Acoust Soc Am 2002;111:1872-91.
4. Cooper FS, Delattre PC, Liberman AM, Borst JM, Gerstman LJ. Some experiments on the perception of synthetic speech sounds. J Acoust Soc Am 1952;24:597-606.
5. Stilp C. Acoustic context effects in speech perception. Wiley Interdiscip Rev Cogn Sci 2020;11:e1517.
6. Steinlen AK, Bohn OS. Consonantal context affects cross-language perception of vowels. In: International Conference of Phonetic Sciences; 2003. p. 2289-92.
7. Holt LL, Lotto AJ, Kluender KR. Neighboring spectral content influences vowel identification. J Acoust Soc Am 2000;108:710-22.
8. Nearey TM. Static, dynamic, and relational properties in vowel perception. J Acoust Soc Am 1989;85:2088-113.
9. Aravamudhan R, Lotto AJ, Hawks JW. Perceptual context effects of speech and nonspeech sounds: The role of auditory categories. J Acoust Soc Am 2008;124:1695-703.
10. Mann VA, Repp BH. Influence of vocalic context on perception of the [ʃ]-[s] distinction. Percept Psychophys 1980;28:213-28.
11. Ohman SE. Coarticulation in VCV utterances: Spectrographic measurements. J Acoust Soc Am 1966;39:151-68.
12. Fowler CA. Production and perception of coarticulation among stressed and unstressed vowels. J Speech Hear Res 1981;24:127-39.
13. Repp BH, Mann VA. Fricative-stop coarticulation: Acoustic and perceptual evidence. J Acoust Soc Am 1982;71:1562-7.
14. Repp BH, Mann VA. Perceptual assessment of fricative-stop coarticulation. J Acoust Soc Am 1981;69:1154-63.
15. Kalaiah MK, Bhat JS. Effect of vowel context on the recognition of initial consonants in Kannada. J Audiol Otol 2017;21:146-51.
16. Wagner A. Cross-language similarities and differences in the uptake of place information. J Acoust Soc Am 2013;133:4256-67.
17. Wagner A, Ernestus M, Cutler A. Formant transitions in fricative identification: The role of native fricative inventory. J Acoust Soc Am 2006;120:2267-77.
18. Crowther CS, Mann V. Native language factors affecting use of vocalic cues to final consonant voicing in English. J Acoust Soc Am 1992;92:711-22.
19. Dubno JR, Levitt H. Predicting consonant confusions from acoustic analysis. J Acoust Soc Am 1981;69:249-61.
20. Singh S, Black JW. Study of twenty-six intervocalic consonants as spoken and recognized by four language groups. J Acoust Soc Am 1966;39:372-87.
21. Smits R. Temporal distribution of information for human consonant recognition in VCV utterances. J Phon 2000;28:111-35.
22. Dubno JR, Dirks DD, Ellison DE. Stop-consonant recognition for normal-hearing listeners and listeners with high-frequency hearing loss. I: The contribution of selected frequency regions. J Acoust Soc Am 1989;85:347-54.
23. Trevino A, Allen J. Individual variability of hearing-impaired consonant perception. Semin Hear 2013;34:211-4.
24. Humes LE, Dirks DD, Bell TS, Kincaid GE. Recognition of nonsense syllables by hearing-impaired listeners and by noise-masked normal hearers. J Acoust Soc Am 1987;81:765-73.
25. Zurek PM, Delhorne LA. Consonant reception in noise by listeners with mild and moderate sensorineural hearing impairment. J Acoust Soc Am 1987;82:1548-59.
26. Boothroyd A. Auditory perception of speech contrasts by subjects with sensorineural hearing loss. J Speech Hear Res 1984;27:134-44.
27. Glasberg BR, Moore BC. Auditory filter shapes in subjects with unilateral and bilateral cochlear impairments. J Acoust Soc Am 1986;79:1020-33.
28. Souza P, Wright R, Bor S. Consequences of broad auditory filters for identification of multichannel-compressed vowels. J Speech Lang Hear Res 2012;55:474-86.
29. Kidd G Jr., Mason CR, Feth LL. Temporal integration of forward masking in listeners having sensorineural hearing loss. J Acoust Soc Am 1984;75:937-44.
30. Grose JH, Hall JW 3rd. Cochlear hearing loss and the processing of modulation: Effects of temporal asynchrony. J Acoust Soc Am 1996;100:519-27.
31. Davies-Venn E, Souza P. The role of spectral resolution, working memory, and audibility in explaining variance in susceptibility to temporal envelope distortion. J Am Acad Audiol 2014;25:592-604.
32. Reed CM, Braida LD, Zurek PM. Review of the literature on temporal resolution in listeners with cochlear hearing impairment: A critical assessment of the role of suprathreshold deficits. Trends Amplif 2009;13:4-43.
33. Nelson PB, Nittrouer S, Norton SJ. "Say-stay" identification and psychoacoustic performance of hearing-impaired listeners. J Acoust Soc Am 1995;97:1830-8.
34. Hedrick MS, Younger MS. Labeling of /s/ and [ʃ] by listeners with normal and impaired hearing, revisited. J Speech Lang Hear Res 2003;46:636-48.
35. Zeng FG, Turner CW. Recognition of voiceless fricatives by normal and hearing-impaired subjects. J Speech Hear Res 1990;33:440-9.
36. Kacker SK, Basavaraj V. Indian Speech, Language and Hearing Tests: The ISHA Battery-1990. Mysore: Indian Speech and Hearing Association; 1990.
37. Margolis RH, Heller JW. Screening tympanometry: Criteria for medical referral. Audiology 1987;26:197-208.
38. Basavaraj V, Venkatesan S. Ethical Guidance for Bio-Behavioral Research Involving Human Subjects. Mysore, India: All India Institute of Speech and Hearing; 2009.
39. Kalaiah MK, Thomas D, Bhat JS, Ranjan R. Perception of consonants in speech-shaped noise among young and middle-aged adults. J Int Adv Otol 2016;12:184-8.
40. Phatak SA, Allen JB. Consonant and vowel confusions in speech-weighted noise. J Acoust Soc Am 2007;121:2312-26.
41. Narne VK, Prabhu P, Thuvassery P, Ramachandran R, Kumar A, Raveendran R, et al. Frequency importance function for monosyllables in Malayalam. Hear Balance Commun 2016;14:201-6.
42. Winn MB. The Use of Acoustic Cues in Phonetic Perception: Effects of Spectral Degradation, Limited Bandwidth and Background Noise [dissertation]. University of Maryland; 2011.





 
