ABSTRACTS
Year : 2020  |  Volume : 34  |  Issue : 1  |  Page : 37-226

Abstract Proceeding of 52nd ISHACON-2020


Date of Web Publication: 06-Jul-2020


Source of Support: None, Conflict of Interest: None


DOI: 10.4103/0974-2131.288975


How to cite this article:
. Abstract Proceeding of 52nd ISHACON-2020. J Indian Speech Language Hearing Assoc 2020;34:37-226

How to cite this URL:
. Abstract Proceeding of 52nd ISHACON-2020. J Indian Speech Language Hearing Assoc [serial online] 2020 [cited 2020 Aug 11];34:37-226. Available from: http://www.jisha.org/text.asp?2020/34/1/37/289113




  Abstract - AO274: Spatial and Semantic Localisation and Auditory Stroop Test: Comparison of Young and Older Adults


Shubhaganga Dhrruva kumar1 & Asha Yathiraj2

1dshubhaganga94@gmail.com &2asha_yathiraj@rediffmail.com

1All India Institute of Speech and Hearing, Mysuru-570006

Introduction:

In the presence of conflicting stimuli, it has been noted that the reaction time of an individual increases. Such conflicting stimuli have been used in the 'Stroop test' (Stroop, 1992), which evaluates cognitive functions that require selective attention, automaticity, inhibitory processes, and executive control (Davidson, Zacks, & Williams, 2003). These cognitive resources have also been shown to influence localisation ability in individuals. Further, it has been reported that with advancing age, there is a decline in these cognitive abilities (Davidson, Zacks, & Williams, 2003; Dempster, 1992; Stoltzfus, Hasher, & Zacks, 1996; Zacks & Hasher, 1994). The Stroop effect has mainly been studied using visual stimuli.

It has been speculated that the semantic content of stimuli could either augment auditory spatial localisation performance or, in certain conditions, degrade or delay the localisation of stimuli (Loomis, Lippa, Klatzky, & Golledge, 2002; Muller & Bovet, 2002). Spatial localisation requires identification of the spatial location of the stimuli irrespective of their meaning. In contrast, semantic localisation requires responses to the meaning of the stimuli irrespective of their location. These abilities can be assessed by creating an auditory Stroop paradigm.

Need for Study:

It has been found that older adults are more susceptible to stimulus interference than young adults (Chiappe, Siegel, & Hasher, 2000; Wurm, Labouvie-Vief, Aycock, Rebucal, & Koch, 2004). This interference could affect their responses to auditory stimuli in daily life, such as orienting towards sound sources and understanding speech in noisy situations. Such a problem can be detected easily and time-efficiently using a simple auditory Stroop test that evaluates spatial and semantic localisation. The information would aid in better rehabilitation of affected individuals and in counselling their families, which in turn would help them function better and become more independent.

Aim & Objectives:

Aim: The study aimed to compare the responses of younger and older adults on a spatial localisation and semantic localisation task using an auditory Stroop test.

Objectives: Using an auditory Stroop test, compare:

  1. Responses of young adults and older adults in terms of response accuracy and reaction time on auditory spatial and semantic localisation tasks,
  2. Spatial and semantic localisation in terms of response accuracy and reaction time, within the young and older adults,
  3. Responses to congruent and incongruent stimuli for each participant group for spatial and semantic localisation in terms of response accuracy and reaction time.


Method:

The study was carried out using a standard-comparison design. The participants were selected using a purposive sampling technique.

Participants:

Two groups of participants with normal hearing were evaluated, consisting of 30 younger adults (18 to 30 years) and 30 older adults (58 to 70 years). None had a history of otological, neurological, or speech and language problems. Only those who had a score of 24 on the Mini Mental State Examination (Folstein, Folstein, & McHugh, 1975) were included in the study.

Material:

A software-based test, titled the Auditory Spatial and Semantic Localisation Stroop Test, was developed to present stimuli and obtain responses from the participants. The software presented four audio-recorded words representing spatial locations (right, left, front, & back) and analysed the responses, obtained on a touch-screen tablet, in terms of response accuracy and reaction time.

Procedure:

The stimuli were presented through the software installed on a personal laptop, the output of which was routed randomly to four loudspeakers located at 0°, 90°, 180° and 270°, at 65 dB SPL. For both the spatial and semantic localisation tasks, congruent stimuli (stimulus location and meaning matched) and incongruent stimuli (stimulus location and meaning unmatched) were used. The participants were asked to respond on the touch-screen tablet provided to them, to either the meaning or the location of the stimuli. The number of correct responses and their reaction times were calculated by the software.
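The congruent/incongruent manipulation described above can be expressed as a simple trial-generation routine. The sketch below is only illustrative: the word set and word-to-loudspeaker mapping follow the abstract, but the azimuth assignments, trial counts and data structure are assumptions, not details reported by the authors.

```python
import random

# Hypothetical assembly of auditory Stroop trials; the mapping of each word to a
# loudspeaker azimuth and the number of trials per condition are assumptions.
WORDS = ["right", "left", "front", "back"]                       # spoken words
SPEAKERS = {"front": 0, "right": 90, "back": 180, "left": 270}   # assumed azimuths (degrees)

def make_trials(n_per_condition=10, seed=1):
    rng = random.Random(seed)
    trials = []
    for word in WORDS:
        # Congruent: the word is played from the loudspeaker whose direction it names.
        for _ in range(n_per_condition):
            trials.append({"word": word, "azimuth": SPEAKERS[word], "condition": "congruent"})
        # Incongruent: the word is played from any other loudspeaker.
        for _ in range(n_per_condition):
            other = rng.choice([w for w in WORDS if w != word])
            trials.append({"word": word, "azimuth": SPEAKERS[other], "condition": "incongruent"})
    rng.shuffle(trials)
    return trials

print(make_trials()[0])   # e.g. {'word': 'back', 'azimuth': 90, 'condition': 'incongruent'}
```

In the spatial task a response would be scored against the trial's azimuth, and in the semantic task against the word, which is what makes the incongruent trials Stroop-like.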

A Shapiro-Wilk test of normality indicated that the data were not normally distributed. Hence, non-parametric statistics were used.
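A minimal sketch of this analysis path, a normality check followed by non-parametric group comparisons, is given below using hypothetical accuracy scores; the group sizes match the abstract (30 per group), but the values and effect sizes are invented for illustration only.

```python
import numpy as np
from scipy import stats

# Made-up spatial-localisation accuracy scores (%) for the two groups.
rng = np.random.default_rng(42)
young = rng.normal(92, 5, 30).clip(0, 100)
older = rng.normal(80, 9, 30).clip(0, 100)

# Shapiro-Wilk test of normality for each group.
for name, data in [("young", young), ("older", older)]:
    w, p = stats.shapiro(data)
    print(f"Shapiro-Wilk ({name}): W = {w:.2f}, p = {p:.3f}")

# Non-normal data -> Mann-Whitney U test for the between-group comparison.
u, p_u = stats.mannwhitneyu(young, older, alternative="two-sided")
print(f"Mann-Whitney U = {u:.1f}, p = {p_u:.4f}")

# A within-group comparison of two tasks (e.g. semantic vs spatial accuracy in
# the same participants) would use the paired test: stats.wilcoxon(task_a, task_b)
```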

Results & Discussion:

Comparison of responses of young and older adults for spatial and semantic localisation tasks: The mean and median of the response accuracy and reaction time were better in the young adults compared to the older adults, both for the spatial and semantic localisation tasks. A Mann-Whitney U test confirmed that the older adults had significantly poorer response accuracy (p < 0.01) and longer reaction times (p < 0.01) compared to the young adults for both spatial and semantic localisation.

These results confirm that older adults are more susceptible to auditory distractions. This is probably an indication of a decline in cognitive abilities such as attention, processing speed, and inhibitory functioning, as noted in the literature (Dempster, 1992; Wurm et al., 2004). Comparison of responses for spatial and semantic localisation tasks within each participant group: The mean and median scores were better for semantic localisation than spatial localisation in both young and older adults. A Wilcoxon signed-rank test substantiated that the young adults had significantly higher response accuracy [Z=2.27, p<0.05] for semantic localisation than spatial localisation. However, no significant difference was seen between semantic and spatial localisation for reaction time [Z=0.92, p>0.05]. In the older adults, both the response accuracy [Z=4.20, p<0.01] and reaction time [Z=3.30, p<0.01] were significantly better for semantic localisation compared to spatial localisation.

The findings indicate that spatial localisation declines faster in older adults compared to their use of semantics. It can be construed that spatial localisation abilities could serve as a predictor of semantic localisation abilities when looking for early signs of cognitive decline. Further, as was noted by Palef and Nickerson (1978), automatic processing of word meaning probably led to better semantic localisation responses than spatial localisation responses in older adults.

Comparison of responses to congruent and incongruent stimuli for spatial and semantic localisation within each participant group: The mean and median scores indicated that response accuracy was marginally higher for congruent than incongruent stimuli for both spatial and semantic localisation in both groups. Similarly, the reaction time was longer for the incongruent stimuli for both tasks and groups. Further, the Wilcoxon signed-rank test revealed that in the young adults, the congruent stimuli had significantly better response accuracy and reaction time than the incongruent stimuli for spatial localisation (p < 0.05) as well as semantic localisation (p < 0.05). Similar results were obtained in older adults for semantic localisation (p < 0.01) for both accuracy and reaction time. For spatial localisation, the congruent stimuli had better response accuracy (p < 0.01) than the incongruent stimuli, but there was no significant difference in reaction time (p > 0.05).

The better responses in the congruent condition reflect the ease of the activity compared to the incongruent condition. Further, in the literature, the increased reaction time for incongruent stimuli has been attributed to the automatic processing of irrelevant factors rather than the target stimuli (Palef & Nickerson, 1978; Yao, 2007).

Summary & Conclusion:

The study confirms that the Auditory Spatial and Semantic Localisation Stroop Test is sensitive in detecting the decline in spatial and semantic localisation with age. It is suggested that activities that retard such age-related decline be used as individuals grow older. As spatial localisation declines faster than semantic localisation, the former can be used as a marker of further cognitive decline.


  Abstract - AO275: Effect of Duration of Monaural Amplification on the Speech Identification Scores of the Non-Aided Ear


Asha Yathiraj1 & Amruthavarshini B2

1 asha_yathiraj@rediffmail.com &2 amrutha.3563@gmail.com

1All India Institute of Speech and Hearing, Mysuru-570006

Introduction:

The use of binaural amplification by those with symmetrical sensorineural hearing loss has been noted to provide significant improvement in speech identification (Feuerstein, 1992; McKenzie & Rice, 1990; Noble & Gatehouse, 2006). On the other hand, Silverman and Emmer (1993) and Silverman and Silman (1990) found that prolonged use of monaural amplification resulted in auditory deprivation in the non-aided ear. They observed that the non-aided ear of monaurally fitted individuals had a significant decrement in word identification scores after use of the device for durations ranging from 2 years to 15 years. No such decrement was reported in the aided ear. Similarly, Gelfand (1995) noted a decline in speech recognition scores in the non-aided ear of 6 individuals using monaural hearing aids. The decrement in scores was found to occur as early as 7 months or as late as 6 years after the use of monaural amplification. Unlike these studies, Azevedo, Santos, and Costa (2015) reported no major difference in performance between elderly monaural and binaural hearing aid users in quiet as well as in the presence of noise. However, they did not mention the duration of hearing aid usage after which their results were obtained.

Need for Study:

The review of literature indicates that deprivation in the non-aided ear of monaural hearing aid users commences at varying intervals after the use of the device. The deterioration was found to commence as early as 6 months and was seen to be present in those using the device up to 15 years. However, there is no consensus as to how early the deterioration could start. Also, it has not been studied whether the quantum of deprivation in the non-aided ear varies as a function of the duration of use of monaural amplification.

Aim & Objectives:

The study aimed to examine the effect of the number of years of monaural hearing aid use on the speech identification scores of the non-aided ear. The study also aimed to compare the speech identification scores before and after the use of monaural amplification, separately in the aided and the unaided ear as well as between the two ears before and after monaural amplification.

The objectives of the study were to examine the correlation and difference between the duration of hearing aid usage and speech identification scores; compare the pre and post-amplification speech identification scores in the aided and non-aided ears; and check the difference in speech identification scores between the aided and non-aided ears.

Method:

Using a purposive sampling technique, 39 adults aged 18 to 50 years, having bilateral acquired symmetrical sensorineural hearing loss (pure-tone averages = 30 dB HL to 70 dB HL) were studied. All the participants consistently used their prescribed hearing aid in one ear and did not alternate the device between the two ears. The duration of use of their device ranged from 1 to 9 years. None of them had any history of middle ear problems.

Prior to evaluating the participants, consent was obtained from them, adhering to the guidelines of the institute. The study was conducted using a pre-post design. Information regarding the hearing thresholds and pre-amplification speech identification scores of the participants, measured at the time of prescribing the hearing aid, was obtained from the case records of the participants maintained in the clinic. They were tested to determine their current speech identification scores in Kannada or English using phonemically balanced tests developed by Yathiraj and Vijayalakshmi (2005) and Yathiraj and Muthuselvi (2009), respectively. The testing was carried out in a sound-treated suite that met ANSI standards S3.1-1999 (R2013).

Results & Discussion:

A Shapiro-Wilk test indicated that the data were not normally distributed; hence, non-parametric statistics were used. The mean and median speech identification scores were similar in the left and right ears prior to the use of amplification. However, after the use of the hearing aid, the scores were better in the aided ear compared to the non-aided ear in all the participants.

Prior to analysing the impact of the duration of monaural amplification on the speech identification scores of the non-aided ear, the scores before and after the use of monaural amplification were analysed separately in each ear. Additionally, the scores of the two ears were compared before and after monaural amplification.

Comparison of the scores prior to and after the use of monaural amplification was done using a Wilcoxon signed-rank test. Significant improvement in speech identification scores was seen after the use of amplification in the ear in which the hearing aid was used [Z = 4.75, p < 0.01]. However, a significant decrease in scores was observed in their non-aided ear [Z = 2.49, p < 0.05].

The scores in the aided and the non-aided ear were also compared using a Wilcoxon signed-rank test, before and after the use of amplification. Before the use of amplification, no significant difference was seen between the two ears [Z = 1.32, p > 0.05]. In contrast, following the use of amplification, the speech identification scores were significantly poorer in the non-aided ear compared to the aided ear [Z = 4.63, p < 0.001].

The effect of the number of years of hearing aid use on the speech identification scores of the non-aided ear was determined by checking the correlation as well as the significance of the difference. A Spearman's correlation between the number of years of hearing aid use and the speech identification scores of the non-aided ear was not significant (r = 0.24, p > 0.05). Further, a Wilcoxon signed-rank test indicated no significant difference in the unaided ear scores between those who used the device for 1 year and those who used it for more than 1 year [Z = 0.21, p > 0.05].
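The correlational part of this analysis can be sketched in a few lines; the data below are hypothetical and serve only to show the shape of the computation, not the study's results.

```python
import numpy as np
from scipy import stats

# Hypothetical data: years of monaural hearing aid use vs. speech identification
# scores (%) in the non-aided ear. Values are illustrative only.
years_of_use = np.array([1, 1, 2, 3, 3, 4, 5, 6, 7, 9])
unaided_scores = np.array([72, 80, 76, 68, 74, 70, 78, 66, 72, 69])

rho, p = stats.spearmanr(years_of_use, unaided_scores)
print(f"Spearman rho = {rho:.2f}, p = {p:.3f}")
```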

The findings of the current study indicate that monaural amplification provided a significant amount of improvement in the speech identification scores in the aided ear, but resulted in a significant deterioration in performance in the non-aided ear. However, the duration of use of the device did not influence the speech identification scores. Thus, just one year of not using amplification in an ear can result in deterioration in speech identification similar to that seen in those whose ear remains unaided for longer durations. The reduction in performance in the non-aided ear can be attributed to auditory deprivation in those who consistently used the device in only one ear. Unlike what was noted by Gelfand (1995), the commencement of deterioration did not vary across the participants and was uniformly seen by 1 year of use of the device.

Summary & Conclusion:

The findings of the study imply that lack of amplification can lead to auditory deprivation in the non-aided ear of individuals with symmetrical hearing loss who use monaural amplification. This can occur as early as 1 year after the use of the device. Thus, it is recommended that monaural hearing aid users should not use their hearing aids consistently in only one ear. It is suggested that they alternate their monaural hearing aid between the two ears, as this has been found to retard such deprivation (Yathiraj & Amruthavarshini, 2019).


  Abstract - AO276: Simultaneous Acquisition of Click and Tone-Burst ABR: A Novel Method for Hearing Estimation at Audiometric Frequencies in a Quick Time


Sandeep M1, Sharath Kumar K. S2, Indira C P3, Sam Publius4 & Shreyank P. Swamy5

1msandeepa@gmail.com,2 sharathkumarks08@gmail.com,3 sharasyar@gmail.com,4sampublius52@gmail.com, &5 shreyankpswamy@gmail.com

1All India Institute of Speech and Hearing, Mysuru - 570006

Introduction:

Auditory brainstem responses (ABRs) evaluate the peripheral auditory system and the lower brainstem. Clinically, the ABR is the most preferred objective tool to estimate hearing thresholds when behavioral thresholds are not reliable, as in infants and malingering adults. Tone-burst evoked ABR (TBABR) is the gold-standard method to estimate frequency-specific hearing thresholds. Conventionally, TBABRs to multiple frequencies are recorded serially and separately for the two ears, and it takes 1 to 3 hours to complete the test in such a case (Karzon & Lieu, 2006; Stueve & O'Rourke, 2003). Therefore, scientists have attempted to modify the stimulus and acquisition paradigms of the ABR in order to estimate hearing sensitivity at the audiometric frequencies in a short time. Polonenko and Maddox (2019) invented the parallel ABR, which uses randomly timed tone-burst stimuli to simultaneously acquire ABRs to 5 frequencies in both ears. The ABRs recorded were found to be similar to ABRs recorded serially. This technique estimates frequency-specific hearing reliably in a short time, but it requires complex algorithms for stimulus presentation as well as response acquisition, and is therefore not feasible on existing clinical AEP equipment.

Mamatha and Maruthy (2016) recorded ABRs for tone bursts using a chained stimulus. They showed that ABRs to 4 frequencies could be recorded simultaneously without compromising the quality of the TBABRs. However, owing to the longer epoch required in this paradigm, the stimulus repetition rate is restricted to less than 10 per second, which in turn extends the testing duration.

Need for Study:

In order to reduce the testing time, the majority of audiologists use click-evoked ABR to estimate hearing sensitivity, as it gives a reliable estimate in the 1 to 4 kHz region. Further, in order to estimate hearing sensitivity at low frequencies, an additional TBABR for a 0.5 kHz tone burst is used. In the standard method, the click-evoked ABR and the 0.5 kHz TBABR are recorded serially. Estimating thresholds with these two ABRs in the two ears of a subject takes approximately 30 minutes. In order to improve time efficiency, we propose a new paradigm in which one can simultaneously record click-evoked and TB-evoked ABRs (SiCT ABR). SiCT can cut down the recording time by half and therefore will be of immense utility in clinical testing, particularly in infants and young children. If SiCT ABR is found comparable to the standard click and TB ABRs, in terms of latency and the thresholds estimated, it will be a potential tool for quick estimation of hearing sensitivity. Importantly, it will not require any changes to the existing hardware or software of most of the commercially available clinical AEP equipment. Therefore, the paradigm will be available for immediate implementation at no additional cost.

Aim & Objectives:

To compare the latencies and thresholds of SiCT ABR with those of the standard ABR.

Method:

Fifteen normal-hearing adults (20 to 30 years) participated in the study. Their hearing thresholds were less than 15 dB HL at the audiometric frequencies. They had normal middle ear functioning and did not have any relevant neurological or otological dysfunctions. They signed a written informed consent prior to their participation. The study conformed to the guidelines stipulated for bio-behavioral research in humans.

ABRs were recorded in two paradigms in each of the participants: a standard paradigm and the SiCT paradigm. In the standard paradigm, ABRs were recorded for clicks followed by TBABRs for 0.5 kHz tone bursts. ABRs were recorded at 90 dBnHL and the thresholds were tracked adaptively. In the SiCT paradigm, ABRs were elicited by a chained stimulus consisting of a click followed by a 0.5 kHz tone burst with an onset-to-onset interval of 20 ms. The stimuli were generated using Praat software (version 5.3.36). The click was 100 µs long, while the 0.5 kHz tone burst was 8 ms long (2-0-2 cycle envelope with a Hanning window). Accordingly, the total duration of the chained stimulus was 28 ms. Similar to the standard ABRs, SiCT ABRs were recorded at 90 dBnHL and the thresholds were tracked adaptively.

The output SPLs of the click and tone burst were measured using a sound level meter (Bruel & Kjaer, with a pressure-field 1” microphone type 4144) at standard settings. The stimuli were routed through ER3A insert receivers connected to the Biologic Navigator Pro EP system and were played at 110 dB SPL. The amplitudes of the two stimuli (click & tone burst) were then manipulated such that the peak SPL was 110 dB SPL.

The individual tone bursts were played to 20 normal-hearing individuals to estimate the hearing thresholds (in dB SPL) and thereby derive the correction factors for converting them into dBnHL. To generate the chained stimulus, the click and the 0.5 kHz tone burst were concatenated with an onset-to-onset interval of 20 ms.
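For illustration, the chained (SiCT) stimulus described above can be reconstructed offline as follows. The abstract reports that the stimuli were generated in Praat; this numpy version is only a sketch, and the sampling rate, click shape and amplitudes are assumptions.

```python
import numpy as np

FS = 48000   # sampling rate in Hz (assumed)

def make_sict_stimulus(fs=FS):
    # ~100 microsecond rectangular click (shape assumed)
    click_len = max(1, int(round(100e-6 * fs)))
    click = np.ones(click_len)

    # 8 ms, 0.5 kHz tone burst: 2 cycles rise, 0 plateau, 2 cycles fall,
    # approximated here with a full Hanning window over the 8 ms burst.
    f0 = 500.0
    burst_len = int(round(0.008 * fs))
    t = np.arange(burst_len) / fs
    tone = np.sin(2 * np.pi * f0 * t) * np.hanning(burst_len)

    # Concatenate with a 20 ms onset-to-onset interval -> ~28 ms total.
    onset_to_onset = int(round(0.020 * fs))
    stimulus = np.zeros(onset_to_onset + burst_len)
    stimulus[:click_len] += click
    stimulus[onset_to_onset:onset_to_onset + burst_len] += tone
    return stimulus

stim = make_sict_stimulus()
print(f"duration = {len(stim) / FS * 1000:.1f} ms")   # ~28 ms
```

The relative levels of the click and tone burst would still have to be scaled to the calibrated peak SPL described above before presentation.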

The ABRs were acquired using the Biologic Navigator Pro AEP system with impedance-matched insert receivers. The test rooms were air conditioned, acoustically shielded (permissible noise levels as per ANSI S-3, 1991) and electrically shielded. ABRs in the two paradigms were recorded with the standardised stimulus and acquisition parameters (Hall, 2007), except for the epoch, which was set to 43 ms to accommodate the click along with the TBABR.

The averaged ABRs were visually analysed to mark the presence of Jewett waves I, III and V. The ABRs were analysed by 3 audiologists experienced in the area of electrophysiology. The latencies of the Jewett waves at 90 dBnHL and the thresholds of the ABRs were compared between the standard and SiCT ABR paradigms. This was done separately for the click and 0.5 kHz ABRs.

Results & Discussion:

Results showed that there was no difference in the occurrence of the Jewett waves between the two paradigms. This means that if a particular Jewett wave could be recorded in the standard paradigm, it was also recorded in the SiCT paradigm.

Owing to the normal distribution of the data, paired t-tests were used to compare the latencies and thresholds elicited in the two paradigms. Comparison of the peak latency of wave V showed no significant difference between the two paradigms in click ABRs at 90 dBnHL (t=1.94, p=0.053), 60 dBnHL (t=0.3, p=0.76), 40 dBnHL (t=0.59, p=0.56), and 30 dBnHL (t=0.78, p=0.44). Similarly, there was no significant difference in the latency of wave V of the 0.5 kHz TBABR between the two paradigms at 90 dBnHL (t=0.42, p=0.68), 60 dBnHL (t=1.86, p=0.58), 40 dBnHL (t=1.02, p=0.34), and 30 dBnHL (t=0.16, p=0.87). The findings indicate that the latencies of the ABRs elicited in the two paradigms are not different and therefore support the use of SiCT ABR for time efficiency.

ABR thresholds elicited using the two paradigms were also compared using paired t-tests. Results showed that there was no significant difference between the two paradigms for the click (t=0.10, p=0.93) as well as the TBABRs (t=1.50, p=0.16). Therefore, the thresholds determined using the SiCT paradigm are as accurate as those of the standard ABRs.

Summary & Conclusion:

The present study developed a new paradigm to simultaneously record click and 0.5 kHz TBABRs. The study provides strong evidence that the wave V latencies and ABR thresholds elicited with the SiCT ABR are similar to those of the standard ABRs. The SiCT paradigm records both click and TBABRs in half the time of the standard ABR paradigm. Therefore, the SiCT paradigm is recommended as a time-efficient hearing estimation tool for clinical set-ups, and it is feasible for immediate implementation on most clinical AEP equipment without any additional cost.


  Abstract - AO277: Audiovestibular Functioning of Post-Menopausal Females with Osteoporosis and Osteopenia


Manisha K1, Sanjay Kumar2 & Anuradha3

1manishatherapist1978@yahoo.co.in,2sanjaymunjal1@hotmail.com,3anuradha2ks@yahoo.com

1Post-Graduate Institute for Medical Education and Research, Chandigarh - 160012

Introduction:

Osteoporosis, also called “a silent disease”, remains asymptomatic until its advanced stages. Data from a World Health Organization (WHO) survey showed that these disorders are regarded as the second most critical health problem worldwide, following cardiovascular disease. A recent survey from 2017 has shown that 29% of the Indian population has osteoporosis, i.e., about 50 million individuals have this problem.

Although osteoporosis and osteopenia can strike at any age, they are most common among the older population, especially postmenopausal women. Consequences associated with this pathology include body pain, fractures, hearing loss, and balance disorders. Hearing loss in subjects with osteoporosis and osteopenia can be conductive, sensorineural or mixed, and may be due to otosclerosis, fracture of the ossicles, or neural degeneration. Some studies have demonstrated that patients with osteoporosis might present with balance disorders along with hearing loss. A disintegration is observed between the sensory information from the inner hair cells and the information from the eyes and joints.

Need for Study:

There is a dearth of clinical data in the literature related to the audiovestibular functioning of subjects with osteoporosis and osteopenia, especially postmenopausal women.

Aim & Objectives:

The present study was planned with the aim of evaluating the audiovestibular functions of postmenopausal patients having osteopenia and osteoporosis.

Objectives:

  1. To study the prevalence of hearing loss in postmenopausal patients with osteoporosis and osteopenia in comparison to a control group.
  2. To compare the hearing status of postmenopausal patients with osteoporosis, patients with osteopenia, and a control group.
  3. To assess and compare the vestibular function of postmenopausal patients with osteoporosis, patients with osteopenia, and a control group.


Method:

The present study was a quantitative analysis of audiological and vestibular test data, using a deductive analytical research approach with a cross-sectional design, in postmenopausal females with osteoporosis and osteopenia.

The study was conducted at the Speech and Hearing Unit, ENT Department, PGIMER, on three postmenopausal groups formed according to the results of bone mineral density (BMD) measurements: Group 1 (osteoporosis), Group 2 (osteopenia) and Group 3 (control) consisted of 23, 25 and 28 subjects respectively, in the age range of 50-66 years.

The inclusion criteria were: newly diagnosed osteoporosis or osteopenia, no previous history of hearing loss or middle ear pathology, and no obvious history of noise exposure, noise trauma or head injury.

Results & Discussion:

Pure tone audiometry:

Three pure-tone averages (PTAs) were computed (PTA1: 500 Hz, 1000 Hz and 2000 Hz; PTA2: 4000 Hz, 8000 Hz and 10000 Hz; PTA3: 12.5 kHz, 16 kHz and 18 kHz) using the ANSI S3.21-1978 (R-1992) procedure.
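As a small worked example of how these averages are formed, the sketch below computes PTA1-PTA3 from an audiogram; the threshold values are made up for illustration and are not data from the study.

```python
# Frequencies (Hz) entering each pure-tone average, as defined above.
PTA_BANDS = {
    "PTA1": [500, 1000, 2000],
    "PTA2": [4000, 8000, 10000],
    "PTA3": [12500, 16000, 18000],
}

def pure_tone_averages(thresholds_db_hl):
    """thresholds_db_hl: dict mapping frequency (Hz) -> threshold (dB HL)."""
    return {name: sum(thresholds_db_hl[f] for f in freqs) / len(freqs)
            for name, freqs in PTA_BANDS.items()}

# Hypothetical audiogram of one ear (dB HL); values are illustrative only.
example_audiogram = {500: 15, 1000: 20, 2000: 25, 4000: 45, 8000: 55,
                     10000: 60, 12500: 70, 16000: 75, 18000: 80}
print(pure_tone_averages(example_audiogram))
# -> {'PTA1': 20.0, 'PTA2': 53.3..., 'PTA3': 75.0}
```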

In the osteoporosis group, PTA1 showed that 91.3% (21 subjects) had normal hearing and 8.69% (2 subjects) had mild SN hearing loss. PTA2 showed mild SN hearing loss in 34.7% (8 subjects), moderate SN hearing loss in 13.04% (3 subjects) and moderately severe SN hearing loss in 52.17% (12 subjects). PTA3 showed moderately severe SN hearing loss in 36.36% (8 subjects) and severe SN hearing loss in 65.21% (15 subjects).

In the osteopenia group, PTA1 showed that 96% (24 subjects) had normal hearing and 4% (1 subject) had mild SN hearing loss. PTA2 showed that 16% (4 subjects) had normal hearing, 64% (16 subjects) had mild SN hearing loss and 20% (5 subjects) had moderate SN hearing loss. PTA3 revealed moderately severe SN hearing loss in 72% (18 subjects) and severe SN hearing loss in 28% (7 subjects).

Comparison of the osteoporosis, osteopenia and control groups on the pure-tone averages (PTA) and speech audiometry, including speech reception thresholds (SRT) and speech discrimination scores (SDS), was done by applying ANOVA. Results indicated that the pure-tone average thresholds and speech reception thresholds were significantly different between the three groups. The osteoporosis group had the poorest hearing sensitivity at all frequencies, followed by the osteopenia group and then the control group.

Otoacoustic emissions: TEOAEs were present in 30% (7 subjects) of the osteoporosis group and 32% (8 subjects) of the osteopenia group. DPOAEs were present in 34.78% (8 subjects) of the osteoporosis group and 44% (11 subjects) of the osteopenia group, using a criterion of an SNR of 6 dB at three consecutive frequencies.

Vestibular evoked myogenic potentials (VEMP): In the osteoporosis group, cVEMP was affected in 95.65% of subjects and oVEMP in 87.33%. In the osteopenia group, cVEMP was affected in 68% of subjects and oVEMP in 28%, taking a deviation of more than ±2 standard deviations from the control group values as indicating affected vestibular function.

In the right ear, the mean latencies of cVEMP peaks P1 and N1 were 20.20 ms and 28.80 ms, 14.82 ms and 22.38 ms, and 13.10 ms and 21.92 ms for the osteoporosis, osteopenia and control groups, respectively. In the left ear, the corresponding mean latencies were 19.57 ms and 27.70 ms, 15.60 ms and 23.28 ms, and 13.41 ms and 22.34 ms.

In the right ear, the mean latencies of oVEMP peaks N1 and P1 were 13.56 ms and 18.10 ms, 11.22 ms and 19.56 ms, and 10.90 ms and 17.06 ms for the osteoporosis, osteopenia and control groups, respectively. In the left ear, the corresponding mean latencies were 13.32 ms and 19.03 ms, 11.56 ms and 19.38 ms, and 11.06 ms and 18.32 ms. Significant differences in VEMP latencies (p<0.01) and amplitudes (p<0.01) were observed between the three groups: the osteoporosis group showed the longest latencies and smallest amplitudes, followed by the osteopenia group and then the control group.

Discussion:

Audiological tests:

The findings of the present study demonstrated that both the osteoporosis and osteopenia groups had hearing deficits in both the conventional and high-frequency ranges, along with affected OAEs. These findings are indicative of hair cell damage occurring at all frequencies. The results of the present study are in line with the studies conducted by Henkin (1972), Oghan (2012), Ozkiris (2013) and Kim (2016).

Menopause leads to a decline in estrogen levels. Estrogen reduces the rate of bone loss by inhibiting osteoclastic activity. Moreover, loss of BMD is directly linked to osteoporosis. Studies have shown that an imbalance between bone formation and resorption leads to a decrease in bone mineral and loss of BMD in osteoporotic and osteopenic patients.

Vestibular tests:

Results illustrated that cVEMP and oVEMP were affected most in the osteoporosis group, followed by the osteopenia group and then the control group. The current analysis showed significantly more abnormalities in these groups compared to control individuals. A feature observed in both osteopenia and osteoporosis is the decrease in calcium, which might affect both the functioning of sensory structures in the peripheral vestibular system and neural transmission; disruption of these functions results in hearing loss, vertigo, imbalance and many other dysfunctions. Vestibular functions may also be affected because of the role of estrogen in maintaining calcium homeostasis via coupled remodelling in postmenopausal women. Ekblad (2000), Angelico (2014), Carol Li (2015), and Bigelow (2016) also found a positive correlation between reduced vestibular function and low bone mineral density.

Summary & Conclusion:

Osteoporosis and osteopenia are risk factors for vestibular dysfunction and hearing deficits. There is a high incidence of auditory deficits in postmenopausal women with osteoporosis and osteopenia. The hearing loss is greater at high frequencies and manifests as difficulty in speech discrimination. Hearing evaluation of these individuals should be done for early detection and intervention, to improve their quality of life. Vestibular dysfunction may not be manifest in the early stages in postmenopausal women with osteoporosis and osteopenia, and VEMP can help in its early detection. Thus, postmenopausal women having osteopenia and osteoporosis should undergo audiological and vestibular testing before treatment and periodically thereafter for monitoring of their hearing and vestibular status.


  Abstract - AO281: Auditory Evoked Potentials as a Yardstick for Tinnitus


Ishita das1, Nilanjan Paul2, Indranil Chatterjee3 & Rima Das (Datta)4

1ishita0401@gmail.com, 2paulnilanjan2@gmail.com, 3inchat75@gmail.com,4rimadatta@gmail.com

1Ali Yavar Jung National Institute of Speech and Hearing Disabilities (Divyangjan), ERC, Kolkata - 700090

Introduction:

Tinnitus is presently viewed as an abnormal, conscious auditory percept reflecting multiple levels of neuronal dysfunction/dyssynchrony involving the peripheral and/or central nervous system. Most models and theories proposed for central, subjective tinnitus predict involvement of higher-order auditory functions. The neurophysiological model of Jastreboff describes distressing tinnitus as reflecting a four-stage mechanism: generation of peripheral neuronal activity, detection and perception in the subcortical and cortical auditory areas respectively, and a sustained activation of the auditory-related limbic and autonomic nervous systems. Shulman and Goldstein proposed an algorithm-based Final Common Pathway model of tinnitus involving the neuroanatomical substrates of the sensory, affect and psychomotor components of an aberrant auditory stimulus. It postulates the involvement of, and a complex interaction between, the brainstem, cochlear nucleus, olivocochlear bundle, inferior colliculus, medial geniculate body, intralaminar thalamic nuclei, parabrachial nucleus, and the primary ascending reticular activating formation of the lemniscal system to the thalamus. Hyper-/depolarization of GABA-influenced thalamic activity results in thalamocortical oscillations and a synchronous signal at the cortex. Reciprocal innervation from the thalamus to the medial temporal lobe system, including the amygdala and hippocampus, comprises an endogenous system hypothesized to establish a paradoxical memory for the aberrant auditory sensation (tinnitus), with a reciprocal interaction with the thalamus. These models also highlight the reduction in auditory masking and unequivocally reflect the importance of the auditory thalamo-cortical tract and its connections with the limbic and autonomic nervous systems in the tinnitus percept.

The Middle Latency Response (MLR) is postulated to be generated by both primary and non-primary auditory thalamo-cortical pathways; although Pa, Pb, Na, and Nb have slightly different generator sites, overall they represent the temporoparietal auditory cortex. Studies have hypothesized the MLR to be a highly sensitive indicator of central auditory function, including associated areas such as the limbic system and the reticular formation.

Common approaches for tinnitus management include Tinnitus Retraining Therapy (TRT), tinnitus masking paradigms and the recently proposed medical-audiological approach of Tinnitus Targeted Therapy (TTT), most of which, at least as a part of their regimen, target reduction of the perception and interpretation of the aberrant auditory sensation at the cortical level. Treatment efficacy has generally been assessed subjectively using checklists such as the Tinnitus Handicap Inventory (THI).

Objective assessment of tinnitus severity, prognosis and the efficacy of various tinnitus treatment options has been attempted over the years. However, studies such as those of Barnea et al. (2009), Milloy et al. (2017) and Helga et al. found no consistent ABR abnormalities within the tinnitus population. It is logical that the MLR might provide objective information about the area most importantly involved in the tinnitus percept, namely the thalamo-cortical tract; however, studies in this regard are sparse in the literature. A promising outcome was provided by Singh et al. (2011), who compared ABR, MLR and OAE in normal-hearing patients with and without tinnitus and found a significantly enlarged Pa-Na amplitude.

Extending the principle to hyperacusis, Formby et al. (2017) suggested the Pa latency at 2000 Hz to be a promising objective indicator of hyperacusis treatment effects. In a similar study, Filha et al. (2015) found that workers with normal hearing thresholds exposed to occupational noise, with and without tinnitus, presented altered MLAEPs, suggesting impaired transmission of neuroelectrical impulses along the cortical and subcortical auditory pathways. Also, individuals with noise-induced tinnitus presented more alterations (although not statistically significant) in the MLAEP than individuals without tinnitus.

Need for Study:

Objective parameterization of tinnitus is extremely important for accurate prognostic predictions as well as objective quantification of the tinnitus symptom. Theories posit hyperactivity in the thalamocortical tracts to be important in the perception of tinnitus. Concomitantly, several MLR components are hypothesized to be generated in the thalamocortical tract and the primary auditory cortex, suggesting the MLR's utility in the objectification of tinnitus. However, the potential of the MLR for such a role has not yet been empirically proven. Studies are thus needed to explore the utility of the MLR in this regard.

Aim & Objectives:

  1. To evaluate the AMLR as a possible physiological measure of tinnitus, this study aims to investigate whether increased AMLR amplitudes of Pa and Na are characteristic of individuals with severe tinnitus as opposed to individuals without tinnitus. The study would thus test the hypothesis that individuals with severe tinnitus have significantly higher Na-Pa and Pa-Nb amplitudes compared to a control group.
  2. With the further aim of exploring the AMLR as a prognostic indicator, it would test the hypothesis that there is a significant decrease in the Na-Pa and Pa-Nb amplitudes after successful TRT in subjects with tinnitus.


Method:

Instrumentation: Tinnitus Handicap Inventory (THI); Madsen Itera II pure-tone audiometer (Otometrics); Madsen Zodiac immittance audiometer; Biologic Navigator Pro auditory evoked potentials system.

Subjects: An experimental group of 30 patients with a complaint of debilitating tinnitus but with no complaint of hearing loss was taken up for the study. Hearing loss and middle ear pathology, if any, were ruled out by pure-tone audiometry and immittance audiometry. A control group of age-matched normal subjects was also taken.

Method: Tinnitus loudness and pitch matching were done using conventional psychoacoustic balancing procedures. The severity of tinnitus was assessed using the Tinnitus Handicap Inventory (THI). Only patients having a moderate to severe degree of subjective, central tinnitus were taken up for the study. Each patient was assessed for maskability using Feldmann's original test of maskability, and only patients with a positive prognosis were selected. The Middle Latency Response (MLR) was then recorded using EAR-3A insert earphones, with 100 µs click and 500 Hz tone-burst stimuli (for Pb enhancement), at a rate of less than 5.1/sec at 70 dBnHL; 200 sweeps were averaged with monaural presentation.

Each patient underwent a Tinnitus Retraining Therapy (TRT) regimen comprising regular sound therapy sessions and counselling sessions with home management strategies for 2 weeks, or until the subjective perception of tinnitus reduced to 10-20% of the pre-therapy status. The entire audiological test battery, including the MLR, was repeated post-therapeutically. The means of the Pa-Na and Pa-Nb amplitudes of the pre-therapeutic tinnitus group and the control group, as well as of the pre- and post-therapeutic recordings, were compared for significant differences using one-tailed directional hypothesis testing.
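The abstract does not name the specific statistical test; the sketch below illustrates one plausible reading of the one-tailed comparisons, an independent-samples contrast for tinnitus vs. control and a paired contrast for pre- vs. post-TRT, using invented amplitude values.

```python
import numpy as np
from scipy import stats

# Hypothetical Pa-Nb amplitudes (microvolts); all values are illustrative only.
pa_nb_control       = np.array([0.78, 0.65, 0.82, 0.70, 0.91])
pa_nb_tinnitus_pre  = np.array([1.25, 1.40, 1.10, 1.55, 1.32])
pa_nb_tinnitus_post = np.array([0.95, 1.05, 0.88, 1.20, 1.01])

# One-tailed independent comparison: is the pre-therapy tinnitus amplitude
# larger than that of the controls?
t_ind, p_ind = stats.ttest_ind(pa_nb_tinnitus_pre, pa_nb_control,
                               alternative="greater")

# One-tailed paired comparison: did the amplitude decrease after TRT?
t_rel, p_rel = stats.ttest_rel(pa_nb_tinnitus_pre, pa_nb_tinnitus_post,
                               alternative="greater")

print(f"tinnitus vs control: t = {t_ind:.2f}, one-tailed p = {p_ind:.3f}")
print(f"pre vs post TRT:     t = {t_rel:.2f}, one-tailed p = {p_rel:.3f}")
```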

Results & Discussion:

  1. The Pa-Nb amplitude was significantly larger in the experimental group than in the control group (at the 95% confidence level).
  2. The Na-Pa amplitude was also larger, but the difference did not reach significance.
  3. There was a significant reduction (at the 95% confidence level) of both the Na-Pa and Pa-Nb amplitudes in the post-therapeutic data.


With respect to normative values in the literature, the latencies of the peaks were not altered. However, the absolute amplitudes of Pa, Na and Nb, as well as the Na-Pa and Pa-Nb amplitudes, were larger than the literature norms. Thus, the results were in consonance with the existing models of tinnitus, which highlight the role of the thalamo-cortical tracts and associated areas in the tinnitus percept. If replicable, at least the Pa-Nb amplitude can be used for objective quantification of tinnitus as well as for objectively quantifying prognosis.

Summary & Conclusion:

This demonstrates the compatibility of the findings with the present neurophysiological models of tinnitus, in which the thalamo-cortical tract plays an important role, and highlights the utility of the MLR in objectively defining tinnitus. In future, clinical protocols can be developed using the AMLR to monitor tinnitus management. The comparable utility of the LLR and ERPs may also be studied.


  Abstract - AO282: Relative Efficacy of Veria and Mastoidectomy Techniques of Cochlear Implantation in Preservation of Sound-induced Saccular Responses


Niraj Kumar Singh1, Sachchidanand Sinha2, Prawin Kumar3, Nirnay Kumar4 & Sudhir Kumar5

1niraj6@gmail.com,2sachidanand.sinha5@gmail.com,3prawin_audio@rediffmail.com, 4nirnaykeshree@gmail.com, &5sud.cri99@gmail.com

1All India Institute of Speech and Hearing, Mysuru - 570006

Introduction:

Nearly 2.1% of the entire population and 1.2% of children under the age of 6 years have significant permanent hearing loss (National Sample Survey Organization, 2011). A high percentage of them have severe-to-profound hearing loss, which makes them candidates for cochlear implantation. With the central and several state governments’ initiative of providing free cochlear implants (CI) to those fulfilling their set criteria, the country is seeing a recent surge in the number of CI recipients below 6 years of age. However, there is a growing concern regarding the effects of cochlear implantation on vestibular function (Psillas et al., 2014; Xu et al., 2014; Rah et al., 2016).

Need for Study:

CI surgery has been found to cause vestibular deficits in many individuals (El-Karaksya et al., 2019; Korsager et al., 2017; Nordfalk et al., 2016; Psillas et al., 2014; Rah et al., 2016). However, only a few studies have examined the effect of CI surgery on the saccule, the organ closest to the cochlea and to the site of electrode insertion (El-Karaksya et al., 2019; Jin et al., 2006; Nordfalk et al., 2016; Psillas et al., 2014; Todt et al., 2008; Xu et al., 2015), and even fewer have investigated the effects in children (El-Karaksya et al., 2019; Jin et al., 2006; Psillas et al., 2014; Xu et al., 2015). While studies on cervical vestibular-evoked myogenic potentials (cVEMP) in children have reported a reduction in the response rate and peak-to-peak amplitude of cVEMP following CI surgery, they used an unrectified cVEMP recording technique, which results in highly variable and less reliable cVEMP recordings owing to the variation in electromyographic (EMG) activity across recordings (McCaslin et al., 2013, 2014). Further, both the standard mastoidectomy approach and round-window electrode insertion techniques were found to affect vestibular function equally (Klenzner et al., 2004; Kluenter et al., 2010; Korsager, Schmidt et al., 2017b; Nordfalk et al., 2016; Todt et al., 2008). This might be due to extensive drilling of the mastoid in both approaches. A surgical technique named the ‘Veria technique’ uses canal-wall lifting rather than mastoid drilling and has been considered preferable to mastoid-drilling approaches owing to less blood loss, shorter surgical duration, faster recovery, better preservation of facial nerve function and less bone trauma (Hans, 2016; Hans & Prasad, 2015; Kiratzidis et al., 2002; Shankar, 2015). However, it is not known whether or not the Veria technique also causes saccular damage.

Aim & Objectives:

Therefore, the present study aimed to compare the saccular function before and after surgery in children undergoing cochlear implantation using the mastoidectomy approach (MA) and the Veria technique (VT).

Method:

Sixty-three children (age range = 3-8 years) fulfilling the cochlear implant candidacy criteria underwent unilateral cochlear implantation using the MA (n=43) and the VT (n=20). All participants received a Nucleus Freedom implant with a straight electrode array. The surgery using the MA involved mastoidectomy, posterior tympanotomy and cochleostomy, whereas that using the VT involved canal-wall lifting and cochleostomy. Electromyogram (EMG)-monitored, pre-stimulus-rectified cVEMPs were obtained a day before the surgery (pre-implant stage), on the day after the device switch-on (switch-on stage) and 4 months after the device switch-on (follow-up stage) using the Neuro-audio evoked potential system with calibrated ER3A insert earphones. The non-inverting electrode was placed at the upper one-third of the sternocleidomastoid muscle, the inverting electrode at the sterno-clavicular junction and the ground on the forehead. Tone bursts of 500 Hz were presented at 125 dB peSPL using a stimulation rate of 5.1 Hz. The responses were band-pass filtered (10-1500 Hz), amplified 5000 times and averaged over 150 stimuli.
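To make the pre-stimulus-rectified (EMG-normalised) amplitude measure concrete, the sketch below scores a single averaged cVEMP epoch. The epoch layout, sampling rate and peak-search windows are assumptions for illustration, not the parameters used in the study.

```python
import numpy as np

FS = 10000          # sampling rate in Hz (assumed)
PRE_STIM_MS = 20    # assumed pre-stimulus window used for the EMG estimate

def rectified_p1n1_amplitude(epoch_uv, fs=FS, pre_stim_ms=PRE_STIM_MS):
    """P1-N1 peak-to-peak amplitude of one averaged cVEMP epoch (stimulus at
    the end of the pre-stimulus window), normalised by mean rectified
    pre-stimulus EMG."""
    n_pre = int(fs * pre_stim_ms / 1000)
    pre_stim, post_stim = epoch_uv[:n_pre], epoch_uv[n_pre:]

    emg_level = np.mean(np.abs(pre_stim))     # mean rectified EMG activity

    # Assumed search windows: P1 around 13-23 ms, N1 around 23-33 ms post-stimulus.
    t_ms = np.arange(post_stim.size) / fs * 1000
    p1 = post_stim[(t_ms >= 13) & (t_ms <= 23)].max()
    n1 = post_stim[(t_ms >= 23) & (t_ms <= 33)].min()

    return (p1 - n1) / emg_level              # unitless, EMG-normalised amplitude

# Example call with synthetic data (80 ms of noise, in microvolts).
rng = np.random.default_rng(0)
print(round(rectified_p1n1_amplitude(rng.normal(0, 5, int(FS * 0.08))), 2))
```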

Results & Discussion:

Non-normal data distribution (p<0.05) on the Shapiro-Wilk test of normality necessitated the use of Friedman's test and the Wilcoxon signed-rank test for within-group comparisons and the Mann-Whitney U test for between-group comparisons. The response rates were compared between groups using the test for equality of proportions. Both surgery groups were comparable on all cVEMP response parameters at the pre-implant stage (p>0.05). The response rates of cVEMP declined by nearly 40% at the switch-on and follow-up stages in both groups. In the MA group, the peak-to-peak amplitudes were significantly smaller in the CI ear than the non-CI ear at the switch-on stage [Z=3.73, p<0.001] and the follow-up stage [Z=2.83, p=0.005]. Further, significantly smaller peak-to-peak amplitudes were obtained at the switch-on stage [Z=-4.16, p<0.001] and the follow-up stage [Z=-2.98, p=0.003] than at the pre-surgery stage, but no significant difference was observed between the switch-on and the follow-up stages [Z=1.58, p=0.113] in the CI ear. In the VT group, like the MA group, the peak-to-peak amplitudes were significantly smaller in the CI ear than the non-CI ear at the switch-on stage [Z=2.93, p=0.003] and the follow-up stage [Z=2.90, p=0.004]. Further, significantly smaller peak-to-peak amplitudes were found at the switch-on stage [Z=-2.94, p=0.003] and the follow-up stage [Z=-2.98, p=0.003] than at the pre-surgery stage, but not between the switch-on and the follow-up stages [Z=-0.59, p=0.553] in the CI ear. Thus, the response rates and peak-to-peak amplitudes of cVEMP reduced after the CI surgeries, which complements the findings of previous studies (Basta et al., 2008; Handzel et al., 2006). However, a few contradictory reports also exist (Buchman et al., 2004; Vibert et al., 2001; Ajjalloueyan et al., 2017), possibly due to the use of unrectified cVEMP and differences in the types of cochlear implant electrode arrays. At the follow-up stage, the present study showed no significant improvement in the amplitude of cVEMP or reappearance of cVEMP in those in whom it was absent at the switch-on stage. This shows that the saccular damage caused by the surgical trauma is relatively permanent, which supports the theory that saccular membrane distortion, loss of the saccular membrane, saccular collapse or hydrops, and local fibrosis induced by the insertion trauma of the cochlear implant electrode might be the main reasons for the post-surgery abnormalities of cVEMP (Tien & Linthicum, 2002; Basta et al., 2008). In the present study, the peak-to-peak amplitude was significantly smaller in the VT group than the MA group at the switch-on stage [Z=-2.65, p=0.007] and the follow-up stage [Z=-3.04, p=0.002] in the CI ear, although there was no significant difference at the pre-implant stage [Z=1.37, p=0.169]. The poorer preservation of cVEMP (smaller amplitudes) following CI using the VT than the MA might be due to changes in middle ear characteristics brought about by the use of the VT in addition to drilling of the mastoid (Hans & Prasad, 2015), as against the relatively unmanipulated middle ear in the MA. It is well established that the energy reaching the saccule is important for cVEMP (Bath et al., 1999; Halmagyi et al., 1994; Han et al., 2016; Sandhu et al., 2018), and manipulations of the middle ear could iatrogenically affect the middle ear transformer characteristics, thereby impeding the energy reaching the saccule.

Summary & Conclusion:

Cochlear implantation causes a significant reduction in the cVEMP response rate and amplitude in the implanted ear. Although both surgery techniques resulted in a nearly 40% drop in the response prevalence of cVEMP, the amplitudes were better preserved at the switch-on and follow-up stages when using the mastoidectomy approach than the Veria technique. The results suggest that the surgical procedure for cochlear implantation causes damage to the saccule; nonetheless, the mastoidectomy approach is better for preservation of saccular responses than the Veria technique.


  Abstract - AO283: Towards the Identification of Optimal Tool for Auditory Spatial Assessment


Nisha K. V.1 & Ajith Kumar2

1nishakv1989@gmail.com &2ajithkumar18@gmail.com

1All India Institute of Speech and Hearing, Mysuru - 570006

Introduction:

Spatial hearing is a highly specialised mechanism which analyses the spatial layout of sound sources in three-dimensional space using temporal, spectral and intensity cues (Grantham, 1995). Traditionally, spatial processing is assessed using a battery of psychoacoustic tests such as free-field localisation and measures of interaural time differences (ITD) and interaural level differences (ILD). While the former method is constrained by the infrastructure required (an array of loudspeakers) and the cost involved, the latter two measures each evaluate only a single aspect of spatial hearing. Further, the use of questionnaire-based measures such as the Speech, Spatial and Qualities of Hearing (SSQ) scale can be limited by subjective bias. The Virtual Acoustic Space Identification (VASI) test opens a new perspective for the assessment of spatial acuity, as it uses virtual acoustic techniques to synthesise sound locations within the head (Nisha, 2018; Nisha & Kumar, 2017). Each of these tests can confirm or refute the presence of spatial acuity deficits and identify the processes underlying the deficit. Ideally, clinical tests should correctly identify all patients with spatial deficits as positive (sensitivity) and those with no such deficits as negative (specificity). Most clinical tests fall short of this ideal scenario.

Need for Study:

There is no systematic exploration of sensitivity and specificity of spatial acuity tests in a single cohort. In order to derive an optimal test of spatial acuity, a systematic comparison of the diagnostic values of these test procedures is warranted. This exercise will ensure the identification of the best possible test for spatial processing assessment in clinical setups.

Aim & Objectives:

The purpose of this investigation was to identify the optimal measure/ measures for auditory spatial processing. The specific objectives of the study were to evaluate and compare the sensitivity and specificity of various measures (ITD, ILD, VASI, and SSQ) of spatial acuity using receiver operating characteristic curves. Further, the study also aimed to define cut-off points for the classification of spatial acuity performance based on the scores obtained in these measures.

Method:

A standard group comparison research design was adopted in the present study, which consisted of two groups of participants (N=85) aged between 18 and 60 years. Group I consisted of 60 participants with normal hearing (NH) sensitivity (PTA ≤ 15 dB HL), while Group II comprised 25 listeners with binaural symmetrical mild or moderate flat sensorineural hearing loss (SNHL; PTA: 26-55 dB HL). The sample size (N = 85) was determined using G*Power 3.1 (Faul, Erdfelder, Lang, & Buchner, 2007), based on a previous study on spatial processing (Neelamegarajan, Vasudevamurthy, & Jain, 2018). An additional cohort of 19 participants (10 NH and 9 SNHL) was considered for validation of the classification results.

The study was conducted in two phases: establishment of the diagnostic criteria and validation. For the establishment of the diagnostic efficacy of the tests, a series of psychoacoustic and perceptual measures was administered to all the participants of the study. The psychoacoustic measures included the tests of interaural time (ITD) and level (ILD) differences and the virtual acoustic space identification (VASI) test, while the Speech, Spatial and Qualities of Hearing (SSQ) questionnaire served as the perceptual measure. The stimuli in all the psychoacoustic tests were 250 ms white noise bursts (WBN), with suitable modifications. In the ITD and ILD tests, the interaural time or level of the signal was adaptively varied using the Psychoacoustic toolbox (Soranzo & Grassi, 2014) implemented in Matlab software version 7.10.0 (The Mathworks Inc., Natick). WBN convolved with head-related impulse responses (Miller, Godfroy-Cooper, & Wenzel, 2014) was used in the VASI test (Nisha, 2018; Nisha & Kumar, 2017). The SSQ adapted for the Indian context (Shetty, Palaniappan, Chambayil, & Syeda, 2019) was utilised to obtain the perceptual ratings. In the validation phase, all these tests were administered to the additional 19 participants, who were classified into the respective groups based on the criterion cut-offs.
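The adaptive variation of the ITD (or ILD) can be illustrated with a generic transformed staircase. The sketch below is a 2-down/1-up procedure written in Python for illustration; it is not the authors' Matlab implementation, and the starting value, step sizes and reversal rule are assumptions.

```python
import random

def two_down_one_up_itd(judge, start_itd_ms=0.8, step=0.1, min_step=0.0125,
                        n_reversals=8):
    """Generic 2-down/1-up staircase tracking ~70.7% correct ITD discrimination.
    `judge(itd_ms)` must return True when the listener responds correctly at
    that ITD. All numeric defaults are illustrative assumptions."""
    itd, correct_in_row = start_itd_ms, 0
    reversals, last_direction = [], None
    while len(reversals) < n_reversals:
        if judge(itd):
            correct_in_row += 1
            if correct_in_row == 2:            # two correct in a row -> harder
                correct_in_row = 0
                if last_direction == "up":
                    reversals.append(itd)
                    step = max(step / 2, min_step)
                itd = max(itd - step, 0.0)
                last_direction = "down"
        else:                                   # one wrong -> easier
            correct_in_row = 0
            if last_direction == "down":
                reversals.append(itd)
                step = max(step / 2, min_step)
            itd += step
            last_direction = "up"
    return sum(reversals[-6:]) / 6              # threshold: mean of last 6 reversals

# Example with a simulated listener whose true ITD threshold is about 0.12 ms.
sim = lambda itd: itd >= 0.12 or random.random() < 0.5
print(round(two_down_one_up_itd(sim), 3))
```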

Statistical analyses: The Shapiro-Wilk test of normality, descriptive statistics and multivariate analysis of variance (MANOVA) were carried out using SPSS version 20.0 (IBM SPSS Inc., Chicago). Further, MedCalc Statistical Software version 19.1 (Ostend, 2019) was used to estimate receiver operating characteristic curves for each measure. This facilitated the calculation of the optimal sensitivity and specificity at the cut-off criterion (determined using the Youden index with a bootstrap procedure), apart from evaluating the absolute and pairwise differences in the area under the curve.

Results & Discussion:

The Shapiro-Wilk test showed normality of the data distribution across all dependent variables (ITD, ILD, VASI and SSQ scores). Descriptive statistics expressed through means and standard deviations (SD) showed group differences in spatial performance on all tests. This finding was verified statistically using MANOVA, which revealed a significant main effect of group for all measures [ITD: F(1,83) = 0.81, p < 0.01; ILD: F(1,83) = 11.06, p < 0.01; VASI: F(1,83) = 33.49, p < 0.01; SSQ: F(1,83) = 204.09, p < 0.01]. Deficits in frequency selectivity (Liberman & Dodds, 1984; Turner, Chi, & Flock, 1999), temporal resolution, and altered auditory filter shapes (Dubno & Dirks, 1989; Glasberg & Moore, 1986) in listeners with SNHL can be conceived as factors that account for these group differences.

The results of the receiver operating characteristic (ROC) curves showed that the SSQ had the significantly (p < 0.001) highest area under the curve (AUC; 0.97 ± 0.03 SE), followed by VASI (0.83 ± 0.05 SE), ITD (0.82 ± 0.05 SE) and ILD (0.68 ± 0.07 SE, p < 0.05). Pairwise comparison of the ROCs indicated significant differences between the AUC for the SSQ scores and all other spatial measures (ITD: |Z| = 2.31, p < 0.05; ILD: |Z| = 3.46, p < 0.01; VASI: |Z| = 2.62, p < 0.01). The optimal criterion point determined for the diagnosis of a spatial processing deficit was >0.12 ms for ITD, >2.75 dB for ILD, <46 for VASI and <146 for SSQ. The sensitivity corresponding to these criterion points, in the same order, was 58.33%, 48.00%, 80.00% and 96.00%, while the specificity was 92.00%, 83.33%, 66.67% and 95.00%, respectively. These findings from the ROC curves showed that the SSQ questionnaire exhibited the highest sensitivity, specificity and AUC. Despite this finding, the utility of the SSQ in clinical set-ups becomes questionable, as it is not an objective measure and can be limited by subjective biases. The next highest acceptable combination of sensitivity, specificity and AUC was obtained by the VASI test. The high sensitivity and specificity of the VASI test (relative to ITD or ILD) can be attributed to the inclusion of all cues of spatial processing (intensity, spectral and temporal) in this test. While ITD and ILD account for time- and intensity-related aspects, they largely ignore the role of spectral cues in spatial processing. The inclusion of spectral cues along with time and level variations, using virtual techniques, would have boosted the diagnostic accuracy of the VASI test. Complementary to these findings, the validation on 19 subjects using the cut-off of <46 for the VASI test yielded 90.00% sensitivity and 77.7% specificity.
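The ROC-based derivation of a cut-off, sensitivity and specificity reported above can be sketched as follows; the VASI scores and group labels below are hypothetical stand-ins, not the study's data, and the Youden criterion is applied without the bootstrap step.

```python
import numpy as np
from sklearn.metrics import roc_curve, roc_auc_score

# Hypothetical VASI scores (higher = better spatial acuity) and group labels
# (1 = SNHL / spatial deficit, 0 = normal hearing). Illustrative values only.
scores = np.array([58, 61, 55, 49, 63, 52, 44, 40, 47, 38, 45, 50])
labels = np.array([ 0,  0,  0,  0,  0,  0,  1,  1,  1,  1,  1,  1])

# Negate the scores so that larger values indicate the "positive" (deficit) class.
fpr, tpr, thresholds = roc_curve(labels, -scores)
auc = roc_auc_score(labels, -scores)

# Youden's J = sensitivity + specificity - 1; its maximum gives the optimal cut-off.
j = tpr - fpr
best = int(np.argmax(j))
cutoff = -thresholds[best]            # back on the original score scale

print(f"AUC = {auc:.2f}")
print(f"Optimal criterion: VASI score <= {cutoff:.0f} "
      f"(sensitivity = {tpr[best]:.2f}, specificity = {1 - fpr[best]:.2f})")
```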

Summary & Conclusion:

The findings of the study showed that spatial performance can be evaluated diagnostically using the SSQ and VASI tests, owing to their high sensitivity and specificity. In light of the present findings, we recommend the use of the VASI test, as it provides an objective index for the quantification of spatial errors and has greater practical implications. The VASI test is flexible and cost-effective, requires only minimal equipment, and can be easily adopted in clinical set-ups.


  Abstract - AO285: The Efficacy of Cervical Vestibular-Evoked Myogenic Potentials in Mapping Current Levels in Cochlear Implant Recipients Top


Niraj Kumar Singh1, Priya K P2, Asha Yathiraj3, Manjula P4, Geetha C5, Megha Janardhan6 & P Jawahar Antony7

1niraj6@gmail.com,2kppriya13@gmail.com,3asha_yathiraj@rediffmail.com,4manjulap21@hotmail.com,5geethamysore.cs@gmail.com,6meghajp11@gmail.com, &7jawaharantony@gmail.com

1All India Institute of Speech and Hearing, Mysuru - 570006

Introduction:

The cochlear implant is an electronic device that replaces the function of the damaged inner ear [Food & Drug Administration (FDA), 2018]. It benefits individuals with moderate to profound degrees of hearing loss who receive little or no benefit from hearing aids (FDA, 2018; Sladen et al., 2018). The cochlear implant consists of two major parts: (1) a wearable processor, which gathers acoustic inputs and converts them into electrical impulses; and (2) an implanted electrode array, which receives the electrical impulses from the processor and spreads the current within the cochlea for the neural ganglia in the vicinity to pick up (FDA, 2018). However, the current spread is not always localised to the cochlea. Facial twitching caused by the spread of current to the facial nerve (Bigelow et al., 1998; Niparko et al., 1990) is an example of undesirable current spread to other structures in the proximity of the cochlea.

Need for Study:

The facial nerve traverses the internal auditory meatus (Ho et al., 2015; Monkhouse, 1990), a path that places it in the immediate vicinity of the cochlea and potentially within the current field generated by a cochlear implant. Thus, whenever the current impulses are large and not limited by the mapping of current levels (C-levels), the result is undesirable facial nerve stimulation, easily evidenced through observation of facial twitching (Bigelow et al., 1998; Niparko et al., 1990). Sharing a common fluid environment with the cochlea, and the proximity of the vestibular nerve fibres to the auditory nerve fibres, could render the vestibular system predisposed to electrical stimulation by the cochlear implant (CI). In fact, the Scarpa's ganglion neurons arising from the saccule are in closer proximity to the spiral ganglia arising from the cochlea than the facial nerve fibres are (Mei et al., 2019). This might be a potential reason for persistent dizziness long after cochlear implantation surgery despite the absence of facial symptoms. However, there is a dearth of studies examining this assumption.

Aim & Objectives:

The present study aimed to identify the presence of vestibular stimulation due to the CI and to study the efficacy of cVEMP-assisted mapping in controlling undesirable vestibular stimulation by the CI.

Method:

The study was conducted in three phases. Phase I included 14 children with unilateral CI (age range = 3-7 years). In this phase, acoustic tone-burst stimuli (frequency = 500 Hz, intensity = 109 dB SPL) were delivered to the ear through insert earphones while the CI was switched off. Those with a cVEMP response from the implanted side (n = 8) were carried forward to the second phase. In phase II, the acoustic tone-burst stimuli (frequency = 500 Hz, intensity = 85 dB SPL & 60 dB SPL) were delivered in the sound field using an impedance-matched loudspeaker placed 1 metre from the participant. The responses were recorded in the CI switched-off and CI switched-on conditions at both stimulus intensities. Those with cVEMP responses in the CI switched-on condition but not in the CI switched-off condition at the stimulus intensity of 60 dB SPL (n = 2) were included in phase III. Phase III involved remapping of the C-levels, administration of pure-tone and speech audiometry before and after the remapping, and re-administration of cVEMP at both stimulus intensities and in both device conditions (CI switched on and CI switched off). The other parameters for obtaining cVEMP, such as the stimulus rise-plateau-fall times (2-1-2 ms), polarity (rarefaction), stimulation rate (5.1 Hz), response filter setting (band-pass filter of 10-1500 Hz), amplifier gain (5000 times) and averages (200 stimuli), were kept constant across the phases of the study.
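As a rough illustration of the stimulus specification, a 500 Hz tone burst with the stated 2-1-2 ms gating and rarefaction onset could be generated as sketched below; the sampling rate and the linear ramp shape are assumptions, not parameters reported in the abstract.

```python
# Sketch: 500 Hz tone burst with 2-1-2 ms rise-plateau-fall gating.
import numpy as np

fs = 48000                                   # sampling rate (Hz), assumed
rise, plateau, fall = 0.002, 0.001, 0.002    # 2-1-2 ms gating

envelope = np.concatenate([
    np.linspace(0, 1, int(rise * fs)),       # rise ramp (linear, assumed shape)
    np.ones(int(plateau * fs)),              # plateau
    np.linspace(1, 0, int(fall * fs)),       # fall ramp
])
t = np.arange(envelope.size) / fs
tone_burst = -np.sin(2 * np.pi * 500 * t) * envelope   # minus sign: rarefaction onset
```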

Results & Discussion:

In the implanted ear, cVEMPs were present in 8 of the 14 children (57%). In phase II, at a stimulus intensity of 85 dB SPL, cVEMPs were present in all 8 participants in the CI switched-on condition and in 5 (62.5%) in the CI switched-off condition. At the stimulus intensity of 60 dB SPL, cVEMPs were absent in all participants in the CI switched-off condition, whereas they were evident in 2 participants in the CI switched-on condition. The presence of a cVEMP response at 60 dB SPL in 2 children in the CI switched-on but not the CI switched-off condition supports the assumption that the current levels used for auditory stimulation through the CI cause undesirable stimulation of the inferior vestibular nerve, similar to the well-documented facial stimulation (Bigelow et al., 1998; Niparko et al., 1990). To further ascertain this, the cochlear implant was remapped by reducing the overall C-levels by 3 current levels, and cVEMPs were re-recorded in phase III. There was no evidence of a cVEMP at 60 dB SPL in either device condition with the new map. A similar elimination of facial twitching has been reported for small reductions in the overall C-levels (Bigelow et al., 1998; Niparko et al., 1990). Therefore, it appears that the current spread to the inferior vestibular nerve was eliminated because lower current levels were being used by the device. This shows that the electrical current generated by the CI can cause undesirable vestibular stimulation in a few CI users. Further, there was no significant alteration in performance on pure-tone and speech audiometry with the new map compared with the old C-level values. Therefore, in addition to monitoring non-auditory stimulation such as that of the facial nerve, vestibular system stimulation should also be monitored during mapping, and cVEMP could play an important role in decisions about the C-levels in particular.

Summary & Conclusion:

Although there may be no clear symptoms, a few cochlear implant recipients receive undesirable vestibular stimulation when the device is switched on. The presence of a cVEMP for low-level signals, such as normal conversational levels, is a potential indication of such a phenomenon. Using cVEMP as a guide to reduce the C-levels eliminated the vestibular stimulation for such low signal intensities. Therefore, vestibular system stimulation, like facial nerve stimulation, should be monitored during mapping, and cVEMP could play an important role in decisions about the C-levels.


  Abstract - AO286: Variations in VOR Gain in Individuals with Auditory Neuropathy Spectrum Disorder with Hypo-functional Caloric Response Top


Niraj Kumar Singh1, Anuj Kumar2 & Sujeet Kumar Sinha3

1niraj6@gmail.com,2anujkneupane@gmail.com, &3sujitks5@yahoo.co.in

1All India Institute of Speech and Hearing, Mysuru - 570006

Introduction:

The audio-vestibular nerve carries first-order neurons from the auditory and vestibular receptors, which are intrinsically coupled in a common pathway. Therefore, any pathophysiological insult at these receptors might lead to neuropathy of both the auditory and the vestibular nerves (Buetti & Luxon, 2014). Auditory neuropathy spectrum disorder (ANSD) is one such disorder in which both the auditory and the vestibular nerves are affected (Sinha et al., 2014). There is extensive literature on the various auditory-linked pathophysiologies of ANSD (Kumar & Jayaram, 2006); however, descriptions of the various vestibular functions are limited (Kumar et al., 2007).

Need for Study:

The caloric test measures the functioning of the lateral semicircular canal and is considered one of the gold standard tests (Perez & Rama-Lopez, 2003). The video head impulse test (vHIT) measures the VOR gain of all six semicircular canals (MacDougall et al., 2013). However, in many individuals with peripheral vestibular pathology, the results of the vHIT and the caloric test are dissociated (Sinha et al., 2019). The VOR is an important vestibular reflex that helps individuals stabilise images during head movement. Since there is a peripheral vestibular pathology (inner hair cells, synapse, or auditory nerve) in individuals with ANSD, the VOR might be affected. A combination of the caloric test and vHIT could help in understanding the nature of the vestibular insult, the nature of the VOR gain, and central compensation, if any. This can be helpful in understanding the sensitivity and specificity of the vHIT in predicting the range of deficits in these individuals.

Aim & Objectives:

The present study focused on measuring the VOR gain and refixation saccades in individuals with ANSD having hypoactive caloric responses.

Method:

The study comprised 20 individuals (9 males and 11 females) in the age range of 17 to 38 years (mean age = 29.25 years) diagnosed with ANSD. These individuals were diagnosed with bilateral ANSD based on poor hearing sensitivity, absence of ipsilateral and contralateral acoustic reflexes, presence of otoacoustic emissions/cochlear microphonics, and absence of auditory brainstem responses. The bithermal caloric test was performed using monaural open-loop water irrigation at temperatures of 30°C and 44°C; the volume of water used was 200 ml and the duration of each irrigation was 30 seconds. The recorded VNG tracings were classified as hypoactive, hyperactive or normal based on previous studies (Claussen & Schlachta, 1972; Sinha et al., 2014). All the individuals showed hypoactive responses on the bithermal caloric test. During the vHIT, the patient was seated comfortably in a chair and the vHIT goggles were fitted tightly. After the initial calibration for 10° of eye movement, vHIT was done for the lateral plane alone. A total of 20 impulses were given in each of the right and left lateral planes. The VOR gain of the two horizontal semicircular canals was measured with the help of the high-speed digital infrared camera attached to the instrument. The vHIT response was analysed for the VOR gain and the presence of overt and/or covert refixation saccades.
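As a rough illustration of the gain measure, one common convention computes VOR gain as the ratio of the areas under the (de-saccaded) eye- and head-velocity curves for each impulse; whether a given device uses this or an instantaneous gain differs by manufacturer, so the sketch below is only an assumption and uses synthetic traces, not data from the study's instrument.

```python
# Sketch: VOR gain for one head impulse as the ratio of eye to head velocity areas.
import numpy as np

def vor_gain(head_velocity, eye_velocity):
    """head_velocity / eye_velocity: deg/s samples for a single lateral impulse."""
    head = np.asarray(head_velocity, dtype=float)
    eye = -np.asarray(eye_velocity, dtype=float)   # eye moves opposite to head
    window = np.abs(head) > 20                     # restrict to the impulse itself
    return eye[window].sum() / head[window].sum()  # sample spacing cancels in the ratio

# Synthetic impulse: a gain near 0.7, as seen here, would be flagged as reduced
t = np.linspace(0, 0.4, 100)
head = 150 * np.exp(-((t - 0.2) ** 2) / 0.002)     # bell-shaped head velocity
eye = -0.7 * head                                  # compensatory eye velocity
print(round(vor_gain(head, eye), 2))               # ~0.70
```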

Results & Discussion:

All individuals with ANSD in the present study had hypoactive caloric responses, suggestive of vestibular insult in all of them. The overall mean VOR gain was 0.72 (SD = 0.23) for left-side impulses (hereafter, the left plane) and 0.73 (SD = 0.21) for right-side impulses (hereafter, the right plane). Analysis of the individual data revealed three categories of responses based on VOR gain: (1) bilaterally normal VOR gain; (2) unilaterally reduced VOR gain; and (3) bilaterally reduced VOR gain. Bilaterally normal VOR gain was present in 7 (35%) participants. Among them, 1 individual had no refixation saccades for the left plane while the right plane showed both overt and covert saccades; 2 planes had overt saccades and 11 planes were associated with both covert and overt saccades. Unilateral reduction of VOR gain was observed in 5 (25%) individuals; among them, 3 had reduced VOR gain for the left plane whereas 2 had reduced VOR gain for the right plane. In terms of refixation saccades in these individuals with unilaterally reduced VOR gain, 4 planes (all with normal VOR gain) had both covert and overt saccades and 6 had covert saccades (3 in the plane with affected VOR gain and 3 in the plane with unaffected VOR gain).

The third category, individuals with bilaterally reduced VOR gain, comprised 8 (40%) individuals, all of whom had bilateral covert saccades. Therefore, the results of the present study showed a mismatch between caloric and vHIT results in ANSD. These results are in agreement with the only published study of vHIT in individuals with ANSD (Sinha et al., 2019). Such discrepancies have been found for other vestibular disorders such as Meniere's disease, BPPV, and vestibular neuritis (McCaslin et al., 2014; Singh et al., 2018; Tsuji et al., 2000). The reason for such discrepancy could be differences in the vestibular afferents that carry the inputs of the caloric test and the vHIT. The cochleo-vestibular nerve consists of regularly firing neurons as well as neurons with irregular firing (Tranter-Entwistle et al., 2016). The neural inputs for low-frequency stimulation, such as that used in the caloric test, are carried by nerve fibres with regular spike timing, whereas the VOR driven by high-frequency stimuli, such as those used in the vHIT, is carried by fibres with irregular firing rates (Foster et al., 2013; Lasker et al., 2000). It is believed that there can be selective impairment of the nerve fibres encoding low- and high-frequency information (Park et al., 2005). Therefore, it is possible to have an impairment of the neural impulses generated by the caloric test with complete sparing of those generated by the vHIT, and vice versa, which explains the dissociation of vHIT and caloric results in ANSD. Further, the results of the present study showed a significant association of disease duration with the type of refixation saccade (covert, overt, or both) for the left plane [χ2(6) = 16.00, p = 0.014] and the right plane [χ2(6) = 16.42, p = 0.012]. In general, covert saccades alone were associated with a longer disease duration, and overt saccades alone were associated with a more recent disease onset. There was also a significant association of the type of VOR gain (reduced or normal) with the type of refixation saccade in the left plane [χ2(6) = 12.80, p < 0.001] and the right plane [χ2(6) = 15.24, p < 0.001]. It has also been reported that a long duration since disease onset aids central compensation, which might increase the chances of normal results on vHIT (Zellhuber, Mahringer & Ramb, 2014) but not on the bithermal caloric test (Zellhuber et al., 2014). Thus, the type of saccade in individuals with ANSD could indicate whether or not central compensation has occurred or is occurring.
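The reported associations are chi-square tests of independence on category counts; a minimal sketch with a synthetic contingency table (not the study counts) is shown below.

```python
# Sketch: chi-square test of association between disease duration and saccade type.
import numpy as np
from scipy.stats import chi2_contingency

# rows: disease duration (short, medium, long); columns: saccade type (overt, covert, both)
table = np.array([
    [4, 1, 2],
    [1, 3, 2],
    [0, 5, 2],
])
chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2({dof}) = {chi2:.2f}, p = {p:.3f}")
```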

Summary & Conclusion:

The outcomes of the present study show that the results of the vHIT have a high degree of dissociation from those of the caloric test. Therefore, the vHIT must be included along with the caloric test in the vestibular test battery to understand the various mechanisms of compensation and VOR deficits in individuals with ANSD.


  Abstract - AO287: Aided Speech Evoked Cortical Potential in Children with Hearing Impairment (6 Months to 5 Years) Top


Prawin Kumar1 & Geetha C2

1prawin_audio@rediffmail.com &2geethamysore.cs@gmail.com

1All India Institute of Speech and Hearing, Mysuru - 570006

Introduction:

Infants with hearing impairment are always at risk of delayed speech and language development compared with typically developing infants. The negative impact of hearing impairment on speech and language development can be reduced by initiating intervention and providing appropriate amplification at an early age. Behavioural assessment of hearing aid benefit in young infants with hearing impairment is always questionable, and hence professionals depend on more accurate measures such as aided cortical auditory evoked potentials (CAEPs) elicited using speech stimuli. It has also been reported that the statistical detection of CAEPs used in HEARLab cortical measures is consistent with the judgements of an expert examiner, and is therefore considered an alternative and reliable method of response detection by several researchers (Golding et al., 2007; Dun, Dillon & Seeto, 2015; Hoth, 1993).

Need for Study:

There has been increasing interest in the use of CAEPs to evaluate speech perception abilities in clinical populations. However, there is a need to explore specific electrophysiological measures that might contribute to the objective evaluation of participants who, for reasons such as age, hearing impairment, or a lack of the auditory, linguistic and/or cognitive prerequisites, cannot be assessed using behavioural measures. Studies have reported a relationship between CAEPs and auditory perception abilities (Kraus et al., 1993; Purdy et al., 2003; Tremblay et al., 2006). The presence of identifiable peaks in speech-evoked CAEPs indicates detection of the speech stimuli at the cortical level, as reported by Hyde (1997). However, it has also been shown that the CAEP waveform is affected by changes in speech stimulus parameters (Tremblay et al., 2003). Keeping the above factors in mind, developers designed a protocol for CAEPs using speech sounds [/m/, /t/ & /g/] for paediatric populations (Carter et al., 2013; Carter et al., 2010; Chang et al., 2012). However, there is a dearth of literature validating CAEP responses to different speech stimuli in the Indian population. Hence, there is a need to study CAEPs in children using hearing aids, at different intensities and with different speech stimuli, in the Indian population.

Aim & Objectives:

The present study aimed to estimate aided cortical potential responses in children using hearing aids. The specific objectives were to examine the effect of intensity [75, 65 & 55 dB SPL] and the effect of speech stimuli [/m/, /g/ & /t/] in children using hearing aids and in typically developing children (TDC).

Method:

A total of 94 children in the age range of 6 months to 5 years participated in the study. Of these, 44 children had normal hearing sensitivity (TDC) and 50 children with severe-to-profound hearing loss using their own digital hearing aids formed the clinical group. The mean (SD) ages of the clinical and control groups were 3.90 (0.90) and 3.16 (1.06) years, respectively. In the hearing aid users, the mean (SD) aided thresholds for 500 Hz, 1000 Hz, 2000 Hz and 4000 Hz were 36.14 dB HL (3.46), 38.85 dB HL (4.02), 41.31 dB HL (4.17) and 44.91 dB HL (4.32), respectively (within the speech spectrum).

Results & Discussion:

All statistical analyses were carried out using the Statistical Package for the Social Sciences (SPSS, version 21). The Shapiro-Wilk test showed that the data were not normally distributed (p>0.05); hence, non-parametric tests were used. Descriptive statistics showed poorer (prolonged) mean latencies and reduced absolute amplitudes of P1 and N2 in the hearing aid users, for all three speech stimuli at all three intensity levels, in comparison to the control group. Further, the Mann-Whitney U test showed statistically significant differences between the groups for the latencies of peak P1 [at 75 dB SPL for /m/ (Z = -4.80, p < 0.01); for /g/ (Z = -4.39, p < 0.01); for /t/ (Z = -4.57, p < 0.01); at 65 dB SPL for /m/ (Z = -3.52, p < 0.01); for /g/ (Z = -4.58, p < 0.01); for /t/ (Z = -0.94, p < 0.01); at 55 dB SPL for /m/ (Z = -3.90, p < 0.01); for /g/ (Z = -3.10, p < 0.01); for /t/ (Z = -3.11, p < 0.01)] as well as for the latencies of peak N2 [at 75 dB SPL for /m/ (Z = -3.64, p < 0.01); for /g/ (Z = -2.40, p < 0.05); for /t/ (Z = -4.22, p < 0.01); at 65 dB SPL for /m/ (Z = -3.15, p < 0.01); for /g/ (Z = -2.87, p < 0.01); for /t/ (Z = -3.32, p < 0.01); at 55 dB SPL for /m/ (Z = -2.30, p < 0.05)], except for the N2 latencies at 55 dB SPL for /g/ (Z = -1.77, p > 0.05) and /t/ (Z = -1.02, p > 0.05).

Similarly, the Mann-Whitney U test showed statistically significant differences for the absolute amplitudes of peak P1 [at 75 dB SPL for /m/ (Z = -3.91, p < 0.01); for /g/ (Z = -4.03, p < 0.01); at 65 dB SPL for /m/ (Z = -4.41, p < 0.01); for /g/ (Z = -3.52, p < 0.01); for /t/ (Z = -2.83, p < 0.01); at 55 dB SPL for /m/ (Z = -4.19, p < 0.01); for /g/ (Z = -3.29, p < 0.01); for /t/ (Z = -3.36, p < 0.01)] as well as for the amplitude of peak N2 [at 75 dB SPL for /m/ (Z = -2.26, p < 0.05); for /t/ (Z = -3.88, p < 0.01)]. The exceptions were the P1 amplitude at 75 dB SPL for /t/ (Z = -1.75, p > 0.05) and the N2 amplitudes at 75 dB SPL for /g/ (Z = -1.55, p > 0.05); at 65 dB SPL for /m/ (Z = -1.23, p > 0.05), /g/ (Z = -0.81, p > 0.05) and /t/ (Z = -1.58, p > 0.05); and at 55 dB SPL for /m/ (Z = -1.31, p > 0.05), /g/ (Z = -0.76, p > 0.05) and /t/ (Z = -0.02, p > 0.05). The Friedman test showed a statistically significant difference across intensities for the amplitude of P1 for /t/ [χ2(2) = 10.35, p < 0.05] and of N2 for /m/ [χ2(2) = 7.40, p < 0.05] in the hearing aid users. The Wilcoxon signed-rank test did not, in general, show significant differences between pairs of intensities for either latencies or amplitudes for each speech stimulus, in both the control and clinical groups, at the 0.05 level. These outcomes indicate that children with normal hearing are able to reflect minute changes in intensity as changes in the slope (P1-N2 complex) of the cortical potentials in every frequency region (low, mid and high). However, such changes were not reflected well in the cortical potential measures of the hearing aid users, even though appropriate hearing aids with optimum gain had been fitted. The present findings are in consonance with the previous literature on the effect of the speech stimulus on CAEPs. A study by Dun et al. (2012) found no significant effect of the speech stimuli ([m], [t] and [g]) on the latency and amplitude measures of P1 and N2 in hearing-impaired infants. In contrast, Golding et al. (2006) reported that the speech stimulus [t] elicited significantly larger amplitudes than [m] and [g]. The significant differences observed for the effect of the speech stimuli on the latency and amplitude measures at a few intensity levels could be attributed to chance, since no clear trends were noticed across speech stimuli.
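For readers reproducing this style of analysis, the non-parametric tests above map onto standard SciPy calls; the arrays below are synthetic placeholders, not the study data.

```python
# Sketch: between-group (Mann-Whitney U), across-intensity (Friedman) and
# pairwise-intensity (Wilcoxon signed-rank) comparisons of P1 latency.
import numpy as np
from scipy.stats import mannwhitneyu, friedmanchisquare, wilcoxon

rng = np.random.default_rng(0)
tdc = rng.normal(100, 10, 44)        # control-group P1 latency (ms), placeholder
ha_users = rng.normal(130, 15, 50)   # hearing-aid users' P1 latency (ms), placeholder

u, p_between = mannwhitneyu(tdc, ha_users)                    # between groups
lat_75, lat_65, lat_55 = (rng.normal(m, 10, 50) for m in (125, 130, 135))
chi2, p_friedman = friedmanchisquare(lat_75, lat_65, lat_55)  # across intensities
w, p_pair = wilcoxon(lat_75, lat_65)                          # one intensity pair
print(p_between, p_friedman, p_pair)
```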

Summary & Conclusion:

The present study highlights the value of CAEPs in estimating hearing aid benefit in young children with hearing impairment. CAEPs can be used to monitor progress after rehabilitation in children using hearing aids.


  Abstract - AO290: Efficacy of Computer-based Speech-in-Noise Training Module in Indian-English in Children with Auditory Processing Disorders Top


Reesha O.Abdul Hussain1, Shwetha G.N.2, Prawin Kumar3 & Niraj Kumar Singh4

1reesh.oooov@gmail.com,2shwethagn22@gmail.com,3prawin_audio@rediffmail.com, &4niraj6@gmail.com

1All India Institute of Speech and Hearing, Mysuru - 570006

Introduction:

Speech perception in adverse listening conditions is one of the major challenges faced by children with auditory processing disorders (APD) (Hind et al., 2011; Muthuselvi & Yathiraj, 2009). If not addressed in a timely manner, it can have a marked impact on the academic and social achievements of the child. The central auditory nervous system is plastic and responds well to auditory training by improving neural timing and synaptic connections (Phillips, 1995, 2002). Timely intervention in such children has shown remarkable improvements in auditory as well as academic domains (Delphi & Abdollahi, 2018; English et al., 2003; Maggu & Yathiraj, 2011a, 2011b; Moncrieff & Wertz, 2008). However, conventional non-computer-based training can often be poorly structured, dreary and unmotivating.

Need for Study:

Computer-based auditory training modules have been introduced for the management of APD, with the advantages of being structured, systematic and interactive (Loo et al., 2016; Miller et al., 2005; Russo et al., 2005; Tawfik et al., 2015). Interactive computer-based auditory training programs not only help to maintain the interest of the child but also reduce the time and effort of professionals. An effective computer-based training module would thus help in deficit-based formal intervention for children with APD, even when a skilled professional is unavailable. Speech-based tasks are essential for efficient speech-in-noise training, as they provide a better approximation of naturally occurring listening situations. India, a multilingual country, demands validated modules in different Indian languages. Since English is the medium of instruction in CBSE (Central Board of Secondary Education) and ICSE (Indian Certificate of Secondary Education) schools in the country, a training module in Indian English would be helpful for children attending English-medium schools throughout the country. However, no such computer-based interactive module exists for training children in Indian English. Therefore, there is a need for a validated computer-based speech-in-noise training module in Indian English that can be used with Indian children. It is also important to quantify and report the training-related changes brought about by the module.

Aim & Objectives:

To develop and evaluate the efficacy of computer-based speech-in-noise training module in Indian-English on children with APD.

Method:

The present study included 10 children (age range = 9-14 years) whose scores on the speech-perception-in-noise in Indian-English (SPIN-IE) test were 2 standard deviations (SD) below the mean for their age and who were diagnosed with APD based on the criterion of failure on any one APD test by 2 SD, or any two APD tests by 1 SD, given by Yathiraj and Vanaja (2018) for ages up to 10 years and extrapolated using a regression equation for ages above 10 years, as done previously (Bhat, 2015). All 10 children had been studying in English-medium schools for at least the past 4 years; had hearing thresholds within 15 dB HL across all octave frequencies from 250 Hz to 8000 Hz; had speech identification scores in quiet above 80% in both ears; had 'A' type tympanograms with ipsilateral reflex thresholds within 100 dB HL at 1000 Hz and 2000 Hz; and had normal morphology and absolute latencies within normal limits on click-evoked ABR. Further, major language, reading and cognitive deficits were ruled out based on the Bankson language screening test, the Early reading skill test and the modified mini-mental state examination, respectively. Pre-training evaluations included behavioural auditory processing tests as well as electrophysiological tests. The behavioural tests included the gap detection test (GDT), auditory memory and sequencing test (AMST), dichotic CV test (DCV), duration pattern test (DPT) and the SPIN-IE test. The electrophysiological test was the auditory long latency response (ALLR) to speech stimuli in quiet and in the presence of noise. The stimulus used was a 218 ms long /da/, recorded using a male voice, presented in quiet and in the presence of speech-shaped noise. The children's attention was maintained by a silent stimulus-counting task. The inter-stimulus interval was 590 ms and the stimulus repetition rate was 1.2/sec. The responses from the Cz site were re-referenced to the linked-mastoid position, filtered through a 1-30 Hz band-pass filter and pre-processed. An interactive computer-based training module focusing on words-in-noise training was developed and used in the study. The module consists of English words selected after familiarity testing on 10 school-going children of 8-9 years of age studying in English-medium schools for the past 2 years. The module involved training in the presence of both speech-shaped noise (SSN) and multi-talker babble (MTB). The SNR levels included in the training were +20, +10, +6, +4, +2, 0, -2 and -4 dB SNR. A picture-selection task was used for obtaining responses to each stimulus token. Training was given for 30 minutes per day, 3-4 days a week, and continued till the child passed the last level (i.e., -4 dB SNR for monosyllable identification in the presence of MTB). All the tests were repeated soon after the completion of training.
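The core operation behind the module's level steps is presenting a word at a fixed SNR; a minimal sketch of that mixing step, with placeholder waveforms in place of the recorded tokens and noises, is given below.

```python
# Sketch: scale a noise segment so that a word token is presented at a target SNR.
import numpy as np

def mix_at_snr(word, noise, snr_db):
    """Return word + noise, with the noise scaled to the requested SNR (dB)."""
    noise = noise[:word.size]                            # match durations
    rms_word = np.sqrt(np.mean(word ** 2))
    rms_noise = np.sqrt(np.mean(noise ** 2))
    target_rms_noise = rms_word / (10 ** (snr_db / 20))  # desired noise level
    return word + noise * (target_rms_noise / rms_noise)

word = np.sin(2 * np.pi * 220 * np.arange(16000) / 16000)   # placeholder word token
noise = np.random.randn(16000)                              # placeholder SSN/babble
for snr in (20, 10, 6, 4, 2, 0, -2, -4):                    # the training levels
    stimulus = mix_at_snr(word, noise, snr)
```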

Results & Discussion:

The Shapiro-Wilk test for normality revealed a non-normal distribution of the behavioural and electrophysiological data (p>0.05), resulting in the use of non-parametric statistical analyses. The Wilcoxon signed-rank test revealed significant post-training improvements in SPIN scores for the right (Z = -2.84, p = 0.004, ϕ = 0.63) and left (Z = -2.81, p = 0.005, ϕ = 0.62) ears, GDT scores for the right (Z = -2.80, p = 0.005, ϕ = 0.62) and left (Z = -2.80, p = 0.005, ϕ = 0.62) ears, and DPT scores for the right (Z = -2.66, p = 0.008, ϕ = 0.59) and left (Z = -2.25, p = 0.024, ϕ = 0.50) ears. The scores on the auditory memory and sequencing test and the single- and double-correct scores of the dichotic CV test showed no significant improvement after training.

The Wilcoxon signed-rank test revealed a significant reduction in the amplitudes of the P1 (Z = -2.39, p = 0.017, ϕ = 0.53) and N2 (Z = -2.59, p = 0.009, ϕ = 0.58) peaks in quiet, and of the P1 (Z = -1.98, p = 0.047, ϕ = 0.44), N1 (Z = -2.29, p = 0.022, ϕ = 0.51), P2 (Z = -2.19, p = 0.028, ϕ = 0.49) and N2 (Z = -2.39, p = 0.017, ϕ = 0.53) peaks of the ALLR in the presence of noise. The results show an improvement in the SPIN scores of both ears after training with the computer-based speech-in-noise training module in Indian English. Along with the SPIN scores, improvements were also noted in temporal processing measures, namely the gap detection test and the duration pattern test. This supports the findings of Hoover et al. (2015), who reported a strong correlation between speech-in-noise perception and temporal processing. The computerised module was thus successful in bringing about an improvement in speech-in-noise perception and in reflecting the effect of training on the related auditory processes. The changes shown in the cortical potentials further highlight the efficacy of the training module in harnessing neural plasticity and bringing about changes at the cortical level. Previous studies have shown improvements in cortical responses for stimuli in quiet as well as in noise after auditory training (Hassaan & Ibraheem, 2016; Hayes et al., 2003; Russo et al., 2005; Warrier et al., 2004). The present study also showed significant changes in the amplitudes of the ALLR responses in both quiet and noise after training, in agreement with the above studies.

Summary & Conclusion:

The computer-based training module in Indian English used in the present study was effective in improving speech-in-noise perception along with auditory temporal processing skills. The efficacy of the module is further supported by the changes seen in the cortical potentials. Hence, the module, besides being interactive and holding the child's attention and interest, brings about improvements in auditory processing as well as in the neural integrity of the central auditory system.


  Abstract - AO293: Age-related Changes in Cortical Coding of Auditory Space: Evidence from Electroencephalography Top


Nisha K V1 & Ajith Kumar2

1nishakv1989@gmail.com &2ajithkumar18@gmail.com

1All India Institute of Speech and Hearing, Mysuru - 570006

Introduction:

The spatial auditory system relies on the sensitivity, speed and temporal precision of the peripheral and central auditory pathways. Advancing age deteriorates the spectral and temporal precision of spatial coding (Frisina & Walton, 2006). Spatial acuity deficits with age are noticeable from 40 years onwards, even with clinically normal hearing sensitivity (Abel, Giguère, Consoli, & Papsin, 2000). Although such changes have been explored using interaural phase differences (Ross, Fujioka, Tremblay, & Picton, 2007), interaural time differences (Ozmeral, Eddins, & Eddins, 2016) and localisation of sources in the free field (Briley & Summerfield, 2014), these attempts were confined to peak latency and amplitude analyses at a few electrodes or restricted regions of interest. The analysis of peak amplitude and latency is subject to limitations such as experimenter bias, prior-focus bias (i.e., errors that arise from ignoring time periods beyond those of interest) and insensitivity to spectral cues (Murray, Brunet, & Michel, 2008). In addition, generalising results from studies in which a single spatial cue is manipulated to the neural basis of complex spatial processing mechanisms is not strongly justified.

Need for Study:

The current study was designed to overcome the above-mentioned shortcomings through the use of virtual acoustic space (VAS) stimuli and global spatio-temporal analysis of high-density electroencephalographic (EEG) recordings. While the former addresses the need for optimal stimuli to evaluate the complex processes involved in spatial processing, the latter addresses the need to understand age-related changes in spatial processing at the cortical level.

Aim & Objectives:

The aim of the study was to characterise aging-related changes in spatial hearing using electroencephalographic recordings evoked by spatial deviants (virtual acoustic stimuli) presented in an active odd-ball paradigm (P300). The specific objectives were to compare the cortical correlates of spatial processing across three groups of participants (young, middle-aged and elder adults) using global measures of EEG processing, i.e., global field power (GFP), the dissimilarity index (DIS), and topographic maps.

Method:

Ten healthy young adults (31 - 40 yrs), 12 middle-aged adults (41 - 50 yrs), and 13 elder adults (51 - 60 yrs) participated in the study. All the participants had normal hearing thresholds (PTA: 15 dB HL) and did not report any otological, neurological or cognitive deficits.

Participants provided their informed consent before the start of the study, which was approved by institutional ethical board.

Compumedics Neuroscan (Charlotte, NC, USA) with Curry 7 acquisition software was used to record continuous EEG data. EEG was picked up from 64 electrode locations on the scalp, placed according to the 10-10 electrode placement system (Chatrian, Lettich, & Nelson, 1985), with a left mastoid (M1) reference. The stimuli comprised three white noise (WN) bursts of 250 ms duration, presented in virtual auditory space at locations corresponding to the centre (0 degrees), 45 degrees to the right (R45) and 45 degrees to the left (L45). These were routed from the audio CPT program of Stim-2 through insert earphones (ER-3C). The stimuli were presented in a three-stimulus odd-ball paradigm blocked into two conditions. In each condition, the centre was the standard (74%) stimulus while L45 (13%) and R45 (13%) were assigned as target or distracter stimuli: in the L45 condition, L45 served as the target (13%) while R45 was the distracter (13%), and vice versa in the R45 condition. A total of 296 standards and 52 each of the targets and distracters were presented in each condition. The participants were asked to press a button for targets and to ignore distracters.

The recorded EEG was processed offline using Curry 7 analysis software. The processing included DC offset correction, ocular artefact reduction, filtering (0.1-30 Hz, FIR, 30 dB/octave, zero phase shift), epoching (200 ms pre-stimulus to 800 ms post-stimulus), baseline correction and re-referencing to the mathematical average of the two mastoids. Conventional amplitude and latency measures were not employed in the study, as they are limited by experimenter bias, prior-focus bias and insensitivity to spectral variations (Luck, 2014). To ascertain the presence of the P300, grand-average waveforms and point-by-point comparisons of the waveform modulations were carried out using Cartool software (http://brainmapping.unige.ch/Cartool.htm). Detailed analyses of age-related changes were carried out using electrical response strength (global field power, GFP), topographic modulations (global dissimilarity index, DIS), and topographic pattern analyses.
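The two global measures have simple closed forms: GFP is the spatial standard deviation of the average-referenced map at each time point, and DIS is the GFP of the difference between two GFP-normalised maps (Murray et al., 2008). A minimal sketch with placeholder data, not the recorded EEG, is shown below.

```python
# Sketch: global field power (GFP) and global dissimilarity (DIS) from
# average-referenced ERP data with shape (channels x time points).
import numpy as np

def gfp(erp):
    """GFP at each time point = spatial standard deviation across electrodes."""
    return erp.std(axis=0)

def dissimilarity(erp_a, erp_b):
    """DIS at each time point between two GFP-normalised topographies."""
    return ((erp_a / gfp(erp_a)) - (erp_b / gfp(erp_b))).std(axis=0)

rng = np.random.default_rng(0)
erp_young = rng.normal(size=(64, 500))    # 64 channels x 500 samples, placeholder
erp_elder = rng.normal(size=(64, 500))
erp_young -= erp_young.mean(axis=0)       # apply the average reference
erp_elder -= erp_elder.mean(axis=0)
print(gfp(erp_young).shape, dissimilarity(erp_young, erp_elder).max())
```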

Results & Discussion:

The grand-average waveforms and pointwise paired t-tests revealed the presence of the P300 for the deviants (both targets and distracters) within the latency region of 270-400 ms in all three age groups. Contrary to expectation, global field power (GFP) analyses across groups showed that elder adults and middle-aged adults exhibited significantly (p < 0.05) larger N1 and P3 amplitudes than young adults for both R45 and L45 targets. This surprising finding can be interpreted in the context of the opponent channel model (Magezi & Krumbholz, 2010), according to which reduced inhibition in the central auditory pathway with increasing age leads to larger responses in elder adults compared with younger listeners. Similar findings were reported using dynamic ITD shifts by Ozmeral, Eddins and Eddins (2016).

Although distinct electrical field responses (GFP) were observable across groups, the global dissimilarity index (DIS) revealed no variations in topography, indicative of similar cortical areas of activation.

The topographic pattern analysis revealed that 6 topographic templates explained a global explained variance (GEV) of 82.53% for L45 targets, whereas 8 topographic templates explained a GEV of 82.11% for R45 targets. On close visual inspection of these templates, the dynamically changing scalp maps seen in younger adults (parieto-occipital positivity in the 355-472 ms window) were absent in middle-aged and elder listeners for L45 targets, while for R45 targets these maps (centro-parietal positivity) emerged early (147-213 ms) and appeared for a longer duration in elder adults relative to the other two groups (middle-aged: 188-268 ms; elder adults: 163-243 ms). Topographical differences between the groups emerged from 150 ms onwards and continued till 472 ms. These regions correspond to the conventional LLR (N1, P2, N2) and P300. Activity around 150 ms post stimulus onset is indicative of auditory cortical responses that mediate top-down processes such as selective attention (Golob, Johnson, & Starr, 2002) and short-term memory load (Conley, Michalewski, & Starr, 1999). Furthermore, activity around this latency region has been linked with the cognitive processes of stimulus identification and discrimination (Hoffman, 1990). Age-related deterioration in all of the above processes (attention, memory and cognition) could have led to the topographical changes in the time region where these cues emerge. While the LLR region largely constitutes the pre-processing stage of stimulus processing, the P300 component indexes the conscious classification of the stimuli. Hence, it can be concluded that age-related changes manifest in both the pre-processing and the conscious stages of spatial deviance processing, with apparent changes starting from middle age (4th decade) itself.

Summary & Conclusion:

The present study systematically characterised age-related changes in cortical processing and disentangled the links between spatial processing deficits in the elderly and the underlying neural processes. From the findings of the study, it can be concluded that cortical changes with advancing age begin as early as the 4th decade of life. Future studies can explore these group differences at the source or dipole level.


  Abstract - AO294: Perception of Tinnitus Handicap and Stress across Age Groups in Normal Hearing Top


Anuradha

anuradha2ks@yahoo.com

1Post-Graduate Institute for Medical Education and Research, Chandigarh - 160012

Introduction:

A phantom sound not associated with any external stimulus is known as tinnitus; 33% of the elderly population is affected by tinnitus (Davis A et al., 2000; Jastreboff PJ et al., 1998). The prevalence of tinnitus has been estimated at 10 to 15% on the basis of data obtained from epidemiologic studies conducted in different countries (Jastreboff PJ et al., 1993).

Tinnitus perception has been found to be strongly correlated with its emotional impact. A study reported that tinnitus can lead to significant distress, depression, anxiety and a decrease in quality of life (Stouffer JL et al., 1990). Tinnitus is more prevalent among men, but varies across age groups (Alper Yenigun et al., 2014). Tinnitus becomes more severe with stress, and its frequency of occurrence increases in the elderly (Lockwood, 2005). The prevalence of chronic tinnitus increases with increasing age, peaking at 14.3% in people between 60 and 69 years of age (Shargorodsky J et al., 2010).

The literature has shown that changes in neuroplastic potential across the life span play a critical role in tinnitus generation. As neuroplastic changes are predominant during senescence, they influence not only the incidence of tinnitus but also the distress related to tinnitus (Hoffmann HJ et al., 2004).

Need for Study:

There is a need for further investigation of the impact of tinnitus across age groups, as the studies conducted so far are few in number and none of them clearly explains how severity, handicap and stress vary across age groups. Understanding these parameters would help facilitate treatment and reduce patient morbidity.

Aim & Objectives:

Aims: To study characteristics, handicap, stress and severity of tinnitus among four age groups.

Objectives:

  1. To examine the variations in pitch and loudness of tinnitus among different age groups
  2. To evaluate the association between severity of tinnitus and age group on the Tinnitus Severity Index
  3. To examine whether an association exists between tinnitus distress and age group on the Perceived Stress Scale
  4. To evaluate whether an association exists between tinnitus handicap and age group on the Tinnitus Handicap Inventory.


Method:

A total of 60 subjects participated in the study; they were divided into four groups on the basis of age: Group 1, 15-25 years (n = 15); Group 2, 26-40 years (n = 15); Group 3, 41-55 years (n = 15); and Group 4, 56-65 years (n = 15). Pure-tone audiometry was carried out using a MADSEN OB-922 audiometer to assess auditory thresholds in the frequency range 250 Hz to 8000 Hz using TDH-39 headphones. Only subjects with hearing thresholds within the normal range on the pure-tone average (500 Hz, 1000 Hz and 2000 Hz) were enrolled in the study. A detailed case history was taken, followed by measurement of tinnitus frequency and loudness.

All the subjects were assessed using a systematic interview and observation protocol for the assessment of tinnitus-related distress, handicap and severity. The effects of tinnitus on hearing, lifestyle, general health and emotional disturbances such as despair or frustration were recorded through questionnaires: the Tinnitus Severity Index (TSI), Tinnitus Handicap Inventory (THI) and Perceived Stress Scale (PSS). Tinnitus handicap was graded as mild (18-36), moderate (38-56) or severe (58-76) according to the THI. Stress level was rated as low (8-11), average (12-15), high (16-20) or very high (21 and over) on the basis of the PSS scores. Tinnitus severity was rated on the basis of 12 questions, each rated from 0 to 5, with scores of 1-12 (very mild), 13-24 (mild), 25-36 (moderate), 37-48 (severe) and 49-60 (catastrophic).
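A minimal sketch of the grading rules described above is given below; the band boundaries follow the abstract, while the handling of scores that fall between bands is an assumption for illustration only.

```python
# Sketch: classify questionnaire totals into the categories used in this study.
def grade_thi(score):
    # THI bands as given in the abstract: 18-36 mild, 38-56 moderate, 58-76 severe
    if score <= 36:
        return "mild"
    if score <= 56:
        return "moderate"
    return "severe"

def grade_pss(score):
    # PSS bands: 8-11 low, 12-15 average, 16-20 high, 21 and over very high
    if score <= 11:
        return "low"
    if score <= 15:
        return "average"
    if score <= 20:
        return "high"
    return "very high"

def grade_tsi(score):
    # TSI bands: 1-12, 13-24, 25-36, 37-48, 49-60
    bands = [(12, "very mild"), (24, "mild"), (36, "moderate"), (48, "severe")]
    for upper, label in bands:
        if score <= upper:
            return label
    return "catastrophic"

print(grade_thi(60), grade_pss(22), grade_tsi(40))   # severe, very high, severe
```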

SPSS version 21 for Windows was used for analysis. Intergroup results were compared using MANOVA, and the independent t-test was used to examine statistical significance (p < 0.05).

Results & Discussion:

Among the four age groups, 49.8% were men and 50.2% were women. The age ranged from 15 to 65 years. Subjects in group 1 (15-25 years) showed significant differences from group 4 (56-65 years) in terms of both tinnitus frequency and loudness. Group 4 subjects perceived the pitch of their tinnitus as high (71.5% of subjects) and the tinnitus as loud (53.2% of subjects), whereas group 1 reported more tonal tinnitus that was moderately loud.

On comparison of the tinnitus handicap scores between age groups, significant differences were seen between Group 1 and Group 2 (p = 0.021), Group 1 and Group 3 (p = 0.001), Group 1 and Group 4 (p = 0.00), and Group 2 and Group 4 (p = 0.047). Most of the subjects with catastrophic scores on the Tinnitus Handicap Inventory fell in group 4 (56-65 years), whereas a large number of subjects with mild, moderate or severe scores were found in the other three groups (groups 1, 2 and 3).

The Perceived Stress Scale was used to compare distress among the four groups. As with the THI scores, the scores were significantly different between the younger group (group 1, 15-25 years) and group 4 (56-65 years). Significant p values were observed between Group 1 and Group 4 (p = 0.014), Group 1 and Group 3 (p = 0.009), and Group 1 and Group 4 (p = 0.006), respectively. The scores of group 4 fell mostly in the severe range, whereas the group 1 scores mainly fell in the mild to moderate distress range.

Lastly, the severity of tinnitus across age groups was compared using the Tinnitus Severity Index. Significant differences were chiefly observed between group 1 and the other three groups: Group 1 and Group 2 (p = 0.000), Group 1 and Group 3 (p = 0.00), Group 1 and Group 4 (p = 0.00), and Group 3 and Group 4 (p = 0.010). Group 1 perceived tinnitus severity in the mild to moderate range, in comparison to the other three groups, which essentially fell in the moderate, severe and catastrophic categories.

Tinnitus prevalence has often been associated with senescence as well as concomitant hearing loss (Meikle M et al., 1984). Our study, however, emphasises age as a critical contributing factor to the distress, handicap and severity of tinnitus. Recent research revealed that subjects above 40 years of age were essentially affected by severe tinnitus, and the severity of stress and hearing loss was far greater in these patients (Swiahb JA et al., 2016). The literature also states that subjects above 40 years of age are less able to deal with tinnitus and thus have higher depression scores (Jastreboff PJ et al., 1996).

The results of our study also support research stating that a patient's reaction to tinnitus is a complex interaction between acoustic phantom symptoms, somatic attention and depressive symptoms (McCombe A et al., 1999). Most of the subjects in group 4 (56-65 years) had higher scores of distress and severity, increasing their risk for depression, anxiety and handicap.

Summary & Conclusion:

This study shows that increasing age not only leads to changes in tinnitus frequency but also increases its severity and handicap. In the younger population, tinnitus was less stressful and had a limited effect on daily life. The gradual increase in tinnitus severity can be attributed to the deterioration of brain mechanisms due to aging.


  Abstract - AO295: Maturation of Speech Perception in Noise Abilities during Adolescence Top


Chandni Jain1, Vipin P G2 & Aishwarya Lakshmi3

1chandni_j_2002@yahoo.co.in,2vipinghosh78@gmail.com, &3aishwaryalakshmi.611@gmail.com

1All India Institute of Speech and Hearing, Mysuru - 570006

Introduction:

Speech perception encompasses the perception of spectro-temporal cues. Speech perception in noise (SPIN) involves the auditory processes of auditory separation or auditory closure, wherein the ability to separate these cues from background noise is assessed. During a speech perception in noise task, right hemisphere activity increases and left hemisphere activity decreases, which changes the functional asymmetry relative to a speech perception task alone (Shtyrov et al., 1999). New neural networks in the right hemisphere are recruited because of the presence of noise (Sowell et al., 2001). When speech is presented with noise, there is a resultant neural delay at the brainstem and cortical levels, as neural synchrony is degraded in the presence of noise (Warrier et al., 2004; Billings et al., 2009; Russo et al., 2009; Burkard and Sims, 2002; Russo et al., 2004).

Auditory closure/auditory separation abilities can be tested using SPIN with different types of speech material and different types of noise. Various factors affect SPIN scores, including age, hearing loss, cognition and the type of material used. Past studies assessing the effect of maturation on SPIN have found that it matures only by around 13 to 15 years of age (Crandell & Smaldino, 2000). Neijenhuis et al. (2002) reported no significant differences on a word-in-noise test between adolescents (14 to 16 years) and adults, while significant differences were observed between the two groups for a sentence-in-noise test.

Need for Study:

Learning difficulties have been seen in children who have difficulty with speech recognition in noisy situations (Bellis, 2011). This difficulty can be attributed to factors such as poor classroom acoustics, poor speech-to-noise ratio and increased reverberation time, which makes it essential to evaluate the auditory closure/auditory separation process. The effect of age on SPIN has been studied across children, young adults and older adults (Mamatha & Yathiraj, 2019; Yathiraj & Vanaja, 2015). SPIN scores for phoneme and word recognition were found to improve with age from 7 to 10 years, with no gender effect (Mamatha & Yathiraj, 2019). It has been reported in the literature that SPIN scores increase with age. However, the age at which SPIN scores mature and become adult-like is yet to be understood, and whether this maturation differs between the right and left ears needs to be explored. This makes it imperative to study SPIN scores as a function of age in adolescents and adults, and to assess ear effects, if any.

Aim & Objectives:

The aim of the present study was to assess the maturation of speech perception in noise abilities during adolescence.

The objectives of the present study include,

  1. To assess speech perception in noise scores across adolescent sub-groups.
  2. To compare the SPIN scores of the right and left ears for all the age groups.


Method:

A total of 168 participants (129 females and 39 males) in the age range of 10 to 19 years, with the exception of the 16 to 17.11 years age bracket, were recruited for the study. This age group was not included as studies report that maximum morphological changes in the central nervous system are completed by 15 years of age (Luders, Thompson & Toga, 2010). According to the WHO, 'adolescents' are individuals in the age group of 10-19 years. All the participants were native Kannada speakers, and none had a history of hearing loss, ear disease, head trauma, ototoxic drug intake, ear surgery, speech-language problems, or neurological issues. None of them reported any illness at the time of testing. The participants were divided into seven subgroups: group 1 included participants from 10 to 10.11 years (n = 25), group 2 from 11 to 11.11 years (n = 25), group 3 from 12 to 12.11 years (n = 25), group 4 from 13 to 13.11 years (n = 26), group 5 from 14 to 14.11 years (n = 24), group 6 from 15 to 15.11 years (n = 22) and group 7 from 18 to 19 years (n = 21). Routine audiological evaluation was done to ensure normal hearing sensitivity in all the participants.

Auditory closure/auditory separation was assessed using the phonemically balanced word lists for adults in Kannada (Manjula, Antony, Kumar, & Geetha, 2015). There are a total of 21 word lists intended for use with noise, each consisting of 25 words. Two word lists (List 1 and List 2) were utilised for the current study, with the words mixed with speech noise at 0 dB SNR. One list was presented to the right ear and the other to the left ear, and the assignment of lists to ears was randomised. The task of the participants was to repeat the words heard in the right and left ears. The number of words correctly identified was counted and converted into a percentage for each ear. The lists were presented at 70 dB HL through calibrated HAD 200 headphones connected to a laptop.

Results & Discussion:

Normality of the data was assessed using the Shapiro-Wilk test, and the data were found to be normally distributed (p > 0.05). One-way ANOVA (analysis of variance) was used to assess the effect of age on SPIN scores in the right and left ears. Results showed a significant main effect of age on SPIN scores in the left ear (p < 0.05) but no significant effect of age on SPIN scores in the right ear (p > 0.05). Pairwise comparisons with Bonferroni corrections for multiple comparisons for the left-ear SPIN scores showed significant differences between groups 1, 2 and 3 and group 7 (p < 0.05), whereas groups 4, 5 and 6 showed no significant differences from group 7 (p > 0.05). This indicates that SPIN scores in the left ear become adult-like by around 14 years of age.
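A hedged sketch of this analysis pipeline (one-way ANOVA followed by Bonferroni-corrected pairwise comparisons) is shown below; the group scores are synthetic placeholders, not the study data, and simple corrected t-tests stand in for the SPSS post-hoc routine.

```python
# Sketch: one-way ANOVA on left-ear SPIN scores across seven age groups,
# followed by Bonferroni-corrected pairwise t-tests.
from itertools import combinations
import numpy as np
from scipy.stats import f_oneway, ttest_ind

rng = np.random.default_rng(0)
means = [60, 62, 64, 68, 69, 70, 71]                  # placeholder group means (%)
groups = [rng.normal(m, 5, 25) for m in means]

f_stat, p_main = f_oneway(*groups)                    # main effect of age
n_comparisons = len(list(combinations(range(7), 2)))  # 21 pairwise comparisons
for i, j in combinations(range(7), 2):
    t, p = ttest_ind(groups[i], groups[j])
    p_bonf = min(p * n_comparisons, 1.0)              # Bonferroni correction
    if p_bonf < 0.05:
        print(f"group {i + 1} vs group {j + 1}: corrected p = {p_bonf:.3f}")
```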

Thus, the results of the present study showed that right-ear SPIN scores are already mature by 10 years, whereas left-ear SPIN scores become adult-like by around 14 years of age. This could be attributed to the morphological changes taking place with increasing age at the level of the central auditory pathway. For the left-ear speech perception task, the contralateral (right) hemisphere receives the stimulus first, and it is then transferred to the left hemisphere for processing through the corpus callosum. It has been reported that the thickness of the corpus callosum increases between 5 and 18 years of age (Luders, Thompson & Toga, 2010). The difference in SPIN maturation between the right and left ears could also be attributed to the increase in white and gray matter of the cortex during the preadolescent years (Giedd et al., 1999). It has also been reported that the increase in gray matter in the temporal lobe is non-linear and region-specific. In the present study, adult-like scores were achieved by 14 years of age, which could be attributed to this non-linear growth of white and gray matter.

Summary & Conclusion:

The current study demonstrates that auditory closure/auditory separation abilities mature at different times for the right and left ears. Right-ear SPIN reaches adult-like scores by 10 years, whereas left-ear SPIN takes until around 14 years of age to attain adult-like values.


  Abstract - AO296: Combined Effects of Noise and Stimulation Rate on Noise Exposed Human Auditory Brainstem Responses Top


Sreeraj Konadath1 & Megha K N2

1sreerajkonadath@aiishmysore.in &2meghaknagaraja@gmail.com

1All India Institute of Speech and Hearing, Mysuru - 570006

Introduction:

The Auditory Brainstem Response (ABR) is one of the most widely studied approaches for evaluating the functioning of the auditory system. It is well known that a broadband noise masker increases the latency and decreases the amplitude of wave V of the ABR. Adaptation due to an increase in the rate of stimulation has a similar effect, that is, an increase in wave V latency. Hence, both masking and adaptation have the same effect on wave V latency. However, when these two effects are combined, the results are different: in normal-hearing individuals without noise exposure, the wave V latency shift due to increased rate is reduced in the presence of continuous broadband noise. The ABR is therefore an appropriate approach for examining the effect of the same parameters on wave latencies in individuals with noise exposure who have normal hearing thresholds.

Need for Study:

A metric is required to investigate the effects of noise on the human auditory system prior to the onset of hearing loss. Hence, this study was undertaken to better understand the effects of masking and stimulation rate on the latencies of ABR peaks in individuals exposed to noise.

Aim & Objectives:

To determine the effects of masking noise and stimulus repetition rate on ABR waves I, III and V in individuals exposed to occupational noise, and to examine which of the two, masking noise or repetition rate, is the better predictor of noise-exposure effects.

Method:

Seventy-five adult male participants took part and were divided into four groups. The control group included individuals not exposed to occupational noise, and the experimental groups consisted of individuals with occupational noise exposure greater than 80 dB(A) [mean = 85.5 dB(A)] for 8 hours per day. The latter were further divided into 3 groups based on the years of noise exposure (Group 2: up to 3 years of exposure; Group 3: 3-6 years of exposure; and Group 4: 6-9 years of exposure). All participants underwent pure-tone audiometry, immittance evaluation and ABR. A 20 dB HL threshold criterion for pure-tone audiometry was set in order to rule out any peripheral hearing loss. Further, distortion product otoacoustic emissions (DPOAEs) were recorded at 8 points per octave from 1000 Hz to 8000 Hz at 80 dB, with a frequency ratio of 1.22 for the primary tones and the level of the f2 primary kept 10 dB lower than the f1 level. The ABR assessment was carried out using the Interacoustics Eclipse EP-25 system, with electrodes placed at Fz, M1 and M2, and the ground at Fpz. The study was done by varying the rate of presentation of the CE-chirp (11.1/sec, 30.1/sec, and 80.1/sec) and the level of ipsilateral masking noise (0, 20 and 40 dB SPL) relative to the stimuli. The stimulus level was kept constant at 80 dB nHL. The absolute amplitudes and absolute peak latencies were recorded for peaks I, III, and V.

Results & Discussion:

The overall results suggest that, in individuals not exposed to noise (Group 1), the rate-induced latency shift of wave V in the presence of noise reduced as the level of noise increased. However, this was not true of the experimental groups, in whom wave V latency increased consistently with increases in both presentation rate and noise level. A statistically significant difference (p < 0.05) was found for wave V latency across the noise-exposed groups (Groups 2, 3, & 4) at a noise level of 20 dB SPL with the 80.1/sec repetition rate, F(3, 71) = 3.78, = 0.10; and at a noise level of 40 dB SPL with the 80.1/sec repetition rate, F(3, 71) = 3.92, = 0.11. When discriminant function analysis was used to determine the better predictor among repetition rate and noise level, with both presented simultaneously, masking/noise level predicted or differentiated the groups better than repetition rate. It indicated a significant difference for two predictor variables: L80.1PK5N20 (latency of wave V at the 80.1/sec repetition rate with 20 dB SPL noise), F(3, 71) = 2.789, Λ = 0.895; and L80.1PK5N40 (latency of wave V at the 80.1/sec repetition rate with 40 dB SPL noise), F(3, 71) = 2.669, Λ = 0.899.
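A discriminant analysis of this kind can be sketched as follows. The data and group labels are hypothetical stand-ins (column names adapted from L80.1PK5N20 and L80.1PK5N40), and scikit-learn's LinearDiscriminantAnalysis is used only for illustration; it reports cross-validated classification accuracy rather than the Wilks' Λ and F statistics quoted above, which typically come from a package such as SPSS.

```python
import numpy as np
import pandas as pd
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

# Hypothetical wave V latencies (ms) at 80.1/s with 20 and 40 dB SPL noise
rng = np.random.default_rng(0)
df = pd.DataFrame({
    "L80_1PK5N20": rng.normal(6.4, 0.3, 75),
    "L80_1PK5N40": rng.normal(6.6, 0.3, 75),
    "group": rng.integers(1, 5, 75),          # Groups 1-4
})

X, y = df[["L80_1PK5N20", "L80_1PK5N40"]], df["group"]
lda = LinearDiscriminantAnalysis()
acc = cross_val_score(lda, X, y, cv=5).mean()  # how well the latencies separate the groups
print(f"cross-validated classification accuracy: {acc:.2f}")
```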

The results of the present study suggest that, for individuals not exposed to noise, the rate and noise-level functions reach a point beyond which wave V latency does not increase further with an increase in either rate or noise level. This is in line with Burkard and Hecox (1987), who reported that as both adaptation and masking increase, the wave V latency shift tends to reduce. Although each produces a latency shift when studied as an individual function, their combined effects are occlusive; that is, in a general population there is a reduction in the rate-induced latency shift in the presence of broadband noise. In the current study, this effect was not seen in individuals exposed to noise: wave V latency increased in parallel with increases in rate and noise level. In other words, the occlusive effect found in individuals not exposed to noise was absent in noise-exposed individuals, suggesting an overlap in the mechanisms sub-serving adaptation and masking. Adaptation may be a form of temporal remote masking, wherein wave V, being a high-frequency response, can be masked by a low-frequency noise. In individuals exposed to occupational noise, however, the occlusive effect may be absent because this remote masking is affected by insult at the level of the auditory nerve fibres: wave V may still show an adaptation effect in these individuals but may no longer be involved in temporal masking when the noise is presented simultaneously. This trend was evident only at the higher presentation rate of 80.1/sec and not at the lower rates of 11.1 and 30.1/sec. Likewise, only the higher noise level (40 dB SPL) had an effect on ABR wave V, as there was no significant masking at lower levels. It is clear from this study that at very high stimulus presentation rates and high noise levels, adaptation and masking may have no effect on one another, resulting in no reduction of the wave V latency shift in individuals with occupational noise exposure. The discriminant function analysis indicated that noise level was a better predictor than stimulation rate when both were presented simultaneously, although the difference was not large. Since neither rate nor noise level showed clear dominance, a combination of both functions would serve as a better evaluation parameter, as the overlap between these two measures was responsible for the obtained results.

Summary & Conclusion:

The interaction of rate-induced and noise-induced latency shifts suggests an overlap in the neural mechanisms underlying these effects, which was evident in the latency shift of wave V. This interaction may further aid early identification of noise-induced auditory damage using an objective measure.


  Abstract - AO299: Effects of Ageing versus Noise Exposure on Auditory System in Individuals with Normal Audiometric Thresholds


Sreeraj Konadath1, Megha K N2 & Ganapathy M. K3

1sreerajkonadath@aiishmysore.in,2meghaknagaraja@gmail.com, &3ganapathy.mk8@gmail.com

1All India Institute of Speech and Hearing, Mysuru - 570006

Introduction:

It is widely accepted that the permanent threshold shift (PTS) following noise exposure is caused by permanent damage to auditory structures. Recent animal studies have revealed that noise exposure causes not just temporary threshold shifts but can also produce permanent damage to the synapses in the cochlea, termed 'hidden hearing loss'. It is explained as the loss of synapses and cochlear-nerve terminals innervating the inner hair cells. There is also evidence that ageing involves the same mechanism: findings have shown a permanent destruction of synapses between the inner hair cells (IHCs) and type I auditory nerve fibres (ANFs), leading to a slow degeneration of the ANFs. The hair cells themselves, however, are not affected, leaving hearing sensitivity normal.

Need for Study:

A review of the literature shows a need to examine the effects of both noise exposure and ageing on the auditory system, as the results of previous studies are heterogeneous. Moreover, a detailed audiological assessment tapping the auditory pathway is necessary to reveal where these effects occur.

Aim & Objectives:

The study aims at investigating the effects of ageing and noise exposure on the auditory system using ABR, DPOAEs, and contralateral suppression of otoacoustic emissions (CSOAEs). The objective was to compare DPOAEs, CSOAEs, and ABR in aged and noise-exposed individuals along with normal-hearing controls, to find an indicator for early diagnosis of auditory damage.

Method:

Sixty adult male participants were divided into three groups. Group 1 included individuals not exposed to occupational noise, and Group 3 included individuals below 35 years of age exposed to occupational noise. Group 2 consisted of individuals aged 45-65 years without any occupational noise exposure. DPOAE fine structure was studied at 8 points per octave at different f2 frequencies. TEOAEs were measured with and without contralateral broadband noise at 30 dB SL (CSOAEs). ABR was recorded using click stimuli at levels from 90 down to 50 dB nHL. The absolute amplitudes and peak latencies of peaks I, III, and V, and the wave V/I amplitude ratio, were analysed.

Results & Discussion:

To compare CSOAEs between the groups, Kruskal-Wallis tests were administered, which revealed greater contralateral suppression in Group 1 than in Groups 2 and 3 [χ2(2) = 6.35 at 1 kHz, χ2(2) = 5.22 at 1.5 kHz, χ2(2) = 5.20 at 2 kHz, χ2(2) = 5.55 at 3 kHz, χ2(2) = 2.67 at 4 kHz, and χ2(2) = 11.63 (p < 0.05) for global suppression]. Mann-Whitney U tests for pairwise comparisons showed differences between Groups 1 and 2 and between Groups 1 and 3 for global amplitude, but no difference between Groups 2 and 3; no significant difference was observed when individual frequencies were considered. Contralateral suppression of TEOAEs was greater in non-exposed individuals than in aged and noise-exposed individuals, and TEOAE amplitudes were also reduced in the noise-exposed and aged groups compared to the non-exposed group. This could be attributed to a damaged efferent auditory system due to occupational noise exposure, which in turn failed to suppress the emissions.

Kruskal-Wallis tests were administered to compare the groups on the latency parameters at different intensities. The tests indicated a significant effect: χ2(2) = 18.79 at 90 dB nHL for wave I; χ2(2) = 10.06 at 60 dB nHL for wave V; and χ2(2) = 13.75 and χ2(2) = 7.52 at 50 dB nHL for waves V and III, respectively (p < 0.05). Mann-Whitney U tests further revealed significant latency differences between Group 1 and Group 2 (|Z| = 3.78 at 90 dB nHL for wave I; |Z| = 2.73 at 60 dB nHL for wave V; |Z| = 3.52 and |Z| = 2.39 at 50 dB nHL for waves V and III, respectively), whereas between Group 1 and Group 3 the difference was present only for the latency of wave I at 90 dB nHL (|Z| = 3.57). When Groups 2 and 3 were compared, the difference was evident only for wave V (|Z| = 2.01 at 50 dB nHL and |Z| = 2.67 at 60 dB nHL). Wave I latency at 50 and 60 dB nHL was excluded from the analysis because very few participants (N < 3 in each group) showed a response at those intensities. However, no clear trend was observed for the latency parameter across intensities. This could be because the damage reduces the number of fibres firing, which is evident as a reduced wave I amplitude, rather than the speed of signal transmission, which is reflected in latency.

When the absolute amplitudes of the ABR waves and the wave V/I ratio were compared between the groups using Kruskal-Wallis tests, a significant effect was indicated for wave I amplitude and the wave V/I amplitude ratio at higher intensity levels. However, the difference was significant at only a few selected intensities for waves III and V, with no clear trend. Mann-Whitney U tests for pairwise comparisons showed differences between Groups 1 and 2 and between Groups 1 and 3 for most of the parameters that were significant on the Kruskal-Wallis test.

The amplitude of ABR wave I and the wave V/I ratio differed significantly between the three groups, with a reduction in wave I amplitude for Groups 2 and 3. This trend of reduced amplitude was not well established at lower intensity levels. A possible reason is that, even in normal-hearing individuals without occupational noise exposure, waves I and III are less often present at lower intensity levels, making it difficult to use them as indicators at low intensities. In contrast to the results for wave I, there was no decrement in wave V amplitude at supra-threshold levels. The results of the present study support the idea that noise-induced synaptopathy is selective to low spontaneous-rate (SR) fibres, as indicated by a reduction in amplitude at higher, compared to lower, intensities.

Summary & Conclusion:

We can infer from the present study that a reduction in ABR wave I amplitude and an increment in the wave V/I amplitude ratio (which is due to the lessening of wave I amplitude and not to changes in wave V), along with CSOAEs, would act as earlier clinical indicators than DPOAEs alone, suggesting that damage occurs at the synaptic level prior to hair cell damage.


  Abstract – AP987: Relation between Performance on an Auditory Memory Test and Perception of Global Memory in Young and Older Adults


Shubhaganga Dhrruva kumar1 & Asha Yathiraj2

1dshubhaganga94@gmail.com &2 asha_yathiraj@rediffmail.com

1All India Institute of Speech and Hearing, Mysuru-570006

Introduction:

A plethora of studies have reported a decline in auditory memory ability with ageing (Anders & Fozard, 1973; Heinrich & Schneider, 2001; Murphy, Craik, Li, & Schneider, 2000; Neils, Newman, Hill, & Weiler, 1991). To detect auditory memory problems in older adults, it is essential that they be evaluated appropriately.

A few studies report that subjective self-reported information regarding memory problems helps in predicting performance on a memory test in older adults, as the two are correlated (Cook & Marsiske, 2006; Mendes et al., 2008). On the contrary, other studies report a non-significant relationship between self-reported memory problems and performance on a memory test (Mattos et al., 2003; Van Bergen, Jelicic, & Merckelbach, 2009). Hence, there is a lack of consensus regarding the relationship between performance on objective tests and subjective memory complaints.

Need for Study:

Studies have shown that the amount of decline in memory abilities reported through self-report questionnaires or checklists does not correspond to the actual impairment present in older adults (Mattos et al., 2003; Taylor, Miller, & Tinklenberg, 1992; Van Bergen et al., 2009). Although memory tends to decline with age, the changes may be so subtle that individuals do not perceive the problem, thereby affecting their self-assessment of the condition (Schmand, Jonker, Geerlings, & Lindeboom, 1997). Studies in the literature have evaluated global memory, but similar research on auditory memory is sparse. Thus, there is a need to assess the relationship between performance on an auditory memory task and subjective perception of memory impairment. This information would provide better insight into whether self-assessment of memory problems is reliable. Further, there is a need to see whether the auditory memory impairment evaluated through a performance test can be detected through a checklist tapping global memory abilities.

Aim & Objectives:

Aim: The study aimed to investigate the relationship between performance on an auditory memory test and self-perception of global memory in young and older adults.

Objectives:

  1. Compare the auditory memory test scores of young and older adults,
  2. Compare the scores on a global memory ability checklist in young and older adults,
  3. Determine the relationship between the performance on an auditory memory test and perception on a global memory ability checklist in young and older adults.


Method:

Participants:

Using a purposive sampling technique, 60 native Kannada speakers were selected, 30 of whom were young adults (18 to 30 years) and 30 older adults (58 to 70 years). Only individuals with normal hearing sensitivity who did not report any history of otological, neurological, or speech and language problems were included in the study. The participants were required to have scores of ≥ 24 on the Mini-Mental State Examination (Folstein, Folstein, & McHugh, 1975).

Material

The Kannada auditory memory and sequencing test (Yathiraj & Vijayalakshmi, 2006) and the Memory Ability Checklist (Mythri & Yathiraj, 2012) were used to evaluate the participants. The maximum possible score for the former was 118 and for the latter was 18.

Procedure:

The participants were evaluated following the ethical guidelines of the institute. The study used a standard group comparison design. Testing was carried out in a sound-treated suite meeting ANSI (R2013) specifications. The Kannada auditory memory and sequencing test was administered binaurally at 40 dB SL (ref: SRT) through headphones via a dual-channel audiometer. Memory scores were calculated by giving a score of one for every word correctly repeated. The memory ability checklist, which consisted of 9 questions tapping global memory, was administered to all the participants. Half of the participants were evaluated with the test first and the other half with the checklist first. The checklist responses were scored on a 3-point rating scale.

A Shapiro-Wilk test of normality indicated that the auditory memory test scores were normally distributed, whereas the memory ability checklist scores were not. Hence, parametric tests were used for the auditory memory test scores, while the memory ability checklist data were analysed using non-parametric tests. The relation between the responses on the test and the checklist was analysed using a non-parametric correlation.
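A minimal sketch of the analysis pipeline just described, assuming the scores sit in plain arrays; the values below are random placeholders, not the study's data.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
memory_young, memory_old = rng.normal(100, 8, 30), rng.normal(75, 9, 30)         # test scores (max 118)
checklist_young, checklist_old = rng.integers(0, 7, 30), rng.integers(6, 15, 30)  # checklist (max 18)

# Normality check decides between parametric and non-parametric group comparisons
if min(stats.shapiro(memory_young).pvalue, stats.shapiro(memory_old).pvalue) > 0.05:
    print("Independent samples t-test:", stats.ttest_ind(memory_young, memory_old))
else:
    print("Mann-Whitney U:", stats.mannwhitneyu(memory_young, memory_old))

# Checklist scores were non-normal, so a non-parametric comparison and correlation are used
print("Mann-Whitney (checklist):", stats.mannwhitneyu(checklist_young, checklist_old))
print("Spearman (test vs checklist, older adults):", stats.spearmanr(memory_old, checklist_old))
```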

Results & Discussion:

Comparison of young and older adults on the memory test: The mean and median auditory memory test scores were lower for the older adults than for the young adults. An independent samples t-test confirmed that the older adults had significantly poorer scores on the auditory memory test than the younger adults (t(36) = 10.79, p < 0.01).

Comparison of young and older adults on the memory checklist: The mean and median scores on the memory ability checklist were higher for the older adults than for the young adults. A Mann-Whitney U test, done to determine whether this difference was statistically significant, revealed that the memory ability checklist scores were significantly higher in the older adults than in the young adults (|z| = 6.14; p < 0.01).

Relation between the memory test (performance memory) and memory checklist (perceived memory) in each age group: The correlation between the auditory memory scores and the memory ability checklist scores was examined separately for the young and older adults. Spearman's test revealed no significant correlation between the two in either the young adults (r = 0.22; p > 0.05) or the older adults (r = 0.70; p > 0.05).

The findings of the present study add to the corpus of literature indicating that auditory memory abilities decline with advancing age (Anders & Fozard, 1973; Heinrich & Schneider, 2001; Murphy et al., 2000; Neils et al., 1991). However, the current study indicates that there is no relationship between performance on an auditory memory test and self-report on a checklist of global memory abilities in either age group. This mismatch between performance and self-reported scores could have occurred because the two measures evaluated different aspects of memory: the checklist evaluated global aspects of memory, whereas the test evaluated only auditory memory. It can therefore be construed that a deficit in auditory memory is not reflected in global memory activities, indicating that memory is modality specific. Hence, auditory cognitive abilities need to be evaluated using auditory-based tests rather than inferred from global memory abilities. A checklist that specifically taps auditory memory would probably yield a higher correlation with an auditory memory performance test.

Summary & Conclusion:

The study confirms that auditory memory abilities decline with advancing age. As memory is modality specific, a checklist specific to auditory memory would be a better predictor of actual impairment in auditory memory abilities than a checklist evaluating global aspects of memory.


  Abstract – AP1041: Does Electrical Stimulation Improve Neural Synchronization following Cochlear Implantation? A Comparative Study in Sensorineural Hearing Loss, Auditory Neuropathy Spectrum Disorder & Hypoplastic Cochleo-Vestibular Nerve


Radhika Mishra1, Neevita Narayan2 & Prabhash Kumar3

1radhikamishra23@gmail.com,2neevita@sphear.in, &3prabhashaiish@gmail.com

1SpHear Speech and Hearing clinic, Delhi - 110024

Introduction:

Currently, many objective measures are available to validate cochlear implant functioning and electrode array position intra-operatively as well as post-operatively. Electrically evoked compound action potential (ECAP) measurements are the most popular because of their ease and speed of recording. The ECAP reflects the response of the auditory nerve to electrical stimulation; it represents a synchronous response from electrically stimulated auditory nerve fibres and is essentially the electrical analogue of wave I of the auditory brainstem response (ABR). ECAP measurements have a sensitivity of 89% and a specificity of 96% (Botros, Dijk & Killian, 2007). An increasing number of cochlear implantations are being carried out in cases with auditory nerve anomalies, owing to advances in cochlear implant technology and proven outcomes in this population. Studies have reported improvement in neural synchronisation, measured through ECAPs, in children and adults without nerve anomalies (Moura et al., 2014; Caldas et al., 2015).

Need for Study:

The number of cochlear implantations carried out in cases with auditory nerve anomalies is growing across India. Objective measurements such as the ECAP have been used to report improved neural synchronization over time in cochlear implant users without nerve anomalies; however, studies objectively reporting such improvement in CI users with nerve anomalies are scarce. Through this study, we aimed to demonstrate similar results in recipients with auditory nerve anomalies.

Aim & Objectives:

  1. To look for an improvement in auditory nerve synchronisation following electrical stimulation over a period of time in children with Cochlear Implants.
  2. To then compare the improvement in auditory nerve synchronisation across groups: Sensorineural Hearing Loss (SNHL) without auditory nerve anomaly, Auditory Neuropathy Spectrum Disorder (ANSD), and SNHL with hypoplastic cochleo-vestibular nerves (CVN).


Method:

  1. Forty-five children who underwent cochlear implantation, with a minimum of 12 months of follow-up, were selected for this study.
  2. Of these, 25 children in the age range of 2 to 6 years had SNHL without any auditory nerve anomalies.
  3. Ten children in the age range of 2 to 6 years were in the ANSD group.
  4. Ten children in the age range of 2 to 6 years were in the hypoplastic cochleo-vestibular nerve group.
  5. All the children had undergone an adequate hearing aid trial prior to cochlear implantation, with limited results.
  6. They also underwent an extensive evaluation process for cochlear implants.
  7. At the end of the cochlear implant surgery, intra-operative ECAP measurements were done in auto and manual mode for 5 electrodes across the array.
  8. All the children are successful users of cochlear implants and have demonstrated positive outcomes.
  9. Post-operatively, ECAP measurements were also done at regular intervals of 3, 6, 9, and 12 months.
  10. A retrospective analysis of the ECAP measurements was done after 12 months of electrical stimulation, to see whether there was any improvement in neural synchronisation over time.
  11. We looked at the change in T-NRT values and N1-P1 amplitude at T-NRT level.
  12. The results of the SNHL, ANSD, and hypoplastic cochleo-vestibular nerve groups were compared, and the data were analysed statistically.


Results & Discussion:

  1. Electrical stimulation over a period of time improved the neural synchronisation of the auditory nerve in all the children with cochlear implants, as seen by an improvement in the N1-P1 amplitudes at T-NRT.
  2. Analysis showed a statistically significant improvement in neural synchronisation in all three groups of recipients.
  3. The SNHL group with normal auditory nerves demonstrated the greatest improvement, followed by the ANSD group and then the hypoplastic CVN group.


Summary & Conclusion:

Overall, there is an improvement in neural synchronisation with cochlear implants following electrical stimulation over time. This has previously been reported in cochlear implant users without any nerve anomalies (Moura et al., 2014; Caldas et al., 2015), and we have demonstrated a similar improvement in all our cochlear implant recipients. In addition, we have shown that children with auditory nerve anomalies such as ANSD and CVN anomalies also demonstrate improvement in neural synchronisation following cochlear implantation. Seeing such improvement in objective measures gives us confidence to take up such children for cochlear implantation.


  Abstract – AP990: Effect of Physical Activity on Temporal Processing and Speech Perception in Noise in Middle Aged Adults


Sanket Satish1, Aakanksha Pandey2 & Chandni Jain3

1sanketaslp2017@gmail.com,2aakanksha162@gmail.com, &3chandni_j_2002@yahoo.co.in

1All India Institute of Speech and Hearing, Mysuru - 570006

Introduction:

Physical activity has been shown to have physical and emotional benefits. Long-term participation in physical activity can improve cardiovascular functioning and strengthen the muscles, along with improving balance and flexibility (Chodzko-Zajko, 2012). Animal studies relating cardiovascular exercise to central nervous system (CNS) health have shown that aerobic fitness has a positive effect on brain function (Hawkins, Kramer, & Capaldi, 1992). It is also reported that physically active individuals have increased brain-derived neurotrophic factor, which promotes neurogenesis as well as improvements in learning (Cotman & Berchtold, 2002). Further, numerous epidemiological studies have suggested that better hearing is associated with increased physical activity (Mikkola et al., 2015; Tomioka et al., 2015).

Need for Study:

Physical activity has been shown to have a positive effect on cognitive abilities (Wiese et al., 1990), and physical exercise has been reported to increase alpha activity in the brain (Boutcher & Landers, 1988). Thus, it is hypothesized that physically active individuals may also have better auditory abilities. An animal study has reported that regular exercise slows the onset of presbycusis (Han, Ding & Lopex, 2017).

Studies have shown that working memory abilities, psychophysical abilities and perception of speech in the presence of background noise deteriorate with aging (Harris, Mills, & Dubno, 2007).

Further, a few electrophysiological studies have shown that physical activity has a positive effect on the P300 response. Thus, in line with these physiological studies, the present study hypothesizes that various forms of physical activity may result in reorganization of neuronal circuitry and improve cognitive abilities, a benefit that should also be reflected in the auditory domain. Hence, in the present study, the effect of different levels of physical activity was evaluated for various temporal processing abilities and for speech perception in noise.

Aim & Objectives:

The study aimed to assess the effect of different levels of physical activity on temporal processing and speech perception in noise.

Objectives:

  1. To compare temporal processing, through duration discrimination at 1000 Hz and gap detection in white noise, among middle-aged adults with various levels of physical activity.
  2. To compare speech perception in noise among middle-aged adults with various levels of physical activity.

Method:

Participants

A total of 52 participants in the age range of 40 to 60 years (mean age = 49.65 years, SD = 5.41) took part in the study. The participants were selected through convenience sampling and were equally divided into four groups, Active (A), Moderately Active (MA), Moderately Inactive (MI) and Inactive (I), based on the General Practice Physical Activity Questionnaire (GPPAQ) (Health, 2006). This questionnaire, developed for physical activity surveillance, comprises 16 questions (P1-P16) covering three domains of physical activity: activity at work, travel to and from places, and recreational activities. The GPPAQ version 2, a validated screening tool that assesses adults' (16-74 years) physical activity levels, was used for the study. Based on this questionnaire, the participants were classified as Active, Moderately Active, Moderately Inactive, and Inactive.

All the participants had hearing sensitivity within normal limits (<15 dB HL) at all octave frequencies from 250 Hz to 8000 Hz [ANSI S3.1 (1991)] and normal middle ear functioning, as indicated by bilateral A-type tympanograms with ipsilateral and contralateral acoustic reflexes present at normal sensation levels at 500 Hz and 1000 Hz. Participants with a history of otologic or neurologic problems, alcohol consumption, smoking, diabetes, hypertension, or cardiovascular disease, or with reported illness on the day of testing, were not considered for the study. Written consent was taken from all the participants.

Experimental tests:

Temporal processing was assessed using gap detection thresholds for broadband noise (GDT) and duration discrimination thresholds (DDT) at 1000 Hz. These tests were done on a PC loaded with the Maximum Likelihood Procedure (MLP; Green, 1990, 1993) toolbox implemented in MATLAB. All the tests used a three-interval alternative forced-choice adaptive technique tracking a 79.4% correct response criterion.

Each trial consisted of three blocks: two blocks contained the standard stimulus and a randomly chosen third block contained the variable stimulus. The participant's task was to identify the block containing the variable stimulus.
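The three-interval forced-choice tracking described above can be illustrated with a simple staircase. The authors used the MLP toolbox in MATLAB; the sketch below instead uses a three-down/one-up rule, which converges on the same 79.4% correct point (0.5^(1/3) ≈ 0.794), and a simulated listener, so it illustrates only the trial logic, not the actual procedure.

```python
import random

def three_afc_trial(gap_ms, listener_threshold_ms=4.0):
    """Simulated listener: more likely to pick the gap interval as the gap grows.
    Purely illustrative; a real experiment presents the stimuli and records a response."""
    p_correct = 1/3 + (2/3) * (1 - 0.5 ** (gap_ms / listener_threshold_ms))
    return random.random() < p_correct

def three_down_one_up(start_gap_ms=20.0, step=1.25, trials=60):
    """3-down/1-up staircase: converges near 79.4% correct (0.5 ** (1/3))."""
    gap, correct_run, track = start_gap_ms, 0, []
    for _ in range(trials):
        if three_afc_trial(gap):
            correct_run += 1
            if correct_run == 3:          # three consecutive correct -> make it harder
                gap /= step
                correct_run = 0
        else:                             # one incorrect -> make it easier
            gap *= step
            correct_run = 0
        track.append(gap)
    return sum(track[-10:]) / 10          # average of the last trials as the estimate

print(f"estimated gap detection threshold: {three_down_one_up():.2f} ms")
```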

Speech perception in noise was assessed using the Quick Speech Perception in Noise test in Kannada (Methi et al., 2009). The test has 7 equivalent lists, and every list consists of seven sentences with five key words each. The SNR reduces from +8 dB to -10 dB in 3 dB steps from sentence 1 to sentence 7 in each list. The participants were instructed to verbally repeat the target sentences. A score of one was given to each correctly identified key word, and the number of correct key words recognized at each SNR was counted. All the tests were done binaurally at 60 dB SPL through Sennheiser HDA 200 headphones.
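The keyword scoring for the speech-in-noise test can be sketched as below: each sentence carries five key words at a fixed SNR, and the score at each SNR is the count of key words correctly repeated. The words and responses are placeholders, not items from the Kannada test.

```python
# SNRs for sentences 1-7 in each list (+8 dB down to -10 dB in 3 dB steps)
SNRS = [8, 5, 2, -1, -4, -7, -10]

def score_list(keyword_sets, responses):
    """keyword_sets: five key words per sentence; responses: words the listener repeated."""
    return {snr: len(keywords & responses[i])
            for i, (snr, keywords) in enumerate(zip(SNRS, keyword_sets))}

# Placeholder example with only the first two of the seven sentences filled in
keyword_sets = [{"w1", "w2", "w3", "w4", "w5"}, {"w6", "w7", "w8", "w9", "w10"}] + [set()] * 5
responses = [{"w1", "w3", "w5"}, {"w6"}] + [set()] * 5
print(score_list(keyword_sets, responses))   # e.g. {8: 3, 5: 1, 2: 0, ...}
```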

Results & Discussion:

Descriptive statistics were computed to estimate the mean and standard deviation of all parameters for each group. Active participants had better scores on all tests than inactive participants. A Kruskal-Wallis test was performed to estimate the effect of different levels of physical activity on GDT, DDT, and speech perception in noise. The results showed that GDT [χ2(3) = 15.2, p < 0.01], DDT [χ2(3) = 12.07, p < 0.01], and speech perception in noise [χ2(3) = 18.19, p < 0.01] differed significantly across the groups. Mann-Whitney U tests were then done for pairwise comparisons. There was a significant difference between the active and inactive groups for all three tasks, but no significant difference between the A and MA groups or between the MI and I groups. Thus it can be inferred that physically active individuals had better temporal resolution and speech perception in noise than physically inactive individuals. Similar results have been reported in the literature, where better auditory abilities were found on electrophysiological measures in participants who performed various forms of exercise (Boutcher & Landers, 1988; Wiese et al., 1990). This could be attributed to the benefits of physical activity for brain function: physical activity benefits the cardiovascular system, which in turn benefits the CNS (Hawkins, Kramer, & Capaldi, 1992), and regular exercise increases vascularisation of activated brain areas (Isaacs et al., 1992).
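The omnibus-then-pairwise comparison just described (Kruskal-Wallis across the four activity groups followed by Mann-Whitney U tests) can be sketched as follows; the gap detection thresholds are simulated placeholders, not the study's data.

```python
from itertools import combinations
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)
groups = {                      # hypothetical gap detection thresholds (ms), 13 per group
    "A":  rng.normal(3.0, 0.6, 13),
    "MA": rng.normal(3.3, 0.6, 13),
    "MI": rng.normal(4.0, 0.7, 13),
    "I":  rng.normal(4.5, 0.7, 13),
}

h, p = stats.kruskal(*groups.values())
print(f"Kruskal-Wallis: H = {h:.2f}, p = {p:.3f}")

if p < 0.05:                    # pairwise follow-up only if the omnibus test is significant
    for (n1, g1), (n2, g2) in combinations(groups.items(), 2):
        u, p_pair = stats.mannwhitneyu(g1, g2, alternative="two-sided")
        print(f"{n1} vs {n2}: U = {u:.0f}, p = {p_pair:.3f}")
```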

Summary & Conclusion:

The effect of physical activity on temporal processing and speech perception in noise was assessed in middle-aged adults. Physically active individuals had better temporal resolution and speech perception in noise than physically inactive individuals. These findings indicate that physically active individuals have better auditory perceptual abilities, and that physical activity may reduce the effects of ageing on these abilities.


  Abstract – AP996: Relationship between Dichotic Listening and Working Memory across Age Groups


Kavya Hegde1, Rucha Palkar2 & Rucha Vivek3

1kavyahegde2502@gmail.com,2rhpalkar317@gmail.com, &3ruchaavivek@gmail.com

1School of Audiology and Speech language Pathology, Bharati Vidyapeeth (Deemed to be) University, Pune - 411030

Introduction:

The term 'working memory' refers to the temporary storage of information in connection with the performance of other cognitive tasks such as reading, problem solving or learning (Baddeley, 1983). Auditory working memory is the process of actively maintaining sounds in memory over short periods of time. It is based on the maintenance of sound-specific representations in the auditory cortex by projections from higher-order areas including the hippocampus and frontal cortex (Kumar et al., 2016). Auditory working memory uses the phonological loop, which stores and rehearses speech-based information and acoustic items (Baddeley & Hitch, 1974; Baddeley & Logie, 1999). Auditory working memory can be assessed using digit span or speech stimuli such as sequences of words, on both forward and backward recall.

Dichotic listening involves attending to different stimuli presented to each ear simultaneously. It is used to study a broad range of cognitive and emotional processes related to brain laterality and hemispheric asymmetry (Kimura, 1961; Bryden, 1988; Hugdahl, 1992a). It is also used to investigate conditioning and learning (Corteen & Wood, 1972; Dawson & Schell, 1982; Hugdahl & Brobeck, 1986) as well as memory (Christianson, Nilsson & Silfvenius, 1986; Hugdahl, Asbjornsen & Wester, 1993). The most commonly used stimuli in dichotic tests are consonant-vowel combinations, digits or words. The difficulty of a dichotic listening task depends on various factors such as ear advantage, stimulus complexity, linguistic load, and interaural delay. Dichotic listening can be used to assess temporal lobe function (Spreen & Strauss, 1991), attention, stimulus processing speed, and hemispheric language asymmetry (Hugdahl, 1992a). The dichotic digit test has been found to be sensitive to brainstem, cortical, and corpus callosum lesions (Musiek, 1983).

Need for Study:

Auditory processing skills, including dichotic listening, as well as working memory are known to deteriorate with age (Mukari, Umat & Othman, 2010). However, limited studies are available for the 40-60 years age group in comparison with younger adults, and the role of stimulus complexity in dichotic tests in association with working memory has not yet been established. It is essential to obtain comprehensive data on auditory working memory and its association with dichotic listening in normal-hearing older adults, to assess age-related decline as well as deficits in clinical populations.

Aim & Objectives:

The current study was undertaken to study the following:

  1. Effect of age on dichotic listening
  2. Effect of stimulus complexity in dichotic tasks across the two age groups
  3. Effect of age on auditory working memory
  4. Correlation between working memory and stimulus complexity in dichotic tests

Method:

Thirty-six individuals with normal hearing sensitivity participated in the study and were divided into two groups on the basis of age. The control group consisted of young normal-hearing (YNH) participants (age range 18-30 years, mean age = 22.6 years), while the experimental group consisted of older normal-hearing (ONH) individuals (age range 40-60 years, mean age = 46.2 years). All participants had hearing thresholds within 15 dB HL at octave frequencies between 250 Hz and 8000 Hz and good speech identification scores in quiet. All participants were screened using SCAP-A (Yathiraj & Vaidyanath, 2014) to rule out auditory processing deficits and had no otological or neurological complaints.

Auditory working memory was assessed using forward span on auditory memory and sequencing test in Marathi (Sone & Vanaja, 2019). Memory scores, sequencing scores, memory span and sequencing span were calculated for comparison.

Three Dichotic Tests were administered using a laptop with calibrated stereo headphones.

  1. The dichotic consonant-vowel test (Yathiraj, Vanaja & Muthuselvi, 2012) consisted of 30 pairs of syllables; participants were instructed to circle the two syllables perceived out of a set of six syllables (/ba/, /da/, /ga/, /ka/, /ta/, /pa/) in a closed-set task.
  2. The dichotic digit test in Marathi (Vanaja, 2009) consisted of 30 presentations of monosyllabic digits in sets of four. Participants were asked to write the four digits perceived (open-set task).
  3. The dichotic word test in Marathi (Kulkarni & Kaushik, 2016) consisted of 20 pairs of words; participants were instructed to write both words perceived (open-set task).


The presentation level for all the dichotic tests was 40 dB SL with respect to the speech recognition threshold. A free recall paradigm was used for scoring. Scores were calculated as right ear single correct (RSC) scores, left ear single correct (LSC) scores, and double correct (DC) scores, the last awarded when both stimuli of a pair were repeated correctly. Scores obtained on each dichotic test were correlated with the working memory test results in each group, and mean scores on each task were compared across the two groups to investigate the effect of age.
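The free-recall scoring scheme described above can be illustrated with a short sketch; the trials shown are invented, and the function simply tallies right-ear single correct, left-ear single correct, and double correct responses.

```python
def score_dichotic(trials):
    """trials: list of dicts with the stimuli presented to each ear and the
    listener's free-recall responses. Returns RSC, LSC and DC counts."""
    rsc = lsc = dc = 0
    for t in trials:
        right_ok = t["right"] in t["responses"]
        left_ok = t["left"] in t["responses"]
        rsc += right_ok
        lsc += left_ok
        dc += right_ok and left_ok        # both members of the pair recalled
    return {"RSC": rsc, "LSC": lsc, "DC": dc}

# Hypothetical word-test trials (free recall, open set)
trials = [
    {"right": "ghar", "left": "phool", "responses": {"ghar", "phool"}},
    {"right": "paani", "left": "daar", "responses": {"paani"}},
]
print(score_dichotic(trials))   # {'RSC': 2, 'LSC': 1, 'DC': 1}
```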

Results & Discussion:

Shapiro-Wilk tests of normality revealed that the data followed a normal distribution; hence further correlations and comparisons were done using parametric tests. Independent samples t-tests revealed no significant difference between the age groups on any score except the dichotic consonant-vowel left-ear score, the dichotic digit right-ear score, and the memory score. Scores of the YNH group were significantly better on the auditory memory measures (p = 0.025), suggesting that an age-related decline in auditory working memory exists in the studied age range.

Pearson's correlation coefficients showed a significant relationship between the dichotic word test DC score and working memory [rho = 0.692 for the memory score; rho = 0.775 for the sequencing score; rho = 0.635 for the memory and sequencing span; p < 0.01], suggesting a good positive correlation between dichotic word test DC scores and working memory capacity.

Dichotic listening tasks generally tap short-term and working memory capacity (Mukari, Umat & Othman, 2010). Words carry the greatest linguistic load in the stimulus hierarchy, followed by digits and consonant-vowels. Hence, a significant correlation was seen between the dichotic word test and working memory, which suggests that as stimulus complexity increases, the load on working memory becomes significant. The dichotic digit and CV tests have fewer units (four and two, respectively) and high predictability, and hence may not strain working memory capacity in either group. Since a large number of our participants were below the age of 50, with good educational and vocational backgrounds, a higher working memory capacity may be expected. Further studies in age groups above 50 years may reveal a greater statistical effect.

Summary & Conclusion:

In the present study, the comparison between the two age groups revealed no statistically significant difference in the scores obtained on the dichotic listening tasks and the auditory working memory test. However, the dichotic word test showed a significant correlation with auditory working memory, suggesting that stimulus complexity and redundancy in dichotic tests tap into working memory capacity. No effect of age was seen on dichotic listening and auditory working memory, as there is no rapid decline in these scores between 40 and 60 years of age. To corroborate the correlations obtained in this study, a larger population with narrower age ranges needs to be studied.


  Abstract – AP997: Assessment of Auditory Working Memory in Children with Abacus Training


Mansi Roy1, Keerti Swarna2 & Prashanth Prabhu P3

1mansi.s.r1997@gmail.com,2keertiswarna@gmail.com, &3prashanth.audio@gmail.com

1All India Institute of Speech and Hearing, Mysuru - 570006

Introduction:

Abacus training is an arithmetic learning programme in which individuals learn to carry out arithmetical operations quickly. It is a Chinese method which focuses on whole-brain development and attempts to improve numerical skills. It also teaches children to calculate on dictation, through visualization, and to practise speed writing. Learners initially manipulate the abacus instrument with their fingers to do the calculations and, at a later stage, use a visual image of the abacus to carry out mathematical operations (Yathiraj & Vijayalakshmi, 2009). Abacus training involves listening carefully to the numbers to be calculated, which improves overall attention including auditory attention. Because the training involves remembering multiple numbers and calculations, auditory working memory is also trained. Hence, auditory working memory could be enhanced in children who practise abacus.

Need for Study:

In abacus training, children listen to dictated problems and carry out mathematical operations. If the auditory stimuli are processed faster, the results would be quick and accurate. This requires enhanced working memory which gets better with constant training. Musiek (2004) has reported that intense training of the auditory system can lead to increased plasticity and improve auditory performance. Yathiraj and Vijayalakshmi (2009) reported that dichotic performance was better in children with abacus training compared to the control group. They suggested that this could be due to central reorganization that happens due to training which results in enhanced auditory performance.

Auditory working memory refers to the ability of the person to process the auditory information, analyze it and store the information which has to be recalled later (Baddeley, 1974). There are limited studies which have attempted to analyze auditory working memory abilities in children with abacus training (Bhavya, Kuriakose, Priyadarshini & Suresh, 2016).

Bhavya et al. (2016) reported that children with abacus training performed significantly better on the auditory 2-back test than a control group and attributed this to enhanced plasticity resulting from training. However, there is a dearth of literature on the effect of abacus training on other auditory working memory tasks such as ascending, descending, forward, and backward digit span tests. Also, there has been no attempt to determine differences in reaction time on auditory working memory tests between individuals with and without abacus training.

Aim & Objectives:

The study aimed to evaluate if there are any differences in auditory working memory between children with and without abacus training.

Objectives of the study

  1. To compare the score and reaction time for digit forward test in individuals with and without abacus training
  2. To compare the score and reaction time for digit backward test in individuals with and without abacus training
  3. To compare the score and reaction time for ascending digit span test in individuals with and without abacus training
  4. To compare the score and reaction time for descending digit span test in individuals with and without abacus training
  5. To correlate the level of abacus training with the scores and reaction time for all the tests in individuals with abacus training.


Method:

A total of 60 children (30 with abacus training and 30 without) were included in the study. Thirty children (15 males and 15 females) with abacus training and 30 children (15 males and 15 females) without abacus training, aged 9-13 years, participated. The children in the abacus training group had been undergoing intensive abacus training for 2-4 years and were at different abacus levels, ranging from level four to level 10. None of the participants had any otologic history, metabolic or systemic disease causing hearing loss, or history of ototoxic drug use. All had average academic performance as reported by their school teachers. All the participants had pure-tone thresholds better than 15 dB HL and A-type tympanograms, indicating normal middle ear functioning, with acoustic reflexes (ipsilateral and contralateral) present at 500 Hz, 1000 Hz, 2000 Hz, and 4000 Hz. Auditory working memory was assessed using the Smrithi-Shravan software (Kumar & Sandeep, 2013).

The evaluation of auditory working memory was performed using an auditory cognitive module which included digit forward, backward, ascending, and descending span tasks (Kumar & Sandeep, 2013). The stimuli consisted of digits in English ranging from one to nine. The test was administered through a calibrated audiometer at 40 dB SL (ref: speech recognition threshold) using the staircase method. The mid-point, maximum, and average reaction time were recorded for all the tests for every participant.

Results & Discussion:

Descriptive statistical analysis was done on the collected data, and the mean and standard deviation of the mid-point, maximum, and reaction time were determined. The results showed that the scores were higher, and the reaction time faster, in children with abacus training.

A Shapiro-Wilk test of normality showed that the data were not normally distributed (p < 0.05). Hence, non-parametric Mann-Whitney U tests were done to determine whether there was any significant difference in mid-point, maximum, and reaction time between the two groups. The results showed that the scores were significantly higher (p < 0.05) and the reaction times significantly shorter (p < 0.01) for individuals with abacus training compared to the control group on all the tests. The effect size was calculated using the formula Z/√N and ranged between 0.73 and 0.91, suggesting a strong difference between the groups. Pearson's correlation coefficients between abacus level and the scores and reaction times on the auditory working memory tests showed moderate to strong correlations (r = 0.53 to 0.84).
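The effect size Z/√N mentioned above can be obtained from a Mann-Whitney comparison as in the sketch below, where Z comes from the normal approximation to the U statistic; the digit-span values are placeholders, not the study's data.

```python
import numpy as np
from scipy import stats

def mann_whitney_effect_size(x, y):
    """r = |Z| / sqrt(N), with Z from the normal approximation to U."""
    u, _ = stats.mannwhitneyu(x, y, alternative="two-sided")
    n1, n2 = len(x), len(y)
    mu_u = n1 * n2 / 2
    sigma_u = np.sqrt(n1 * n2 * (n1 + n2 + 1) / 12)   # ignores the tie correction
    z = (u - mu_u) / sigma_u
    return abs(z) / np.sqrt(n1 + n2)

rng = np.random.default_rng(2)
abacus = rng.normal(7.5, 1.0, 30)      # hypothetical forward digit spans
controls = rng.normal(5.5, 1.0, 30)
print(f"effect size r = {mann_whitney_effect_size(abacus, controls):.2f}")
```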

The results of the study showed enhanced auditory working memory in individuals with abacus training. This agrees with previous studies reporting superior auditory performance and enhanced auditory working memory in children with abacus training (Yathiraj & Vijayalakshmi, 2009; Bhavya et al., 2016). Yathiraj and Vijayalakshmi (2009) reported that abacus training significantly improves performance on dichotic tests and suggested that enhanced auditory attention and central re-organization could be responsible. Bhavya et al. (2016) also reported enhanced auditory working memory in children with abacus training and attributed it to training-induced plasticity. In the present study, the level of abacus training correlated with the scores and reaction times on the tests. It is also well documented through magnetic resonance imaging studies that constant training causes neuro-plastic changes in the cortex (Elbert, Pantev, Wienbruch, Rockstroh, & Taub, 1995). Thus, enhanced auditory plasticity due to training could explain the improved scores and faster reaction times obtained in the present study.

Summary & Conclusion:

The results of the study showed that the auditory working memory was superior for individuals with abacus training compared to the control group. The results are in accordance with previous studies which also suggest a superior auditory performance in individuals with abacus training due to enhanced neural plasticity. Further studies are essential in this area for better generalization of the findings.


  Abstract – AP1002: Effect of Quiet and Noise on P300 Response in Individuals with Auditory Neuropathy Spectrum Disorder


Kumari Apeksha1 & Ajith Kumar2

1apeksha_audio@yahoo.co.in &2ajithkumar18@gmail.com

1JSS Institute of Speech and Hearing, Mysuru - 570004

Introduction:

Auditory Neuropathy Spectrum Disorder (ANSD) is a clinical condition in which individuals have normal cochlear responses and abnormal neural responses (Starr, Sininger, & Pratt, 2000). Audiological findings show normal hearing to severe hearing loss on pure-tone audiometry, presence of otoacoustic emissions, and abnormal auditory brainstem responses and middle ear muscle reflexes (Berlin et al., 2010, 2005; Starr et al., 2000). The abnormality at the level of the cochlea, its synapse with the neurons, and the neural pathway primarily affects the perception of auditory temporal information (Picton, 2013; Starr, Picton, & Kim, 2001). One of the most commonly encountered problems of individuals with ANSD is poor speech perception in the presence of noise (Apeksha & Kumar, 2017b; Berlin et al., 2010; Narne et al., 2015).

Need for Study:

Few researchers have investigated the speech processing ability in individuals with ANSD using different test measures (P1-N1-P2, MMN, and P300). In these studies, the auditory evoked responses were recorded from limited electrode sites (Apeksha & Kumar, 2017a; Gabr, 2011; Kumar & Jayaram, 2005; Michalewski, Starr, Zeng, & Dimitrijevic, 2009; Narne, Prabhu, Chandan, & Deepthi, 2014; Narne & Vanaja, 2008).

To our knowledge, only two studies in the literature report multichannel recordings in individuals with ANSD (Apeksha & Kumar, 2018, 2019). Apeksha and Kumar (2018) recorded the P300 response in individuals with ANSD for the speech contrast /ba/-/da/, whereas Apeksha and Kumar (2019) recorded the P300 response for three different speech contrasts, /ba/-/da/, /ba/-/ma/ and /ba/-/pa/. In both studies, the P300 response was recorded only in the quiet listening condition. There is a lack of information regarding speech processing in individuals with ANSD, especially when the signal is presented in noise.

Aim & Objectives:

This study aimed to investigate speech processing ability in quiet and in the presence of noise, in individuals with ANSD and to compare the findings with individuals with normal hearing sensitivity.

Method:

Thirty individuals with normal hearing sensitivity and 30 individuals with acquired ANSD, in the age range of 15 to 55 years (mean age 27.86 years), participated. The individuals were diagnosed as having ANSD by certified audiologists following the recommendations of Starr, Sininger, and Pratt (2000), and by neurologists based on detailed clinical neurological examination, including CT/MRI. All the individuals with ANSD reported speech understanding difficulty, which was greater in the presence of noise. The P300 response was recorded from both groups using the syllable pair /ba/-/da/ in an oddball paradigm and the syllable /da/ in a repetitive paradigm, in quiet and at +10 dB signal-to-noise ratio (SNR). This stimulus pair was selected as it differs in the phonetic feature of place of articulation, which is reported to be more susceptible to noise (Boothroyd, 1984; Hornsby, Trine, & Ohde, 2005). The SNR of +10 dB was selected based on a pilot study which showed that the behavioural performance of individuals with ANSD on the discrimination task dropped below chance level at SNRs poorer than +10 dB. For the noise condition, the syllable was mixed with speech noise such that the onset of the syllable was 1000 ms after the onset of the noise and the offset of the syllable was 1000 ms before the offset of the noise. Continuous background noise was not used as it might have caused neural adaptation in individuals with ANSD (Wynne et al., 2013). The 75 dB SPL signal was presented through a loudspeaker kept at a 1-metre distance at 0° azimuth. Neural responses were recorded using a Neuroscan Scan 4.5 system (Compumedics, Charlotte, NC, USA) and a QuickCap with 64 sintered electrodes fitted with quick cells. Informed consent was taken from all the participants following the Ethical Guidelines for Biobehavioral Research Involving Human Subjects (Venkatesan, 2009).
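The stimulus construction for the noise condition can be sketched as below: the speech noise is scaled so that the syllable-to-noise ratio is +10 dB (RMS based), with 1000 ms of noise before the syllable onset and 1000 ms after its offset. The sampling rate and the signals themselves are placeholders, not the study's stimuli.

```python
import numpy as np

FS = 44100  # assumed sampling rate (Hz)

def rms(x):
    return np.sqrt(np.mean(x ** 2))

def embed_in_noise(syllable, noise, snr_db=10.0, lead_ms=1000, lag_ms=1000, fs=FS):
    """Mix a syllable into noise at the requested SNR, with the syllable onset
    `lead_ms` after noise onset and offset `lag_ms` before noise offset."""
    lead, lag = int(fs * lead_ms / 1000), int(fs * lag_ms / 1000)
    total = lead + len(syllable) + lag
    noise = noise[:total]
    noise = noise * (rms(syllable) / rms(noise)) / (10 ** (snr_db / 20))  # set the SNR
    mix = noise.copy()
    mix[lead:lead + len(syllable)] += syllable
    return mix

rng = np.random.default_rng(3)
syllable = rng.normal(0, 0.1, int(0.25 * FS))        # placeholder 250 ms token
speech_noise = rng.normal(0, 0.1, int(3.0 * FS))     # placeholder noise
stimulus = embed_in_noise(syllable, speech_noise)
print(len(stimulus) / FS, "s of stimulus")           # 1.0 + 0.25 + 1.0 = 2.25 s
```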

Institutional ethical committee approval was obtained prior to the study. Non-parametric statistics were used to analyse the data, as the data were not normally distributed (p < 0.05). The paired randomization method and topographic pattern analysis, available in Cartool, were used to analyse the neural responses.

Results & Discussion:

Within-group comparisons across conditions using the Wilcoxon signed-rank test showed significantly shorter reaction times (RT) (z = 3.65, p < 0.05, r = 0.66) and greater sensitivity (z = 3.06, p < 0.01, r = 0.55), with large effect sizes, in the quiet condition compared to the noise condition for normal-hearing individuals. Similarly, in individuals with ANSD, RT was significantly shorter (z = 4.40, p < 0.05, r = 0.86) and sensitivity significantly greater (z = 3.64, p < 0.05, r = 0.68) in quiet than in noise, again with large effect sizes. Thus, sensitivity and RT were better in quiet than in noise in both groups. The grand average waveforms showed a prominent P300 response with clear morphology in both quiet and noise conditions for both groups. Compared across conditions, P300 amplitude was reduced in noise in both groups, with a greater reduction in individuals with ANSD. P300 latency was prolonged in individuals with ANSD compared to individuals with normal hearing in both conditions, and the prolongation was greater in the presence of noise. The longer reaction times and prolonged P300 latencies suggest that individuals with ANSD have slower processing and thus require more time to detect and respond to the target stimuli, even more so in the presence of noise. The reduction in P300 amplitude in noise could be due to the increased memory load and a deficit in attention allocation to the task in the presence of noise (Donchin & Coles, 1988). Topographic pattern analyses showed more activation in the central-parietal-occipital region of the brain in individuals with ANSD, whereas individuals with normal hearing showed activation of the central-parietal region; individuals with ANSD showed additional activation towards the occipital lobe with a more diffuse activation pattern. The difference in activation pattern across groups suggests a differential distribution of the electric field across the scalp, which might be caused by differences in the configuration of the underlying brain sources generating these potentials and differential activation of brain networks (Song et al., 2015).

Summary & Conclusion:

The individuals with ANSD required more time to discriminate the stimuli and were less accurate in identifying the target stimuli than individuals with normal hearing sensitivity, as evident from the reaction time and sensitivity measures. The P300 response showed prolonged latency and reduced amplitude in individuals with ANSD compared to normal-hearing individuals. Based on RT and sensitivity (behavioural measures), and on the latency, amplitude, and scalp topography of the P300 response (neural measures), it is evident that individuals with ANSD deviate from individuals with normal hearing on both behavioural and neural measures, which could be the result of differences in the underlying generating sources of the responses.


  Abstract – AP1004: Hidden Balance Problem in Individuals across Age Groups


Kumari Apeksha1, Priyanka A2 & Niha Fatima3

1apeksha_audio@yahoo.co.in,2Raghavgowda1999@gmail.com, &3naushabanur@hotmail.com

1JSS Institute of Speech and Hearing, Mysuru - 570004

Introduction:

Dizziness is one of the most frequent complaints, with an estimated 20-30% of the population having experienced it at least once in their lifetime (Neuhauser & Lempert, 2009). The prevalence of dizziness is reported to be even higher (30 to 40%) in older adults (Derebery, 1999). An epidemiological study based on the National Health and Nutrition Examination Survey of 2001-2004 suggested that 35% of adults aged 40 years or older have some form of vestibular dysfunction. The high prevalence of dizziness reported in different studies gives rise to the need for balance assessment in individuals of all age groups. Maintenance of balance requires coordinated action involving sensory information from the vestibular, proprioceptive, and visual systems, with the help of the vestibulo-ocular, vestibulo-collic, and vestibulo-spinal reflexes (Herdman, 2007).

Need for Study:

The symptoms of dizziness or vertigo may have a significant impact on the quality of life and can impair activities of daily living. Many individuals with dizziness limit their daily activities and restrict their participation in the community in order to avoid an unexpected episode of dizziness. For the Indian population, the Dizziness Index of Impairment in Activities of Daily Living Scale (DII-ADL; Singh, Kumar, Apeksha, & Barman, 2015) was developed to assess the limitations faced by individuals with different vestibular disorders. Since the prevalence of dizziness in most studies is based on patients reporting to hospitals, we lack information about how many people actually have dizziness but do not seek help from healthcare professionals.

Aim & Objectives:

This study was conducted to examine the relationship between perceptual ratings of balance function, obtained using the Dizziness Index of Impairment in Activities of Daily Living Scale (DII-ADL) questionnaire, and clinical vestibulospinal and cerebellar function tests. The objective was to determine the relationship between the perceptual rating and (i) the Sharpened Romberg test, (ii) the Fukuda Stepping test, (iii) the Tandem Gait test and (iv) the Finger-to-nose test.

Method:

A total of 60 participants, subdivided into three age groups, young adults (20-40 years), middle-aged adults (40-60 years) and older adults (> 60 years), were considered for this study. A detailed case history covering demographic details, medical history, family history, and hearing and balance issues was collected from all participants. None of the participants reported any balance issues or had ever consulted a medical professional regarding balance. Following the case history, all participants underwent hearing screening and had hearing thresholds within the normal range (< 15 dB HL) in both ears. All participants were active members of society and were independent in carrying out their daily routine. The Dizziness Index of Impairment in Activities of Daily Living Scale (DII-ADL; Singh et al., 2015) questionnaire was administered to all participants. The printed questionnaire was given to each participant, who was asked to rate all 23 items on a 7-point rating scale. The questionnaire has 6 items in the Functional section, 9 items in the Ambulatory section and 8 items in the Instrumental section. On the 7-point scale, 1 indicates 'the problem has not changed my performance in any way' and 7 indicates 'I no longer perform the activity'. Balance was assessed in all participants using tests of the vestibulospinal pathways, namely the Sharpened Romberg test, Fukuda Stepping test and Tandem Gait test, and the Finger-to-nose test to assess cerebellar function. Perceptual feedback regarding balance issues was also obtained from all participants. The data obtained from all participants were tabulated and analyzed using SPSS (version 17).

Results & Discussion:

The data obtained from the three groups were checked for normality using the Shapiro-Wilk test. The data were not normally distributed (p < 0.05), so non-parametric statistics were used. Descriptive statistics showed an increase in balance problems with increasing age on all sections of the DII-ADL questionnaire as well as on the Sharpened Romberg and Fukuda stepping tests. Qualitative analysis of the tandem gait and finger-to-nose tests showed increasing abnormality with age. The Kruskal-Wallis test showed a significant difference between the groups for the Functional [χ2(2) = 46.41, p < 0.001], Ambulatory [χ2(2) = 43.34, p < 0.001] and Instrumental [χ2(2) = 45.29, p < 0.001] sections, the Sharpened Romberg test [χ2(2) = 25.40, p < 0.001] and the Fukuda stepping test [χ2(2) = 29.96, p < 0.001] separately, as well as for the overall questionnaire rating across the three subsections [χ2(2) = 46.65, p < 0.001]. Pair-wise comparison with the Mann-Whitney U test showed all groups to be significantly different from each other (p < 0.05).
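As an illustration of the omnibus and pair-wise comparisons described above, the following Python sketch runs a Kruskal-Wallis test across three groups followed by pair-wise Mann-Whitney U tests; the group data are hypothetical placeholders, and no multiplicity correction is applied (the abstract does not state whether one was used).

```python
import numpy as np
from itertools import combinations
from scipy import stats

# Hypothetical DII-ADL section ratings for the three age groups (placeholder values).
groups = {
    "young":  np.array([1, 1, 2, 1, 2, 1, 1, 2]),
    "middle": np.array([2, 3, 2, 3, 2, 2, 3, 2]),
    "older":  np.array([4, 5, 4, 5, 5, 4, 6, 5]),
}

# Omnibus comparison across the three groups.
h, p = stats.kruskal(*groups.values())
print(f"Kruskal-Wallis: H(2) = {h:.2f}, p = {p:.4f}")

# Pair-wise follow-up with Mann-Whitney U tests (uncorrected p-values).
for (name_a, a), (name_b, b) in combinations(groups.items(), 2):
    u, p_pair = stats.mannwhitneyu(a, b, alternative="two-sided")
    print(f"{name_a} vs {name_b}: U = {u:.1f}, p = {p_pair:.4f}")
```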

Spearman's correlation analysis was carried out between the sections of the DII-ADL questionnaire and the Sharpened Romberg and Fukuda stepping tests. There was a significant moderate negative correlation between the Sharpened Romberg test and the Functional section [ρ = -0.57, p < 0.001], Ambulatory section [ρ = -0.57, p < 0.001], Instrumental section [ρ = -0.58, p < 0.001] and overall score [ρ = -0.58, p < 0.001]. The Fukuda stepping test showed a strong positive correlation with the Functional section [ρ = 0.64, p < 0.001], Ambulatory section [ρ = 0.66, p < 0.001], Instrumental section [ρ = 0.67, p < 0.001] and overall score [ρ = 0.67, p < 0.001]. The results show that balance issues, as revealed by the questionnaire and the clinical tests, are present even in younger individuals who have never reported a balance problem or consulted a medical professional for one. The severity of these hidden balance issues increased with age: the majority of individuals above 60 years showed abnormality on almost all tests. Participants showed greater difficulty with ambulatory skills than with functional and instrumental skills. Ambulatory skills that participants found difficult to manage included climbing up or down stairs or using an escalator, walking on an irregular surface, walking on a slippery, soggy or soft surface, walking alone in the dark, and moving around quickly and freely. These results raise a major concern about hidden abnormalities of balance function across all age groups and underline the need for balance assessment in individuals of all ages. As balance is coordinated by three sub-systems, namely the visual, vestibular and proprioceptive systems, all three sub-systems should be screened in individuals of all age groups.
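A minimal sketch of the correlation analysis is given below, with hypothetical paired values standing in for Sharpened Romberg performance and a DII-ADL section score; it is illustrative only, not the study's SPSS output.

```python
import numpy as np
from scipy import stats

# Hypothetical paired observations (placeholder values):
# Sharpened Romberg stance time (s) and the DII-ADL Functional-section rating.
romberg_time = np.array([60, 55, 48, 40, 35, 30, 25, 20, 18, 15])
functional   = np.array([ 1,  1,  2,  2,  3,  3,  4,  5,  5,  6])

rho, p = stats.spearmanr(romberg_time, functional)
# A negative rho indicates that shorter stance times go with higher reported difficulty.
print(f"Spearman rho = {rho:.2f}, p = {p:.4f}")
```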

Summary & Conclusion:

The results of this study highlight the need to screen individuals across age groups for balance problems that may go unnoticed by the individuals themselves but nevertheless affect their activities of daily living. Since older adults showed more difficulty in performing some tasks, it is important that professionals address their balance issues, which will in turn improve their activities of daily living. There is also a need to spread awareness among the general population regarding the hidden balance issues prevalent across all age groups, especially older adults, and to motivate them to seek help from healthcare professionals as early as possible.


  Abstract – AP1005: Cortical Auditory Evoked Potentials in Assessing Benefit from Cochlear Implants, Hearing Aids and Bimodal Stimulation Top


Snehal Purkar1, Aditi Kasliwal2, Rudravi Jain3 & Vanaja C.S.4

1snehalpurkar23@gmail.com,2aditikasliwal1999@gmail.com,3rudravijain6605@gmail.com, &4csvanaja@gmail.com

1School of Audiology and Speech language Pathology, Bharati Vidyapeeth (Deemed to be) University, Pune - 411030

Introduction:

A number of children with hearing impairment are habilitated with bimodal stimulation, wherein the child uses a cochlear implant in one ear and a hearing aid in the other. Speech and language development has been reported to be better in children receiving bimodal stimulation than in those with unilateral cochlear implants (Huang, Sheng, Ren, Li, Huang and Wu, 2018). A majority of studies investigating the benefit of bimodal hearing have used behavioural measures. For example, studies have shown that bimodal stimulation improves speech perception in noise (Yang and Zeng, 2017), provides greater benefit with wireless technology (Wolfe, Duke, Schafer, Jones, Mülder, John and Hudson, 2016), improves localisation ability (Ching, Incerti, Hill and van Wanrooy, 2006) and improves quality of life (Potts, Skinner, Litovsky, Strube and Kuk, 2009). It has been construed from such studies that bimodal stimulation should be recommended to those who are not candidates for, or cannot afford, bilateral cochlear implantation. However, the benefit from bimodal stimulation should be assessed before recommending the use of a hearing aid in the contralateral ear. In young children it may not always be possible to obtain voluntary responses, making it difficult to judge the benefit of bimodal stimulation solely from behavioural tests. Thus, there is a need for an objective test whose results can be correlated with the information obtained on subjective measures.

Recording aided Cortical Auditory Evoked Potentials (CAEPs) is one objective method to assess benefit from hearing devices in the difficult-to-test population. Aided CAEPs can also throw light on central auditory processing and the neural encoding of speech sounds in children using cochlear implants or hearing aids. The latencies and amplitudes of CAEPs may provide an objective indication of how the spectral characteristics of speech stimuli are encoded at different cortical levels.

Need for Study:

A number of studies have shown the usefulness of CAEPs in the verification of hearing aids (Ching, Dillon, Carter, Van Dun, & Young, 2012; Golding et al., 2007; Koul & Vanaja, 2007; Munro, Purdy, Ahmed, Begum, & Dillon, 2011; Vanaja & Khandelwal, 2016). A review of the literature also indicates that reliable CAEPs can be recorded from children using cochlear implants, and that the latencies of P1 and N1 can serve as biomarkers of the development of the central auditory pathway in these children (Sharma, Dorman, & Spahr, 2002; Sharma, Martin, Roland, Bauer, Sweeney, Gilley & Dorman, 2005; Sharma, Glick, Deeves & Duncan, 2015; Sharma, Campbell, & Cardon, 2015). However, there is a dearth of studies using CAEPs to evaluate the benefit obtained with bimodal stimulation, and there is a need to evaluate whether CAEPs can assess the benefit of bimodal stimulation in individual cases. Documenting such benefit in individual cases will provide clinical evidence to translate research on CAEPs into clinical practice.

Aim & Objectives:

The aim of the study was to investigate if CAEPs can be used as an objective measure to assess benefit from bimodal stimulation. The specific objectives were to investigate the following in children receiving bimodal stimulation:

  1. To compare CAEPs obtained with hearing aids and CAEPs obtained with cochlear implants.
  2. To investigate whether there is any association between scores obtained on the Early Speech Perception test and CAEPs.


Method:

Six children in the age range five to eleven years using bimodal amplification participated in this study. The implant age ranged from 7 months to six years. All the children had pre-lingual deafness, with no associated intellectual or neurological problems. Informed written consent was taken from the parents of all participants prior to testing.

All procedures were carried out in three conditions: with only the hearing aid, with only the cochlear implant, and in the bimodal condition (hearing aid in one ear and cochlear implant in the other ear). Initially, sound-field thresholds were obtained to assess the benefit from the hearing device(s). To assess speech perception behaviourally, the Early Speech Perception Test (ESP) in Marathi (Mathew & Sarda, 2011) was administered in the auditory mode.

CAEPs were recorded using the Biologic Navigator Pro auditory evoked potential system in an acoustically treated room. Participants were seated comfortably and watched a video to reduce ocular and movement artifacts. The custom-recorded speech stimulus /t/ was presented through calibrated loudspeakers placed at 0° azimuth, at a distance of 1 foot from the amplification device. The stimulus was presented at 70 dB nHL with a repetition rate of 1.1/sec. Single-channel recording was carried out. To avoid electrical interference from the hearing devices, the contralateral mastoid was used for the inverting electrode, the vertex (Cz) for the non-inverting electrode, and Fpz for the common electrode. A time window of 533 ms with a 50 ms pre-stimulus period and an online filter of 1-100 Hz was used for acquisition. Multiple averages in blocks of 50 sweeps were obtained, and offline analysis was done wherein three waveforms of 50 sweeps each were added to obtain a single waveform of 150 sweeps. The averaged waveforms obtained after weighted addition were analyzed for latency and amplitude values.
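The weighted addition of block averages described above is performed by the evoked-potential system itself; purely for illustration, the following Python sketch shows how three 50-sweep block averages could be combined into a single 150-sweep grand average, with the waveform length and sampling rate being assumptions noted in the comments.

```python
import numpy as np

def combine_blocks(block_averages, sweeps_per_block):
    """Weighted addition of separately averaged CAEP blocks.

    block_averages   : list of 1-D arrays, each an averaged waveform
                       (e.g. three averages of 50 sweeps each).
    sweeps_per_block : accepted-sweep count per block, used as weights.
    Returns a grand average equivalent to pooling all sweeps.
    """
    blocks = np.asarray(block_averages, float)
    w = np.asarray(sweeps_per_block, float)
    return (blocks * w[:, None]).sum(axis=0) / w.sum()

# Hypothetical example: 3 blocks of 50 sweeps, 533 samples each
# (a 533 ms window at a 1 kHz effective sampling rate is an assumption).
rng = np.random.default_rng(0)
block_averages = [rng.normal(0.0, 1.0, 533) for _ in range(3)]
grand_avg = combine_blocks(block_averages, [50, 50, 50])
print(grand_avg.shape)  # (533,)
```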

Results & Discussion:

CI-aided thresholds for all children were within the speech spectrum, whereas hearing-aid-aided thresholds were within the speech spectrum for three children and outside it for the other three. The sample size was too small for inferential statistical analysis. However, the data revealed an association between the behavioural and electrophysiological measures. The children who benefited from hearing aids showed an improvement in speech perception scores and an increase in the amplitude of the P1-N1 and N1-P2 peaks with bimodal amplification. In contrast, children who did not benefit from hearing aids did not show an improvement in CAEP amplitude. Interestingly, one child obtained greater benefit from the hearing aid than from the cochlear implant, and that child's CAEPs showed greater amplitude with the hearing aid than with the cochlear implant. These observations were also supported by reports from parents and therapists.

Earlier studies have reported the usefulness of CAEPs in measuring hearing aid benefit (Ching, Dillon, Carter, Van Dun, & Young, 2012; Golding et al., 2007; Koul and Vanaja, 2007; Munro, Purdy, Ahmed, Begum, & Dillon, 2011; Vanaja and Khandelwal, 2016). Obuchi, Harashima and Shiroma (2012) reported a good correlation between CAEPs and speech perception abilities in children using cochlear implants. The results of the present study support the usefulness of CAEPs in assessing benefit from hearing devices, including benefit from bimodal amplification.

Summary & Conclusion:

The results of the present study indicate that CAEPs can be used to evaluate the benefit of bimodal stimulation in the difficult-to-test population. This case series provides empirical evidence (though low-level evidence) for the use of CAEPs in clinics. More studies in this direction are needed to support the present results.


  Abstract – AP1007: Effect of Different Blood Groups on Tympanometric Findings and Acoustic Reflex Thresholds: A Preliminary Study Top


Sneha Shaji1, Krishnapriya M V2 & Basaiahgari Nagaraju3

1sneharshaji@gmail.com,2krishnapriyamv98@gmail.com, &3nagarajuaslp01@gmail.com

1All India Institute of Speech and Hearing, Mysuru - 570006

Introduction:

Noise susceptibility differs between individuals depending upon several factors (Plontke & Zenner, 2004), including differences in melanin content, blood pressure, cholesterol level, race, etc. (Henderson et al., 1993). Chow, McPherson and Fuente (2016) and Chen, Chow and McPherson (2018) reported that blood group can have an effect on otoacoustic emissions and suggested that it could be an indicator of susceptibility to noise-induced hearing loss. Prabhu, Chandrashekhar, Cariappa & Ghosh (2017) reported that ultra-high-frequency auditory sensitivity was poorer in those with blood group O than in others. There could likewise be differences in immittance findings among adults with different blood groups.

Need for Study:

It has been reported that genetic differences between the blood groups could be a predisposing factor in the susceptibility to a few disorders (Sircar, 2008). It is also well reported that persons with blood group O are relatively more susceptible to hearing loss due to noise exposure. Chow, McPherson and Fuente (2016) studied differences in several types of otoacoustic emissions across blood groups in 60 individuals with normal hearing. They found that the amplitudes of otoacoustic emissions were reduced in persons with blood group O compared to others and hypothesized that this could reflect relatively fewer functioning outer hair cells, which could make them more susceptible to noise-induced hearing loss. A similar hypothesis was proposed by Prabhu et al. (2017), who also reported poorer ultra-high-frequency sensitivity in persons with blood group O. Chen, Chow and McPherson (2018) reported similar findings and suggested that persons with blood group O may be more susceptible to cochlear damage; they recommended carrying out other audiological tests in persons with different blood groups. Immittance evaluation includes tympanometry and determination of acoustic reflex thresholds (ipsilateral and contralateral). In light of the previous studies, immittance results could differ among adults with different blood groups; however, there is a dearth of literature attempting to determine changes in immittance findings across blood groups.

Aim & Objectives:

The aim of the study was to determine if there are any differences in tympanometric findings and acoustic reflex thresholds (ART) between individuals with different blood groups (A positive, B positive, O positive and AB positive).

Objectives of the study

  1. To compare admittance, peak pressure, gradient, resonant frequency and ear canal volume across adults with different blood groups
  2. To compare ipsilateral and contralateral ART at 500 Hz, 1000 Hz, 2000 Hz and 4000 Hz across adults with different blood groups.


Method:

Eighty normal-hearing adults between 18 and 27 years of age (mean age: 21.2 years; SD = 4.6) were considered for the study. They were divided into four groups of 20 participants each with blood groups A, B, AB and O (all Rh positive), with equal numbers of males and females in each group. None had a history of otological problems, intake of ototoxic drugs, or exposure to loud sounds. All participants had pure-tone thresholds below 15 dB HL. Informed consent was obtained from all participants. Pure-tone audiometry and speech audiometry were administered to all participants using a calibrated dual-channel audiometer.

Immittance evaluation was carried out using a GSI Tympstar middle ear analyzer. Otoscopic examination showed a normal tympanic membrane with the cone of light present in all participants. A tympanogram was recorded using a 226 Hz probe tone, and the admittance, peak pressure and ear canal volume were noted. In addition, the gradient was recorded as the tympanometric width at 50% of the tympanogram height. The resonant frequency of the middle ear was determined using the sweep-frequency method. Acoustic reflex thresholds (ipsilateral and contralateral) were determined at 500 Hz, 1000 Hz, 2000 Hz and 4000 Hz with a 5 dB step size. These findings were compared between individuals with different blood groups.

Results & Discussion:

A descriptive statistical analysis was done for the collected data and the mean, median, range and standard deviation were determined.

The results showed that the resonant frequency was slightly higher in blood group O than in the other blood groups. In addition, the acoustic reflex thresholds were slightly elevated at all frequencies (ipsilateral and contralateral) for individuals with blood group O. The Shapiro-Wilk test showed that the data were not normally distributed (p < 0.05). Thus, non-parametric Kruskal-Wallis H tests were carried out separately for admittance, peak pressure, ear canal volume, gradient, resonant frequency and acoustic reflex thresholds, and showed a significant difference (p < 0.01) for resonant frequency and acoustic reflex thresholds. Mann-Whitney U tests were then used to determine which blood groups differed. The resonant frequency was significantly higher (p < 0.05) and the acoustic reflex thresholds were significantly elevated (p < 0.01) at all frequencies for blood group O compared to the other blood groups. Effect sizes, calculated using the formula r = Z/√N, ranged between 0.7 and 0.9, suggesting a strong difference between the groups.
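For illustration, the Python sketch below computes a Mann-Whitney U comparison together with the effect size r = Z/√N used above, deriving Z from the large-sample normal approximation; the acoustic-reflex-threshold values are hypothetical placeholders, not study data.

```python
import numpy as np
from scipy import stats

def mannwhitney_effect_size(a, b):
    """Mann-Whitney U test with effect size r = Z / sqrt(N).

    Z is derived from the large-sample normal approximation of U
    (no tie correction in this sketch); N = n1 + n2.
    """
    a, b = np.asarray(a, float), np.asarray(b, float)
    n1, n2 = a.size, b.size
    u, p = stats.mannwhitneyu(a, b, alternative="two-sided")
    mu = n1 * n2 / 2.0
    sigma = np.sqrt(n1 * n2 * (n1 + n2 + 1) / 12.0)
    z = (u - mu) / sigma
    r = abs(z) / np.sqrt(n1 + n2)
    return u, p, r

# Hypothetical ipsilateral ART values (dB HL) for blood group O vs. another group.
art_group_o = [95, 100, 95, 100, 105, 95, 100, 100]
art_other   = [85, 90, 85, 90, 85, 90, 85, 90]
print(mannwhitney_effect_size(art_group_o, art_other))
```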

The results of the study showed elevated acoustic reflex thresholds in individuals with blood group O. Previous studies have suggested a possible reduction in outer hair cell (OHC) functioning in those with blood group O (Chen, Chow & McPherson, 2018; Prabhu et al., 2017). This could have led to a reduction in the effective sound intensity, resulting in poorer acoustic reflex thresholds. This result is consistent with previous reports of poorer ultra-high-frequency thresholds and reduced otoacoustic emissions in persons with blood group O (Chow, McPherson & Fuente, 2016; Chen, Chow & McPherson, 2018; Prabhu et al., 2017). Thus, the difference in ART may be associated with differences in the number of healthy outer hair cells (Kemp, 2002; Lonsbury-Martin and Martin, 2007). The results also showed an increase in resonant frequency for adults with blood group O, suggesting a relatively more stiffness-dominated middle ear in this group. The effect of this on the auditory system of individuals with blood group O should be further explored, and further studies are essential for better generalization of the findings.

Summary & Conclusion:

The study attempted to determine differences in tympanometric findings and acoustic reflex thresholds in persons with different blood groups. The results showed that the acoustic reflex thresholds were elevated and the resonant frequency was increased in individuals with blood group O compared to others. The results are in accordance with previous studies which also suggest possible reduced OHCs in persons with blood group O. This could have resulted in elevated acoustic reflex thresholds.


  Abstract – AP1013: Parent's Awareness and Attitude on Early Identification and Early Rehabilitation of Hearing Loss - A Questionnaire Based Report Top


Basaiahgari Nagaraju1, Devi N2 & Darga Baba Fakruddin3

1nagarajuaslp01@gmail.com,2deviaiish@gmail.com, &3dbaba786@gmail.com

1All India Institute of Speech and Hearing, Mysuru - 570006

Introduction:

Hearing loss refers to the inability of an individual to sense or perceive sounds. It has been estimated that about 466 million people in the world have disabling hearing loss, and 34 million of them are children (WHO, 2018). Hearing loss is in itself very debilitating, and when present from birth it can have a profound impact on the child and the family in many domains. There is no permanent cure for the condition, but advances in technology offer a range of solutions, from hearing aids to therapy-related management strategies. If congenital hearing loss is identified in the early years of life and appropriate rehabilitative measures are sought, there is a high chance of near-normal development of the child.

Need for Study:

Dudda et al. (2017) explored mothers' knowledge of, and attitude towards, the risk factors for infant hearing loss and early identification and intervention. Their results revealed that parents are aware of some of the causes, such as head injury, a slap to the ear and family history of hearing loss, but have less awareness about early identification and intervention. Govender et al. (2017) studied the knowledge of mothers in Durban, South Africa, regarding the risk factors for hearing loss in infants and their awareness of audiology services, and reported average awareness of the risk factors but less awareness of audiological services. There is a need to survey caregivers regarding these issues in a country like India, where 63 million people (6.3%) suffer from significant auditory loss, four in every 1000 children suffer from severe to profound hearing loss, and over 100,000 babies are born with hearing deficiency every year (Varshney, 2016).

Aim & Objectives:

The aim of the current study was to develop a questionnaire on parents' awareness of, and attitude towards, early identification and early rehabilitation of hearing loss. The objectives were:

  • To develop a questionnaire on parents' awareness and attitude regarding early identification and early rehabilitation of hearing loss
  • To administer the developed questionnaire to parents of normal-hearing children and parents of children with hearing loss
  • To compare the questionnaire scores between the two groups.


Method:

A total of eighty participants took part in the present study, divided into two groups. Group I consisted of forty parents of normal-hearing children; Group II consisted of forty parents of children with hearing loss. The methodology comprised two phases: Phase I, the development and validation of the questionnaire, and Phase II, administration of the developed questionnaire.

Phase I:

As there is very little literature on questionnaire-based assessment of knowledge about early identification and rehabilitation of children with hearing impairment, a questionnaire with a total of 60 questions was developed from resource books and awareness materials. These questions were given to 18 speech and hearing professionals and 10 parents for content validation. The modified questionnaire consisted of the 50 questions that met the 75% criterion based on the suggestions provided during content validation. Each question had 'yes' or 'no' response choices. The developed questionnaire had five subsections: Section I contains 11 questions on awareness of the causes of hearing loss, Section II contains 8 questions on awareness of early identification of hearing loss, Section III contains 10 questions on the need for hearing aids and therapy, Section IV contains 7 questions on the education of the child, and Section V contains 5 questions on the attitude of parents towards their child with hearing impairment.

Phase II:

In this phase, the developed questionnaire was administered to the two groups of parents in their respective languages. Responses were scored as '1' for a correct response and '0' for an incorrect response. The collected data were documented for further statistical analysis across groups and across the subsections of the questionnaire.

Results & Discussion:

The Kolmogorov-Smirnov and Shapiro-Wilk tests for normality indicated that the data did not follow a normal distribution; hence non-parametric statistical tests were carried out for each subsection separately. The Mann-Whitney test showed a significant difference between the groups for all subsections except Section I. The raw scores were then converted to percentage scores, and the Friedman test was used to check whether there was a significant difference between the subsections within each group. The findings reveal a need for greater awareness among parents of normal-hearing children. Although the parents of children with hearing loss showed better awareness of causes and early identification, their awareness regarding the education of their children was still low, and it was lower still among Group I participants.
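The conversion of raw subsection scores to percentages and the within-group Friedman comparison can be sketched as follows; the scores shown are hypothetical placeholders and the snippet is illustrative only, not the study's statistical output.

```python
import numpy as np
from scipy import stats

# Hypothetical raw scores (number of correct answers) per subsection for 5 parents,
# with the item counts of the questionnaire's five sections (11, 8, 10, 7, 5).
items_per_section = np.array([11, 8, 10, 7, 5])   # Sections I-V
raw = np.array([
    [9, 4, 5, 2, 3],
    [8, 3, 4, 3, 2],
    [10, 5, 6, 2, 4],
    [7, 2, 3, 1, 2],
    [9, 4, 5, 3, 3],
])

percent = raw / items_per_section * 100            # convert to percentage scores

# Friedman test: within-group comparison across the five subsections.
chi2, p = stats.friedmanchisquare(*percent.T)
print(f"Friedman chi-square = {chi2:.2f}, p = {p:.4f}")
```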

Summary & Conclusion:

The present study reveals that regular, systematic and structured sensitization programmes are needed in our country to create better awareness among parents of normal-hearing children about hearing disability, which would in turn help reduce the attitudinal barriers of society towards hearing disability.


  Abstract – AP1015: Revisiting Dichotic Listening Paradigm using Event Related Potential - A Neuropsychophysiological Study Top


Mayur Bhat1, Hari Prakash P2 & Krishna Yerraguntla3

1bhatmayur0@gmail.com,2hari.prakash@manipal.edu, &3krishna.y@manipal.edu

1Manipal College of Health Professions, Manipal, 576104

Introduction:

Dichotic Listening (DL) is a behavioral, non-invasive paradigm used to assess cerebral dominance. In this test, two different auditory stimuli are presented to the two ears simultaneously and independently. The basic principle is that speech is lateralized to the left hemisphere; hence the individual tends to repeat the stimulus presented to the right ear, which is termed the Right Ear Advantage (REA). The forced-attention variant of this test is used to study the interaction of attention with speech laterality.

Event-related potentials (ERPs) have been used to study the neurophysiology of dichotic listening in the non-forced paradigm. Using speech stimuli, ERPs have clearly shown that the left hemisphere is dominant for speech and language function, with the lateralized (speech) function most often reflected in amplitude modulation.

Need for Study:

Earlier studies on DL using ERPs have several methodological drawbacks, particularly in how the dichotic environment was created, the participant's task, and the reporting of results. First, the use of loudspeakers to simulate the dichotic listening task, as adopted by many earlier studies, poses serious limitations for inferring DL processing. Second, in most studies the participants were passive listeners while dichotic ERPs were measured; the inferences made are therefore far from the behavioral DL measure, in which the listener must actively attend and verbally repeat what was heard. Moreover, none of the studies measured ERPs during a behavioral measure; rather, the two were done at different time points. Further, none of the ERP studies have simulated the forced-recall conditions of dichotic listening, which mirror top-down processing. Exploring this may provide additional tests of cognitive processes such as attention (forced right) and executive function (forced left). Overall, there are several limitations in inferring DL processing from ERPs, and there is a need for a novel paradigm that closely mimics the behavioral DL task in stimulus presentation as well as in attention and participant response. Thus, the current study aimed to develop and validate a paradigm for a simultaneous behavioral and electrophysiological (neuropsychophysiological) dichotic test procedure.

Aim & Objectives:

  1. To develop and validate a paradigm for a simultaneous behavioral and electrophysiological (neuropsychophysiological) dichotic test procedure.
  2. To investigate the behavioral and electrophysiological correlates in normal healthy adults using both the dichotic free recall and the forced attention paradigms.


Method:

This prospective observational study recruited 35 normal individuals in phase 1 and 10 normal individuals in phase 2. Native Kannada speakers having normal hearing sensitivity, normal middle ear functioning and right handed dominance were recruited for the study.

Procedure Phase 1

Dichotic preparation and presentation

The dichotic numbers developed by Bhat et al. (2019) were used as stimuli for acquiring the electrophysiological data. Four dichotic stimuli were prepared using the numbers 6, 1 and 7 in different combinations (6-1, 6-7, 7-1 and 1-7). Stimuli were presented using the 'Audio CPT' module of the Stim2 software, and the 'Acquire' module of the Compumedics NeuroScan system was used to record the CAEPs.

Task and condition

  1. In the free recall condition, subjects were asked to press the right button when '6' was heard in the right ear and the left button when '6' was heard in the left ear.
  2. In the forced right condition, subjects were instructed to pay attention to the right ear and press the right button when '6' was heard and the left button when any other number was heard.
  3. In the forced left condition, subjects were instructed to pay attention to the left ear and press the left button when '6' was heard and the right button when any other number was heard.


Validation

The newly developed ERP paradigm was tested behaviorally against the gold-standard Dichotic Digit Test - Kannada (Bhat et al., 2019) on 35 normal individuals.

Phase 2

Preparation and Recording

Participants were instructed to relax and be seated comfortably. The eye and reference (mastoid) electrode sites were prepared using a skin-preparation gel (Nuprep). A 32-channel electrode cap was placed on the participant's scalp, and each electrode was filled with conduction paste (Quick Cell electrolyte and Ten20) to improve conduction. Raw EEG was acquired for the free recall, forced right and forced left conditions with a band-pass filter set between 1 and 100 Hz and a sampling rate of 1000 Hz.
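The band-pass filtering was applied online by the acquisition system; the following Python sketch merely mirrors the reported settings (1-100 Hz at a 1000 Hz sampling rate) as an offline re-filtering of hypothetical multichannel data, and is not the acquisition software's own implementation.

```python
import numpy as np
from scipy.signal import butter, filtfilt

def bandpass_eeg(eeg, fs=1000.0, low=1.0, high=100.0, order=4):
    """Zero-phase Butterworth band-pass (1-100 Hz) applied to raw EEG.

    eeg : array of shape (n_channels, n_samples), sampled at fs Hz.
    The filter order is an assumption; only the corner frequencies and
    sampling rate come from the recording settings reported above.
    """
    b, a = butter(order, [low, high], btype="bandpass", fs=fs)
    return filtfilt(b, a, eeg, axis=-1)

# Hypothetical 32-channel, 10-second recording.
rng = np.random.default_rng(1)
raw = rng.normal(0.0, 10.0, size=(32, 10_000))
filtered = bandpass_eeg(raw)
print(filtered.shape)   # (32, 10000)
```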

ERP Analysis

Brain Electrical Source Analysis (BESA) Research version 7.0 (BESA GmbH, Germany) was used to analyze the ERP waveforms. The latencies and amplitudes of the N1, P2 and N2 components were exported to MS Excel 2016 and SPSS version 15 (SPSS Inc., Chicago) for statistical analysis.

Results & Discussion:

The study aimed to develop a new paradigm for simultaneous ERP and behavioral assessment of dichotic listening. The first phase included criterion validation of the new paradigm against the participants' performance on a classical DL test. Statistical analysis revealed a significant right ear advantage on both tests (t(27) = 8.00, p = 0.001). These findings can be explained with reference to Kimura's structural model, according to which the REA results from rigid bottom-up neural connections (Eichele, Nordby, Rimol, & Hugdahl, 2005): the contralateral projections of the ascending auditory system contain more fibers and consequently produce more cortical activity than the ipsilateral projections. Further, the Intra-Class Correlation (ICC) for criterion validation revealed a good correlation between the tests (0.825); hence the developed paradigm can be considered reliable.
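A minimal sketch of how a right-ear advantage can be tested, using a paired t-test and a commonly used laterality index, is shown below; the scores are hypothetical placeholders and this is not the authors' analysis code.

```python
import numpy as np
from scipy import stats

# Hypothetical free-recall scores (correct responses) per participant.
right_ear = np.array([18, 20, 17, 19, 21, 18, 20, 19, 22, 18])
left_ear  = np.array([14, 15, 13, 16, 15, 14, 16, 15, 17, 14])

# Paired t-test comparing right- and left-ear scores.
t, p = stats.ttest_rel(right_ear, left_ear)

# A commonly used laterality index, (R - L) / (R + L) * 100;
# positive values indicate a right-ear advantage.
li = (right_ear - left_ear) / (right_ear + left_ear) * 100
print(f"t = {t:.2f}, p = {p:.4f}, mean LI = {li.mean():.1f}")
```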

Phase one also included criterion validation of the new test in the forced recall condition. Results showed no significant difference between right and left ear scores in the forced paradigm (t(28) = 6.68, p = 0.07). These findings are in line with a previous study (Hugdahl et al., 2009), which showed an REA in the forced right condition and an LEA in the forced left condition. The authors stated that the FR and FL attention conditions vary in the degree to which they reflect cognitive control, with forced right (FR) reflecting an attention process and forced left (FL) reflecting an executive cognitive control process. The ICC between the new and the classical test was 0.72, indicating good reliability in the forced attention condition as well.

The second phase consisted of simultaneous electrophysiological recording during the DL task of the newly developed paradigm. Results indicated a significant N1 latency difference between the ears, with stimuli presented to the right ear evoking shorter latencies, mimicking the REA in the free recall task (F(1,9) = 21.08, p = 0.003). Similar findings were observed for the amplitude measure. The shorter latency and larger amplitude for the right ear in the DL task can be explained by Kimura's structural model, as above. In the forced condition, N2 amplitude in the non-attended ear was consistently larger than in the attended ear, though not statistically significantly so, probably owing to the small sample size of this study. This observation is noteworthy, since attention (top-down) modulation is more often reported in later cognitive components than in the early sensory components of the ERP. Previous studies have reported an association between N200 and executive functioning, specifically inhibition (Geppert et al., 2010). In the current study, the larger N200 amplitude in the non-attended ear probably reflects the inhibitory process of executive function.

Summary & Conclusion:

The current study provides a novel, validated dichotic listening neuropsychophysiological tool that can assess perceptual, attentional and executive control processes in normal individuals. If criterion-validated in disordered populations, this paradigm can be used in clinical settings for diagnostic purposes.


  Abstract – AP1016: Comparison of Psychosocial Development and Academic Performance of Cochlear Implantees Vs Hearing Aid Users Top


Noorain Alam1, Ramandeep Kaur2, Preeti Sharma3 & Gurmannat Kaur4

1noorain.aslp@gmail.com,2ramandeepkaur32111096@gmail.com,3preetiaslp@yahoo.com, &4mannatshipra26@gmail.com

1Post-Graduate Institute for Medical Education and Research, Chandigarh - 160012

Introduction:

Hearing loss affects communication development and behavioural skills, which in turn influence educational and social development. Research indicates that children with hearing loss are at higher risk for a number of academic, social and behavioural difficulties (Knoors et al., 2014).

Fellinger et al. (2012) showed that deafness may affect children's social, emotional and cognitive development. The prevalence of mental health problems among hearing-impaired children is estimated to be 20-40% higher than among their normal-hearing peers (Bottcher & Dammeyer, 2013).

Opportunities to learn social interaction are significantly reduced in hearing-impaired children compared with normal-hearing children and are affected by the degree of hearing loss, age at diagnosis, age at intervention, and type of intervention received (Leigh et al., 2008).

It has been reported that many children with hearing loss face a higher risk of academic difficulties in primary and secondary school (Lederberg et al, 2013).

Need for Study:

Psychosocial development and academic achievement are two main aims of intervention in children with hearing impairment using either cochlear implants or hearing aids. Hence, there is a need to compare both these domains between the two groups.

Aim & Objectives:

The present study aimed to determine the effect of hearing loss on children's psychosocial development, to compare this development between hearing aid users and cochlear implant users, and to examine the educational development of children using hearing aids or cochlear implants.

Method:

Data were collected from 30 children with severe-to-profound congenital hearing loss, aged 5-10 years, divided into two groups: 15 hearing aid users and 15 unilateral cochlear implant users.

All of the children studied in mainstream classes while receiving speech therapy. None of the children had additional problems or disabilities. Age of hearing loss detection ranged between birth and 4 years among the CI users and between birth and 7 years among HA users.

An independent-samples t-test showed a significant difference in age of detection between the two groups (p < .05).

Children with CIs were implanted between the ages of 18 and 84 months (M = 4.57 years, SD = 1.42). At the time of the study, their duration of implant use was at least 1 year (M = 3.69, SD = 1.34). Children with HAs started using their device between the ages of 2 and 84 months (M = 3.1 years, SD = 1.2).

Questionnaires:

  1. Strengths and Difficulties Questionnaire (Hindi) (Goodman & Scott, 1999): The SDQ is a child mental health screening questionnaire for parents of 3- to 16-year-old children, used to assess psychosocial development. The extended parent version used in this study comprised five items for each of five scales: Emotional Symptoms, Conduct Problems, Hyperactivity/Inattention, Peer Problems, and Prosocial Behaviour.
  2. SIFTER Scale (Anderson & Matkin, 1996): The SIFTER is a screening instrument completed by teachers, designed to identify educational problems in hearing-impaired individuals.


Results & Discussion:

SDQ scale: No significant correlation was detected between SDQ scores and age at identification of hearing loss. Six linear regressions were conducted, one for each of the SDQ sub-scales, with the children's hearing device and age at hearing-loss identification as independent variables. Significant results were found for hyperactivity [F(2, 49) = 4.60, p < .05, f2 = .20] and pro-social behavior [F(2, 49) = 3.38, p < .05, f2 = .14].
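A regression of this kind, with hearing device and age at identification as predictors of an SDQ sub-scale score, can be sketched in Python as follows; the data-frame values are hypothetical placeholders and the snippet is illustrative only, not the study's statistical software.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical data frame: SDQ hyperactivity score, hearing device
# (CI vs. HA) and age at hearing-loss identification (months).
df = pd.DataFrame({
    "hyperactivity": [4, 5, 3, 6, 7, 8, 6, 9, 5, 7],
    "device":        ["CI", "CI", "CI", "CI", "CI", "HA", "HA", "HA", "HA", "HA"],
    "age_id_months": [6, 10, 8, 24, 30, 18, 36, 48, 12, 60],
})

# Linear regression with device and age at identification as predictors,
# analogous to one of the six sub-scale regressions described above.
model = smf.ols("hyperactivity ~ C(device) + age_id_months", data=df).fit()
print(model.summary())
```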

Hearing device was a significant predictor of children's hyperactivity, while both hearing device and age at hearing-loss detection predicted pro-social behaviour. Parents of children with CIs reported less hyperactivity and more pro-social behavior than parents of children with HAs. Older age at detection of hearing loss was also related to more pro-social behaviour.

Among children with CIs, correlation analysis between age at implantation and social-emotional functioning revealed significant negative correlations between age at implantation and hyperactivity (r = -.40, p < .05) and conduct problems (r = -.35, p < .05). Correlations between age at implantation and total difficulties (r = -.29, p = .07), emotional symptoms (r = -.22, p = .14), peer problems (r = .04, p = .42), and pro-social behavior (r = .08, p = .36) were not statistically significant.

SIFTER scale:

Thirty teachers returned completed Preschool S.I.F.T.E.R. questionnaires. Raw scores were calculated for each subscale and categorized into pass, marginal pass, and fail. For children with CIs, the SIFTER results showed good scores on the class behavior and class participation subscales but very poor scores on the communication subscale, with average scores on the attention and academic subscales. The school behavior subscale had the highest pass rate, 11 (73.3%), and the lowest failure rate, 1 (6.7%). Next was the participation subscale, on which 10 (66.7%) children passed, 2 (13.3%) had a marginal pass, and 3 (20%) failed. The attention subscale, with 9 (60%) passing, and the academic subscale, with 7 (46.7%) passing, ranked third and fourth, respectively. Teachers rated children poorly on the communication subscale, on which only 2 (13.3%) passed and 1 (6.7%) had a marginal score, while the majority, 12 (80%), failed. Considering all five SIFTER subscales, only 2 (13.3%) of the children passed all five components, 4 (26.7%) passed four components and another 4 (26.7%) passed three components, while the majority, 8 (53.3%), failed in at least three subscales.

Children with hearing aids scored well on the class behaviour and class participation subscales but poorly on the communication and attention subscales, while the academic subscale was average. The school behaviour subscale had the highest pass rate, 11 (73.3%), and the lowest failure rate, 2 (13.3%). On the participation subscale, 9 (60%) children passed, 2 (13.3%) had a marginal pass, and 4 (26.7%) failed. The attention and academic subscales had similar pass rates of 7 (46.7%) each. Teachers rated children poorly on the communication subscale, on which only 1 (6.7%) passed and 2 (13.3%) had marginal scores, while the majority, 12 (80%), failed. Considering all five SIFTER subscales, only 1 (6.7%) of the children passed all five components, 3 (20%) passed four components and another 4 (26.7%) passed three components, while the majority, 10 (66.7%), failed in at least three subscales.

The present study revealed that parents of children with CIs reported less hyperactivity and more pro-social behavior than parents of children with HAs. Significant negative correlations were found between age at implantation and children's hyperactivity and conduct problems.

Overall, only 13.3% of CI users and 6.7% of hearing aid users passed all five SIFTER subscales, while the rest failed at least one component, indicating that the majority of children have weaknesses in at least one SIFTER subcomponent. The finding that the majority of children had pass scores in classroom behavior, participation, and attention indicates that most children with cochlear implants were able to conduct themselves in a socially acceptable manner.

On the SIFTER scale, teachers rated cochlear-implanted children poorly in the area of communication, and these children performed below average overall. Hearing aid users had even more difficulty with communication abilities and on the emotional subscale.

It is important that children's progress be assessed periodically and that school placement decisions be reviewed to ensure they can achieve their highest academic potential.

Summary & Conclusion:

The study shows that children with cochlear implants have better psychosocial development than hearing aid users. However, this development is affected by the age of detection of hearing loss as well as the age at implantation. Both hearing aid and CI users have very poor communication skills as perceived by their teachers and are at increased risk of poor academic performance, with the risk being greater for hearing aid users. Early identification and early implantation are needed to ensure better psychosocial as well as academic performance.


  Abstract – AP1018: Awareness of Hearing Loss, Vocal Hygiene and Vocal Abuse among Auto-Rickshaw Drivers in Mumbai Top


Shivani Prabhu1, Shreya Parab2, Siddhi Kadam3 & Arun Banik4

1shiv.prabhu9913@gmail.com,2shreyaparab63@gmail.com,3siddhikadam786@gmail.com, &4arunbanik@rediffmail.com

1Ali Yavar Jung National Institute of Speech and Hearing Disabilities (Divyangjan), Mumbai - 400050

Introduction:

Long hours spent driving on the road, accompanied by urban noise pollution, pave the way for several health hazards. Hearing loss caused by noise exposure at the workplace is referred to as occupational noise-induced hearing loss (ONIHL). Because its onset is gradual, NIHL may be well advanced by the time it causes a considerable impairment. Furthermore, auto-rickshaw drivers spend a significant amount of time in an environment that is noisy and polluted with toxic gases, which with continuous long-term exposure can affect the nasopharynx (nasal cavity and soft palate), sinuses, respiratory system, speech and voice. Harmful lifestyle, stressful occupational conditions and improper vocal hygiene during working hours act as contributing factors.

Need for Study:

With urbanization, there has been a rapid increase in transport vehicles over the past few decades, resulting in markedly higher environmental noise levels and accompanying air pollution. The population at large is unfamiliar with the magnitude of the adverse effects of noise and air pollution, and with the prevention and management of these problems.

Habitual exposure to occupational noise damages the hair cells in the cochlea, causing a sensory hearing loss. Ultimately, some of the nerve fibers supplying the damaged hair cells may also degenerate, resulting in a neural hearing loss as well.

City traffic heard from inside closed vehicles is about 85-90 dB(A), which is within permissible limits for up to 8 hours of exposure per day, but honking creates noise of about 110 dB(A), which should not be tolerated for more than half an hour. If the noise level reaches 115 dB(A), exposure should not exceed 15 minutes. Longer exposure leads to gradual hearing loss (NITA, 2008).
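The durations quoted above are consistent with a damage-risk rule based on a 90 dB(A) criterion and a 5 dB exchange rate; the following Python sketch reproduces those permissible exposure times under that assumption (it is not a citation of NITA, 2008).

```python
def permissible_hours(level_dba, criterion=90.0, exchange_rate=5.0):
    """Permissible daily exposure time for a given noise level.

    Uses T = 8 / 2 ** ((L - criterion) / exchange_rate) with an assumed
    90 dB(A) criterion and 5 dB exchange rate, which reproduces the
    durations quoted in the paragraph above.
    """
    return 8.0 / 2 ** ((level_dba - criterion) / exchange_rate)

for level in (90, 110, 115):
    hours = permissible_hours(level)
    print(f"{level} dB(A): {hours:.2f} h ({hours * 60:.0f} min)")
# Expected: 90 dB(A) -> 8 h, 110 dB(A) -> 0.5 h (30 min), 115 dB(A) -> 0.25 h (15 min)
```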

Continuous exposure to dust, chemical inhalants, cigarette smoke, etc., may dry the delicate tissue lining the vocal tract, resulting in dry mouth, throat tightness and throat irritation, or, if the particles lodge in the lungs, may affect the respiratory function essential for voicing.

In spite of the increasing risk of hearing loss and/or tinnitus, poor vocal hygiene and vocal abuse among auto-rickshaw drivers, these issues have hardly been studied.

Aim & Objectives:

AIM:

To determine the awareness of hearing loss, vocal hygiene and vocal abuse with associated risk factors like exposure to continuous noise, air pollution and smoking respectively among auto-rickshaw drivers in Mumbai.

OBJECTIVES:

  1. To study the awareness of the risk of hearing loss due to continuous noise exposure among auto-rickshaw drivers.
  2. To study the awareness of vocal hygiene among auto-rickshaw drivers.
  3. To study the awareness of the risk of vocal abuse and misuse among auto-rickshaw drivers.


Method:

Research design: Survey.

Participants: A convenience sample of 55 auto-rickshaw drivers was used for the study. All participants were male, aged 22 to 72 years (mean age 42.34 years). 10.9% of participants each had been driving for <2 years, 2-5 years and 6-10 years, while the majority (67.27%) had been driving for >10 years.

Tool: A self-developed questionnaire of 23 questions, drafted in English, validated by 5 professionals who considered the items highly appropriate, and translated into Hindi using a standard procedure, was used as the research tool. The questions were close-ended (56.53% yes/no and 43.47% multiple-choice) and were organized in 3 sections: A) years of working, exposure time, exposure to irritants, and details regarding smoking and alcohol; B) hearing sensitivity and symptoms of hearing loss; and C) vocal hygiene, vocal abuse and misuse.

The researchers approached the participants and briefly explained the objective and procedure of the study. After obtaining consent, responses to the questions were collected from the auto-rickshaw drivers. The information obtained was kept confidential in keeping with ethical considerations.

Results & Discussion:

RESULTS:

A total of 10.9% of participants drove the auto-rickshaw for 4-8 hours at a stretch, 58.18% for 9-12 hours and 29.09% for more than 12 hours. About 30.9% of participants consumed >5 cups of caffeinated beverages and 12.7% consumed 1-2 alcoholic drinks daily. Regular smoking was reported by 27.27% of participants, and 9.09% were occasional smokers. Irritants in the working environment affected the voice quality of 40% of participants, while 9.09% were unsure. 38.18% agreed and 50.90% disagreed that continuous and/or excessive exposure to noise can impair their hearing, while 9.09% were unsure and 1.81% had no idea how noise exposure was linked to hearing impairment. 20% reported a decrease in hearing sensitivity since they had started driving an auto-rickshaw. Continuous tinnitus was reported by 3.63% and intermittent tinnitus by 32.72% of participants. About 72.72% reported that their hearing sensitivity was not impaired enough to disturb day-to-day conversation. Only 12.72% of participants knew persons with hearing loss due to excessive noise exposure. About 94.54% had not undergone any hearing evaluation.

About 36.36% of participants consumed 2-3 liters and 27.27% consumed 1-2 liters of water daily. Abusive habits like frequent throat clearing (40%), shouting/screaming (30.9%), and excessive talking (25.45% spend 9-12 hours talking, while 7.27% speak for >12 hours) were also reported.

47.27% and 7.27% of participants talked occasionally and often, respectively, while driving. 29.09% spoke loudly or at an unnatural pitch, whereas 36.36% were uncertain. About 29.09% of participants agreed that background noise has an impact on voice quality. Further, 14.54% of participants admitted that their voice quality deteriorated by the end of the day.

DISCUSSION:

The study reflects a lack of awareness among the majority of rickshaw drivers concerning the detrimental effects that noise and air pollutants can have on their aural and vocal health, respectively. It also highlights the ill effects of continuous exposure to noise and air pollutants.

Consumption of alcohol, smoking cigarettes, inadequate water intake and long hours of exposure to irritants in the atmosphere may further cause or elevate voice-related problems in auto-rickshaw drivers.

Increased loudness during a conversation in noise also demonstrates the masking effects of background noise.

Summary & Conclusion:

SUMMARY:

The survey indicated that auto-rickshaw drivers are not well aware of these occupational health hazards, which ultimately reduce their work efficiency over time.

CONCLUSION:

Although the simplest and most desirable solution is to limit two-stroke engine and environmental noise to intensities below damaging levels, audiometric and voice screening and monitoring are also highly effective in preventing hearing loss and voice disorders.

Implications: The data contribute to determining and developing awareness of the ill effects of continuous exposure to noise and air pollutants, and the associated risk factors, on an individual's hearing and voice. The survey highlights the role of the audiologist and speech-language pathologist in counselling about hearing and vocal health. The findings can serve as a database for future studies on awareness of occupational NIHL and voice disorders.

Limitations: Because of time constraints, clinical evidence of progressive threshold shift due to occupational noise-induced hearing loss could not be obtained, nor could acoustic and aerodynamic voice measurements. A more elaborate, open-ended questionnaire needs to be administered to a larger sample for implementable results.


  Abstract – AP1023: Effect of Compensation Benefits Accessible in India on Self-Rating of Hearing by Workers Exposed to High Levels of Industrial Noise Top


Rubina Yasmin1, Rima Das (Datta)2, Sreemoyee Panda3, Ganendra Prasad Sue4, Nilanjan Paul5 &

Riyanka Choudhury6

1kumkummondal1998@gmail.com,2rimadatta@gmail.com,3sreemoyeepanda@gmail.com,4ganendrasue197@gmail.com,5paulnilanjan2@gmail.com, & 6riyanka96@gmail.com

1P.K.K. College of Education, Hooghly, Westbengal - 712703

Introduction:

Industrial noise is usually considered mainly from the point of view of environmental health and safety rather than nuisance, as sustained exposure can cause permanent hearing damage in the form of Noise-Induced Hearing Loss (NIHL). NIHL is mainly a sensorineural hearing loss with wide variability in hearing thresholds, with the greatest loss at 4 kHz compared to lower and higher frequencies, gradually involving the lower and higher frequencies as the loss progresses. Worldwide, 16% of disabling hearing loss in adults is attributed to occupational noise, ranging from 7% to 21% across the various sub-regions (Indian Journal of Occupational and Environmental Medicine, 2008). A hearing conservation programme is a systematic plan implemented to protect the hearing of employees from damage due to hazardous sound exposure in the workplace. NIHL has been a compensable disease in India since 1948 under the Employees' State Insurance Act (1948) and the Workmen's Compensation Act (1923), yet the first case received compensation only in 1996. Awareness has therefore been minimal regarding the effects of noise and the benefits employees could derive if noise exposure caused a permanent change in their hearing.

Need for Study:

Workers who have already acquired a permanent noise-induced hearing loss due to high levels of noise exposure are eligible for compensation claims. For workers exposed to high levels of industrial noise to understand the benefits of ear protection devices and of regular monitoring of hearing status, an adequate self-perception of hearing, comparable to that of a control group, is required; this would ultimately lead to an effective hearing conservation programme. The effect of awareness of compensation benefits on workers' self-perception of hearing also needs to be studied, to judge whether their ratings of their hearing ability are unbiased.

Aim & Objectives:

The aim of the present study was to investigate the effect of awareness of compensation benefits on self-perception of hearing by industrial workers exposed to high levels of industrial noise, as compared to hearing-impaired people not exposed to such noise.

The primary purpose was to determine whether a statistically significant relationship existed between workers' hearing loss and their self-rating of hearing, as compared to the control group, attributable to the compensation provided by employers.

Method:

Subjects-

Two groups of subjects participated in the experiment. One hundred factory workers (age range 30 to 55 years) highly exposed to industrial noise formed the 'experimental group', while 100 hearing-impaired persons not exposed to industrial noise formed the 'control group'. Based on the severity of loss, they were further divided into subgroups. The subjects' hearing ranged from a confirmed high-frequency sensorineural hearing loss with a notch at 4 kHz to a moderate degree of loss at low and mid frequencies with greater involvement of the frequencies around 4 kHz. The control group consisted of age-, gender- and hearing-loss-matched subjects. All subjects in both groups were able to read and write English.

Test Environment-

All testing was carried out in a sound-treated suite in a clinical set-up. The ambient noise levels were within the permissible levels specified by ANSI (1998).

Instrumentation and methodology-

  1. A detailed case history regarding demographic details, period and duration of noise exposure, other medical history, duration of ear protection device usage, benefits provided by the company including compensation claims if any etc.
  2. Otoscopic examination was carried out for all the subjects.
  3. A calibrated diagnostic audiometer, MADSEN ITERA-II, was used to estimate hearing sensitivity (using the Carhart-Jerger modified Hughson-Westlake method, 1959) for both groups. Behavioural thresholds were obtained at 250 Hz, 500 Hz, 1 kHz, 2 kHz, 3 kHz, 4 kHz, 6 kHz and 8 kHz for air conduction and from 250 Hz to 4 kHz for bone conduction.
  4. Tympanometry was performed using a calibrated tympanometer, MADSEN ZODIAC 901, to record static compliance, middle ear pressure, tympanometric gradient and width, and ear canal volume for monitoring middle ear status.
  5. Speech audiometry was administered using the phonetically balanced word list in Bengali developed by Mainak Santra and Indranil Chatterjee to assess word recognition scores.
  6. Maximum, minimum and average noise levels at the workplaces of the experimental group were measured with a BRUEL-KJAER sound level meter (SLM) and expressed in dB(A). A noise exposure level of >90 dB(A) for 8 hours daily was the inclusion criterion (a sketch of the level-averaging computation follows this list).
  7. The Hearing Handicap Inventory for Adults, a 25-item self-assessment scale, was administered to both groups to rate the self-perception of their hearing. The ratings were then subjected to statistical analysis.
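The 'average' noise level over a work shift is conventionally an energy (logarithmic) average rather than an arithmetic mean of the dB(A) readings. The sketch below is only an illustration of that computation with hypothetical, equally spaced SLM readings; it is not the measurement protocol used in the study.

```python
import math

def laeq(samples_dba):
    """Energy-average equal-duration dB(A) samples into an equivalent continuous level."""
    mean_intensity = sum(10 ** (level / 10) for level in samples_dba) / len(samples_dba)
    return 10 * math.log10(mean_intensity)

# Hypothetical SLM readings taken at regular intervals across a shift
readings = [88, 92, 95, 90, 93, 97, 91]
print(f"LAeq over the shift: {laeq(readings):.1f} dB(A)")  # ~93.2 dB(A)
```

On such a computation, a workplace would meet the study's inclusion criterion if the 8-hour equivalent level exceeded 90 dB(A).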


Results & Discussion:

It was found that there was no significant relationship between the hearing loss of the experimental group and their self-perception of their hearing attributable to benefits from compensation claims. There was also no significant difference in the rating of hearing between the experimental and control groups.

To avail themselves of the compensation benefits available to industrial workers in compliance with government regulations, workers could have given a biased self-perception, rating their hearing handicap higher and thereby reporting more difficulties in situations where communication is involved. Portraying greater communication difficulties could have given them a better opportunity to approach employers for compensation claims. Compensation claims have been reported to be fewer in our country than in developed countries, which could be a result of a lack of awareness among workers in industrial set-ups about the compensation acts available in India. The self-ratings of the experimental group in our study could be a result of a lack of awareness about the compensation benefits available to them; this correlated with the case history, where it was found that almost all employees in the experimental group were unaware of the compensation benefits available to them or of the procedure for claiming compensation. The unbiased rating could also be due to ethical reasons motivating them to rate themselves correctly as per their self-perception of hearing, since the few employees who were aware of the compensation benefits and the claims procedure also rated their hearing similarly to the control group.

Summary & Conclusion:

An attempt was made to explore whether awareness of compensation claims had any effect on the self-perception of hearing rated by employees exposed to industrial noise. The results revealed that employees in industrial set-ups who had developed a permanent noise-induced hearing loss due to exposure to high levels of noise gave an unbiased rating of their hearing, similar to the control group, and were unaware of the compensation benefits available to them. From the present study it is evident that awareness about compensation acts and benefits for employees working in high-noise areas needs to be promoted extensively so that they can claim all the benefits available to them. Moreover, a regular hearing conservation programme with proper follow-up needs to be practised so that the hearing of employees can be monitored frequently and counselling can be provided about the regular use of ear protection devices during exposure to high levels of noise. Research on self-perception of hearing also needs to be carried out on a larger population of employees with permanent noise-induced hearing loss who are aware of the compensation acts that apply to them and are involved in the claims procedure.


  Abstract – AP1024: Assessment of Auditory Neural Coding in Children with Specific Language Impairment


Animesh Barman1, Mekhala V G2, Kavya Vijayan3 & Swapna N4

1nishiprerna@yahoo.com,2mekhalavg@gmail.com,3kavya.vijayan@gmail.com, &4nsn112002@yahoo.com

1All India Institute of Speech and Hearing, Mysuru - 570006

Introduction:

Specific language impairment (SLI) is diagnosed when a child has difficulty in producing or understanding spoken language for no apparent reason (Bishop, Hardiman, Uwer & Suchodoletz, 2007). Children with specific language impairment exhibit receptive and/or expressive language impairment in the absence of peripheral hearing loss, mental retardation, psychological or physiological problems, or environmental deprivation. There are studies which report that the difficulty faced by children with SLI could be related to subtle auditory perceptual problems or auditory agnosia (McArthur & Bishop, 2005). It has been proposed that specific language impairment (SLI) is the consequence of low-level abnormalities in auditory perception (Bishop, Hardiman, Uwer, et al., 2007). Thus, electrophysiological studies assessing the auditory structures for possible functional abnormality in children with SLI are essential. The electrophysiological studies can highlight the subtle auditory neural deficits which might, in turn, affect the expressive and receptive language.

Need for Study:

There are very limited studies which have attempted to objectively characterise the auditory problems seen in children with SLI. Previous studies on the auditory brainstem response (ABR) and frequency following response (Basu, Krishnan & Fox, 2010), the late latency response (LLR) (Bishop, Hardiman, Uwer, et al., 2007) and mismatch negativity (McArthur & Bishop, 2005; Shafer, Morr, Datta, Kurtzberg & Schwartz, 2005) have assessed neuro-audiological structures in isolation and do not indicate whether these children have a problem at a specific level of the auditory system or a combined involvement from brainstem to cortex. Thus, the present study attempts to administer a neuro-audiological test battery (ABR and LLR) to children with SLI. The study would provide insight into the subtle auditory problems in individuals with SLI and thereby help in providing appropriate management of auditory problems in children with SLI.

Aim & Objectives:

The study aimed to evaluate auditory deficits in children with SLI using neuro-audiological tests. The objectives of the study were:

  1. To compare the amplitude and latencies of auditory brainstem response (ABR) between children with SLI and control group.
  2. To compare the amplitude and latencies of long-latency responses (LLR) between children with SLI and control group.


Method:

The study was conducted on ten children in the age range of 4-7 years with bilaterally normal peripheral hearing and an Intelligence Quotient of >85, as assessed by an experienced psychologist. The children were divided into two groups: a control group consisting of five children with normal language development and a study group consisting of five children with specific language impairment. The children were evaluated through a diagnostic protocol that included the Clinical Evaluation of Language Fundamentals (CELF Preschool-2) language test (Priya, 2017), administered by an experienced speech-language pathologist.

The neuro-audiological test battery included recording of the click-evoked auditory brainstem response (ABR), which assesses the brainstem, followed by the long-latency response (LLR), which assesses the integrity of the auditory cortex. The Bio-logic system was used to record the auditory evoked potentials. Both ABR and LLR were obtained using single-channel recording, with the inverting electrode on the left/right mastoid, the non-inverting electrode (Fz) on the upper forehead and the ground electrode on the right/left mastoid. The stimuli were presented through electrically shielded insert earphones. The latencies and absolute amplitudes of waves I, III and V of the ABR were obtained using click stimuli presented at 80 dB nHL at a repetition rate of 11.1/s; a total of 1500 stimuli were averaged. Similarly, the latencies and absolute amplitudes of the P1, N1, P2 and N2 components of the LLR were obtained for a 160-ms speech stimulus /da/ presented at 80 dB nHL at a repetition rate of 1.1/s; a total of 500 stimuli were averaged.
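Evoked potentials such as the ABR and LLR are extracted by averaging many time-locked sweeps, so that the random EEG background cancels while the stimulus-locked response remains; averaging N sweeps improves the signal-to-noise ratio by roughly √N. The sketch below is a generic illustration of this averaging step with simulated data, not the Bio-logic system's actual processing.

```python
import numpy as np

rng = np.random.default_rng(0)
fs = 20000                                  # sampling rate in Hz (hypothetical)
t = np.arange(int(0.010 * fs)) / fs         # 10-ms analysis window for a click ABR

# Simulated "response": a small damped oscillation buried in EEG-like noise
true_response = 0.2 * np.sin(2 * np.pi * 900 * t) * np.exp(-t / 0.002)

def record_sweep():
    """One stimulus presentation: response plus much larger random background."""
    return true_response + rng.normal(0.0, 1.0, t.size)

n_sweeps = 1500                             # as used for the ABR in this study
average = np.mean([record_sweep() for _ in range(n_sweeps)], axis=0)

print("residual noise SD after averaging ~", round(1.0 / np.sqrt(n_sweeps), 3))
print("peak of averaged waveform:", round(float(average.max()), 3))
```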

Results & Discussion:

The waveform morphology, latency, and amplitude of the peaks of the ABR and LLR were studied. The collected data were analysed using SPSS software and the non-parametric Mann-Whitney U test, with peak amplitude and latency as the dependent variables. The mean and standard deviation of the variables were calculated for both groups. The latencies of the LLR components (P1, N1, P2, and N2) were significantly reduced (p<0.05) in children with SLI compared to the control group. There was no significant difference (p>0.05) in the latency of the LLR in the left ear or in the amplitude of the LLR in either ear. Since the N2 peak was absent in four out of five children with SLI, it could not be compared. Shallower peaks were obtained for the SLI group compared to the control group. Concerning the mean latency and amplitude of the ABR, no statistically significant difference (p>0.05) was found between the two groups. The ABR waveform morphology in the SLI group was similar to that of the control group.
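As a minimal illustration of the group comparison described above (the study itself used SPSS), the latencies of one LLR peak in the two groups of five children could be compared with a Mann-Whitney U test as follows; the latency values are hypothetical.

```python
from scipy.stats import mannwhitneyu

# Hypothetical P1 latencies (ms) for five children per group
control_p1 = [64.2, 66.8, 63.5, 65.1, 67.0]
sli_p1 = [58.9, 60.2, 57.4, 59.6, 61.0]

u_stat, p_value = mannwhitneyu(sli_p1, control_p1, alternative="two-sided")
print(f"U = {u_stat}, p = {p_value:.3f}")   # p < 0.05 would indicate a group difference
```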

Discussion

The results indicate a relationship between SLI and auditory processing abilities in these children. They suggest intact auditory processing at the brainstem level in children with SLI; this may be because clicks were used in the present study, and research using more complex stimuli such as speech, which assess higher auditory processing abilities, is needed (Johnson, Nicol & Kraus, 2005). The findings suggest that an auditory processing abnormality at the cortical level may pose a risk for normal language development in children with SLI. The speech perception deficit related to temporal processing abnormality, and the delayed transmission and processing of the speech signal, point to a disorder of cognitive functions in children with SLI, which in turn might be reflected as the latency shift observed in this population (Bellis, 1996; Chermak & Musiek, 1997). Based on this study, temporal processing can be considered an important factor for speech perception in children with SLI (Dlouhá, 2008).

Summary & Conclusion:

It appears that auditory evoked potentials can be included in the test battery to assess auditory processing skills in children with specific language impairment and to understand the role of temporal processing of auditory stimuli, especially more complex stimuli such as speech. This study was conducted with a small sample size; research needs to be carried out on a larger population for better generalisation of the results. Complex stimuli such as speech can be used to assess higher auditory processing in children with SLI.


  Abstract – AP1026: Effect of Different Domains of Cognition on Speech Perception in Noise: A Preliminary Report


Ashima Verma1 & Usha Shastri2

1abhishek.verma665@gmail.com &2shastri.usha@gmail.com

1Kasturba Medical College, Mangaluru - 576104

Introduction:

Everyday spoken language processing does not occur in a pristine acoustic environment, but rather in the presence of interfering background noise. Several factors play a role in speech perception in noise, the most prominent being age and hearing loss. Another factor that plays a vital role is cognition. Attention, memory and language are some of the cognitive abilities involved in the processes of speech detection, discrimination, understanding, and organisation. Understanding the relationship between auditory and cognitive processing has gained considerable interest recently, to a certain extent because of the need to understand and address the day-to-day demands of people with hearing loss.

Need for Study:

Understanding and maintaining a conversation in a noisy environment is highly problematic. Some of the difficulty is explained by hearing loss, although it cannot be the only factor responsible. Firstly, there can be a great difference in speech-in-noise (SiN) performance even among listeners with similar auditory sensitivity. Secondly, SiN difficulties can be found even in the absence of hearing loss, and thirdly, SiN listening difficulties can persist even after the hearing loss has been alleviated by hearing aids. Cognition has repeatedly been proposed to play a significant part in SiN perception.

Akeroyd (2008) observed that working memory correlated with SiN performance measured using sentences. However, Fullgrabe and Rosen (2016), in their review and meta-analysis of the association between working memory (reading span test) and SiN listening in normal-hearing listeners, found only a small contribution of working memory to individual differences in SiN perception. These differences in findings may have arisen from differences in the populations studied. The relationship between working memory and SiN may not be pervasive, but rather may change considerably with the age or hearing status of the listener. Various studies have used different measures of cognition and different types of speech material for SiN measurement. Hence, in spite of increasing interest in the relationship between cognitive performance and SiN perception, the picture is far from clear. It is also important to understand which cognitive domain has the major role in SiN perception. Measuring several cognitive domains will therefore make it possible to identify the particular effect each domain has on the SiN perception task and how this may account for the inconsistencies in previously reported results. To the best of our knowledge, the effect of various domains of cognition on SiN has not been studied, and the present study was designed with this in view.

Aim & Objectives:

The present study aimed to find the effect of a few cognitive domains on speech perception in noise in individuals with normal hearing sensitivity. The specific objectives were to measure SNR-50 for sentences and a few domains of cognition (sustained attention, inhibitory control, processing speed, task switching, and working memory) in individuals with normal hearing, and to assess the relationship between these different aspects of cognition and speech-in-noise perception.

Method:

Twenty native Hindi speakers in the age range of 18-35 years with normal hearing (pure-tone thresholds <15 dB HL) participated in this study. The study was approved by the ethical committee of the institute, and written informed consent was obtained from the participants. Procedure: Speech perception in noise (SNR-50) and the cognitive tests were administered in random order for each participant in a quiet, distraction-free room. All tests were presented via a Lenovo ThinkPad (T440s) laptop and delivered through headphones (Sennheiser HD280 Pro) at the most comfortable level. SiN measures were administered using the APEX software and cognitive tasks using MATLAB (R2013b).

Speech perception in noise: SNR-50 was measured using one list of 10 Hindi sentences (Jain et al., 2014). Each sentence in the list was mixed with speech-shaped noise at a particular signal-to-noise ratio (SNR), starting from -8 dB SNR. SNR-50 was measured using a one-down one-up procedure with a 2 dB step size.
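A one-down one-up rule converges on the SNR at which about 50% of responses are correct: the SNR is made harder after a correct response and easier after an incorrect one. The sketch below is a simplified illustration of such a track (2 dB step, starting at -8 dB SNR), not the APEX implementation used in the study; `present_sentence` is a placeholder for playing and scoring a sentence.

```python
import random

def present_sentence(snr_db):
    """Placeholder scoring function: True if the sentence is repeated correctly.
    A real test would play the sentence mixed with noise at snr_db."""
    return random.random() < 1 / (1 + 10 ** (-(snr_db + 9) / 2))   # toy psychometric curve

def snr50_one_down_one_up(n_sentences=10, start_snr=-8.0, step=2.0):
    snr, track = start_snr, []
    for _ in range(n_sentences):
        track.append(snr)
        snr = snr - step if present_sentence(snr) else snr + step  # 1-down 1-up, 2 dB
    # Illustrative estimate: average the SNRs visited after the first two sentences
    return sum(track[2:]) / len(track[2:])

print(f"Estimated SNR-50: {snr50_one_down_one_up():.1f} dB")
```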

Working memory (backward digit span test): Monosyllabic Hindi digits from one to nine were used as stimuli. Inhibitory control (Stroop task): Two speech tokens, /dāye/ and /bāye/, meaning right and left respectively in Hindi, were used. Congruent trials consisted of the word /dāye/ in the right ear and /bāye/ in the left ear, while incongruent trials consisted of /bāye/ in the right ear and /dāye/ in the left ear. The task of the participant was to indicate the ear in which the stimulus was presented, while ignoring the stimulus meaning. Simple reaction time: This task evaluated the time needed to detect and respond to a simple stimulus; the participant had to press a button as soon as they heard a pure tone.

Sustained attention: Numbers between 1 and 7 were presented auditorily, and the participants had to press a button only when they heard the numbers 1 or 5. Three trials, each consisting of 70 stimuli, were presented.

Task switching: A visual arrow task consisting of two subtasks was used. In the single task, the participant had to name the direction in which the arrows pointed, namely up or down. In the mixed task, the arrows appeared in two colours, green and red; whenever a green arrow flashed on the screen the participant had to name the actual direction, but whenever a red arrow flashed, he/she had to name the opposite direction.

Results & Discussion:

Results showed that the mean SNR-50 for sentences was -9 dB ± 2.12 dB, with a range of -5.5 dB to -13 dB, indicating individual variability among listeners with normal hearing. Among the cognitive domains, the mean backward digit span was 3.89 ± 1.38. In the auditory Stroop task, the mean reaction time for congruent stimuli (934.1 ms) was shorter than that for incongruent stimuli (967 ms). The mean auditory simple reaction time was 589.58 ms ± 125 ms. Reaction time increased across the three trials of the sustained attention task, indicating reduced attention on the task over time. Comparison of reaction times to congruent stimuli in the single task versus the mixed task showed that reaction time was longer in the mixed task; this is expected, as the mixed task is more taxing than the single task.

Pearson correlation coefficients were computed between SNR-50 and the various measures of cognition. Results revealed that only backward digit span (r=-0.53) and the task-switching difference in reaction time (r=-0.56) showed a significant negative correlation with SNR-50 (p<0.05). That is, individuals with a larger backward digit span and better task-switching ability showed better perception of speech in the presence of noise. Our finding of an effect of working memory on speech perception in noise supports previously established results. Task switching involves shifting attention between two tasks, and speech-in-noise perception involves attending to the relevant signal while ignoring the irrelevant message; this shifting of attention might be influenced by an individual's task-switching ability.
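The correlation analysis reported above can be outlined as follows; the digit-span and SNR-50 values are hypothetical and merely illustrate the expected negative relationship (larger span, lower and therefore better SNR-50).

```python
from scipy.stats import pearsonr

# Hypothetical data for eight listeners
backward_digit_span = [3, 4, 5, 3, 6, 4, 5, 2]
snr50_db = [-7.5, -9.0, -10.5, -8.0, -12.0, -9.5, -11.0, -6.5]

r, p = pearsonr(backward_digit_span, snr50_db)
print(f"r = {r:.2f}, p = {p:.3f}")   # negative r: better span goes with lower (better) SNR-50
```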

Summary & Conclusion:

We studied the effect of five domains of cognition on the perception of sentences in noise. Among the cognitive domains tested, an individual's working memory and task-switching ability had the greatest influence on SiN perception. The results of this study help in choosing which domains of cognition, influencing SiN perception, should be tested for various clinical and research applications.


  Abstract – AP1029: Epidemiological Study on Pediatric Unilateral Cochlear Implantation in Gujarat: Disparity among Children of Different Age, Gender, Region and Type of Implant Based on Manufacturer


Gunjan Mehta1, Anuj Kumar2, Bebek Bhattarai3, Kalpesh Bheda4 & Mayur Chaudhary5

1gunjanmehta06@gmail.com,2anujkneupane@gmail.com,3bebek.bhattarai@gmail.com,4bhedakalpesh121@gmail.com, &5mc5551613@gmail.com

1C.U Shah Medical College and Hospital, Gujarat - 363001

Introduction:

Hearing impairment is the most common sensory deficit, affecting a vast number of individuals around the globe (WHO, 2019). Worldwide, 360 million individuals suffer from hearing impairment, comprising a considerable 5.3% of the world's population. Hearing loss has adverse effects on different aspects of auditory functioning, including detection, discrimination, resolution and noise avoidance. These deficits lead to difficulty in speech perception and production that is exacerbated in the presence of noise. Reduced hearing and communication ability have a strong association with depression and functional decline, contributing to degraded quality of life (Mulrow, 1996; Appollonio, 1990). Therefore, the need for aural rehabilitation is immense to reduce the deficits induced by hearing impairment and the resulting degradation in quality of life (Maki-Torkko, 2001).

Attempts to provide hearing by electrical stimulation of the auditory system have a long history, going back to around the late 18th century with the discovery of the electrolytic cell (McCormick, 2003). Cochlear implants are surgically implanted neuro-prosthetic devices which give the sensation of hearing to individuals with irreversible sensorineural hearing loss. Few studies have attempted to assess how far implant technology reaches various sectors of society, information that would help in the formulation and implementation of apt policies to reduce barriers, if any.

A database study by Stern et al. (2005) reported 124 children (mean age of 5 years) who underwent CI, all of whom were medically insured; 61% of the study population was male. Cochlear implantees were more often white and Asian pediatric patients than black and Hispanic patients, and a higher concentration of pediatric cochlear implantees resided in wealthier regions. Another database study, by Chang et al. (2010), reported the need to create awareness regarding good hygiene, periodic follow-up and sequential bilateral implantation in individuals undergoing CI. A total of 133 pediatric patients underwent CI during the period 1996 to 2008 at Cleveland, Ohio, all of whom were medically insured. The study revealed the importance of adequate reimbursement for equal access to CI irrespective of socio-economic status.

Need for Study:

Even though a few studies on the prevalence of CI have been performed in the western world, there is a lacuna in the South Asian region, which has a higher prevalence of deafness (WHO, 2019). The huge burden of hearing impairment across the region, and in India, is largely avoidable. The incidence and prevalence of hearing impairment in India are significantly elevated, with 21 per 1000 children having hearing impairment (NSSO, 2002). Various schemes have been initiated to assist children with severe to profound hearing impairment in procuring CI (Kumar et al., 2017). One such scheme, the School Health programme, has been running in the state of Gujarat, where the overall prevalence of communication disorders is 4.09%, with otological disorders ranked highest (3.30%) (Sinha et al., 2016). However, the prevalence of CI under the programme across Gujarat has never been studied, and such data could give insight into the impact of the programme. A study of the age of implantation, gender, region of belonging and type of implant by manufacturer since the initiation of the programme could also help in understanding its efficacy.

Aim & Objectives:

The present study aimed to estimate the prevalence of unilateral CI in children based on their age, gender, region of belonging and the type of implant by manufacturer under the School Health programme at various government hospitals across the state of Gujarat between 2013 and 2019.

Method:

A register-based, retrospective study was employed, reviewing the clinical reports of cochlear implantees who underwent surgery at different government hospitals of the Saurashtra region, Gujarat, under the School Health programme between 2013 and 2019. Records of children aged between 6 months and 6 years implanted during this period were reviewed. Details from the case history, such as gender, age of implantation, region of belonging and type of implant by manufacturer, were documented. With this information, the data were analysed to determine the prevalence of pediatric unilateral CI.
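The prevalence figures reported below amount to counts and percentages over the reviewed records. A minimal sketch of such a tabulation is shown here; the record fields and example entries are hypothetical and do not reflect the actual register format.

```python
from collections import Counter

# Hypothetical register entries: (gender, age band at implantation, region, manufacturer)
records = [
    ("M", "3-4y", "Saurashtra", "Cochlear Nucleus"),
    ("F", "2-3y", "Central Gujarat", "Cochlear Nucleus"),
    ("M", "6-7y", "Saurashtra", "MED-EL"),
    ("M", "4-5y", "North Gujarat", "Cochlear Nucleus"),
]

def percentages(field_index):
    counts = Counter(record[field_index] for record in records)
    total = sum(counts.values())
    return {key: round(100 * n / total, 1) for key, n in counts.items()}

print("By gender:", percentages(0))
print("By region:", percentages(2))
```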

Results & Discussion:

A total of 216 children underwent CI between 2013 and 2019 at hospitals of the Saurashtra region, Gujarat, under the School Health programme. The study population comprised 61% (132) boys and 39% (84) girls. This result is consistent with the study by Sinha et al. (2016), in which the prevalence of otological conditions in Gujarat was relatively higher in males (1.66%) than females (1.63%). The NSSO (2002) report also suggests that the prevalence of hearing loss is higher in males than in females. These findings support the result of the present study, in which a comparatively higher number of boys underwent implantation.

The cochlear implant recipient population varied across age of implantation: 6% (12) of recipients were implanted between the ages of 1-2 years, 14% (31) between 2-3 years, 25% (53) between 3-4 years, 17% (37) between 4-5 years, 16% (34) between 5-6 years and 22% (49) between 6-7 years. The majority of these children were thus implanted after the critical age for language learning, which might affect the prognosis. This points to a key issue in implementing early diagnosis and intervention programmes effectively, as early intervention can obviate communication disorders in up to 40% of the population (Kumar et al., 2017). The School Health programme has been conducting free CI for children across all six regions of Gujarat; hence, although all surgeries were performed in the Saurashtra region, the data were analysed to understand the region of belonging of these cochlear implantees. It was found that 73% (158) of the children were from the Saurashtra region, 10% (22) from Central Gujarat, 9% (20) from North Gujarat, 3% (7) from South Gujarat and 4% (9) from Kutch. This result can be attributed to the easier accessibility of health facilities for children within the Saurashtra region in comparison to other regions of Gujarat. The 10% (22) of children from Central Gujarat may be attributed to the close proximity of that region to Saurashtra. These findings suggest the need for easy access to health facilities for effective early assessment and intervention of communication disorders.

Over the period from 2013 to 2019, different implant manufacturers provided devices under the programme; hence, an attempt was made to determine the number of implantees by manufacturer. This revealed that 88% (189) of children were implanted with a Cochlear Nucleus device and 12% (27) with a MED-EL device. The result can be attributed to the fact that different manufacturers supplied devices under the School Health programme at different periods: the majority of implantees received the Cochlear Nucleus device because the maximum number of implantations took place from 2016 onwards, whereas MED-EL supplied devices from 2013 to 2016, when fewer implantations were performed at the beginning of the programme.

Summary & Conclusion:

Hence, the present study documents differences in age of implantation, gender, region of belonging and type of implant by manufacturer. This information could be used to develop a database that includes information from different regions of Gujarat and across India, which would help in better understanding the prevalence of cochlear implantation across regions. The results of the present study could also be used to plan and execute policies for the identification, management, and rehabilitation of individuals with hearing loss.


  Abstract – AP1030: P300 Responses to Speech and Music Stimuli: An Effect of Musical Training


Mudra Prakash1, Abigail Miriam Soni2 & Rucha Vivek3

1shettyswarup12@gmail.com,2abigail3004@gmail.com, &3ruchaavivek@gmail.com

1School of Audiology and Speech language Pathology, Bharati Vidyapeeth (Deemed to be) University, Pune - 411030

Introduction:

Music is the art of combining different sounds in order to express ideas and emotions through elements such as pitch, timbre, and rhythm. The years of musical experience earned by musicians enable them to understand and identify the fine-grained acoustics of music (Kraus & Chandrasekaran, 2010), as well as the individual notes of a melody, better than non-musicians. Gizzi and Albi (2017) state that musicians had a symmetrical distribution of music processing across both hemispheres, whereas non-musicians had a clear right-hemispheric dominance. Interhemispheric neural connectivity was greater in musicians due to the plastic changes produced as a result of musical training.

Literature reveals that musical experience and/or training produces changes in the auditory processing of both linguistic and non-linguistic input (cross-domain plasticity), both at the cortical and subcortical level (Patel, 2013). Interestingly, the differences in the cortical auditory processing can be seen as early as one-year post commencement of the training (Skoe, Kraus, & Clark, 2009). Apart from the structural changes, different patterns of neural activation were observed, like heightened responses to simple artificial tones and stronger ones to the stimulus of their own instrument (Skoe, Kraus, & Clark, 2009). The responses for speech stimuli or music are dependent on auditory learning and exposure, specifically, on the amount of practice, age at which training started, nature of training, practice methods and context of the stimulus (Kraus & Chandrasekaran, 2010).

Researchers have shown that superior information processing is directly correlated with shorter latencies and larger amplitudes of the P300, which receives contributions from the thalamus, auditory cortex, temporo-parietal areas, and portions of the frontal lobe (Courchesne, 1978). The factors influencing the P300, apart from individual differences such as age, gender, intelligence quotient and effect of medications, include state variables such as attention, cognition, sleep quality, drugs and arousal levels (Van Dinteren et al., 2014).

Need for Study:

The P300 has been reported as an indicator of cortical auditory processing but is under-utilised in explaining the effects of musical training. Using an electrophysiological approach, primarily the P300, we aim to study the advantages of musical training not just for auditory perception but also for the cognitive processing of different auditory stimuli and the hemispheric specialisation of the same.

Aim & Objectives:

The aim of the present study is to evaluate the effect of musical training on cortical auditory processing with P300 using both speech sounds and music as stimuli in musicians compared to non-musicians.

Method:

Participants:

Twenty healthy adults having normal hearing sensitivity (PTA<20 dB HL at octave frequencies between 0.25 kHz to 8 kHz) in the age range of 18-30 years participated in the current study. Informed consent was obtained in accordance with the ethical rules followed at Bharati Vidyapeeth School of Audiology and Speech-language Pathology. Subjects were divided into two age and gender-matched groups namely Musicians (M) (n=11, mean age=23.09 years) and Non-musicians (NM) (n=9, mean age=22.67 years).

The musicians had received professional musical training for a minimum of three years and reported varied hours of practice per week. The non-musicians, on the other hand, reported no professional training in music at any point in their lives.

Stimuli:

Two kinds of stimuli were used (speech and non-speech), recorded in a soundproof environment and edited using Audacity software for amplification and normalisation and to remove unwanted noise or clicks. The speech stimuli were the syllables /da/ and /ba/, recorded in a male voice; /da/ was used as the frequent stimulus and /ba/ as the infrequent stimulus. The non-speech stimuli were E major (frequent) and F major (infrequent) chords recorded on a piano (duration = 150 ms), which are minimally spaced on the musical scale.

Electrophysiological Procedure:

Event-related potentials (P300) were recorded in a soundproof environment using the Bio-Logic Navigator Pro AEP instrument. The participants were seated in a comfortable position and were instructed to count the number of infrequent stimuli and to avoid eye movements so as to reduce ocular artifacts. The presentation level of both stimuli was 70 dB nHL, delivered through insert earphones (ER-3A). Responses were obtained using a vertical electrode montage and dual-channel recording with five disc electrodes in the conventional 10-20 placement (non-inverting: P3, P4; inverting: M1, M2; ground: Fpz). Contact impedance was ensured to be ≤5 kΩ with inter-electrode impedance ≤3 kΩ, and the time window was set at 533 ms. The ratio of frequent to infrequent stimuli was 4:1 and pseudo-random presentation was used. Multiple traces of 80-120 sweeps for the frequent stimuli were recorded and the grand average waveforms were analysed for latency and amplitude. The bandpass filter was set between 1 Hz and 100 Hz for online filtering, and responses were filtered offline from 1-30 Hz for ease of interpretation.
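The P300 oddball paradigm interleaves frequent and infrequent stimuli at the stated 4:1 ratio in a pseudo-random order. The sketch below shows one way such a presentation order could be generated; the constraint that two infrequent stimuli never occur back-to-back is an assumption added for illustration, not a stated property of the study's protocol.

```python
import random

def oddball_sequence(n_infrequent=30, ratio=4, seed=1):
    """Pseudo-random list of 'frequent'/'infrequent' labels at ratio:1, with no two
    infrequent stimuli in a row (illustrative constraint, via rejection sampling)."""
    rng = random.Random(seed)
    seq = ["frequent"] * (n_infrequent * ratio) + ["infrequent"] * n_infrequent
    while True:
        rng.shuffle(seq)
        if all(not (a == b == "infrequent") for a, b in zip(seq, seq[1:])):
            return seq

sequence = oddball_sequence()
print(sequence[:12], "... infrequent count:", sequence.count("infrequent"))
```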

Results & Discussion:

The study was conducted on a total of 20 participants divided into two groups namely musicians and non-musicians, containing 11 and 9 individuals respectively. Shapiro-Wilk test of normality revealed that the data followed a normal distribution. Hence, parametric tests were used for further analysis. Results of the Independent sample t-test revealed that the latencies and amplitudes of P1, N1, P2, and N2 did not differ significantly (p>0.05) for speech and chords for both groups. However, a significant difference was seen in the P300 latency and amplitude for both the stimuli. With speech as the stimulus, P300 amplitude showed a marked difference in both ears (Right: p=0.029 and Left: p=0.001) across groups.

Using chords, a notable difference was observed in the P300 amplitude for both ears (Right: p=0.004 and Left: p=0.001) as well as in its latency in the left ear (p= 0.031) suggesting cerebral specialization in musicians as well as recruitment of attention, cognition and language networks for music which was not noticeable in non-musicians. P2 response amplitude for the left ears also differed across the two groups (p=0.046) using music as stimuli. This implies a preferential encoding of music in musicians' auditory association areas. Since a significant difference is found in both the ears between the two groups, it suggests that the plastic changes not only occur in the dominant hemisphere for music but also the non-dominant hemisphere. Also, the auditory processing skills not only improve for musical stimuli, but also for speech indicating transfer effects causing plastic changes for language processing in the non-dominant hemisphere. The results are hence in line with previous research that indicate a positive effect of musical training on language processing abilities (Kraus & Chandrasekaran, 2010).

Summary & Conclusion:

In this comparison between Musicians and Non-musicians, changes in cortical processing are evident in the auditory evoked potentials obtained from musicians especially in P300 responses which receive contribution from attention, cognition, and linguistic networks. A significant difference in the latency and amplitude of P300 between the groups implies better speech and non-speech discrimination in Musicians. We aim to continue this research by increasing the sample size and segregating the musicians into further groups according to the specific instruments they are trained to play.


  Abstract – AP1032: Investigating Knowledge of Primary School Teachers and Medical Professionals on Central Auditory Processing Disorders in Mumbai Maharashtra


Naima Zehra1, Prabodhini Ramchandra2 & Rajeev R Jalvi3

1naimazehra78614@gmail.com,2kagneprabodhini@gmail.com, &3rjalvi@gmail.com

1Ali Yavar Jung National Institute of Speech and Hearing Disabilities (Divyangjan), Mumbai - 400050

Introduction:

The act of listening involves a complex interaction between the peripheral and central auditory systems. Central auditory processing disorder (CAPD) refers to difficulties in the perceptual processing of auditory information in the central auditory nervous system, as demonstrated by poor performance in one or more auditory skills.

Children with central auditory dysfunction are part of an often misdiagnosed or mislabeled group of children with communication disorders who may experience learning and behavioral or emotional problems in school.

Need for Study:

The purpose of this study is to investigate the awareness of CAPD in children among primary school teachers and medical professionals. The levels of awareness and knowledge of CAPD among the public are currently unknown.

In this study, the following points are considered:

  1. Knowledge of central auditory nervous system anatomy and physiology is still extremely limited.
  2. Traditional auditory tests are not designed to assess complex central auditory nervous system processes, especially in individuals with subtle disorders.
  3. Professional judgments are too frequently based on theories that are not adequately confirmed by reality, particularly in terms of auditory linguistic relationships.
  4. This disorder is viewed too simplistically.
  5. Meaningful interdisciplinary dialogue about such problems has been extremely limited.


Aim & Objectives:

AIM:

This study reports the results of an investigation of awareness and knowledge of CAPD among teachers and medical professionals for better educational planning and appropriate diagnosis of a child with CAPD.

OBJECTIVES:

Children with CAPD present unique problems for their parents, teachers and the many professionals who are called upon to evaluate and prescribe management for a given child's communication problems and the resulting academic and social difficulties.

There is a need to make people aware of this disorder for the better diagnosis and meaningful educational planning of a child with CAPD. The tests which are effective in assessing CAPD are not yet widely used or understood by professionals.

Even when central auditory problems are diagnosed, there is often a wide gap between obtaining a diagnosis and implementing an appropriate educational program for the child.

Method:

A questionnaire consisting of 10 closed-ended questions in English was developed for this study and distributed to the participants, who responded by marking yes/no. The nature of the study and the questionnaire were explained to the participants before they voluntarily took part. The study design and questionnaire were approved by five audiologists of AYJNISHD. A total of 123 participants, comprising 83 primary school teachers and 40 doctors from various government and private English-medium schools and hospital set-ups respectively, were surveyed about their knowledge of CAPD. Among the doctors, information was collected from otolaryngologists, pediatricians, neurologists and psychiatrists. Separate statistical analyses were carried out for the two groups.

Results & Discussion:

Awareness and knowledge of CAPD appear to be very poor among primary school teachers and limited among medical professionals. Participants with the most accurate knowledge of CAPD were few in number and were mostly otolaryngologists. The study found that 70% of the medical professionals were aware of the term central auditory processing disorder, while 65% of them had an idea of its signs and symptoms. Doctors reported that they rarely encounter these children in the medical set-up; only 32.5% of doctors reported that they had dealt with such cases. As far as the implementation of a test battery approach for better diagnosis of these children is concerned, only 25% of doctors were aware of the different tests used to diagnose the disorder, and 87.5% of doctors reported that there is only a meagre amount of information available regarding the disorder. The disorder is usually misdiagnosed or mislabeled as other disorders; only 45% of the doctors reported that they could actually differentiate between central auditory processing disorder and other disorders, e.g., learning disability and attention deficit hyperactivity disorder. Only 30% of doctors were aware of the treatment and rehabilitation approaches indicated for these children to overcome the adverse effects of the disorder, and about 95% of the doctors reported that there is a need to create awareness among doctors.

The study found that teachers were least aware of the disorder, its signs and symptoms, and the teaching techniques and programmes implemented for the better academic achievement of these children. Only 25.30% of teachers reported that they were aware of the disorder, and 15.7% reported that they were aware of its signs and symptoms, with a very small percentage (20.5%) having an idea of the problems faced by these children in school. 16.9% of teachers reported that they had actually encountered such children in the educational set-up. Only 12.03% of teachers were aware of the different strategies that can be applied to improve these children's academic skills, and only 9.6% were aware of the basic tests that can be carried out with these children. Teachers found it very difficult to differentiate CAPD from other learning disorders; only 21.7% of teachers knew the difference between CAPD and other learning disorders. Only 9.6% of teachers were aware of the treatment plans for these children. The study also found that 86.7% of teachers think there is a need to create awareness about the disorder among teachers.

Summary & Conclusion:

This research revealed that the least informed target population was primary school teachers. There is a great need to create awareness among teachers and medical professionals. During the survey, it was observed that teachers were curious to know more about the various kinds of auditory disorders because of their very poor knowledge of communication disorders and especially their management.

Awareness can be created through seminars, camps, videos, etc. Doctors should also have information about this disorder so that they can provide referrals for audiological evaluation and appropriate diagnosis of an affected child. The survey showed that, among doctors, otolaryngologists were the most aware of the disorder. Because of the limited information about this disorder, there is a risk of children being misdiagnosed with other learning disorders. Teachers can be trained so that they are able to provide quality education to these children. There is also a need to create awareness about the role of the audiologist and speech-language pathologist (ASLP) in the management of various communication disorders. Further research can be conducted to develop efficient tests that can help in diagnosing the disorder in a child.


  Abstract – AP1033: Temporal Release of Informational and Energetic Masking in Younger and Older Adults


Pravena Nantha Balan1, Deepika .M2, Sanjana S3 & Saransh Jain4

1princesspravena19@gmail.com,2deepikam2403@gmail.com,3sanjanasingh275@gmail.com, &4saranshavi@gmail.com

1JSS Institute of Speech and Hearing, Mysuru - 570004

Introduction:

With advancing age, the difficulty in following a conversation in the presence of noise increases many fold; the problem is marked when multiple people are talking simultaneously. Peripheral hearing loss, cognitive abilities, and auditory processing all contribute to speech perception difficulties in older adults. Apart from these intrinsic factors, external factors such as the type of noise affect the extent of the deficit in speech perception, as different types of noise affect spectral and temporal resolution abilities in older adults differently.

Background noise causes two types of masking effects. When the noise overlaps the temporal and frequency content of the target signal, it causes an energetic masking effect; speech perception in speech-weighted or speech-shaped noise is a typical example, where the signal and noise interact at the peripheral auditory system and the audibility of the signal is reduced. On the other hand, when speech is heard in the presence of other background speech, causing speech-in-speech masking, it is difficult to extract the meaning of the speech at the cortical level. This effect is known as informational masking; speech perception in the presence of speech babble is a typical example.

Multiple ways of reducing the effect of masking are described in the literature. One such way is release of masking, which can be spatial or temporal in nature. Temporal release of masking refers to the presentation of the signal and the masker in different time frames, such that the signal can be perceived in the glimpses of the masker, i.e., at the points of maximum signal-to-noise ratio.

Need for Study:

Earlier research studies have focused on the effect of increasing age on the perception of speech under energetic and informational masking. However, the temporal release of energetic and informational masking has not been investigated explicitly in younger and older adults. Since the temporal release of masking may be a tool to improve speech intelligibility through the hearing aids and cochlear implants, it is essential to study the amount of temporal release of energetic and informational masking.

Aim & Objectives:

The present study aimed to investigate the temporal release of energetic and informational masking in younger and older adults.

Method:

A total of 100 standard Kannada sentences were selected as stimuli for testing speech perception abilities; 50 sentences were mixed with noise to test the informational masking effect and the remaining 50 to test the energetic masking effect. The sentences were presented to 60 adults (30 younger adults aged 18-25 years; 30 older adults aged 50-60 years) with normal hearing sensitivity (PTA <15 dB HL; SRT within ±6 dB of PTA; SIS >90%). Written informed consent was obtained from each participant prior to testing, and appropriate permission was taken from the institutional ethical board.

The energetic masking effect was assessed by adding speech-spectrum-shaped noise corresponding to each sentence at +2, 0, -2, -4 and -6 dB SNR, such that 10 sentences were presented at each SNR level. The speech-spectrum noise was generated and added at the desired SNR level using MATLAB. The temporal release of energetic masking was assessed by increasing the duration of the pause between each word spectrum in the speech-spectrum-shaped noise; the pause duration was doubled so as to maintain a noise duty cycle of 0.5. The effect of informational masking was assessed using a speech babble developed on the lines of the 'International Speech Test Signal'. The speech babble consisted of concatenated speech segments of four female speakers, each having a different mother tongue, i.e., Tamil, Telugu, Malayalam, and Hindi; the segmentation and remixing made the babble unintelligible. The babble was matched to the long-term average spectrum of female speech and to the short-term spectral variation of continuous natural speech. It was added to the sentences at +2, 0, -2, -4 and -6 dB SNR, such that 10 sentences were processed at each SNR level; the RMS levels of the sentence and noise were matched prior to mixing. Temporal release of informational masking was assessed by inserting silence between the concatenated speech samples, such that at no point did the noise duty cycle exceed 0.5. The stimuli were presented to each participant to assess the effects of informational and energetic masking and the temporal release of each.
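Mixing a sentence and a masker at a nominal SNR typically involves matching the RMS levels first (as stated above) and then scaling the masker. The sketch below is a generic Python illustration of that scaling step, not the authors' MATLAB code, and uses placeholder signals in place of the recorded sentences and babble.

```python
import numpy as np

def mix_at_snr(speech, noise, snr_db):
    """Scale `noise` so the speech-to-noise RMS ratio equals snr_db, then add it to speech."""
    noise = noise[:len(speech)]
    rms = lambda x: np.sqrt(np.mean(x ** 2))
    noise = noise * (rms(speech) / rms(noise))   # RMS-match masker to the sentence
    noise = noise * 10 ** (-snr_db / 20)         # then offset it to the desired SNR
    return speech + noise

fs = 44100
t = np.arange(fs) / fs
sentence = 0.1 * np.sin(2 * np.pi * 220 * t)               # placeholder "speech"
babble = np.random.default_rng(0).normal(0.0, 0.1, fs)     # placeholder masker

mixed_conditions = {snr: mix_at_snr(sentence, babble, snr) for snr in (2, 0, -2, -4, -6)}
```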

The sentences were presented in random order to each participant, who was asked to repeat them. The repeated sentences were recorded using Alvin software installed on a personal computer. The output from the computer was routed via a calibrated audiometer equipped with Sennheiser HDA-202 supra-aural headphones. The stimuli were presented binaurally at each participant's most comfortable loudness level, and the responses were recorded using an Audio-Acoustica ATIII unidirectional microphone. The entire testing was carried out in a sound-treated room.

Results & Discussion:

The data were normally distributed (Shapiro-Wilk, p>0.05); hence, parametric statistics were used. The SNR-50 values for sentence perception under the informational and energetic masking conditions, and under temporal release of informational and energetic masking, were calculated using logistic regression with nonlinear interpolation. The SNR-50 value obtained for each participant was compared across masking type, across temporal release conditions, and between younger and older adults using a three-way ANOVA.
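SNR-50 can be estimated by fitting a logistic psychometric function to the proportion of sentences repeated correctly at each SNR and reading off the 50% point. The following is a minimal sketch of that idea with hypothetical scores; it is not the exact regression and interpolation routine used in the study.

```python
import numpy as np
from scipy.optimize import curve_fit

def logistic(snr, midpoint, slope):
    return 1.0 / (1.0 + np.exp(-slope * (snr - midpoint)))

# Hypothetical proportion-correct scores at the five SNRs used in the study
snrs = np.array([2.0, 0.0, -2.0, -4.0, -6.0])
prop_correct = np.array([0.95, 0.88, 0.70, 0.45, 0.20])

(midpoint, slope), _ = curve_fit(logistic, snrs, prop_correct, p0=[-3.0, 1.0])
print(f"SNR-50 ≈ {midpoint:.1f} dB")   # the SNR at which the fitted function crosses 50%
```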

The results revealed a significant effect of masking condition [F(1,58)=17.26, p<0.001, ηp²=0.61], with responses being better for energetic masking than for informational masking; the SNR-50 scores were approximately 1.5 dB better in the energetic masking condition. The difference in SNR-50 scores for energetic masking with and without temporal release was not statistically significant [F(1,58)=1.02, p=0.19]. However, the SNR-50 scores for informational masking were significantly better in the temporal release condition [F(1,58)=12.57, p<0.001, ηp²=0.43]. These results indicate that the release of masking reduces the cognitive load more than the audibility load, in both younger and older adults. A significant effect of age was also observed, where the effects of energetic masking [F(1,58)=7.19, p=0.021, ηp²=0.42] and informational masking [F(1,58)=22.36, p<0.001, ηp²=0.70] were significantly greater for older than for younger adults.

The effect of the temporal release of masking in younger and older adults was estimated while controlling for the effects of energetic and informational masking. The SNR-50 scores for energetic and informational masking were entered as covariates, and one-way ANCOVA results indicated that younger adults benefit more from the temporal release of energetic masking, whereas older adults benefit more from the temporal release of informational masking.

Summary & Conclusion:

Based on the results of the present study, it may be inferred that the temporal release of informational masking plays an essential role in speech perception in noise, especially in older adults. Thus, it may be suggested that the signal processing strategies to improve temporal release of masking should be adopted for hearing aid and cochlear implant prescription in older adults.


  Abstract – AP1034: The Functioning of Otolith Organs and Semicircular Canals in Individuals with Type-II Diabetes Mellitus


R. Rajesh Kumar1, Prawin Kumar2, Vipin P G3, Kumari Apeksha4 & Niraj Kumar Singh5

1jordan.rajesh.kumar14@gmail.com,2prawin_audio@rediffmail.com,3vipinghosh78@gmail.com,4apeksha_audio@yahoo.co.in, &5niraj6@gmail.com

1All India Institute of Speech and Hearing, Mysuru - 570006

Introduction:

Diabetes mellitus (DM) is a metabolic disease associated with a rise in blood glucose levels. Type 2 diabetes mellitus (T2DM) constitutes 80-90% of all DM and is a major public health issue (Amod et al., 2012). Individuals with DM are more likely to develop hearing loss, tinnitus, and dizziness due to macro- and microvascular complications in the inner ear caused by impaired glucose metabolism (Maia & deCampos, 2005). The progression of the disease may lead to comorbid hearing and vestibular symptoms, at least in a few individuals (Yikawe et al., 2017; Razzak et al., 2015).

Need for Study:

The vestibular system consists of the otolith organs and the semicircular canals. While the ocular and cervical vestibular evoked myogenic potentials (oVEMP & cVEMP) assess the functioning of the otolith organs (Halmagyi et al., 1994; Rosengren et al., 2005), the video head impulse test (vHIT) assesses all six semicircular canals. The use of cVEMP, oVEMP and vHIT can therefore provide information about the entire vestibular periphery in individuals with T2DM; however, to the best of our knowledge, there is no study reporting all three test outcomes in the same individuals. Yikawe et al. (2017) stated that an increase in the duration of T2DM was related to an increase in hearing thresholds. Since the inner ear houses both the hearing and balance systems, the extent to which balance functions are impaired might depend on the duration of the disease.

Aim & Objectives:

To investigate vestibular functioning in individuals with T2DM and to establish whether or not the effects are duration dependent.

Method:

Forty individuals (80 ears) in the age range of 40-60 years participated in the present study. Among them, 30 had T2DM with a disease duration ranging from 1 year to 25 years; the control group comprised 10 non-diabetic individuals. The diagnosis of T2DM was made by a general physician. Contralateral rectified oVEMPs were obtained with the non-inverting electrode 1 cm below the lower eyelid and the inverting electrode 2 cm below the non-inverting one, whereas ipsilateral rectified cVEMPs were obtained with the non-inverting electrode on the belly of the sternocleidomastoid muscle and the inverting electrode at the sternoclavicular junction; the ground electrode was on the forehead for both tests. Both responses were elicited by a 500-Hz tone-burst (intensity = 125 dB peSPL, rise-plateau-fall time = 2-1-2 ms, repetition rate = 5.1 Hz). The analysis window was kept at 70 ms, including a 15-ms pre-stimulus recording. vHIT responses were obtained by jerking the head in the plane of the respective canals while the participant maintained gaze on a wall-fixed target straight ahead. Additionally, the participants' heads were turned by 45° to the left or right for the right anterior-left posterior (RALP) and left anterior-right posterior (LARP) planes, respectively. The direction of head impulses was randomised and the extent was limited to between 10° and 20°, with minimum peak velocities of 100°/s for the vertical canals and 150°/s for the lateral canals.
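On the vHIT, the vestibulo-ocular reflex (VOR) gain is commonly expressed as the ratio of the eye-velocity response to the head-velocity stimulus, for example as the ratio of the areas under the de-saccaded eye-velocity and head-velocity curves during the impulse. The sketch below illustrates that ratio with simulated velocity traces; it is not the algorithm of any particular vHIT device.

```python
import numpy as np

fs = 250                                   # video frame rate in Hz (hypothetical)
t = np.arange(0, 0.15, 1 / fs)             # 150-ms head-impulse window

# Simulated head impulse peaking near 150 deg/s, with a slightly weaker eye response
head_velocity = 150 * np.exp(-((t - 0.075) / 0.03) ** 2)
eye_velocity = 0.95 * head_velocity        # an ideal VOR would give a gain close to 1.0

vor_gain = np.trapz(np.abs(eye_velocity), t) / np.trapz(np.abs(head_velocity), t)
print(f"VOR gain ≈ {vor_gain:.2f}")        # markedly reduced gains suggest canal hypofunction
```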

Results & Discussion:

The response rates and abnormality rates were compared using the test for equality of proportions; latencies, amplitudes and VOR gains were compared using the Mann-Whitney U test, and correlation analyses were done using Spearman's rank correlation. The response rate of oVEMP was 100% (20 out of 20 ears) in the control group, which was significantly higher than the response rate of 78.33% (47 out of 60 ears) in the T2DM group [Z=2.27, p=0.022]. Compared to the control group, the T2DM group had significantly longer n1 [Z=-3.78, p=0.000] and p1 [Z=-3.216, p=0.001] latencies and smaller peak-to-peak amplitudes [Z=-2.385, p=0.017]. These findings are in consonance with those of Konukseven et al. (2015), who reported significantly prolonged oVEMP latencies in T2DM. However, contradictory findings were reported by Kalkan et al. (2018), who found no significant difference in latencies between T2DM individuals with and without diabetic neuropathy. The difference between the present study and that of Kalkan et al. (2018) could be their exclusion of individuals with vestibular symptoms, which would have removed individuals in whom the vestibular system was affected by T2DM. The prolongation of VEMP latencies could be the result of complications such as sensory or motor polyneuropathy in individuals with T2DM (Konukseven et al., 2015); Murofushi et al. (2001) stated that prolonged VEMP latencies could also occur due to a retro-labyrinthine demyelinating lesion. The response rate of cVEMP was 100% (20 out of 20 ears) in the control group, which was significantly higher than the 81.66% (49 out of 60 ears) response rate in the T2DM group [Z=2.06, p=0.039].
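The response-rate comparisons above rest on a test for the equality of two proportions. As a check, the sketch below applies the standard pooled two-proportion z-test to the reported oVEMP response rates (20/20 control ears vs. 47/60 T2DM ears) and reproduces a z value close to the reported 2.27.

```python
import math

def two_proportion_z(x1, n1, x2, n2):
    """Pooled two-proportion z statistic for H0: p1 == p2."""
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    return (p1 - p2) / se

z = two_proportion_z(20, 20, 47, 60)   # oVEMP present: control ears vs. T2DM ears
print(f"z = {z:.2f}")                  # ≈ 2.27, in line with the reported value
```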

Compared to control group, T2DM group revealed significantly reduced peak-to-peak amplitude of P13-N23 complex [Z=-3.966,p=0.000]. While cVEMP response rates, like oVEMPs, were significantly lower in T2DM group than the control group, there was no significant group difference on other cVEMP parameters. Similar discrepancy of cVEMP and oVEMP have been observed with cVEMP being less often affected than oVEMP in auditory neuropathy spectrum disorders (Sinha et al., 2013; Singh et al., 2015) and vestibular neuritis (Fetter & Ditchgans, 1996; Goebel et al., 2001; Fundakowski et al., 2012). The discrepancy between cVEMP and oVEMP results could be due to the relative robustness of the inferior vestibular nerve fibres than the superior vestibular nerve fibres (Fundakowski et al., 2012). On vHIT, reduced VOR gains were observed in any canal in 11 (18.3%) ears of individuals with T2DM as against no one in the control group. This difference in the abnormal VOR gain response between the groups was statistically significant [Z=2.06,p=0.039]. Further, separate one-way ANOVAs revealed no significant difference in VOR gain between T2DM and controls for lateral canal [F(3,70)=0.573,p>0.05], anterior canal [F(3,70)=0.303,p>0.05] and posterior canal (F(3,70)=1.094,p>0.05) on vHIT. Presence of refixation saccades evidenced in 12 (20%) ears of individuals with T2DM; however, no one the control group had refixation saccades in any canal. This difference was statistically significant [Z=2.16,p=0.030]. Overall, an abnormal result on vHIT (presence of refixation saccades and/or reduced VOR gain in any number of canals) was evidenced in 17 (28.3%) in the T2DM group, which was significantly higher than the control group [Z=2.68,p=0.007]. Presence of reduced VOR gain and refixation saccades were observed more in T2DM compared to normal. Similar results, although not about refixation saccades,

were observed previously in T2DM (Kalkan et al., 2018; Omar et al., 2018). These outcomes could be due to greater cupular deposition in one or more semicircular canals in T2DM than in healthy controls (Yoda et al., 2011; Ward et al., 2015). Finally, there was a significant positive correlation of disease duration with only the latencies of n1 [r=0.289, p=0.025] and p1 [r=0.299, p=0.020]. Although not based on oVEMP, other studies have reported a higher incidence of vestibular dysfunction (Agarwal et al., 2009) and more abnormal results on saccade tests in T2DM than in normal controls (Ozel et al., 2014). This might be due to the cascading effect of the pathology on nerve conduction velocity, which reduces with increasing disease duration, as evidenced in previous studies (Chethan & Sowjanya, 2018; Sultana et al., 2010).

Summary & Conclusion:

The results of the present study show that oVEMP and vHIT results are more often abnormal than cVEMP results in individuals with T2DM. These results are indicative of distinctive vestibular end-organ impairment in T2DM. The study also found an association of disease duration with chronic complications such as neuropathy. Therefore, vestibular end-organ assessment should be routinely conducted in order to understand the impact of increased blood glucose in T2DM.


  Abstract – AP1036: Awareness of Noise Induced Hearing Loss amongst Professional Gym Instructors Top


Nafeesathul Afeefa1, Osheen Mumtaz2 & Archana Rai S3

1nafeesathulafeefa@gmail.com,2osheenmv@gmail.com, &3archana.rai04@gmail.com

1NITTE Institute of Speech And Hearing, Mangalore - 575018

Introduction:

Exposure to noise/music that is too loud is known to have harmful effects on hearing. Noise-induced hearing loss (NIHL) is significantly high in a diverse range of musicians and music listeners (Phillips, Henrich, & Mace, 2010). Music listening is mostly associated with leisure and enjoyment, including during fitness training, where the focus is not on harm to hearing; hence, awareness of the hearing damage caused at such venues is likely to be low among those who visit them. According to Beach et al. (2016), night clubs were the noisiest venues and posed a higher risk for hearing difficulty.

There are various places where loud music is played and the focus is on enjoyment. One such place is the fitness class/gym, where music is played as part of the entertainment during exercise. Studies have shown a risk of noise-induced hearing loss in rock band and orchestra musicians and in individuals who work at entertainment venues where recorded or live music is played (Vogel et al., 2010; Phillips, Henrich & Mace, 2010). Nevertheless, loud music exposure in gyms is generally not considered risky, as the training duration is usually not more than an hour, which is considered too brief to cause NIHL. According to Beach and Nie (2016), however, the noise level of music in fitness classes is consistently too high: the measured mean A-weighted sound pressure level (LAeq) in classes was more than 90 decibels (dB), posing a potential risk to the hearing of fitness instructors. Prolonged and consistent exposure to levels above 90 dB may cause permanent damage to hearing, primarily through damage to the outer hair cells.
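For context, the damage-risk argument above can be made concrete with the NIOSH criterion (85 dBA recommended exposure limit with a 3 dB exchange rate). This criterion is assumed here purely for illustration; the abstract itself does not cite it.

```python
# Permissible daily exposure duration under the NIOSH criterion:
# T = 8 / 2**((L - 85) / 3) hours, where L is the A-weighted level in dBA.
def niosh_permissible_hours(level_dba: float) -> float:
    """Permissible daily exposure duration (hours) for a given level in dBA."""
    return 8.0 / 2 ** ((level_dba - 85.0) / 3.0)

for level in (85, 90, 95, 100):
    print(f"{level} dBA -> {niosh_permissible_hours(level):.2f} h per day")
# 90 dBA allows only ~2.5 h per day, so an instructor taking several
# hour-long classes at this level can exceed the recommended daily dose.
```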

Need for Study:

The risk of hearing loss associated with continuous listening to loud music in workplace and leisure settings is recognized. This risk has been assessed in most professionals who work in places where music is played, but very little is known about hearing-related problems in gym instructors. It is important that gym instructors know that they are prone to hearing-related issues due to prolonged exposure to music. At the same time, as audiologists, it is essential to identify the early signs and symptoms of hearing problems so that appropriate hearing conservation programmes can be recommended.

Aim & Objectives:

The study aimed at identifying awareness of hearing difficulties, symptoms of hearing loss and self-reported indicators of noise exposure in gym instructors.

Method:

A cross-sectional survey was conducted with the focus of identifying awareness of hearing loss caused by noise/music exposure. The study also aimed at identifying self-reported indicators of noise exposure and symptoms related to hearing loss. A questionnaire with a total of 15 questions was developed; it included general demographic details and questions specifically focusing on symptoms of hearing loss and indicators of noise exposure. Ten audiologists validated the questionnaire. The questionnaire was circulated to gym instructors through the Google Forms platform, and 29 participants were recruited through snowball sampling. Noise levels were measured in 10 gyms with a sound level meter (SLM) to estimate the average level at which the music is played. The data obtained were transferred to Excel for analysis, and descriptive statistics were applied.
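A minimal sketch of how such a descriptive (percentage) analysis could be run on the exported responses is given below. The file name and column names are hypothetical placeholders, not the study's actual items.

```python
# Compute the percentage of respondents endorsing each yes/no item from a
# hypothetical CSV export of the Google Forms responses.
import pandas as pd

df = pd.read_csv("gym_instructor_survey.csv")   # hypothetical file name

for item in ["aware_of_nihl", "uses_epd", "reports_tinnitus"]:   # hypothetical columns
    pct = 100 * (df[item] == "Yes").mean()
    print(f"{item}: {pct:.1f}% answered Yes (n={len(df)})")
```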

Results & Discussion:

RESULTS:

The present study aimed to identify the awareness of noise-induced hearing loss in gym instructors. Noise levels measured in the gyms with the SLM averaged 90 decibels. A total of 29 participants took part in the study. Of these, 68.9% (20) were not aware of hearing difficulty caused by prolonged exposure to music. The majority of the participants were full-time professional gym instructors (62%). All the participants (100%) reported that loud music is played in their gym during fitness training. Pop music (20.6%) was the least preferred type, followed by rock music (24.1%), while mixed music was the most preferred. Of the population tested, 37.9% worked for more than 5 hours a day in the gym, and 34.4% worked for more than 10 hours a day. Amongst the tested population, 96.5% of the respondents did not use ear protective devices (EPDs). The respondents reported the presence of tinnitus (13.79%) and difficulty understanding speech (37.93%). Of the 11 (37.93%) participants who reported understanding difficulty, 8 (72.7%) had personal music listening habits at volumes ranging from 50% to 100%. Around 31.03% of the tested population stated that the music is annoying and that they have to reduce its intensity.

DISCUSSION :

The present study highlighted the awareness and self-reported symptoms of noise-induced hearing loss caused by prolonged exposure to loud music in gym instructors. The average noise level in the gyms was found to be 90 decibels, indicating that gym instructors are at risk of hearing damage arising from cumulative noise exposure in gyms, in support of the findings of Beach and Nie (2016). Symptoms of hearing-related problems were widespread among instructors, with 13.79% reporting tinnitus and 37.93% reporting difficulty understanding speech. This indicates that a significant number of instructors may be experiencing hearing problems due to excessive noise exposure. It can also be inferred that the 31.03% of instructors who stated that the music is annoying and needs to be reduced may have loudness intolerance, which suggests that outer hair cell functioning could be affected in these individuals. From the present study we can infer that there is a lack of awareness regarding hearing loss caused by prolonged exposure to loud music. Because of this lack of awareness, the use of EPDs was also found to be very low. The lack of awareness can be attributed to the fact that the music played in gyms is focused on enjoyment during fitness activity, while hearing health is ignored (Beach and Nie, 2015).

Summary & Conclusion:

The overall findings of the study indicated that most of the gym instructors had signs and symptoms of hearing difficulty and were potentially exposed to excess loud noise/music. It is vital to make the gym instructors aware that continuous exposure to loud music may cause hearing difficulties. The risk of hearing loss can be communicated through media outlets, by publishing the research findings and by raising awareness amongst those at risk.


  Abstract – AP1037: Attitude towards Rehabilitative Management of Person with Single Sided Deafness (SSD) among Otolaryngologists and Audiologists Top


Anil jethwani1, Navya Rathi2, Akshit Dadhich3 & Ketaki Borkar4

1aniljethwaniaslp@gmail.com,2navyarathiaslp@gmail.com,3sharma.vinid36@gmail.com, &4deshpande.ketaki@gmail.com

1Ali Yavar Jung National Institute of Speech and Hearing Disabilities (Divyangjan), Mumbai - 400050

Introduction:

SSD is defined as a condition in which an individual has non-functional hearing in one ear and receives no clinical benefit from traditional acoustic amplification systems, while the contralateral ear possesses normal audiometric function (Cire, 2012). Current statistics indicate that up to 10% of the population may have SSD (Meehan et al., 2017). Persons with unilateral hearing loss face challenges in localizing sounds and understanding speech in noise, but various treatments such as CROS, BAHA and CI can improve or even restore the benefits of binaural hearing, such as spatial hearing and reduction of the head shadow effect.

Dorbeau et al. (2018) conducted a study of binaural perception in single-sided deaf cochlear implant users with unrestricted or restricted acoustic hearing in the non-implanted ear. Eighteen SSD patients (8 males and 10 females) with at least one year of CI experience were included. The study revealed strong, significant and consistent CI benefits in localization, speech performance (when speech and noise are spatially separated), tinnitus reduction, and quality of life (QoL).

Research also suggests short- and long-term efficacy of the BAHA in adults with single-sided deafness for recognition of speech in noise (noise in front, speech lateralized to the poorer ear) and on subjective measures of benefit (Linstrom et al., 2009). In that study, the outcome measures were administered in the aided, directional BAHA, and omnidirectional BAHA conditions in 8 adults after 1 month, 6 months and one year of BAHA use.

A study conducted by Myrthe K. S. et al. (2009) on 10 adults with SSD showed improved scores with a conventional CROS in the domains of ease of communication and background noise and reverberation, with the least deterioration seen in aversiveness of sound.

Need for Study:

Studies show a significant benefit across the various rehabilitative management options available for persons with single-sided deafness; still, there is a lack of awareness of these options among the SSD population in India. One reason for this could be that the information is not imparted by hearing healthcare professionals. The attitude of healthcare professionals towards the management options influences the way counselling is carried out, which in turn influences the choices made by persons with SSD. Hence there is a need to study the attitudes of otolaryngologists and audiologists towards the rehabilitative management of persons with single-sided deafness.

Aim & Objectives:

To study the attitudes of otolaryngologists and audiologists towards rehabilitative management for Single Sided Deafness.

To study the differences in the attitudes of otolaryngologists and audiologists towards rehabilitative management for Single Sided Deafness.

Method:

A questionnaire seeking opinions regarding evaluation and management was developed, validated by audiologists, and administered to 30 otolaryngologists and 30 audiologists. The questionnaire contained 20 questions, of which 15 were affirmative (closed-ended) and 5 were interrogative, covering awareness, choice of rehabilitative management and effectiveness of rehabilitative management. It was distributed via social media as a Google Form, and the responses were collected in an Excel sheet. The results were analysed quantitatively as percentages.
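A minimal sketch of such a percentage analysis by professional group is given below, assuming the responses are tabulated with one row per respondent; the file and column names are hypothetical placeholders, not the study's actual variables.

```python
# Tabulate the percentage distribution of responses within each profession
# from a hypothetical long-format export of the questionnaire data.
import pandas as pd

df = pd.read_csv("ssd_attitude_survey.csv")     # hypothetical file name

pct_table = (pd.crosstab(df["profession"], df["preferred_option"],
                         normalize="index") * 100).round(1)
print(pct_table)   # rows: Audiologist / Otolaryngologist; columns: CROS, BAHA, CI, HA
```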

Results & Discussion:

82% of otolaryngologists and 85% of audiologists were aware of most of the signs and symptoms of SSD, whereas only 68% of otolaryngologists and 52% of audiologists considered SSD a disabling condition. This indicates that awareness among otolaryngologists and audiologists is similar; however, more otolaryngologists than audiologists consider SSD a disabling condition.

The attitudes of 87.5% of otolaryngologists and 85.3% of audiologists towards the effect of rehabilitative management options on the signs and symptoms of SSD were positive. 72% of both otolaryngologists and audiologists believed that rehabilitative management is mandatory, while 28% of both groups believed either that it is not necessary or were uncertain about it. In spite of being aware of the effects of binaural hearing through their curriculum, some do not prefer rehabilitative management, probably because of experiences in their practice where SSD patients have reported no positive feedback with rehabilitative management over the years. This ambivalent attitude of hearing healthcare professionals towards rehabilitative management might influence the decision making of persons with single-sided deafness when selecting a management option.

Otolaryngologists believed the cochlear implant (44%) to be the best option and BAHA (32%) the second-best option for increasing the signal-to-noise ratio (SNR), whereas audiologists believed CROS (52%) to be the best option and the cochlear implant (40%) the second-best option for improving the SNR. Both otolaryngologists (50%) and audiologists (40%) believed BAHA to be the best management option, and the cochlear implant (32% and 36%, respectively) the second-best option, for improvement in localization abilities. 56% of otolaryngologists believed implantable devices (BAHA, CI) to be the best management option for persons with SSD, whereas 74% of audiologists believed non-implantable devices (CROS, hearing aid) to be the best.

Summary & Conclusion:

A questionnaire was administered to otolaryngologists and audiologists to study their attitudes, and the differences between them, towards rehabilitative management of persons with Single Sided Deafness (SSD). The study revealed the following trends:

In spite of similar levels of awareness amongst otolaryngologists and audiologists about the signs and symptoms of SSD, more otolaryngologists than audiologists consider SSD a disabling condition.

Overall attitude towards the effectiveness of rehabilitative management on the signs and symptoms of SSD among both otolaryngologists and audiologists is positive.

There is a difference in the selection of management option: otolaryngologists consider implantable devices the best management option, whereas non-implantable devices are more appreciated by audiologists. The findings of the study support the concept that the attitudes of otolaryngologists and audiologists play an important role in suggesting appropriate management options for persons with SSD. There is a distributed pattern of responses in the selection of management option among hearing healthcare professionals. There is a need for a collaborative management approach to decide on the selection criteria and assessment protocols for choosing the appropriate rehabilitative management strategy for persons with single-sided deafness.

Further research on users' perspectives of how these attitudes influence decision making should be conducted, as the attitudes of professionals influence the patient's selection of a management option. Hence, more data should be collected and analysed from the users' perspective.


  Abstract – AP1040: Audiologic Monitoring of Actinomycetoma Patients on Aminoglycoside Treatment with Long Term Follow-Up Top


Shubham Shaniware1, Suvankar Parasar2 & Mitali Thakkar3

1shubhamshaniware333@gmail.com,2suvankarmund96@gmail.com, &3mitalit151@gmail.com

1KEM Hospital, Pune - 411011

Introduction:

Mycetoma is a chronic, progressive, suppurative and deforming granulomatous infection. It is a disorder of the subcutaneous tissue, skin and bones, mainly of the feet, characterized by a triad of symptoms: localized swelling, underlying sinus tracts, and production of grains or granules. Mycetoma can be caused by bacteria (actinomycetoma) or fungi (eumycetoma). The worldwide incidence of actinomycetoma varies from country to country and region to region, but the infection is predominant in countries located between 30°N and 15°S. Most cases of mycetoma occur in India and in countries such as Sudan, Venezuela, and Mexico.

The treatment of choice for actinomycetoma involves amikacin, dapsone, rifampicin and streptomycin, each with its own dosage and duration. These medications are notorious for being ototoxic, primarily targeting the cochleo-vestibular system. Unfortunately, while these complications are relatively common, not much attention is usually paid to them during the course of treatment, largely because they are not perceived as directly life-threatening. While this may be so, it is important to note that ototoxicity, which may be irreversible, can result in severe impairment of the patient's quality of life, with attendant anxiety and the potential for progression to serious mental illness.

Need for Study:

Patients receiving actinomycetoma treatment must be made aware of the signs of ototoxicity so that they know the right time to consult an audiologist and/or otolaryngologist (Fausti et al., 2005). This helps in monitoring the hearing levels of patients to detect the early effects of ototoxicity.

  1. If patients' awareness of the effects of amikacin injection is not raised, they may not notice ototoxic hearing loss until a communication problem becomes apparent, i.e. only once the speech frequencies are affected.
  2. The study will present a holistic understanding of the effects of amikacin injection medication and the need for audiological monitoring and intervention, including counseling, rehabilitation technology, and communication intervention.


Aim & Objectives:

Aim: The study aimed to describe the occurrence and nature of hearing loss, in terms of degree and configuration, in patients with actinomycetoma receiving aminoglycoside treatment over 3 cycles.

Objectives:

  1. To assess the hearing levels at baseline, post 1st cycle of treatment, post 2nd cycle of treatment and post 3rd cycle of treatment.
  2. To compare the hearing levels at baseline, post 1st cycle of treatment, post 2nd cycle of treatment and post 3rd cycle of treatment.
  3. To find out the specific frequencies involved in hearing loss among Actinomycetoma patients in the cycle of treatment.


Method:

Description of participants:

The study sample consisted of 30 patients, of whom 27 (90%) were males and 3 (10%) were females. The age range of participants was 18-56 years, with a mean age of 34 years. All participants (100%) were receiving amikacin for the treatment of actinomycetoma.

Procedure:

Baseline pre-treatment pure tone audiograms between 250 Hz and 8000 Hz were performed for all the patients in a sound-treated room and were repeated after every cycle of treatment. Baseline and follow-up audiological testing included both air-and bone-conduction threshold measurements in all the patients.

Patients with any pre-treatment evidence of hearing loss on history, clinical assessment (including evidence of infective pathology in the ear) or pure tone audiometry, whether conductive (air-bone gap > 10 dB) or sensorineural, were excluded.

All patients who had a history of taking ototoxic drugs were also excluded from the study.

Results & Discussion:

Pure tone audiometric results were averaged and grouped into three frequency categories (Rademaker-Lakhai et al., 2006): a low-frequency average (250 Hz to 500 Hz), a mid-frequency average (1 kHz to 2 kHz) and a high-frequency average (4 kHz to 8 kHz). Thresholds were averaged within these frequency groups for analysis, as sketched below.
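A minimal sketch of this band averaging, using a hypothetical set of thresholds for a single ear at one test point:

```python
# Group pure-tone thresholds into low (250-500 Hz), mid (1-2 kHz) and
# high (4-8 kHz) frequency averages. Threshold values are placeholders.
import numpy as np

thresholds_dbhl = {250: 10, 500: 10, 1000: 15, 2000: 15, 4000: 25, 8000: 35}

bands = {
    "low":  (250, 500),
    "mid":  (1000, 2000),
    "high": (4000, 8000),
}
band_averages = {name: np.mean([thresholds_dbhl[f] for f in freqs])
                 for name, freqs in bands.items()}
print(band_averages)   # e.g. {'low': 10.0, 'mid': 15.0, 'high': 30.0}
```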

Descriptive statistics and parametric tests were used to determine the mean hearing levels obtained with pure tone audiometry, and frequency-wise comparisons were made to test for statistically significant differences between the hearing thresholds at baseline and after the first cycle, after the first and second cycles, and after the second and third cycles.

Descriptive statistics

Descriptive statistics suggest that there is a shift in average hearing level in all three frequency ranges after every cycle; however, the effects are more pronounced in the high-frequency range, mainly after the 3rd cycle of treatment.

Individual Frequency Analysis

The greatest shift in thresholds was obtained in the high-frequency average group, most predominantly at 8 kHz, as observed from the descriptive statistics.

Statistical comparison of Mean Hearing loss across the cycles

Paired t-tests were performed for the low-, mid- and high-frequency groups, comparing baseline with post-1st cycle, post-1st cycle with post-2nd cycle, and post-2nd cycle with post-3rd cycle of treatment.

Low frequency: mean hearing levels differed across cycles of treatment, with p values of 0.030, 0.028 and 0.09 obtained for baseline vs 1st cycle, 1st vs 2nd cycle and 2nd vs 3rd cycle, respectively.

Mid frequency: mean hearing levels differed significantly between each cycle of treatment, with p values of 0.001, <0.001 and <0.001 for baseline vs 1st cycle, 1st vs 2nd cycle and 2nd vs 3rd cycle, respectively.

High frequency: mean hearing levels differed significantly between each cycle of treatment, with p values <0.001 for all three comparisons (baseline vs 1st cycle, 1st vs 2nd cycle and 2nd vs 3rd cycle).

It is thus observed from the obtained p values that the differences in hearing levels between cycles are statistically significant, with stronger effects at the mid and high frequencies than at the low frequencies.
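For illustration, a paired comparison of this kind can be computed as in the sketch below; the threshold arrays are hypothetical placeholders, not the study data.

```python
# Paired t-test on band-averaged thresholds (dB HL) for the same participants
# at two test points. Values are placeholders for five hypothetical participants.
import numpy as np
from scipy.stats import ttest_rel

baseline    = np.array([10, 15, 10, 20, 15])
post_cycle1 = np.array([15, 25, 15, 30, 20])

t_stat, p_value = ttest_rel(baseline, post_cycle1)
print(f"baseline vs post-cycle 1: t = {t_stat:.2f}, p = {p_value:.3f}")
# The same call would be repeated for post-cycle 1 vs 2 and 2 vs 3,
# and for each of the low-, mid- and high-frequency averages.
```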

Discussion:

The observed baseline hearing levels suggest that all participants had normal hearing before the start of treatment; however, with the progression of treatment, hearing loss developed at the mid and high frequencies over the 3 cycles. Ototoxicity usually set in by the end of the 2nd cycle, and hence effective measures should be taken to prevent the detrimental consequences of ototoxicity.

Summary & Conclusion:

A high rate of hearing loss was seen in patients using amikacin for the treatment of actinomycetoma of the leg. This is a significant adverse event that can impair quality of life. This study is the first to show the effect of actinomycetoma treatment on hearing thresholds. Further studies are required to elucidate the mechanism of increased ototoxicity in patients with actinomycetoma of the legs compared with other parts of the body. Hearing loss is likely to increase with subsequent cycles of treatment, so monitoring of hearing needs to be done after each cycle. This helps to evaluate the effect of amikacin on auditory thresholds at each frequency and to adjust the amikacin dosage if needed. Information regarding possible hearing loss should be included as part of informed consent before commencing amikacin therapy.


  Abstract – AP1041: Comparative Study on Method of Threshold Estimation: Distortion Product OAE Threshold Test Vs Pure Tone Threshold Test Top


Abhishek Mistri1, Nirmit Shah2, Gunjan Mehta3, Hiral Joshi4 & Anuj Kumar5

1107abhishekmistry@gmail.com,2nushah98@gmail.com,3gunjanmehta06@gmail.com,4hiraljoshi82@yahoo.in, &5anujkneupane@gmail.com

1C.U Shah Medical College and Hospital, Gujarat - 363001

Introduction:

Otoacoustic emissions (OAEs) are sounds of cochlear origin caused by the motion of the cochlea's sensory hair cells as they respond to auditory stimulation. The OAE test is used to assess outer hair cell (OHC) function and to estimate hearing sensitivity (Kemp, 2002). OAEs can be classified as spontaneous OAEs (SOAEs), transient evoked OAEs (TEOAEs) and distortion product OAEs (DPOAEs). A DPOAE is generated when two tones of frequencies f1 and f2 are presented at levels L1 and L2 such that they interact in the cochlea at a place close to the best place for the higher of the two frequencies (f2), producing intermodulation distortion, the largest component of which occurs at a frequency equal to 2f1-f2 (Brownell, 1990). DPOAEs provide quantitative information about the range and operational characteristics of the cochlear amplifier, i.e. sensitivity, compression and frequency selectivity. DPOAEs can be interpreted in two ways: (1) the DP-gram and (2) the DPOAE input/output function (DPOAE I/O function). DP-grams reflect the sensitivity of the cochlear amplifier (CA) best when recorded at close-to-threshold stimulus levels. In normal hearing (normal CA), DP-grams are close to each other at high stimulus levels and more separated at low stimulus levels, reflecting non-linear cochlear sound processing.
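As a small worked example of the 2f1-f2 relationship described above, the sketch below computes the distortion product frequency for several f2 values, assuming the commonly used clinical f2/f1 ratio of 1.22; the abstract does not state the ratio used by the device, so that value is an assumption for illustration only.

```python
# Compute the 2f1 - f2 distortion product frequency for a given f2,
# assuming an f2/f1 ratio of 1.22 (hypothetical, commonly used clinically).
def dp_frequency(f2_hz: float, ratio: float = 1.22) -> float:
    """Return the 2f1 - f2 frequency for a given f2 and f2/f1 ratio."""
    f1 = f2_hz / ratio
    return 2 * f1 - f2_hz

for f2 in (1000, 2000, 4000, 8000):
    print(f"f2 = {f2} Hz -> 2f1-f2 = {dp_frequency(f2):.0f} Hz")
# e.g. f2 = 2000 Hz gives f1 ~ 1639 Hz and 2f1-f2 ~ 1279 Hz (below f2, as expected)
```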

Need for Study:

The relationship between DPOAE and behavioural thresholds has been studied regularly. Different DPOAE input/output (I/O) function measurement methods have been used to provide estimates of DPOAE thresholds and, from them, estimates of behavioural thresholds. In a previous study, DPOAE thresholds were derived directly from extrapolated DPOAE I/O functions: cubic 2f1-f2 distortion products and pure-tone thresholds at f2 were measured at 51 frequencies between f2 = 500 Hz and 8 kHz, at up to ten primary-tone levels between L2 = 65 and 20 dB SPL, in 30 normally hearing and 119 sensorineural hearing loss ears. Linear regression yielded correlation coefficients higher than 0.8 in the majority of the DPOAE I/O functions, showing a significant correlation between DPOAE threshold and pure-tone threshold. Thus, DPOAEs, which reflect the functioning of an essential element of peripheral sound processing, enable a reliable estimation of the hearing threshold (Boege and Janssen, 2002). Schmuziger (2006) studied the reliability of automated DPOAE thresholds based on extrapolated DPOAE I/O functions, comparing them with pure-tone audiometry; the findings suggested a 2 dB mean difference between automated DPOAE thresholds and pure-tone thresholds. Various studies have therefore predicted behavioural thresholds on the basis of the DPOAE I/O function. Path Medical Solutions has developed a test module to measure cochlear thresholds in dB HL based on the extrapolated DPOAE I/O function. Hence, there is a need to establish normative data on the DP threshold and compare it with the pure-tone threshold to determine its applicability for clinical use.

Aim & Objectives:

The aim of the present study was to develop normative data on the DP threshold and compare it with pure-tone thresholds so as to understand the difference between the two at each frequency.

Method:

Participants: A total of 50 individuals (100 ears), aged between 18 and 35 years, participated in the study. Otoscopic examination and immittance audiometry were performed beforehand to rule out a conductive component. Any subject with a history of otological or neurological conditions was excluded. Only subjects with 'A' type tympanograms were included in the study.

Pure-tone audiometry: Pure-tone audiometry (PTA) was done using a GSI 61 clinical audiometer at 1000, 2000, 4000 and 8000 Hz. All subjects had pure-tone averages below 15 dB HL.

DPOAE threshold: The Sentiero Advanced device from Path Medical Solutions was used. Its DPOAE threshold test module directly measures the DP threshold in dB HL based on the 'scissors' paradigm (i.e. 2f1-f2). DPOAE thresholds were measured at 1 kHz, 2 kHz, 4 kHz and 8 kHz.

Results & Discussion:

Results:

Using IBM SPSS version 22, descriptive statistics were computed to obtain the mean and standard deviation of hearing thresholds measured across frequencies via PTA and the DP threshold test for all participants. The mean threshold differences between PTA and the DP threshold test were: at 1 kHz, 2.1 (SD = 4.7) for the left ear and 2.24 (SD = 4.4) for the right ear; at 2 kHz, 1.82 (SD = 4.6) for the left ear and 2.8 (SD = 5.3) for the right ear; at 4 kHz, 5.78 (SD = 5.98) for the left ear and 7.18 (SD = 6.68) for the right ear; and at 8 kHz, 4.48 (SD = 6.55) for the left ear and 5.66 (SD = 7.98) for the right ear. The Shapiro-Wilk test revealed a normal distribution (p>0.05) of the data; therefore, parametric paired t-tests were performed between the two methods of threshold estimation. The differences between PTA and the DP threshold test were significant at 1 kHz [left ear: t(49) = 3.17, p = 0.003; right ear: t(49) = 3.6, p = 0.001], 2 kHz [left ear: t(49) = 2.77, p = 0.008; right ear: t(49) = 3.72, p = 0.001], 4 kHz [left ear: t(49) = 6.83, p < 0.001; right ear: t(49) = 7.59, p < 0.001] and 8 kHz [left ear: t(49) = 4.83, p < 0.001; right ear: t(49) = 5.01, p < 0.001]. Thus, the results showed a significant difference between the PTA and DP threshold tests.
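A minimal sketch of this analysis sequence (normality check, then paired comparison) is given below; the threshold arrays are hypothetical placeholders, and the non-parametric fallback is included only to show the decision step, since the abstract reports that the data were normally distributed.

```python
# Normality check on the paired differences, then a paired comparison between
# PTA and DPOAE-threshold estimates at one frequency. Values are placeholders.
import numpy as np
from scipy.stats import shapiro, ttest_rel, wilcoxon

pta_1k = np.array([5, 10, 5, 10, 15, 5, 10, 5])    # dB HL, hypothetical
dpt_1k = np.array([5, 5, 0, 10, 10, 5, 5, 0])      # dB HL, hypothetical

diff = pta_1k - dpt_1k
w_stat, p_norm = shapiro(diff)
if p_norm > 0.05:                   # differences look normally distributed
    stat, p = ttest_rel(pta_1k, dpt_1k)
else:                               # otherwise fall back to a non-parametric test
    stat, p = wilcoxon(pta_1k, dpt_1k)
print(p_norm, p)
```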

Discussion:

The results indicate a significant difference between the DPOAE threshold test and PTA. Mean differences between PTA and the DP threshold test were observed across 1000, 2000, 4000 and 8000 Hz, with the DP threshold test yielding better (lower) threshold values. The mean difference was greater at the high frequencies (about 7 dB HL) than at the mid frequencies (about 2 dB HL). Schmuziger (2002) similarly observed an overall difference of about 10 dB HL between PTA and automated DPOAE thresholds. In another study using varying calibration stimuli, a similar finding was reported when comparing DPOAE thresholds in dB SPL with behavioural thresholds; a difference of 1 dB indicated a correlation between the DPOAE I/O function and pure-tone thresholds (Rogers et al., 2010).

Summary & Conclusion:

The results of the present study provide normative data on DPOAE thresholds for clinical use and research. They indicate the mean differences across frequencies between DPOAE thresholds and pure-tone thresholds; these values are significantly different and could be used in clinical and research settings for the assessment and verification of various otological conditions.


  Abstract – AP1043: Social and Emotional Competences in MDR-TB Patients due to Hearing Loss Top


Shubham Shaniware1, Suvankar Parasar2 & Mitali Thakkar3

1shubhamshaniware333@gmail.com,2suvankarmund96@gmail.com, &3mitalit151@gmail.com

1KEM Hospital, Pune - 411011

Introduction:

Tuberculosis (TB) is a chronic, progressive mycobacterial infection, often with a period of latency following initial infection. TB is caused by the bacterium Mycobacterium tuberculosis and most often affects the lungs. The World Health Organization (WHO) estimates that there are 650,000 cases of multidrug-resistant (MDR) TB globally. The treatment of MDR TB has various adverse effects: the injectable drugs, aminoglycosides and polypeptides, used for the treatment of MDR TB are associated with risks to renal function, hearing and the vestibular system. The first effect of hearing loss is on communication, and it is difficult for a person to adjust to this problem, especially if the hearing loss is acquired. Many studies have shown that the inability to communicate effectively affects socialization, independence and participation in activities of daily life. Hearing handicap is a measure of the impact of hearing loss on an individual's life experiences. No laboratory test can quantify the handicap caused by hearing loss; its impact can be assessed with questionnaires such as self-assessment scales, the HHIE and the HHIA.

Need for Study:

Hearing impairment is the third most common impairment in the population and is associated with negative emotions. In addition to emotional considerations, both psychological and social behaviour can be affected by hearing loss. Hearing impairment harms overall development, psychological and cognitive behaviour, and speech and language; as a result, affected individuals may experience a poorer quality of life. MDR TB is known to have far-reaching effects on a person's social, emotional, economic and psychological life. These effects are exacerbated in the presence of concomitant symptoms, which mainly include fatigue, weight loss and night sweats. These patients suffer a greater psychological, social and emotional impact of hearing loss due to the presence of a primary illness. As professionals, it is essential to have an idea of the patient's overall wellbeing, as this will aid in better management and prognosis.

Hence, the present study aimed to evaluate the social and emotional impact of hearing impairment in individuals with and without a history of MDR TB.

The findings of the study will help to understand whether individuals with hearing impairment and a history of MDR TB face similar or greater difficulties compared with individuals with sensorineural hearing loss without MDR TB. This will help in planning management strategies and in providing appropriate referrals for the management of psychological issues.

Aim & Objectives:

Aim: To assess the social and emotional competence among MDR TB patients due to hearing loss.

Objective:

  1. To assess the emotional and social impact of hearing impairment in individuals with and without a history of MDR TB.
  2. To compare the overall (social and emotional) effects of hearing impairment between individuals with and without a history of MDR TB.


Method:

Sampling technique: A purposive sampling technique was used to recruit all participants.

Description of participants:

The study consisted of two groups of 30 individuals each, in the age range 25-50 years.

Group I: n=30, age range 25-50 years, mean age 39 years. Of the 30, 17 were males (57%) and 13 were females (43%). All 30 participants were diagnosed with MDR TB and had been on medication for it for 6 months. All 30 participants underwent audiological evaluation, which included case history, otoscopic examination, and pure tone audiometry at octave and mid-octave frequencies from 250 Hz to 8 kHz. The degrees of hearing loss exhibited by the participants were moderately severe, severe and profound.

Group II: 30 participants with moderately severe, severe or profound sensorineural hearing loss were chosen for the study; age range 25-50 years, mean age 40 years. Of the 30, 15 were males (50%) and 15 were females (50%).

Tool:

The Hearing Handicap Inventory for Adults (HHIA) was used. The HHIA is a 25-item self-assessment scale comprising two subscales that address the emotional and the social/situational effects of hearing impairment: the Social (S) subscale consists of 12 items, and the Emotional (E) subscale consists of 13 items. Respondents answer each question based on their daily life experience on a 3-point scale: Yes (4 points), Sometimes (2 points) and No (0 points). The lower the score, the better the quality of life.

Procedure:

The study was explained to the participants and written consent was obtained from them after clarification of their doubts. The consent form aimed at informing the participants about the objectives, justification and procedures of the investigation.

The HHIA questionnaire was administered to the subjects either by asking them to fill in the questionnaire themselves or by interviewing them verbally. Data were collected from the two groups of subjects and the responses were rated on a three-point scale, i.e.

1) 'yes' was awarded 4 points, 2) 'sometimes' 2 points, and 3) 'no' 0 points; the lower the score, the better the quality of life. The Social (S) subscale, Emotional (E) subscale and overall (social plus emotional) scores were obtained.
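A minimal sketch of this HHIA scoring scheme is given below; the assignment of item numbers to the social and emotional subscales is assumed purely for illustration, as the abstract does not list the individual items.

```python
# Score the HHIA as described above: 25 items answered "yes" (4), "sometimes" (2)
# or "no" (0), split into a 12-item social and a 13-item emotional subscale.
POINTS = {"yes": 4, "sometimes": 2, "no": 0}

def score_hhia(responses, social_items, emotional_items):
    """responses: dict mapping item number (1-25) to 'yes'/'sometimes'/'no'."""
    social    = sum(POINTS[responses[i]] for i in social_items)
    emotional = sum(POINTS[responses[i]] for i in emotional_items)
    return {"social": social, "emotional": emotional, "total": social + emotional}

# Hypothetical example: items 1-12 treated as social, 13-25 as emotional
responses = {i: "sometimes" for i in range(1, 26)}
print(score_hhia(responses, range(1, 13), range(13, 26)))
# -> {'social': 24, 'emotional': 26, 'total': 50}
```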

Results & Discussion:

Descriptive statistics: Mean overall HHIA scores were compared across the two groups; Groups I and II scored 86.36 and 76.56, respectively. Since the HHIA consists of questions on social and emotional aspects, mean scores on these subscales were also compared separately for both groups.

Mean HHIA social subscale scores for Groups I and II were 43.18 and 32.7, respectively, and emotional subscale scores for Groups I and II were 46.23 and 43.86, respectively.

Mean HHIA scores of the two groups were compared using unpaired t-tests.

Overall HHIA scores, social scores and emotional scores were compared between Group I and Group II; the p values obtained (all <0.001) indicate a statistically significant difference between the HHIA scores of the two groups.

Discussion:

The HHIA gives an idea of the level of handicap imposed by a condition on an individual's social and emotional life. It is thus seen that moderate to moderately severe sensorineural hearing loss creates an impact on an individual's social and emotional life; however, this impact is lesser than in individuals who have an additional concomitant illness, in this case MDR TB. The existence of a co-morbidity usually creates a greater impact on a person's overall wellbeing. As an audiologist, one needs to be aware of a person's social and emotional handicap in order to understand him/her better and provide appropriate management and additional counseling as and when required.

It also points to the fact that patients with MDR TB require a prompt solution to their hearing problems and the necessary referrals for psychological wellbeing.

Summary & Conclusion:

Patients with MDR TB and moderately severe to severe sensorineural hearing loss experience a greater handicap than individuals with moderate to moderately severe sensorineural hearing loss without MDR TB. The social and emotional impacts of hearing loss are both much greater among these individuals, who hence require suitable counseling, management, and added support if needed to prevent a further handicap on their social and emotional life due to hearing loss.


  Abstract – AP1045: Snooze or Unsnooze the Alarm in Your Ear Top


Ambreen Aseef1, Mathew Vincent2 & Manita Thomas3

1ambreenaseef99@gmail.com,2ambreennoel0921@gmail.com, &3manita.aslp@gmail.com

1Kasturba Medical College, Mangaluru - 576104

Introduction:

The term tinnitus derives from the Latin word 'tinnire', which means to tinkle like a bell. Tinnitus varies from person to person and can sound like ringing, buzzing, whistling, hissing, etc.; it can be continuous or intermittent and high pitched or low pitched. Tinnitus is a common problem, found in about one in five people. Makar et al. (2014) reported a prevalence of 7% in the Indian population. Some individuals have tinnitus in just one ear, others in both ears. The major concern is the impact of tinnitus on the lives of such individuals: it is a debilitating condition that negatively affects a patient's social wellbeing and overall health. Sufferers often experience anxiety, depression, frustration, irritation, distress, pain and poor concentration; even a moderate level can affect an individual's life to a large extent, thus affecting quality of life. The aspects of quality of life affected may differ between individuals; reported effects include sleep (18%), concentration (16%), anxiety (13%) and annoyance (34%). As tinnitus is considered a phantom perception, a complete solution is not achievable; however, there are many non-medical management options available, such as Tinnitus Retraining Therapy, tinnitus maskers, sound generators, combination devices and counselling. One treatment option which has gained attention is Tinnitus Activity Treatment (TAT). It is a treatment plan for patients suffering from tinnitus carried out in the form of activities, exercises and homework. It is a patient-oriented approach that focuses on the patient's primary issue, and its goal is to improve the quality of life of tinnitus sufferers.

Need for Study:

The aspects of quality of life affected in individuals with tinnitus vary. The present research was done to assess which aspects of daily life are affected (sleep, hearing, concentration), by how much, and how to overcome these effects and reduce the severity of the problem with a treatment plan, Tinnitus Activity Treatment. This treatment combines informative counselling with activities that engage the individual and redirect his/her attention away from the tinnitus.

Aim & Objectives:

To evaluate the quality of life of individuals suffering from continuous tinnitus with the use of Tinnitus Activity Treatment along with structured counselling and a home training programme.

Method:

A cross-sectional study design was used, with 20 individuals divided into two groups of 10 each: a control group and an intervention group. In Group 1 (control), quality of life was assessed using a conventional method together with the Tinnitus Handicap Inventory (THI), a self-administered tool, and a semi-structured interview. In Group 2 (intervention), quality of life was assessed after the conventional method was combined with Tinnitus Activity Treatment (TAT). Quality of life was assessed after 4 weeks of treatment sessions.

Results & Discussion:

A significant difference (p<0.05) was seen in Tinnitus Handicap Inventory scores pre- and post-treatment, and the semi-structured interviews also indicated a positive impact. Individuals whose tinnitus was a major concern affecting their life reported that the Tinnitus Activity Treatment plan helped improve their quality of life: sleep improved in 5 individuals (50%) and concentration in 4 individuals (40%). TAT appears to reduce the psychological effects that draw an individual's attention to the tinnitus; with the reduction in negative reactions, quality of life with tinnitus improved. A significantly greater improvement in quality-of-life scores was seen in Group 2 compared with Group 1, attributable to the weekly activities and exercises of the treatment plan, which engage the mind and distract it from the presence of tinnitus.

Summary & Conclusion:

Tinnitus is a condition that is difficult to treat, and treatment outcomes are difficult to measure. The current study shows the importance of administering TAT to tinnitus patients in the clinical set-up. It is a client-centred approach, highlighting the serious impact of tinnitus on daily activities, occupation and sleep in people suffering from it. Thus, awareness needs to be increased about the impact of tinnitus on quality of life and about treatment options such as Tinnitus Activity Treatment, which focuses on the patient's primary concerns.


  Abstract – AP1046: Living with Tintinnabulum in Your Ears Top


Mathew Vincent1, Ambreen Aseef2 & Manita Thomas3

1ambreennoel0921@gmail.com,2ambreenaseef99@gmail.com, &3manita.aslp@gmail.com

1Kasturba Medical College, Mangaluru - 576104

Introduction:

Tinnitus is defined as the perception of a sound in the ear in the absence of any external sound source. It can be perceived as different sounds, such as roaring, hissing or swishing, and can be unilateral or bilateral, continuous or intermittent. Santoshi et al. (2015) reported an increased prevalence of tinnitus, of around 30%, in those over 40 years of age. Tinnitus is frequently more noticeable in quiet situations. Tinnitus is not a condition in itself; it accompanies conditions such as ear injury or hearing loss, and is therefore considered a symptom. Individuals suffering from continuous tinnitus tend to experience severe effects on their daily life, such as concentration issues, disturbed sleep cycles, stress and communication difficulties. Most people seek a permanent solution to this problem, but usually a temporary solution is given for a particular time period and monitored. The current research focuses on one method of tinnitus rehabilitation, Tinnitus Activity Treatment (TAT), a 4-week, patient-centered programme that mainly aims to overcome the effect of tinnitus on the daily routine. TAT involves intensive counselling of individuals in areas such as hearing, thoughts and emotions, concentration and sleep.

Need for Study:

Tinnitus is considered a phantom perception that can be continuous or intermittent in nature and is a common problem, found in about 1 in 5 people. There is no permanent solution for this condition because the perception is individual. People with tinnitus face difficulties in their day-to-day life, such as problems with concentration, sleep and attention, compared with individuals who are not troubled by tinnitus. There has been a lack of individualized approaches that focus on these primary issues, such as sleep and concentration. TAT helps in addressing such areas with activities and exercises, and thus in improving the lifestyle of an individual with tinnitus. The difference in Tinnitus Handicap Inventory scores gives an outline of how effective the treatment approach is across various dimensions and individuals.

Aim & Objectives:

The current research aimed to examine the effectiveness of TAT by comparing pre- and post-treatment tinnitus levels and by comparing outcomes between the control and intervention groups.

Method:

Twenty subjects were included in the study and divided into two groups: a control group and an intervention group. The control group consisted of individuals with tinnitus and no hearing impairment, and the intervention group consisted of individuals with tinnitus and hearing loss. Group 1 was given conventional tinnitus training, whereas Group 2 was given the conventional method together with TAT. Intensive counselling and a home training programme were given to both groups, and outcome measures were assessed.

Results & Discussion:

The data were analysed by comparing Group 1 and Group 2, and showed a significantly greater improvement (p<0.05) in Group 2 than in Group 1. With the individualized TAT plan, which incorporates counselling, there was a significant reduction in the perception of tinnitus in the intervention group.

Summary & Conclusion:

Tinnitus is a condition that is difficult to treat, and treatment outcomes are difficult to measure. The current study shows the importance of administering TAT to tinnitus patients in the clinical set-up. The Tinnitus Activity Treatment plan helped individuals suffering from tinnitus to overcome their negative emotions and rebuild a positive environment in their daily routine, with a positive impact not just on the individuals but also on their families. The inclusion of TAT helps in accurately measuring the patient's distress in areas such as concentration, emotions and sleep. TAT can therefore be incorporated into counselling sessions, a home training programme can be given along with other conventional methods, and frequent follow-up can be recommended.


  Abstract – AP1047: Awareness of Ear Infections among Swimmers Top


Shubham Shaniware1 & Mitali Thakkar2

1shubhamshaniware333@gmail.com &2mitalit151@gmail.com

1KEM Hospital, Pune - 411011

Introduction:

Ear pain (otalgia) is one of the most common reasons for visits to primary care physicians and medical care providers. Its causes include middle ear infections (otitis media), external ear infections (otitis externa), foreign bodies and trauma. Otitis media, the most common cause of ear pain, occurs primarily in young children. Otitis externa, regularly associated with swimming and diving (and often called swimmer's ear), affects all ages.

The frequency of 'earaches' reported by swimmers during the summer of 1971 was 2.4 times the frequency reported by non-swimmers. Furthermore, the risk of a swimmer acquiring external otitis was approximately five times as great as the risk to non-swimmers, as reported by Hoadley et al. in 1975.

The aquatic environment adds the variable of moisture to the ear canal. Individuals who have recurring ear canal infections require evaluation by an ear specialist to identify possible remediable problems that can trigger infection.

In children, there is also a greater risk of the infection invading the middle ear cavity, due to multiple anatomical and physical considerations such as the shape of the Eustachian tube, greater susceptibility to middle ear infections, and recurrent upper respiratory tract infections (URTI). Recurring middle ear infections in children lead to fluctuating conductive hearing loss, which results in multiple speech and language problems such as misarticulations, unclear speech sound production, poor expressive language skills relative to chronological age, and deficits in higher auditory processes.

Need for Study:

To assess the awareness of ear infections among the swimmers due to their greater reported prevalence.

To assess the awareness of ear infections among the parents and caregivers of children who are swimmers as they are at higher risk for developing middle ear infections.

To improve awareness about the use of ear protective devices during swimming, about the infections that can result from swimming, and about the steps needed to prevent the aforementioned deleterious effects of fluctuating hearing loss.

As audiologists, to try to prevent the occurrence of middle ear infections by creating awareness about them and about the necessary precautionary steps to be taken when swimming is a regular hobby, thereby guiding swimmers and the parents/caregivers of swimmers.

Aim & Objectives:

Aim: To assess the awareness levels for ear infections among swimmers

Objectives:

To assess the awareness levels for ear infections among adult swimmers.

To assess the awareness levels for ear infections among parents or caregivers of child swimmers. To compare the level of awareness among the two groups.

Method:

Participants: Study was conducted on two groups:

Group I consisted of 50 adult swimmers, age range 25-40 years, with a mean age of 33 years, of whom 10 were females and 20 were males. All the participants were head immersion swimmers, who reported submerging their head or putting their face in the water while swimming. The average time spent swimming by all participants was 6-7 hours a week.

Group II consisted of 50 parents of children aged 5 to 10 years. The children swam for at least 6-7 hours a week, were trained in head immersion swimming, and practised it regularly.

Tool: The tool used for data collection was a questionnaire developed by the researchers, consisting of 15 questions, all of which were to be answered either Yes (1) or No (0). The questionnaire was validated by 5 audiologists and 5 ENT specialists.

The questions concerned awareness of ear infections, their symptoms, the appropriate referrals needed, and the use of ear protective devices while swimming. The maximum obtainable score was 15, and the scores were divided into 3 categories, listed below and illustrated in the sketch that follows the list.

0-5: Poor Awareness

6-10: Fair Awareness

11-15: Good Awareness
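A minimal sketch of this scoring rubric, applied to a hypothetical response pattern:

```python
# Sum 15 yes/no items scored 1/0 and bin the total into the three
# awareness categories defined above. The responses are placeholders.
def awareness_category(yes_no_answers):
    """yes_no_answers: list of 15 booleans (True = Yes)."""
    total = sum(yes_no_answers)
    if total <= 5:
        return total, "Poor Awareness"
    elif total <= 10:
        return total, "Fair Awareness"
    return total, "Good Awareness"

print(awareness_category([True] * 8 + [False] * 7))   # -> (8, 'Fair Awareness')
```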

Procedure: The procedure was explained to all participants of Groups I and II. Participants of Group I filled in the questionnaire on their own, while those of Group II were the parents of children who swam 6-7 hours a week. All swimmers had been swimming for at least 1 year in non-marine water using the head immersion technique. Other causes of hearing loss were excluded. Once the questionnaires were filled in, appropriate statistical tests were applied.

Results & Discussion:

The data collected from the participants consisted of scores out of 15; hence, descriptive statistics, including mean questionnaire scores and standard deviations (SD), were computed.

All participants completed the questionnaire (100%); a score out of 15 was calculated for each participant, and mean scores were then obtained for both groups.

Group I had a mean score of 8, suggestive of fair awareness, and Group II had a mean score of 7.5, also suggestive of fair awareness.

These mean awareness scores suggest that both groups had fair awareness of the ear infections associated with swimming.

It was also found that only 40 of the 100 participants used ear protective devices during swimming.

Using an unpaired t-test, the mean awareness levels were compared, and a statistically significant difference was found between the two groups (p=0.003).

Hence, comparing adult swimmers with the parents of child swimmers, the adult swimmers were slightly more knowledgeable about the ear infections associated with swimming.

Discussion

The results of the statistical analysis suggest that both groups had a fair level of awareness about ear infections associated with swimming, with the parametric test indicating slightly better awareness among the adult swimmers.

Most respondents showed poorer awareness on questions relating to the precautionary steps necessary to prevent ear infection and on questions about the possible side effects of ear infections. Thus, although adult swimmers are aware of the ear infections associated with swimming, there is a dearth of awareness about their side effects and the steps needed to prevent these infections from worsening.

Parents of child swimmers in Group II had fair awareness about infections associated with swimming; most negative responses were obtained on questions relating to the prevention of ear infections and the long-term effects of fluctuating hearing loss. This could be because fluctuating hearing loss usually has to recur repeatedly before its effect on speech and language skills becomes evident, and the questionnaire concerned children who had been swimming 6-7 hours weekly for a minimum of one year; parents of children who have been head immersion swimmers for a longer duration may have higher awareness of the long-term effects of fluctuating hearing loss.

Summary & Conclusion:

The level of awareness about ear infections associated with swimming is fair among adult swimmers aged 25-40 years and among parents or caregivers of child swimmers aged 5-10 years.

There is slightly better awareness among adult swimmers than among the parents of child swimmers, and the lowest awareness was observed for the long-term effects of fluctuating hearing loss and the steps for its prevention.

Hence, it is the responsibility of the audiologist to improve awareness among these swimmers and thereby prevent the deleterious effects of fluctuating hearing loss.


  Abstract – AP1049: A Comparative Study on Audiological Deficits in Rheumatoid Arthritis and Juvenile Arthritis Top


Raj Kumar1, N Banumathy2 & Noorain Alam3

1rajaslpkumar@gmail.com,2banupallav@gmail.com, &3noorain.aslp@gmail.com

1Post-Graduate Institute for Medical Education and Research, Chandigarh - 160012

Introduction:

Rheumatoid arthritis (RA) is a systemic autoimmune disease which may affect different organs, including cardiac, pulmonary, skin and ocular structures. Similarly, the auditory pathway might be influenced by an assortment of pathologies over the course of the illness. The auditory deficits may be attributed to middle ear or inner ear involvement (Guo et al., 2018). The middle ear may be affected at the synovial joints of the ossicular chain or due to the presence of rheumatoid nodules in the middle ear (Tilstra et al., 2015). The inner ear may be affected by vasculitis, neuritis or the deposition of immune complexes (Takatsu et al., 2005). Ototoxicity occurring in the course of treatment may also cause hearing deficits.

Need for Study:

There is a dearth of studies on audiological profiling in cases with juvenile rheumatoid arthritis (JRA) and RA, and few studies have compared the audiological deficits in JRA and RA. A battery of audiological tests was therefore needed in both populations to determine the type and degree of auditory deficits.

Aim & Objectives:

Aim of the study:

  1. To explore the audiological deficits in RA
  2. To explore the audiological deficits in JRA
  3. To compare the audiological deficits presented by individuals with RA and JRA


Method:

The experimental groups comprised newly diagnosed RA patients (14 males, 16 females) and JRA patients (23 males, 7 females), with 30 participants in each group. The age range in the RA group was 17-45 years (mean 31.53 years, SD ±3.83), while in the JRA group it was 5-16 years (mean 12.1 years, SD ±3.37). Control groups were age- and gender-matched with the experimental groups.

The audiological test battery included pure tone audiometry (including high frequencies), tympanometry, OAEs, and ABR. t-tests and MANOVA were applied to the data using SPSS 22 software.
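A minimal sketch of how such an analysis could be reproduced outside SPSS is shown below; the data file and column names (group, pta1, pta2, pta3) are hypothetical placeholders, not the study's actual variables.

```python
# Hedged sketch: the authors report t-tests and MANOVA run in SPSS 22; this shows an
# equivalent analysis in Python. File and column names are hypothetical placeholders.
import pandas as pd
from scipy import stats
from statsmodels.multivariate.manova import MANOVA

df = pd.read_csv("thresholds.csv")          # hypothetical file: one row per participant

# Unpaired t-test on a single dependent variable (e.g., PTA1) between RA and control
ra = df.loc[df["group"] == "RA", "pta1"]
control = df.loc[df["group"] == "control", "pta1"]
print(stats.ttest_ind(ra, control))

# MANOVA across the three pure tone averages with group as the factor
manova = MANOVA.from_formula("pta1 + pta2 + pta3 ~ group", data=df)
print(manova.mv_test())
```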

Results & Discussion:

1. PURE TONE AUDIOMETRY

A statistically significant difference (p<0.05) was observed between the RA and control groups, as well as between the JRA and control groups, for air conduction thresholds at frequencies from 250 Hz to 16 kHz in both ears. The RA and JRA groups had significantly higher pure tone averages (PTA1, PTA2, and PTA3) than the control group. In the JRA group, PTA1 and PTA2 indicated minimal hearing loss and PTA3 indicated mild hearing loss.

The RA group showed significantly elevated thresholds (p<0.05) compared with the JRA group in both ears at 4 kHz, 12 kHz, and 16 kHz, in the right ear at 4 kHz, and in the left ear at 8 kHz.

Thus, the findings of the present study showed that both the RA and JRA groups had hearing deficits on both conventional and high frequency audiometry (low, mid, and high frequencies). These findings are indicative of middle ear abnormalities at the speech frequencies and of high frequency loss due to hair cell damage.

2. TYMPANOMETRY

Type A tympanograms were observed in both ears of all subjects in the RA, JRA, and control groups, except for three JRA subjects who had type As tympanograms.

3. OTOACOUSTIC EMISSIONS (OAES)

TEOAE: In the RA group, TEOAEs were present in 66% (20 subjects) of right ears and 73% (22 subjects) of left ears, while they were absent in 33.3% (10 subjects) of right ears and 36.6% (8 subjects) of left ears.

In the RA group, significantly lower TEOAE SNR values were observed at all frequencies compared with the control group, except at 1 kHz in the right ear and 3 kHz in the left ear.

DPOAE: A significant difference was found between the RA and control groups in both ears at 375 Hz, 499 Hz, and 3991 Hz, in the right ear at 2000 Hz and 5469 Hz, and in the left ear at 1409 Hz. The values were lower for the RA group at these frequencies.

In the JRA group, slightly lower TEOAE SNR values were observed at all frequencies compared with the control group. A significant difference between the JRA and control groups was observed in the left ear at 1 kHz and 3 kHz and in the right ear at 4 kHz.

DPOAE: A significant difference was found between the JRA and control groups in both ears at 5649 Hz, in the right ear at 375 Hz, 499 Hz, 3991 Hz, and 5469 Hz, and in the left ear at 1003 Hz. The values were lower for the JRA group at these frequencies.

On comparing RA and JRA groups on OAE measures:

TEOAE: Significantly lower SNR values were observed in the RA group in both ears at 1.5 kHz and 3 kHz and in the right ear at 2 kHz. DPOAE: A significant difference between the two groups was found at 1003 Hz in the left ear and 2000 Hz in the right ear, with lower values for the RA group at these frequencies.

4. AUDITORY BRAIN STEM RESPONSE (ABR).

No statistically or clinically significant differences were observed among the three groups on any of the ABR measures.

Discussion: The test results are similar to those of many previous studies, which suggest an abnormality of the auditory system in the RA and JRA populations. The low and high frequency hearing loss on pure tone audiometry may be attributed to synovial involvement of the middle ear joints (Raut et al., 2001).

The 'As' type tympanogram in three JRA subjects may be due to inflammation of the synovial membrane followed by repair processes, which can decrease the mobility of one or both middle ear diarthrodial joints, as occurs in other parts of the body. The reduction in TEOAE SNR values may reflect multiple factors involved in neural degeneration that reduce the active motion of the tectorial membrane.

Our study did not find any abnormality in the absolute, interpeak, or interaural latencies in the RA and JRA groups. This may indicate that the auditory deficit does not affect the lower brainstem in newly diagnosed RA and JRA subjects.

Summary & Conclusion:

We may conclude that there is an auditory deficit in RA and JRA patients, as seen across the different audiological test results. Both groups showed elevated hearing thresholds (minimal to mild hearing loss) on pure tone audiometry across the low, mid, and high frequencies. In some (10%) of the JRA subjects, a middle ear abnormality was observed in the form of an 'As' type tympanogram. Also, even though TEOAEs were present in both the RA and JRA groups, the SNR values were significantly lower than in the control groups. The auditory deficits were greater in the RA group than in the JRA group. The normal ABR findings show that the auditory deficits did not affect the auditory system at the lower brainstem level in either group.


  Abstract – AP1052: Social and Emotional Handicap of Persons with Auditory Neuropathy Spectrum Disorder (ANSD): Measures of HHIA: S Top


Mugdha Manoj1, Kamat Disha2, Mohammad Shamim3 & Rathna Kumar S. B.4

129mugs@gmail.com,2dishastar1@gmail.com,3msansari5000@yahoo.com, &4sarathna@yahoo.co.in

1Ali Yavar Jung National Institute of Speech and Hearing Disabilities (Divyangjan), Mumbai - 400050

Introduction:

Auditory Neuropathy Spectrum Disorder (ANSD) is a term used to describe individuals with auditory disorders due to dysfunction of the synapse between the inner hair cells and the auditory nerve, and/or of the auditory nerve itself. Unlike individuals with sensory hearing loss, who show clinical evidence of impaired outer hair cell (OHC) function, individuals with ANSD show clinical evidence of normally functioning OHCs. A post-mortem examination of one patient with this aetiology of ANSD revealed preserved cochlear hair cells, a decreased number of spiral ganglion cells, and extensive degeneration of the peripheral processes in the residual axons. The proximal portion of the auditory nerve showed axonal loss and incomplete remyelination at the entrance to the brainstem. The hearing loss in ANSD can vary from normal to profound. ANSD is typically characterized by an abnormal or absent ABR with preserved OAEs, indicating intact outer hair cell functioning. The diagnosis of ANSD involves ABR, OAEs and/or cochlear microphonics, and absent acoustic reflexes. The etiology of ANSD is multifactorial and includes genetic, congenital, and acquired conditions. Both syndromic (Charcot-Marie-Tooth syndrome and Friedreich's ataxia) and non-syndromic genetic forms of ANSD (DFNB9/OTOF; Varga et al., 2003; Yasunaga et al., 1999) have been reported. Reported risk factors include prematurity, hyperbilirubinemia, ototoxic drug exposure, and various neurological disorders such as mitochondrial disease.

Need for Study:

Individuals with ANSD have difficulty understanding speech, comprehending speech in the presence of background noise or a competing message, and understanding speech when the speaker's back is turned, and they are often accused of being 'selective listeners'. The impact of ANSD may affect a person's day-to-day functioning, mental state, social behaviour, communication, economic status, and work progress. Although much has been learned about ANSD, the selection of appropriate treatment strategies remains a difficult task. One reason for the lack of proper rehabilitation strategies is the scarcity of studies evaluating the social and emotional effects in individuals with and without ANSD. It is hypothesized that individuals with ANSD experience more negative consequences than individuals with sensorineural hearing loss. Therefore, this study was conceptualized to measure the perceived handicap and its level in persons with ANSD. Such a study may help us understand the social and emotional sequelae of ANSD, which in turn may guide better rehabilitation strategies and appropriate referral for handling the social and emotional issues in ANSD.

Aim & Objectives:

The purpose of this study is to:

  1. Assess the level of emotional and social handicap in sensorineural hearing loss cases.
  2. Assess the level of emotional and social handicap in ANSD cases.
  3. To compare the level of social and emotional handicap in patients with ANSD and patients with sensorineural hearing loss.


Method:

This survey involved patients who reported to AYJNISHD(D) during 2018-19. Twenty cases of ANSD and 20 cases of sensorineural hearing loss were assessed. Among the 20 patients with ANSD, 5 had mild hearing loss, 4 had moderate hearing loss, 8 had moderately severe hearing loss, and 3 had profound hearing loss. Among the 20 SNHL cases, 3 had mild hearing loss, 4 had moderate hearing loss, 2 had moderately severe hearing loss, 8 had severe hearing loss, and 3 had profound hearing loss. In all 40 patients, the hearing loss was post-lingual. Audiological tests including pure tone audiometry, speech awareness testing, speech discrimination scores, OAE, and ABR were carried out in all patients. Immittance audiometry and reflexometry were done for all patients to verify middle ear functioning.

The HHIA-S in English, Marathi, or Hindi was used according to the language preference of each patient. It is a 10-item self-assessment scale consisting of two subscales (emotional and social/situational), with 5 items in the social subscale and 5 items in the emotional subscale. The patient fills out the scale by responding Yes (4 points), Sometimes (2 points), or No (0 points). A total score of 0-8 suggests no hearing handicap, a total score of 10-24 suggests mild-moderate hearing handicap, and a total score of 26-40 suggests significant hearing handicap.
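A minimal scoring sketch for the HHIA-S as described above is given below; the function and variable names are ours, and the example responses are hypothetical.

```python
# Minimal scoring sketch for the HHIA-S as described above: 10 items, each answered
# "yes" (4 points), "sometimes" (2) or "no" (0); totals of 0-8, 10-24 and 26-40 map to
# no, mild-moderate and significant handicap respectively. Names are illustrative.
POINTS = {"yes": 4, "sometimes": 2, "no": 0}

def hhia_s_score(responses):
    """responses: list of 10 answers, each 'yes', 'sometimes' or 'no'."""
    if len(responses) != 10:
        raise ValueError("HHIA-S has exactly 10 items")
    total = sum(POINTS[r.lower()] for r in responses)
    if total <= 8:
        category = "no handicap"
    elif total <= 24:
        category = "mild-moderate handicap"
    else:
        category = "significant handicap"
    return total, category

print(hhia_s_score(["yes", "no", "sometimes", "yes", "no",
                    "sometimes", "yes", "no", "no", "sometimes"]))  # -> (18, 'mild-moderate handicap')
```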

Results & Discussion:

In the 20 SNHL patients, those with mild and moderate hearing loss showed no handicap on the HHIA-S, those with moderately severe and severe hearing loss showed moderate handicap, and those with profound hearing loss showed significant handicap. In the 20 ANSD patients, those with mild hearing loss showed mild-moderate hearing handicap. Of the 4 patients with moderate hearing loss, 3 showed moderate handicap and 1 showed significant hearing handicap. All the patients with moderately severe and profound hearing loss showed significant hearing handicap. Parametric tests: the unpaired t-test was performed to compare the overall HHIA scores, social scores, and emotional scores between Group I and Group II; the p values obtained were significant (0.023, 0.011, and 0.016 respectively), suggesting a significant difference between the two groups in the perception of handicap. This finding reveals that persons with ANSD perceive a handicap that is disproportionate to their degree of hearing loss in comparison with persons with sensorineural hearing loss of cochlear origin. This trend was also seen in the social and emotional domains when the data were analysed separately.

Even when the degree of hearing loss is the same in sensorineural and ANSD patients, the latter experience greater social and emotional problems. Underlying psychosocial problems often accompany hearing loss, but they are seen to a greater extent in patients with ANSD. Although much has been researched about ANSD, the emotional and social problems faced by these individuals have remained relatively ignored. The rehabilitation strategies used for sensorineural hearing loss patients are not very effective for ANSD patients due to these very issues. Other psychological issues such as anxiety, depression, and stress are also commonly seen in ANSD patients.

Summary & Conclusion:

Before selecting a suitable rehabilitation strategy, the clinician must evaluate the psychological status of the patient. The emotional and social problems faced by the patient should be considered, and the patient should get the help he or she needs. Only then will the rehabilitation strategy be effective and lead to a better quality of life for the patient.


  Abstract – AP1053: Development and Validation of Automatic dichotic listening test in Indian English Top


Mayur Bhat1, Hari Prakash P2 & Varsha U3

1bhatmayur0@gmail.com,2hari.prakash@manipal.edu, &3varshau63@gmail.com

1Manipal College of Health Professions, Manipal, 576104

Introduction:

The dichotic listening test involves the simultaneous presentation of two different stimuli, one to each ear. The stimulus in the right ear reaches the left hemisphere directly through the dominant pathway, resulting in a Right Ear Advantage (REA) that reflects hemispheric asymmetry for speech stimuli. In the classical test, the participant is asked to repeat whatever is heard, irrespective of the ear. However, there are several modifications, both in the stimulus used (delay, phonological relation, etc.) and in the participant's involvement (forced recall and free recall). All of these modifications have been shown to influence dichotic performance, especially attention. By directing attention to a specific ear, it is possible to assess selective attention (right attended) and executive function (left attended) (Hugdahl, 2003). Other modifications, such as delay and phonological relevance, are less explored.

Need for Study:

Several studies in the dichotic listening literature have examined the lag effect, free recall, focussed attention, etc. separately. To date, no study has used a single dichotic test that includes all of the above conditions, and there is therefore large variability in results across studies due to differences in the method of preparation, the language, and the stimuli used. To control this variability, there is a need to develop and validate a single dichotic test that includes all the conditions using the same list of words. It is also important to know the effect of phonemic congruence (i.e., whether two words in a dichotic pair beginning with the same phoneme facilitate each other or interfere) on dichotic listening scores, since no studies including this condition have been reported. Hence, there is an overall need to develop a software version of the dichotic listening test that serves as a single platform for carrying out all the variations using the same repertoire of tokens, and to explore the effect of phonemic congruence.

Aim & Objectives:

To develop and validate an automated software version of dichotic listening test in Indian English.

OBJECTIVES

  1. Development and validation of the recorded dichotic word list and its variants
  2. Development and testing of the dichotic test on the MATLAB platform using a GUI.


Method:

Stimulus preparation

Phonemically related and unrelated English word pairs were selected and recorded from a male English speaker using a standard microphone. After editing and validating the recordings, equal numbers of phonemically related and unrelated dichotic stimulus pairs were prepared in Adobe Audition.

Software version

A graphical user interface was generated in MATLAB, and the following conditions were created for equal numbers of phonemically related and phonemically unrelated dichotic pairs (180 pairs): free recall, right delay, left delay, right attention, left attention, right attention-right delay, left attention-left delay, left attention-right delay, and right attention-left delay.
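Although the original test was implemented as a MATLAB GUI, the sketch below illustrates in Python how a single dichotic trial of this kind could be assembled, with an optional onset delay in one ear and an attention cue; the beep parameters and helper names are assumptions, and only the 200 ms delay value is taken from the study description.

```python
# Hedged sketch of one dichotic trial: two word recordings are placed on separate
# channels, an optional onset delay is applied to one ear, and an attention cue (beep)
# may precede the stimulus. The 1 kHz beep and helper names are illustrative.
import numpy as np

FS = 44100  # sampling rate in Hz

def beep(duration=0.1, freq=1000.0):
    t = np.arange(int(duration * FS)) / FS
    return 0.5 * np.sin(2 * np.pi * freq * t)

def make_dichotic_trial(right_word, left_word, delay_ear=None, delay_s=0.2, attend_ear=None):
    """right_word/left_word: 1-D numpy arrays; returns an (N, 2) stereo array."""
    delay = np.zeros(int(delay_s * FS))
    right = np.concatenate([delay, right_word]) if delay_ear == "right" else right_word
    left = np.concatenate([delay, left_word]) if delay_ear == "left" else left_word
    n = max(len(right), len(left))
    stereo = np.zeros((n, 2))
    stereo[:len(left), 0] = left          # channel 0 -> left ear
    stereo[:len(right), 1] = right        # channel 1 -> right ear
    if attend_ear is not None:            # prepend a cue beep in the attended ear only
        cue = np.zeros((len(beep()), 2))
        cue[:, 0 if attend_ear == "left" else 1] = beep()
        stereo = np.vstack([cue, stereo])
    return stereo

trial = make_dichotic_trial(np.random.randn(FS // 2), np.random.randn(FS // 2),
                            delay_ear="right", attend_ear="left")
```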

Procedure

Participants

Eighteen English-speaking individuals in the age range of 18-50 years with normal hearing sensitivity participated in this cross-sectional study. The participants were selected based on the inclusion and exclusion criteria.

Task

The stimuli were presented through headphones, and the participant was instructed to recall what was heard only from the ear in which the stimulus was preceded by a beep tone (indicating the attention condition), or otherwise from both ears.

Scoring

The right ear scores, left ear scores and the double correct scores for phonemically related and unrelated conditions were calculated separately.

Results & Discussion:

Repeated measures ANOVA with 3 levels of attention (free recall vs right attention vs left attention), 3 levels of delay (no delay vs right delay vs left delay), 2 levels of phonological constraints (phonologically related vs unrelated) and 2 levels of ear performance (right ear vs left ear) was carried out on dichotic scores to determine the main effect and the interaction effects.

The results indicated significant main effects of ear (F(1,17) = 29.864, p = 0.001, ηp² = 0.637), attention (F(1.16, 19.836) = 146.679, p = 0.001, ηp² = 0.896), delay (F(2,34) = 6.747, p = 0.003, ηp² = 0.284), and phonological relevance (F(1,17) = 9.022, p = 0.008, ηp² = 0.347) on dichotic scores. Overall, right ear scores were better than left ear scores, attended ear scores were better than unattended ear scores, and scores were better for phonologically related dichotic pairs than for unrelated pairs. In terms of delay, there was a significant increase in the score of the delayed ear.
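As a quick check on the reported effect sizes, partial eta squared for a repeated measures effect can be recovered from the F ratio and its degrees of freedom as ηp² = F·df1 / (F·df1 + df2); the short sketch below verifies this for two of the effects above.

```python
# Verify the reported effect sizes from F and degrees of freedom:
# eta_p^2 = (F * df1) / (F * df1 + df2)
def partial_eta_squared(F, df1, df2):
    return (F * df1) / (F * df1 + df2)

print(round(partial_eta_squared(29.864, 1, 17), 3))   # -> 0.637, matching the ear effect
print(round(partial_eta_squared(6.747, 2, 34), 3))    # -> 0.284, matching the delay effect
```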

Further, there was a significant interaction between attention and ear (F(2,34) = 389.664, p = 0.001, ηp² = 0.958): right ear scores were better in the free recall condition, but in the forced recall conditions the better scores were seen in the attended ear, irrespective of whether it was the right or the left. Similarly, there was a significant interaction between delay and ear (F(2,34) = 23.487, p = 0.001), with the REA in free recall as the baseline: right ear scores increased in the right delay condition, and left ear scores increased in the left delay condition, although this increase was comparatively smaller than in the right delay condition.

DISCUSSION

The main aim of the study was to develop and check the feasibility of a fully automated DL test and its variants. The overall results of the free recall condition showed a clear REA, indicating that the given list and condition are adequate for eliciting the dichotic listening process. This is explained by Kimura's structural model (1961), which assumes that dichotic listening is related to brain asymmetry through the predominant contralateral auditory pathways, resulting in greater representation of verbal stimuli in the language-dominant left hemisphere. Using the same test and list, we evaluated the attentional modulation of dichotic scores in the same participants. The results showed that the attended right ear scores were larger than the free recall right ear scores, and this difference was reversed to the left ear in the left attended condition. This phenomenon was previously explained by Hugdahl et al. (2000, 2009) and is attributed to the different cognitive processes underlying the forced attention paradigm: selective attention and executive function.

The next effect assessed was delay; in this study there was a constant delay of 200 ms in either the right or the left ear. This delay is expected to modulate the right ear advantage such that the competition from dichotic rivalry is released and the scores of the delayed ear increase. The current results demonstrate a similar trend; however, the increase in left ear scores induced by the left delay was marginal compared with the increase in right ear scores induced by the right delay. In other words, even with a left delay, the right ear scores remained on par with the left ear scores. This pattern is the same as in previous studies and reiterates that the REA is resistant to delay and to release from dichotic rivalry.

Phonological constraints showed a main effect in which phonologically related words had larger scores than unrelated words. The fact that phonology did not show any interaction effect indicates that this process simply enhances the existing effects. This is interesting and needs to be explored further.

Summary & Conclusion:

To the best of the authors' knowledge, this is the first test that combines attention, delay, and phonological constraints in a random order in automated software. If validated in disordered populations, this test would be of great clinical importance.


  Abstract – AP1054: A Survey on Awareness of Universal Newborn Hearing Screening in Nurses Top


Archana Rai S

archana.rai04@gmail.com

1NITTE Institute of Speech and Hearing, Mangalore - 575018

Introduction:

Hearing loss in infants can have a severe impact on speech and language development and on all related domains, such as interpersonal communication, academic skills, economic independence, and behavioral and psychosocial development. About 3 out of every 1000 babies are born with hearing loss, making it the most common congenital disability. Universal Newborn Hearing Screening (UNHS) is a protocol for identifying hearing loss in infants, advocated in 1999 by the American Academy of Pediatrics. The main objective of UNHS is to identify infants with hearing impairment at the earliest and provide the necessary intervention. The Indian Academy of Pediatrics has also recommended standard guidelines for newborn hearing screening, and the Government of India launched the National Programme for Prevention and Control of Deafness in 2006.

A multidisciplinary team plays a crucial role in the early identification of hearing loss and intervention; it includes the pediatrician, nurses, otolaryngologist, audiologists, and technicians (Achal et al., 2014). Among these, nurses act as facilitators and a point of direct contact for the parents and the other professionals involved in early identification. They can educate and counsel expectant mothers, as well as mothers in the post-partum period, on the need for and importance of UNHS, the steps followed, early identification and intervention, and timely follow-up (Ravi et al., 2016). Hence it is of the utmost priority that nurses have information about hearing loss in the pediatric population, its early identification, and rehabilitation.

Need for Study:

An awareness and knowledge survey aims to understand what people know about specific issues and how they react to them. Based on this information, the audiologist can plan awareness programs for nurses for the benefit of infants with hearing impairment during the early stages of life, for a better prognosis. Hence it is essential to assess the awareness and knowledge of UNHS among nurses and their sources of information about the various steps carried out during UNHS.

Aim & Objectives:

The present study aims at identifying the awareness and knowledge of UNHS in nurses.

Method:

The study focused on identifying the awareness and level of knowledge that nursing students and professionals possess about identifying hearing-related problems through newborn hearing screening. A questionnaire was developed that included general demographic details and more specific questions aimed at obtaining details about awareness and knowledge of UNHS among nurses. A total of 19 questions were framed, of both subjective and objective type, and were subcategorized into awareness of and knowledge about UNHS. The questionnaire was given for validation to 11 audiologists and speech-language pathologists, and their corrections and suggestions were incorporated. The finalized questionnaire was circulated among nursing students and professionals via Google Forms; to hasten the process, hard copies were also distributed. All responses were entered into a spreadsheet, and descriptive statistical analysis was carried out.
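A hedged sketch of the descriptive analysis described above is shown below, assuming the consolidated responses are stored in a spreadsheet; the file and column names are hypothetical.

```python
# Hedged sketch of the descriptive analysis: responses collected via Google Forms and
# hard copies, consolidated in a spreadsheet, can be tabulated with pandas.
# The file name and column names are hypothetical placeholders.
import pandas as pd

responses = pd.read_excel("unhs_survey.xlsx")             # consolidated response sheet

# Proportion of respondents aware of UNHS
aware_pct = responses["aware_of_unhs"].value_counts(normalize=True) * 100
print(aware_pct.round(1))

# Awareness broken down by work setting and years of experience
print(pd.crosstab(responses["work_setting"], responses["aware_of_unhs"], normalize="index"))
print(responses.groupby("experience_band")["knowledge_score"].describe())
```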

Results & Discussion:

The study focused on exploring the awareness and knowledge of UNHS among nurses in Dakshina Kannada district. Of the 19 questions, five evaluated awareness of UNHS and ten probed the depth of knowledge among the respondents who were aware of UNHS. A total of 272 responses were obtained and analyzed. Among the 272 participants, 178 (65%) were aware of UNHS and 94 (35%) were not, indicating a fairly high level of awareness among nursing professionals. Awareness of UNHS was greater among nurses who worked in a setup where UNHS is practiced, and awareness and knowledge of the program increased with experience in the field.

Of the 178 respondents who were aware of UNHS, 76% knew that UNHS is a protocol for hearing assessment in children. However, despite being aware of UNHS, 24.15% of respondents assumed UNHS to be a vaccination for children or a disorder of children. The majority of participants (70%) knew that an audiologist performs newborn hearing screening, and 71% knew that sedation is not required for infants during UNHS. Of the 272 nurses who participated, 36 (20%) had been part of administering UNHS, either by observing the procedure or by administering the tests on newborns.

DISCUSSION:

The successful implementation of any health-related program lies in the appropriate knowledge and attitude of the professionals involved. The present study focused on identifying the awareness and knowledge of UNHS among nurses. The nursing professionals were not only aware of UNHS but also understood that it is a protocol for newborn hearing screening. These largely positive responses were obtained because 38% of respondents had studied UNHS as part of their syllabus, and 81% reported that their work sector performs UNHS. This also makes it evident that not all clinics or hospitals carry out UNHS on a regular basis. Although the majority of individuals were aware of UNHS, their knowledge about UNHS and the associated counselling was inadequate. This might be due to a lack of exposure to UNHS or to the limited information available in the syllabus; of the 178 aware respondents, only 36 (20%) had administered UNHS either actively or passively, which supports this. Even though 65% of the respondents were aware of UNHS, only 55% of them knew the terms used in interpreting the results. A lack of knowledge about interpreting the results can mislead parents or caregivers during counselling and cause unnecessary anxiety. Therefore, it is very important for nurses to gain knowledge about UNHS, which in turn will help in encouraging parents and provide the opportunity for successful follow-up programs. Follow-up plays a key role in the success of UNHS, which can be achieved with a well-planned UNHS program and appropriate counselling.

Summary & Conclusion:

The current study aimed to identify the awareness and knowledge about UNHS among nurses. It was found that, although the majority of respondents are aware of UNHS, they lack knowledge about the procedure and the interpretation of its results. Hence there is a strong need to spread awareness and educate nurses on the topic. The information in the syllabus can be made more informative, focusing mainly on early identification and intervention, and workshops, seminars, and camps can be conducted by professionals working in the field of audiology. Government and private health care sectors should make UNHS mandatory across the health care sector and conduct programs to spread awareness and knowledge about infant hearing screening among nurses. The limitations of the present study are that the Google Forms questionnaire was available only to those who had access to it and that, since it used multiple-choice questions, there is a high chance of guessing the correct answer.


  Abstract – AP1057: Unilateral Selective Inner Hair Cell Damage: A Case Study Top


Mehulla Jain1, Shana Yasmin2 & Chandni Jain3

1mehullaj@gmail.com,2shanayasmin97@gmail.com, &3chandni_j_2002@yahoo.co.in

1All India Institute of Speech and Hearing, Mysuru - 570006

Introduction:

According to the World Health Organization, hearing loss is one of the six leading causes of the global burden of disease. There are two main types of hearing loss: sensorineural and conductive. Sensorineural hearing loss results from damage to the inner ear and/or auditory nerve. In 95% of sensorineural hearing loss cases the damage is to the inner ear (sensory); damage to the auditory nerve (neural) is rare and cannot be differentiated from sensory losses by clinical symptoms alone. Unilateral hearing loss causes many hearing problems throughout the patient's life.

Need for Study:

Differential diagnosis of sensorineural hearing loss is essential, as a wide variety of pathologic processes, including damage to the outer hair cells (OHCs) or inner hair cells (IHCs), a tumour of the auditory nerve, and auditory dyssynchrony, among others, may be the cause of the hearing loss. Thus, a test battery approach involving various diagnostic tests is usually used during the initial evaluation, including pure tone audiometry, acoustic reflex testing, imaging, and auditory brainstem response testing (Reiss & Reiss, 2000).

Aim & Objectives:

The present case study highlights the importance of the test battery approach for determining the exact site of lesion in a client with unilateral hearing loss due to selective inner hair cell damage.

Method:

A child aged 14 years 9 months reported to the Department of Audiology with complaints of reduced hearing sensitivity in the left ear and difficulty hearing speech from a distance for the past three months. A detailed interview with the client and her guardian revealed that the condition was acquired and static in nature, with no other otological complaints. The client had earlier been diagnosed in a private hospital as having minimal hearing loss in the right ear and severe sensorineural hearing loss in the left ear with (?) Auditory Neuropathy Spectrum Disorder (ANSD), based on otoacoustic emissions (OAEs) and auditory brainstem responses (ABR). Routine audiological evaluations, including pure tone audiometry, speech audiometry, and immittance evaluation, were performed in the Department of Audiology. Pure tone audiometry revealed normal hearing sensitivity in the right ear (pure tone average: 5 dB) and profound hearing loss in the left ear (masked PTA: 101.25 dB), with speech identification scores of 100% and 8% for the right and left ears, respectively.

Further, immittance evaluation was carried out, and the results indicated A-type tympanograms with acoustic reflexes present in both ears. The right ipsilateral and left contralateral acoustic reflexes were at normal levels, whereas the right contralateral and left ipsilateral acoustic reflexes were elevated at 500 Hz and 1000 Hz and absent at 2000 Hz and 4000 Hz. Transient evoked otoacoustic emissions (TEOAEs) and distortion product OAEs (DPOAEs) were present in both ears, indicating normal outer hair cell functioning bilaterally.

Further, ABR was carried out for threshold estimation and site-of-lesion testing. For site-of-lesion testing, a double-channel, click-evoked recording was done with two repetition rates, 90.1/s and 11.1/s, at 90 dB nHL. In the right ear, peaks I, III, and V were present at normal latencies; in the left ear, the ABR was not visualized at 90 dB nHL, but cochlear microphonics (CM) were present up to 5 ms. Hence, a provisional diagnosis of hearing sensitivity within normal limits in the right ear and profound hearing loss in the left ear with (?) retrocochlear pathology was made, and the client was recommended to return for follow-up after a neurological consultation.

CT scan was done as per neurological recommendation, and reports showed no significant abnormality in brain parenchyma, and bilateral temporal bones appeared normal. A follow up audiological evaluation was done after neurological evaluation to find out the exact site of lesion. The routine audiological evaluation showed similar findings on pure tone audiometry, speech audiometry, immittance evaluation, and OAEs.

In the follow-up evaluation, the ABR was recorded for both clicks and 500 Hz tone bursts (TB). The click ABR showed findings similar to the previous evaluation: peaks I, III, and V were present at normal latencies in the right ear, and the ABR was not visualized in the left ear at 90 dB nHL, although cochlear microphonics (CM) were present up to around 5 ms.

Further, a 500 Hz TBABR was done in the left ear, and peak V was visualized at 8.57 ms, indicating a sloping audiogram configuration. Auditory late latency responses (ALLR) were then recorded for clicks and 500 Hz TB. In the right ear, ALLR was obtained for both clicks and tone bursts at normal latencies. In the left ear, ALLR was not visualized for the click stimuli but was seen for the 500 Hz TB stimuli within normal limits, with latencies of 59.51 ms for P1, 84.50 ms for N1, and 153.20 ms for P2. After the second evaluation, it was concluded that the site of lesion in the present client is not neural, based on the CT scan findings and the ABR and ALLR in the left ear. Within the cochlea, the functioning of the outer hair cells appears normal, as indicated by the presence of OAEs. Therefore, by the method of exclusion, the possible site of lesion in the present case appears to be a selective loss of inner hair cells. A TEN test was done to confirm this, but the results were inconclusive, since TEN tests are reported to be inefficient for severe to profound losses, and for unilateral losses no method of contralateral masking has yet been established.

Results & Discussion:

The present case appears to have selective inner hair cell damage based on the findings of the various tests. The client had earlier been misdiagnosed as having ANSD due to the presence of CM and OAEs; however, as the acoustic reflexes and TBABR are present, the diagnosis of ANSD is ruled out. It has been reported in the literature that CM is not exclusive to ANSD but is also seen in sloping hearing loss; however, in the present client OAEs were present at all frequencies, so a diagnosis of sloping SNHL is also ruled out. Hence, the presence of OAEs and TBABR with a severe loss indicates that the site of lesion is confined to the IHCs. There are reports in the literature demonstrating that selective IHC damage is possible with normal OHC functioning (Prieve et al., 1991; Amatuzzi et al., 2001). Amatuzzi et al. (2001) reported selective inner hair cell loss in premature infants: hearing screening (ABR) was performed on NICU infants, and histological evaluation of the temporal bones was done for 15 non-survivors. The results showed bilateral selective OHC loss in two infants, bilateral selective IHC loss in three premature infants, and a combination of outer and inner hair cell damage in two infants. Thus, the present case appears to be a rare condition in which the OHCs are normal and only the IHCs are damaged.

Summary & Conclusion:

In light of the current findings and other reports in the literature, OAEs can be present in some cases with severe hearing loss. The presence of OAEs in such cases can be an indication of selective inner hair cell loss, and rehabilitation can be planned accordingly for these clients.


  Abstract – AP1058: Migration of Cochlear Implant Magnet after Head Trauma with and without Active Electrode Migration Top


Vanshika Vashishtha1 & Nirnay Kumar2

1vanshikavashishtha1997@gmail.com &2nirnaykeshree@gmail.com

1Sri Aurobindo Institute of Medical Sciences, Indore - 453555

Introduction:

A cochlear implant is an electronic device that replaces the function of an impaired inner ear. It consists of a current source and an electrode array implanted inside the cochlea to stimulate the surviving auditory nerve fibres. Outcomes with the implant are highly successful, with low complication rates. Physical trauma after implantation resulting in electrode array migration and/or misplacement, magnet pop-up or dislocation, wound infection, and device failure are the most likely sources of complications. Complication rates are low overall, with flap-related complications reported in 0.26 to 2.09% of cases and electrode array-related problems in 0.17 to 2.12% (Ying, Lin, Oghalai, & Williamson). Physical trauma to the head can damage the internal implant receiver, the electrodes, or the magnet, and electrode migration is most commonly observed. The structural effects on the implant and cochlea can be observed via radiological evaluation such as plain X-ray, CT scan, and/or MRI; however, these do not account for functional changes resulting from the trauma. Such functional changes can be assessed using Neural Response Telemetry (NRT), subjective techniques, and electrode impedance measurements, which help in assessing integrity within the cochlea.

Need for Study:

Physical trauma after cochlear implantation is a major concern and can sometimes be a medical emergency for the parents, the surgeon, and the audiologist, since it may produce structural changes needing medical intervention or functional changes needing intervention at the level of device mapping. Although the complication rate of physical trauma is low, its effects vary in degree and extent, both structurally and functionally. Intensive evaluation is therefore needed to understand the changes caused by trauma to the implant; changes in TNRT findings can indicate even mild changes resulting from physical trauma to the implant.

Aim & Objectives:

The study aims to identify an early indicator of functional changes caused by physical trauma after cochlear implantation, even with mild migration and/or displacement of the electrode array.

Method:

A total of 430 children underwent cochlear implantation surgery from 2013 to 2019 under the Rashtriya Bal Swasthya Karyakram (RBSK) and ADIP schemes at Sri Aurobindo Institute of Medical Sciences, Indore (M.P.). Of these 430 clients, two met with physical trauma leading to internal magnet pop-up, and these were studied in detail:

PARTICIPANTS:

Case 1: A male child aged 2 years 6 months came to SAIMS, Indore (M.P.) under the RBSK scheme and fulfilled all candidacy criteria for cochlear implantation under the scheme. CI surgery was performed on 30-11-2015 using a CI24 straight electrode array. On 12-05-2016 the child unfortunately met with an accident at school, falling onto the floor and landing on the right side of the head. The child reported ear pain, inability to hear sounds, and improper magnetic attachment of the external coil to the internal components. Post-traumatic ENT evaluation, plain X-ray, TNRT, and impedance measurements were performed.

Case 2: A male child aged 4 years came to SAIMS, Indore (M.P.) under the RBSK scheme and fulfilled all candidacy criteria for cochlear implantation under the scheme. CI surgery was performed on 26-05-2016 using a CI24 straight electrode array. On 09-10-2016 the child unfortunately met with an accident, falling from a wall and striking the right side of the head against another wall. The child reported swelling on the implanted side of the head, inability to hear sounds, and improper magnetic attachment of the external coil to the internal components. Post-traumatic ENT evaluation, plain X-ray, TNRT, and impedance measurements were performed.

PROCEDURE:

As soon as the children arrived at the hospital after the trauma, a plain X-ray scan, Neural Response Telemetry for all 22 electrodes, threshold estimation, and impedance measurement were performed.

APPARATUS:

For the NRT and impedance measurements, Custom Sound EP 5.1 software was used. The hardware consisted of a CP802 speech processor, a coil cable and coil with magnet, a programming interface (POD), and a connecting cable attached to a computer.

A Siemens Magnetom Symphony 1.5 T scanner was used for MRI and a Siemens Somatom Definition AS for CT; digital X-ray was also performed.

Results & Discussion:

Two of the 430 cochlear implant cases encountered physical trauma, giving a complication rate of 0.465% in the current study, whereas Ying, Lin, Oghalai, and Williamson reported complication rates in the range of 0.17 to 2.12%.

Comparison of the pre- and post-traumatic plain X-ray, CT and/or MRI findings, impedance findings, and NRT findings suggested the following:

In Case 1, the scans showed extrusion of three stiffening rings post trauma, along with changes in the loop formed by the ball electrode, indicating migration of the electrodes.

In Case 2, the scans showed displacement of the magnet with no electrode migration.

No significant difference was noticed between the pre- and post-trauma MP1+2 impedance values for electrodes 1 to 22.

Impedance values of the electrodes pre and post trauma were within the normal range. In the NRT evaluation, variations in NRT thresholds of 6 to 11 current levels during the first 15 months after cochlear implantation have been described as acceptable by Lai et al., Hughes et al., Thai-Van et al., and Ferrari. Beyond these acceptable changes, evident post-trauma changes forming a notch pattern were observed on three electrodes, indicating electrode migration.
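One way to operationalise this comparison, sketched below under our own assumptions (it is not the clinical software), is to flag electrodes whose post-trauma TNRT deviates from the pre-trauma value by more than the 6-11 current-level variation considered acceptable.

```python
# Illustrative sketch only: flag electrodes whose post-trauma TNRT differs from the
# pre-trauma value by more than the variation reported as normal during the first
# 15 months after implantation. Threshold choice and data are placeholders.
NORMAL_VARIATION_CL = 11   # upper bound of acceptable TNRT change (current levels)

def flag_migrated_electrodes(pre_tnrt, post_tnrt, limit=NORMAL_VARIATION_CL):
    """pre_tnrt/post_tnrt: dicts mapping electrode number (1-22) to TNRT in current levels."""
    flagged = {}
    for electrode, pre in pre_tnrt.items():
        post = post_tnrt.get(electrode)
        if pre is None or post is None:        # no measurable response on this electrode
            continue
        if abs(post - pre) > limit:
            flagged[electrode] = post - pre
    return flagged

# Hypothetical example: three adjacent electrodes show a "notch"-like deviation
pre = {e: 180 for e in range(1, 23)}
post = {**pre, 10: 198, 11: 200, 12: 197}
print(flag_migrated_electrodes(pre, post))     # -> {10: 18, 11: 20, 12: 17}
```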

On comparing the TNRT findings of Case 1 and Case 2, no significant changes were observed in Case 2, where magnet displacement occurred without active electrode migration, whereas changes were evident in Case 1, where magnet displacement occurred with active electrode migration, resulting in notch-patterned changes in the TNRT findings on three electrodes. When the T levels were also analysed in relation to the distance of the electrode from the modiolus, the NRT thresholds mirrored the T and MC levels, along with the radial distance of the spiral ganglion cells from the electrode.

Summary & Conclusion:

Physical trauma to the implant can easily be observed on a structural basis, but the resulting functional changes and device performance also need to be assessed. Changes in NRT findings can serve as an early indicator of affected device functionality, even with mild displacement and/or migration of electrodes.


  Abstract – AP1064: Exploring Barriers of Early Pediatric Cochlear Implantation Top


Jeeshna K1 & Ananya P C2

1jeeshnamedicalcollege@gmail.com &2ananyaslp96@gmail.com

1Government Medical College, Kozhikode - 673008

Introduction:

Early cochlear implantation (CI) brings many benefits for deaf children, including increased language ability, greater integration into the hearing world, and an overall higher quality of life. Currently, the age for CI has decreased to include implantation at 12 months of age or even earlier (Cosetti & Roland, 2010). The Joint Committee on Infant Hearing (2007) advised a universal newborn hearing screening program for the early identification of and intervention for hearing loss, and this has been widely implemented all over Kerala. Despite this, many children with congenital profound sensorineural hearing loss are not receiving cochlear implants in the first year of life. To implement better strategies that encourage earlier diagnosis and implantation, this study identified and analyzed the mean age of cochlear implantation and the barriers to early pediatric cochlear implantation in Kerala.

Need for Study:

There are ample research data in the literature on the age of implantation and its outcomes. Most children are implanted at an older age despite the many government-funded cochlear implantation programs in our country. This study was designed to identify and analyze the mean age of cochlear implantation and the barriers to early cochlear implantation at different levels, from screening to surgery, and to suggest corrective measures to overcome these barriers for better outcomes in children with cochlear implants.

Aim & Objectives:

  1. To explore the mean age of cochlear implantation under the government-funded scheme in Kerala
  2. To explore the barriers to early pediatric cochlear implantation under the government-funded scheme in Kerala.


Method:

Documented details of patients who had undergone cochlear implantation surgery at Govt. Medical College Kozhikode, Kerala, under Sruthitharangam, a state government free cochlear implantation program, from 1st January 2018 to 31st December 2018 were analyzed retrospectively. The parents of all 42 cochlear implantees participated in the survey. The mean age and the barriers to late implantation were analyzed at various levels: screening, confirmation of hearing loss, hearing aid fitting, and age at surgery. The barriers were classified into system delay, medical delay, parent-related delay, and other delay, and the delay at each level was identified by a questionnaire administered by team members.

Results & Discussion:

In this study, the mean age at hearing screening was 6 months, at confirmation of hearing loss 12 months, at hearing aid fitting 18 months, and at cochlear implantation 35 months. Twenty-three months elapsed between the diagnosis of hearing loss and CI surgery. This interval is much longer than in the Canadian cochlear implant program, where only 9.1 months elapsed between diagnosis and surgery (Fitzpatrick & Brewster, 2008).

Fifty percent of the children received cochlear implantation before 3 years of age, and the mean age of cochlear implantation in Kerala was also younger than 3 years, a finding similar to that in Canada (Fitzpatrick & Brewster, 2008). The most significant barriers at hearing screening and hearing aid fitting in Kerala were system delays, the delay in confirmation of hearing loss was due to medical delay, and surgery was mainly delayed due to natural calamities and the Nipah virus episode.

Summary & Conclusion:

These barriers show that children undergoing cochlear implantation under the government-funded scheme face delays in implantation. This study also confirms that a substantial number of children will continue to receive their first cochlear implant well beyond their first year of life unless the barriers at each level are identified and addressed to improve the outcomes of cochlear implantees. These barriers could be overcome with better government policies and programs, leading to earlier implantation.


  Abstract – AP1067: Asymmetry in P300 Responses for Speech and Music in Left Handed Individuals Top


Sakshi Sarda1, Disha Raut2, Mohammad Afzal3 & Rucha Vivek4

1smsarda98@gmail.com,2disharaut20@gmail.com,3afzalmohd78615@gmail.com, &4ruchaavivek@gmail.com

1School of Audiology and Speech language Pathology, Bharati Vidyapeeth (Deemed to be) University, Pune - 411030

Introduction:

There is a remarkable functional asymmetry between the two cerebral hemispheres in humans for motor functions, language, sensory perception, etc. According to the acoustic laterality theory, the left hemisphere is dominant in processing phonetic distinctions in speech stimuli (Mountcastle, 1972; Belin, Penhune & Zatorre, 2002), while the right hemisphere better processes musical sounds and timbre (Hugdahl & Tervaniemi, 2003; Jones, Longe & Vaz, 1997). It has also been found that the incidence of atypical language lateralization in normal left-handed and ambidextrous subjects is higher than in normal right-handed subjects (Hecaen, 1956).

Auditory Evoked Potentials (AEPs) elicited with complex tones may prove a useful tool for establishing functional lateralisation if linguistic and musical processes are consistently located in contralateral hemispheres (Byrne & Jones, 1998). The P300, which is generated by contributions from the auditory association areas, prefrontal cortex, thalamus, and temporo-parietal regions (Bottlender, Frodl-Bauch & Hegerl, 1999), is regarded as an indicator of attention and memory during stimulus processing and is hence widely used in studies of neurogenic and psychiatric cognitive dysfunction (Polich, 2004). Reduced P300 amplitude has been observed over both hemispheres in cases with unilateral damage to the temporo-parietal cortex (Knight & Yamaguchi, 1991), and the P300 has been related to parietal but not frontal lobe grey matter volume during effortful attention (Ford & Laura, 1994).

Need for Study:

A study by Altenmuller (1986) proposed that the hemispheric lateralization of musical abilities depends on handedness and musical training. Considering the sensitivity of P300 responses as a measure of information processing, and the differences observed in how the cerebral hemispheres perceive stimuli, the P300 could be an effective tool for determining whether cerebral dominance for speech and music stimuli shifts between right- and left-handed people. This study aims to understand cerebral laterality for both speech and music at the physiological level using the P300 and its correlation with behavioural dichotic measures.

Aim & Objectives:

The present study was undertaken to investigate the cerebral asymmetry for speech and non- speech stimuli in right-handed and left-handed individuals using two dichotic tests namely Dichotic CV and Dichotic digit test as behavioural measures and P300 as an electrophysiological measure.

Method:

The participants (age range 18-40 years) were divided into two groups of 15 and 16, respectively, as left-lateralized and right-lateralized on the basis of the laterality preference schedule (Venkatesan, 1993). All 31 participants had bilaterally normal hearing sensitivity (PTA < 15 dB HL) and good speech identification scores, without significant otological or neurological history. After determination of laterality preference, the participants performed the dichotic CV and dichotic digit tests in both free-recall (attend to both stimuli) and directed-recall (attend to the stimuli of a specific ear) paradigms. The dichotic tasks were administered in a sound-treated booth using a calibrated Madsen Electronics OB 922 dual channel audiometer at 40 dB SL (re: speech recognition threshold) via TDH 39 headphones as follows:

  1. Dichotic consonant-vowel test (Muthuselvi, Vanaja & Yathiraj, 2012): 30 pairs of syllables presented to the two ears, with the participants instructed to circle the two syllables perceived in a closed-set task.
  2. Dichotic Digit test in Marathi (Vanaja, 2009): two pairs of monosyllabic digits presented at a time.


The P300 was recorded using the Bio-logic Navigator Pro AEP system (Bio-logic Systems, Denmark) via TDH-49 earphones with a dual channel montage using the conventional 10-20 system (non-inverting: P3, P4; inverting: M1, M2; ground: Fpz). Two kinds of stimuli (speech and non-speech) were recorded in a soundproof environment and edited using Audacity software for amplification and normalization to remove unwanted noise and clicks. The speech stimuli were the syllables /da/ and /ba/ recorded in a male voice, with /da/ as the frequent stimulus and /ba/ as the infrequent. The non-speech stimuli were E major (frequent) and F major (infrequent) chords recorded on a piano (duration 150 ms), which have minimal spacing between them on the musical scale.

The subjects were seated comfortably in a sound treated room and were asked to avoid eye movements to reduce ocular artifacts. The subjects were instructed to count the number of infrequent stimuli heard. The ratio of frequent to infrequent stimulus was 4:1 and pseudo-random presentation was used. Multiple traces of 80-120 sweeps for frequent stimuli were recorded and the grand average waveforms were analysed for latency and amplitude.
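The sketch below illustrates how such an oddball sequence with a 4:1 frequent-to-infrequent ratio and pseudo-random ordering could be generated; the constraint that two infrequent stimuli never occur back to back is our assumption, not stated in the abstract.

```python
# Hedged sketch of an oddball sequence for P300: frequent-to-infrequent ratio of 4:1
# with pseudo-random presentation. The no-back-to-back-targets rule is an assumption.
import random

def oddball_sequence(n_infrequent=30, ratio=4, frequent="/da/", infrequent="/ba/"):
    sequence = []
    for _ in range(n_infrequent):
        block = [frequent] * ratio + [infrequent]
        random.shuffle(block)
        # avoid two infrequent stimuli in a row across block boundaries
        while sequence and sequence[-1] == infrequent and block[0] == infrequent:
            random.shuffle(block)
        sequence.extend(block)
    return sequence

seq = oddball_sequence()
print(len(seq), seq.count("/ba/") / len(seq))   # 150 trials, 0.2 proportion of targets
```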

Results & Discussion:

The Shapiro-Wilk test of normality revealed that the data did not follow a normal distribution; therefore, non-parametric measures were used for further analyses. Mann-Whitney U tests did not show a statistically significant difference between the groups on the behavioural dichotic measures or the LLR (p>0.05). A significant difference was obtained for the latency of the P300 elicited by the non-speech stimuli in the left ear (right hemisphere) (p=0.049), suggesting that discrimination of complex tones may be lateralized to the right hemisphere, which is more dominant in left-handed subjects.

The Dichotic Digit Test double correct scores correlated with the P300 amplitude and latency for the non-speech stimuli in the left ear, whereas the Dichotic Consonant-Vowel test scores did not show a consistent correlation with the auditory evoked potential measures. Responses to the speech stimuli did not yield significant correlations with the behavioural tests, whereas the evoked potential responses to the non-speech stimuli correlated significantly with the behavioural tests, especially the Dichotic Digit test. The amplitude of the P300 in the left ear (right hemisphere) elicited by the music chords showed the most pronounced correlations, with dichotic CV left (p=0.01), dichotic CV right (p=0.03), dichotic digit double correct (p=0.019), dichotic digit left (p=0.01), dichotic digit right directed (p=0.014), and dichotic digit left directed (p=0.036), which may be attributed to asymmetric processing of music stimuli in the right hemisphere.

The dichotic scores did not yield a significant group difference in the directed-recall paradigm, which may be due to attention, cognition, voluntary selective attention towards the stimulus, and a ceiling effect in the scores. P1 latencies in the right ear showed a significant difference between the groups, which may suggest preferential detection and encoding of tonal stimuli in the left hemisphere.

Our results suggest that language dominance may be similar in left- and right-handers, as no group difference was found on the behavioural tests. However, processing of non-linguistic stimuli uses resources based on hemispheric specialization, as seen in the correlation obtained between the P300 and the dichotic tasks. The results are in agreement with Lishman and McMeekan (1977), who reported significantly smaller ear differences in left-dominant and right-dominant left-handers, indicating that the lateralization of language may be independent of handedness.

Summary & Conclusion:

Sixteen right-handed and 15 left-handed subjects were tested on two dichotic listening tests, the dichotic CV and dichotic digit tests, and P300 responses were measured for both speech sounds and musical notes. Non-parametric tests revealed that the dichotic scores correlated with the amplitude and latency of the P300 for musical chords in the left ear. However, no significant differences between the two groups were found on the behavioural measures or the LLR responses, indicating that language dominance may not be atypical in left-handed individuals. Further research using tonal or musical stimuli may clarify resource allocation and cerebral asymmetry; future directions include studying the auditory processing of non-speech stimuli using physiological measures to compare the two groups.


  Abstract – AP1071: Cognition and Speech Understanding in Elderly Adults with Age Related Hearing Loss Top


Archisman Shubhadarshan1, Dipika Behera2 & Sushmit Mishra3

1subhamasp@gmail.com,2archismanshubhadarshan@gmail.com, &3sushmitmishra@gmail.com

1International Institute Of Rehabilitation Sciences And Research, Odisha, 751030

Introduction:

Research over the past 20 years has pointed to a general link between cognition and hearing in noise. In particular, measures of working memory predict speech performance in the presence of noise better than other cognitive tests. Working memory has been reported to predict 30-40% of the variance in speech understanding in noise with hearing aids.

Need for Study:

There is growing evidence that working memory plays an important role in speech perception and thus predicts the outcome of hearing aid fitting. However, these studies were conducted in western countries using foreign languages. There is a need to establish whether such findings can be replicated in India.

Aim & Objectives:

The aim of this study was twofold. First, a test of working memory capacity was translated into Odia and established. Second, it was explored whether the working memory test predicted speech perception scores in the presence of noise. This study will be important for other studies, e.g., those exploring the role played by cognition in hearing aid fitting.

Method:

Participants

We recruited 32 students, including 21 males, with a mean age of 18 years (SD 1.85 years, range 16-25 years). These subjects had normal hearing (thresholds better than 20 dB in both ears). We also recruited 21 older adults (18 males, 3 females) with a mean age of 69 years (SD 10.4 years, range 50-78 years). Participants who required vision correction for myopia or hyperopia wore their corrective lenses during testing. All were native Odia speakers with no reported psychological problems.

Material

Reading Span test:

The reading span test provided a measure of working memory capacity. The English version of the Reading Span Test, consisting of 57 sentences including three practice sentences, was translated into Odia by a linguist and verified by three individuals fluent in both English and Odia. The test material was shown to the participant on a computer screen. Each sentence consisted of three to four words. Each series consisted of three to six sentences presented in increasing series length. Half the sentences were coherent and half were absurd, spread randomly across the test. The participants were given 1.75 seconds at the end of each sentence to judge whether the sentence was semantically correct, responding 'Yes' if it was and 'No' if it was not. After each series of sentences, the participants were asked to recall the first word or the last word of each sentence.
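As an illustration of how such a reading-span series might be administered and scored, a minimal Python sketch is given below. The data structures, example sentences and the scoring rule (one point per correctly recalled target word) are assumptions for illustration, not the authors' test software.

```python
# Minimal sketch of reading-span scoring (illustrative; not the authors' software).
from dataclasses import dataclass
from typing import List

@dataclass
class Sentence:
    text: str           # sentence shown on screen
    is_sensible: bool   # ground truth for the semantic judgement
    target_word: str    # word (first or last) to be recalled later

def score_series(series: List[Sentence],
                 judgements: List[bool],
                 recalled_words: List[str]) -> dict:
    """Score one series: semantic judgements plus recall of target words."""
    correct_judgements = sum(
        j == s.is_sensible for j, s in zip(judgements, series)
    )
    targets = {s.target_word.lower() for s in series}
    recalled = sum(w.lower() in targets for w in recalled_words)
    return {"judgement_correct": correct_judgements, "words_recalled": recalled}

# Example: a three-sentence series (hypothetical sentences in English gloss)
series = [
    Sentence("The boy ate rice", True, "rice"),
    Sentence("The river sang loudly", False, "loudly"),
    Sentence("She reads a book", True, "book"),
]
print(score_series(series, judgements=[True, False, True],
                   recalled_words=["rice", "book"]))
# -> {'judgement_correct': 3, 'words_recalled': 2}
```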

Spondee and Phonetically Balanced (PB) word list.

Standardized spondee and PB word lists in Odia were used as the test material to determine the Speech Recognition Threshold (SRT) and Word Recognition Score (WRS), respectively. Each list consisted of 50 commonly used Odia words presented in the same order.

Procedure

All testing was carried out in a sound-treated room. First, pure tone audiometry was conducted. The Speech Recognition Threshold (SRT) and Word Recognition Score (WRS) were obtained using live voice, with the same speaker for all participants. Speech noise was introduced while obtaining the WRS, at three levels: a) 0 dB, b) -10 dB and c) -20 dB SNR (signal-to-noise ratio). Finally, the reading span test was administered.

Results & Discussion:

For the older adults, the mean pure tone average (PTA) was 36.26 dB HL (SD: 16.36; range: 22-83.5 dB HL), mean SRT was 42.5 dB HL (SD: 14.44; range: 30-80 dB HL), and mean WRS was 79.5% (SD: 8.38; range: 70-100%) for both ears. The mean reading span score was 21 (SD: 3.85; range: 16-31) for the young adults and 16 (SD: 4.34; range: 15-28) for the elderly adults. PTA correlated with SRT (r=0.97, p<0.01), WRS at 0 dB SNR (r=-0.89, p<0.01), WRS at -10 dB SNR (r=-0.84, p<0.01) and WRS at -20 dB SNR (r=-0.74, p<0.01). The reading span score correlated with WRS at 0 dB SNR (r=0.70, p<0.01), WRS at -10 dB SNR (r=0.9, p<0.01) and WRS at -20 dB SNR (r=1.0, p<0.01). A hierarchical regression model was used to analyse the WRS data at 0 dB SNR, -10 dB SNR and -20 dB SNR. It revealed that at 0 dB SNR, PTA accounted for 78% of the variance in WRS. For WRS at -10 dB SNR, PTA, age and reading span score together accounted for 83% of the variance, with reading span accounting for 78%. Finally, for WRS at -20 dB SNR, PTA, age and reading span score together accounted for 85% of the variance, with reading span accounting for 80%.
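A sketch of how such a hierarchical (blockwise) regression could be set up in Python with statsmodels is shown below; the simulated data, predictor block order and variable names are assumptions, not the study data or the authors' code.

```python
# Sketch of a hierarchical (blockwise) regression for WRS at one SNR
# (hypothetical data; illustrative only).
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 21
df = pd.DataFrame({
    "pta": rng.uniform(22, 84, n),           # dB HL
    "age": rng.uniform(50, 78, n),           # years
    "reading_span": rng.integers(15, 29, n)  # recall score
})
# Hypothetical WRS that depends on PTA and reading span plus noise
df["wrs"] = 100 - 0.5 * df["pta"] + 1.2 * df["reading_span"] + rng.normal(0, 4, n)

# Enter predictors block by block and track the change in R-squared
blocks = [["pta"], ["pta", "age"], ["pta", "age", "reading_span"]]
prev_r2 = 0.0
for predictors in blocks:
    X = sm.add_constant(df[predictors])
    model = sm.OLS(df["wrs"], X).fit()
    print(f"{predictors}: R^2 = {model.rsquared:.2f} "
          f"(delta R^2 = {model.rsquared - prev_r2:.2f})")
    prev_r2 = model.rsquared
```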

Discussion

The correlation of speech perception scores (WRS) with working memory capacity (reading span score) became stronger as the level of noise increased. This finding is along expected lines: as listening difficulty increased with noise level, reliance on working memory for speech perception increased. The most striking finding is that working memory explained 80% of the variance in speech perception scores at the highest noise level, whereas at the lowest noise level (0 dB SNR) the variance was explained mainly by hearing loss. The reliance on working memory for speech perception was greater in this study than in previous studies. This may be because the participants with hearing loss heard the stimuli unaided, whereas in previous studies the participants were fitted with hearing aids and therefore had improved audibility of the stimuli. Secondly, higher noise levels were used in this study than in previous studies, and the impoverished stimuli might have led to greater reliance on working memory. Thirdly, the material used to measure speech perception consisted of PB words, whereas other studies have used sentences as stimuli; PB words have low predictability and are therefore more difficult to guess. This is in line with Foo et al. (2007), who found a correlation between working memory and speech perception in noise for low-predictability sentences but not for high-predictability sentences. These findings suggest that a higher contribution of working memory can be expected when listening difficulty increases and the predictability of the stimulus material is low. The correlations of hearing loss with the speech measures are along expected lines. Similarly, as expected, the young adults had greater mean working memory capacity than the older adults. No correlation was observed between age and hearing loss, as people with varied degrees and etiologies of hearing loss participated in this study.

Summary & Conclusion:

The study demonstrated that working memory predicted speech perception in noise, which is in agreement with previous studies. The working memory test was translated into Odia, which will be helpful for further studies evaluating the cognitive processes involved in audiological management.


  Abstract – AP1072: Consonant Perception in Noise among Bilinguals: Effect of Native Language Top


Rudrasis Swain1, Mrinal Sinha2 & Suresh Thontadarya3

1rudrasis89@gmail.com,2suresh.sphg@outlook.com, &3s.audioprojects@gmail.com

1International Institute Of Rehabilitation Sciences and Research, Odisha - 751030

Introduction:

Speech perception is a complex mechanism that continues to be studied in depth. Identifying the phonetic identity of acoustic stimuli in the presence of background noise is challenging even for normal-hearing individuals. Various factors influencing this ability have been studied, and bilingualism seems to affect it negatively. Florence (1985) suggested that non-native listeners improved in their perception of English speech in noise as their years of language experience increased. Others have noted that subjects with higher proficiency in a second language show poorer scores in noise in their primary language. India is home to diverse languages differing in phonological content: phonotactic rules differ from language to language, as do the importance and relative frequency of occurrence of consonants. In English, perception of /s/ is important as it serves as a plural marker as well as an indicator of possession and action. Therefore, studying consonant perception in noise among native speakers of different languages is useful.

Need for Study:

Perception of consonants, specifically fricatives and nasals, has not been studied so far with respect to bilingualism.

Odia and Malayalam differ in their fricative and nasal phonemic inventories. Therefore, consonant identification of fricatives and nasals in these two groups was undertaken.

Aim & Objectives:

Bilingualism is a prevalent phenomenon to consider when studying language, perception or processing. Some literature does exist on the nature of the influence of bilingualism on perception of native and non-native speech.

However, bilingual exposure itself varies among subjects, and languages also differ on many levels. Therefore, the perception of fricatives and nasals presented in noise was studied in two groups of subjects whose primary languages, Odia and Malayalam, differ in common phonological rules and in the frequency and composition of fricatives. SINFA is a method used by Miller (1955) to analyse error patterns in the perception of consonants. In the present study, the error pattern was analysed using the SINFA procedure, with public-domain software from UCL, London.

Method:

Fifteen Odia speakers and fifteen Malayalam speakers, all with English as a second language, were the subjects of the study. They were in the age range of 18-25 years and had studied English as a second language throughout school and college. The subjects were administered a consonant identification test in which they had to identify consonants presented against a background of speech babble at varying SNRs (-3 and -6 dB). The errors were noted and subjected to feature analysis and SINFA.
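SINFA itself partials out features sequentially, but its building block is the information transmitted for a single feature computed from a confusion matrix. A minimal Python sketch of that single-feature step is shown below; the consonant set, feature assignments and confusion counts are hypothetical and only illustrate the computation, not the study data.

```python
# Sketch of the feature-information step underlying a SINFA-style analysis:
# collapse a consonant confusion matrix by one feature (e.g., nasality) and
# compute the information transmitted for that feature. Hypothetical data.
import numpy as np

consonants = ["m", "n", "s", "sh"]
nasal = {"m": 1, "n": 1, "s": 0, "sh": 0}   # assumed feature assignment

# Hypothetical stimulus-by-response confusion counts
confusion = np.array([
    [18, 4, 1, 1],
    [5, 16, 2, 1],
    [1, 1, 15, 7],
    [0, 2, 6, 16],
], dtype=float)

def transmitted_information(counts: np.ndarray) -> float:
    """Mutual information (bits) between stimulus and response categories."""
    p = counts / counts.sum()
    px = p.sum(axis=1, keepdims=True)
    py = p.sum(axis=0, keepdims=True)
    nz = p > 0
    return float(np.sum(p[nz] * np.log2(p[nz] / (px @ py)[nz])))

# Collapse the full matrix into a 2x2 matrix by the nasality feature
values = [nasal[c] for c in consonants]
collapsed = np.zeros((2, 2))
for i, vi in enumerate(values):
    for j, vj in enumerate(values):
        collapsed[vi, vj] += confusion[i, j]

print(f"Information transmitted for nasality: "
      f"{transmitted_information(collapsed):.2f} bits")
```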

Results & Discussion:

The results were subjected to statistical analysis. ANOVA and regression analysis were carried out to determine whether the two groups differed significantly in mean scores and whether native language and SNR level affected the scores individually or in combination. Error analysis indicated that mistakes in consonant perception were influenced by the listeners' primary or native language. Malayalam has more nasals in its phonemic inventory, and subjects with Malayalam as their native language made fewer errors in nasal identification than native speakers of Odia.

Summary & Conclusion:

The phonetic inventory of a language appears to be a factor in the perception of consonants under degraded conditions. Frequently used consonants may be less affected by degradation of the signal by noise than less frequent consonants of a language. The acoustic spectrum and the phonetic inventory probably affect speech perception ability independently in normal-hearing listeners.


  Abstract – AP1073: Auditory Processing Abilities in Adolescent Girls with Iron Deficiency Anemia Top


Chandni Jain1, Vipin P G2 & Aishwarya Lakshmi3

1chandni_j_2002@yahoo.co.in,2vipinghosh78@gmail.com, &3aishwaryalakshmi.611@gmail.com

1All India Institute of Speech and Hearing, Mysuru - 570006

Introduction:

Iron deficiency anemia (IDA) is the most common nutritional disorder worldwide, more so in developing countries, particularly affecting young children of 6-24 months of age, adolescents, women of reproductive age, and pregnant or lactating women (Lokeshwar et al., 2011). India has the world's highest prevalence of IDA among women, with 60 to 70 percent of adolescent girls being anemic. Anemia is a condition in which the number of red blood cells (RBCs), and consequently their capacity to carry oxygen, is inadequate to meet the body's physiologic needs. The most common cause of anemia globally is iron deficiency, with additional causes being deficiencies in other nutrients such as folate, vitamin B12 and vitamin A. IDA involves a decrement in both hemoglobin levels and iron stores.

A multi-centric study by the Indian Council of Medical Research showed that over 90% of adolescent girls throughout the country have some form of anemia (Toteja & Singh, 2002). Many studies in different parts of the world have examined the prevalence of IDA in various populations. In India, the prevalence of anemia among adolescent girls was 56%, which amounts to an average of 64 million girls at any point in time (Aguayo et al., 2013). State-wise prevalence data are also available, showing a considerable proportion of females to be anemic, with the highest prevalence observed among the tribal population of Wayanad, Kerala, i.e., 96.5% (Shrinivasa et al., 2014). Studies have reported a greater likelihood of sensorineural hearing loss in the IDA population compared to the non-IDA population (Schieffer et al., 2016; Schieffer et al., 2017). It is hypothesized that the blood supply to the inner ear, which is highly susceptible to ischemic damage, is compromised by IDA, leading to sudden sensorineural hearing loss (Chung et al., 2014).

Iron is required for normal myelination, and pathway transmission might be affected by early iron deficiency. This is reflected in longer latencies of auditory evoked potentials and increased central conduction time (Algarin et al., 2003). IDA has been shown to affect concentration, attention span, cognition, motor development and educational attainment (Dubey, 2006). Because of this central nervous system (CNS) involvement, it could also delay responses on tasks requiring discrimination of a highly familiar stimulus from an unfamiliar one (Burden et al., 2007).

Need for Study:

Many studies have examined the prevalence of IDA in the Indian population. However, there is a very limited body of Indian research comparing auditory processing abilities in adolescent girls with and without IDA. Although the prevalence of IDA is reported to be high in Indian states, especially among adolescent girls, the possible central auditory processing difficulties in adolescent girls with IDA are not well understood.

Aim & Objectives:

The aim of the present study was to compare auditory processing abilities in adolescent girls with IDA and adolescent girls without IDA. The objectives of the study were:

  1. To compare the auditory closure abilities of adolescent girls with and without IDA
  2. To compare the temporal processing abilities of adolescent girls with and without IDA
  3. To compare the binaural integration abilities of adolescent girls with and without IDA


Method:

A total of 74 participants in the age range of 10 to 17 years were recruited for the study. Group 1 included 36 female participants with IDA, confirmed by a complete haemogram of the blood samples collected. Group 2 included 38 female participants without IDA, again confirmed by a complete haemogram. All participants were native Kannada speakers, and none had a history of hearing loss, ear disease, head trauma, ototoxic drug intake, ear surgery, speech-language problems, or neurological issues. None of them reported any illness at the time of testing.

A routine audiological evaluation was done to ensure normal hearing sensitivity in all participants. The assessment of auditory processing included a test of auditory closure/auditory separation using the Quick Speech Perception in Noise (QuickSIN) test in Kannada developed by Avinash, Meti and Kumar (2010), which consists of Kannada sentences presented in speech babble starting from +10 dB SNR and decreasing in 3 dB steps to -8 dB SNR, with seven sentences in each list. Two lists were used in the present study, randomized between the ears. Temporal processing was assessed using a gap detection test (GDT) carried out with the Maximum Likelihood Procedure (MLP) implemented in MATLAB on a laptop; three blocks of 500 ms noise were presented, one of which contained a gap. Binaural integration was assessed using the Dichotic CV (DCV) test developed by Yathiraj (1999), which consists of six consonant-vowel combinations presented dichotically to the two ears. All tests were performed at 70 dB HL using calibrated HDA 200 headphones connected to a laptop.

In the QuickSIN test, the participant was asked to repeat the whole sentence heard or the key words in the sentence, and the SNR-50 was estimated for each ear. In the GDT, the participant's task was to identify the noise burst containing the gap; this test estimated gap detection thresholds for the right and left ears. The Dichotic CV test was used to estimate single correct scores (SCS) for the right and left ears and double correct scores (DCS); the participants' task was to write down the syllables heard in each ear.

Results & Discussion:

Normality of the data was assessed using the Shapiro-Wilk test, which showed that the data were normally distributed (p>0.05). Descriptive statistics were carried out to estimate the mean and standard deviation for all parameters. The scores of group 1 (IDA) on all tests were poorer than those of group 2 (non-IDA). Independent samples t tests were then used to compare the means of the two groups for SNR-50, GDT and DCV. Group 2 performed significantly better than group 1 for SNR-50 in the right (t= -2.102, df= 72, p<0.05) and left ears (t= -2.439, df= 72, p<0.05) and for GDT in the right (t= -3.209, df= 72, p<0.05) and left ears (t= -2.791, df= 72, p<0.05). However, SCS right and left, and DCS, did not show a significant difference between the two groups (p>0.05).
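A minimal sketch of this between-group comparison in Python with SciPy is shown below; the simulated SNR-50 values and group means are hypothetical and only illustrate the test, not the study data.

```python
# Illustrative sketch of the independent samples t test described above
# (hypothetical SNR-50 values; not the study data).
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
snr50_ida = rng.normal(2.5, 1.5, size=36)      # group 1 (IDA), right ear, dB
snr50_non_ida = rng.normal(1.0, 1.5, size=38)  # group 2 (non-IDA), right ear, dB

t, p = stats.ttest_ind(snr50_ida, snr50_non_ida)
df = len(snr50_ida) + len(snr50_non_ida) - 2
print(f"t({df}) = {t:.3f}, p = {p:.3f}")
```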

The current study reveals differences in auditory closure and temporal processing abilities in adolescent girls with IDA. Although statistically significant differences were not obtained for the dichotic scores, descriptive statistics showed poorer scores in group 1 than in group 2. To the authors' knowledge, this is the first study in which auditory processing abilities have been assessed in IDA. The poorer central auditory processing in adolescent girls with IDA can be attributed to compromised blood supply to the central auditory nervous system and to reduced myelination caused by IDA. The study is still in progress, and data from a larger sample would allow better generalization of the results.

Summary & Conclusion:

The current study demonstrates that IDA has an effect on central auditory processing abilities. This research has clinical utility and can shift the focus of health care professionals and administrators to a health issue that is very common in the public yet not considered critical. The results would also help in counselling and management of adolescent girls with IDA regarding the effect of IDA on hearing.


  Abstract – AP1074: Rare Occurrence of Audiological & Speech Disorder in Klippel Feil Syndrome: A Case Study Top


Sharmishtha Chavan1 & Mohammad Shamim2

1sharmishthachavan03@gmail.com &2msansari5000@yahoo.com

1Ali Yavar National Institute of Speech and Hearing Disabilities Divyangjan, Mumbai - 400050

Introduction:

Maurice Klippel and André Feil described a condition with fused cervical vertebrae in 1912, and the term Klippel Feil Syndrome was coined to describe this condition. Klippel Feil syndrome is a rare congenital craniofacial disorder (it could also be acquired during the embryonic period or later in life). It is characterized by a short neck with decreased movement, a low posterior hairline and, radiologically, fusion of some or all cervical vertebrae (from C5-T1). There are two types of Klippel Feil syndrome, Type I and Type II.

PREVALENCE: Klippel Feil Syndrome occurs in approximately 1 in 42,000 births, with 65% of cases occurring in females. Girls appear to be more frequently affected by types I and III, but there is an equal sex incidence in type II.

The cause of the condition is heterogeneous, involving both genetic and environmental factors. Mutations in the GDF6 (Growth Differentiation Factor 6) or GDF3 (Growth Differentiation Factor 3) genes can cause the disorder.

FEATURES: Fusion of cervical vertebrae, webbed neck, abducens (VI cranial) nerve paralysis, occasional cleft palate, torticollis, mild conductive to profound sensorineural hearing loss, heart malformations, facial asymmetry, low hairline, absence of the VIII cranial nerve, and clubfoot (foot twisted out of shape or position). Ear deformities include a narrow to absent external auditory meatus or middle ear space, deformed ossicles, a narrow oval window niche, absence of the semicircular canals, and underdeveloped cochlear and vestibular structures.

Need for Study:

This syndrome is rarely seen in the audiology clinic or in speech and language therapy. Therefore, orientation towards its diagnosis and therapeutic understanding is of paramount importance so that effective treatment strategies can be planned.

Aim & Objectives:

To understand the audiological and speech-language disposition in a person with Klippel Feil syndrome.

OBJECTIVE

To understand the sequelae of Klippel Feil syndrome and its audiological and speech-language characteristics for appropriate diagnosis and effective management.

Method:

This is a case study of a 5-year-old male child whose parents reported to the institute on 2/8/19 with the complaint that the child responds only to loud sounds and does not speak age-appropriately.

Case history: The child's birth weight was 2 kg, and he was kept in the NICU for 24 hours. Developmental milestones were delayed; the child started walking at the age of 2.5 years.

An MRI done on 26/8/15 revealed fusion of the cervical vertebrae (C1-C5). AUDIOLOGICAL EVALUATION: done on 5/7/19 at a hospital, it revealed a moderately severe hearing loss in the right ear and a moderate hearing loss in the left ear.

OAE FINDINGS- 'refer' in both the ears.

OTOSCOPIC FINDINGS- narrow ear canal in the right ear; no abnormality detected in the left ear.

IMMITTANCE AUDIOMETRY FINDINGS- 'B' type tympanogram in both ears, with an ear canal volume of 0.2 cc in the right ear and 0.3 cc in the left ear.

BOA FINDINGS- moderate to moderately severe hearing loss.

SPEECH EVALUATION: done on 16/8/19. Oral peripheral mechanism examination revealed that all oromotor structures are normal in appearance, except for a deviated mandible and a few missing teeth, and adequate in functioning.

All vegetative functions such as chewing, sucking, blowing and swallowing are adequate.

Reception: The child's receptive verbal vocabulary consists of common objects, a few fruits, vehicles, action verbs, adjectives and kinship terms. The child can understand psychological feelings, differentiate between family members and strangers, and understand size differences.

Expression: The child expresses his needs and wants through pointing along with vocalization or by dragging his parents towards desired objects. He uses a few rudimentary gestures and can narrate past events through a combination of gestures.

Clinical Impression: Speech and language developmental delay in k/c/o hearing impairment and Klippel Feil syndrome. PSYCHOLOGICAL EVALUATION: done on 27/8/19 at the institute.

1. VSMS: SA = 36, SQ = 56
2. DST: MA = 25, IQ = 39
3. ADHD Score: 7

Diagnostic Formulation: Mild to moderate intellectual disability with ADHD. NEUROLOGICAL EVALUATION: done on 27/8/19.

DIAGNOSIS- Klippel Feil syndrome with mental retardation.

Results & Discussion:

PROVISIONAL DIAGNOSIS

RIGHT EAR- Moderately severe mixed hearing loss

LEFT EAR- Moderate mixed hearing loss

DELAYED SPEECH AND LANGUAGE DEVELOPMENT WITH MENTAL RETARDATION AND ADHD IN K/C/O KLIPPEL FEIL SYNDROME.

MANAGEMENT

Therapeutic rehabilitation: Speech and language therapy

Linguistic approach- Focuses on all aspects of language in a naturalistic environment. It involves procedures such as self-talk, parallel talk, expansion, expatiation, recast sentences, simplifying, and build-ups and breakdowns.

Technological rehabilitation:

  1. AC hearing aid: It may not be advisable in cases of anotia, atresia or microtia. However, in the case of stenosis, a hearing aid can be provided with a modified ear mould.
  2. BC Hearing aid: It may be advised if outer ear anomalies are present.
  3. Sound bridge: It is a middle ear implant device and can be used if the middle ear anatomy is normal.
  4. Cochlear implant: provided that no cochlear anomalies are present.


Summary & Conclusion:

Klippel Feil Syndrome is a rare disorder that predisposes to hearing impairment, which in turn leads to speech and language problems. This disorder is also occasionally accompanied by cleft palate; hence, as audiologists and speech-language pathologists, we must be aware of the intricacies involved in providing effective management.


  Abstract – AP1076: The Erroneous Practices in Auditory Lifestyles and Beliefs Related to Loss in Hearing Sensitivity among College Students in Delhi-National Capital Region: A Survey Report Top


Smriti Khurana1, Aparna M Kumar2, Puranjit Kar3, Rakesh Singh4, Ashis Avishek Prusty5 & Virender Kumar5

1simikhuranaaslp@gmail.com,2aparna.ms16@nish.ac.in,3puranjitkaraslp@gmail.com,4rakeshsingh4me@icloud.com,5ashisprusty062@gmail.com, &6viren.aslp@hotmail.com

1Ali Yavar Jung National Institute of Speech and Hearing Disabilities (Divyangjan), ERC, Kolkata - 700090

Introduction:

Young adults expose themselves to harmfully loud sounds without regard to the consequences they may face. An estimated 17% of teens may have lost some of their hearing, probably due to noise exposure, and are still not aware of it. Such hearing loss from exposure to loud sounds can be minimized by using appropriate hearing protection devices (HPDs), such as earplugs or earmuffs, and by recognizing and avoiding harmfully loud situations.

Although college-aged adults can be exposed to harmfully loud sounds in a variety of ways, three avenues appear most obvious. One is exposure to occupational noise. In Delhi NCR, construction labour is the second most common occupation during the school year and the most common summer occupation among male youths who are 18 years old. The frequency of HPD use among young adults who work in such noisy environments is unknown; minimal use of HPDs may be associated with a lack of adequate knowledge about noise-induced hearing loss or may reflect risk-taking behaviour.

A second avenue for potentially damaging noise exposure among young adults is the use of noisy equipment without ear protection. Many young adults use noisy equipment such as lawn mowers, snowmobiles, snow blowers, dirt bikes, power tools, chain saw and so on.

Another obvious avenue is exposure to loud music from personal Walkmans or stereo systems, iPods, MP3 players, compact disc (CD) players, car sound systems, aerobic music, and attendance at music concerts and nightclubs. Exposure to harmfully loud sounds through personal listening systems or other recreational gadgets appears to be associated with a lack of knowledge about the potential harmful effects of excessively loud sounds.

Need for Study:

This study was needed to promote healthy hearing behaviour among college students.

Aim & Objectives:

The aim of this study was to evaluate the auditory lifestyles and beliefs of college students with reference to exposure to loud sounds in the context of the health belief model and to promote a healthy hearing behaviour.

Method:

A survey was administered to 300 students (70 men, 230 women) in the age range of 17-22 years in Delhi NCR. A general survey questionnaire was developed to address the questions raised in the current investigation. A total of 14 survey questions assessed areas such as occupational noise exposure with and without HPDs, exposure to loud sounds, experience of tinnitus and health beliefs. Responses were marked as 'yes' or 'no' for the open-ended questions and on a Likert scale from 'strongly agree' to 'strongly disagree' for the quantifying questions.

Results & Discussion:

The data were analysed in SPSS v16.0. The results suggested that 48% of the students use noisy equipment without ear protection and 34% work in noisy environments. Of the participants working in noisy surroundings, 31.8% reported experiencing tinnitus. The use of HPDs was associated with previous experience of hearing loss and tinnitus. Although 75% of the students were aware that exposure to loud sounds could cause hearing loss, around 50% appeared to be exposing themselves to potentially harmful loud music. Furthermore, 46% of the students reported not using HPDs during loud musical activities because of difficulty hearing with HPDs. Most students considered that hearing loss would be a serious problem, but 79% believed that they would not lose their hearing until a greater age. Although 31.8% of the students had experienced tinnitus, more than 50% of these students reported not being concerned about it. These findings indicate a critical need to promote awareness of healthy hearing behaviour among the youth of India. Possible strategies include improved education, experience with simulated hearing loss for extended periods, and the availability of cosmetically appealing or invisible HPDs with uniform attenuation across the frequency range.

Summary & Conclusion:

The results suggest that 34% of college students may be currently exposed to occupational noise without ear protection, 48% of the students may be using noisy equipment without ear protection and around 50% of the students may be exposing themselves to potentially harmful levels of music. Although each factor alone may put students at risk for hearing loss, a combination of such exposures may further increase the risk. Such exposure seems to be occurring although most students perceive hearing loss as serious and many are aware that exposure to loud music can cause hearing loss. These results suggest a critical need for promoting healthy hearing behavior among college students. Previous experience with hearing loss and tinnitus is associated with the use of HPDs during occupational noise exposure. Future studies should explore provision of experience with simulated hearing loss as a strategy to promote the use of HPDs. In addition, improved education about causes of hearing loss, limitations of hearing aids and significance of the occurrence of tinnitus following loud activities can be provided. Effectiveness of HPDs that provide uniform attenuation across the frequency range during musical leisure activities also needs to be explored in future studies.


  Abstract – AP1078: Performance of Young Adults in Dichotic Digit Test in Tamil and Dichotic CV Test: A Comparative Study Top


Ramya Sudersonam1 & Ramya V2

1ramyasm25@gmail.com &2ramyavaidyanath@gmail.com

1Panimalar Medical College and Hospital, Chennai - 600123

Introduction:

Dichotic listening has been described as the simultaneous presentation of two different stimuli to the two ears (Kimura, 1967). Musiek and Chermak (2014) stated that dichotic listening tests can be used to assess an individual's binaural integration or binaural separation ability. Various tests to assess dichotic listening have been developed over the years by Katz (1968), Studdert-Kennedy and Shankweiler (1970), Willeford (1977), Musiek (1983), Wexler and Halwes (1983), Fifer, Jerger, Berlin, Tobey and Campbell (1983) and Roberts et al. (1994).

American Speech-Language-Hearing Association (2005) has recommended that dichotic listening should be included in the behavioral test battery for assessment of auditory processing disorder (APD).

Dichotic listening tests have also been developed in Indian languages over the years by Yathiraj (1999), Kishore and Rajalakshmi (2007), Sangamesh and Rajalakshmi (2007), Singh and Devi (2008), Gridhar and Rajalakshmi (2010), Kumar and Jain (2011), Bharathidasan and Rajalakshmi (2013), Arefin, Chatterjee, Chatterjee shahi, Somsubhrachatterjee and Ghosh (2016), Selvaraj, Rajeswaran and Jayachandran (2018), and Sudersonam and Vaidyanath (2019).

Need for Study:

The Dichotic CV (DCV) test recorded by Yathiraj (1999) is one of the tests widely used in the test battery evaluation of APD in India (Yathiraj & Maggu, 2012; Yathiraj & Vanaja, 2015; Kumar & Gupta, 2015; Yathiraj & Vanaja, 2018; Kumar, Mohan, Pavithra, & Naveen, 2019). The test consists of 30 standardized pairs of the stop consonant CV syllables /pa/, /ta/, /ka/, /ba/, /da/ and /ga/, and has also been used in the assessment of the Tamil-speaking population. However, stop consonant perception in Tamil is governed by specific features or rules. One such feature is the lack of orthographic differentiation between voiced and unvoiced consonants (Bhuvaneshwari & Padakannaya, 2013). They state that Tamil voiced and unvoiced stops are produced with the same place of articulation in words and are interpreted as voiced or unvoiced based on the context in which they occur. Hence, when stop consonants are presented in isolation, native Tamil speakers might find it difficult to differentiate between dichotically presented voiced and unvoiced consonants.

Aim & Objectives:

The current study aimed to compare the performance of Tamil speaking young adults in DCV test and Dichotic digit test in Tamil (DDT-T).

Method:

Sixty young adults (30 males & 30 females) in the age range of 18 to 35 years (mean age = 21.5 years) participated in the current study. All the participants had bilateral pure-tone hearing sensitivity of <15 dB HL at the octave frequencies 250 to 8000 Hz for air conduction and 250 to 4000 Hz for bone conduction and normal middle ear function. The participants were native speakers of Tamil with no history of otological, neurological, cognitive, speech and language difficulties. Informed consent was obtained from all the participants. This study was approved by the ethics committee of the Institute (REF: CSP/18/ APR/68/124).

The DCV test recorded by Yathiraj (1999) with a 0 ms lag was used for testing. The DDT-T developed by Sudersonam and Vaidyanath (2019), consisting of six lists with 30 digit pairs in each list, was the other test used. The Tamil monosyllabic digits |onnɨ| |renɖɨ| |mu:nnɨ| |na:llɨ| |ʌnʤɨ| |patɨ| were the stimuli of the material. Each stimulus set had four digits, with two digits presented to each ear simultaneously. The digits presented to the two ears had simultaneous onset, with an inter-stimulus interval of 500 ms between the first and second digits and an inter-stimulus-set interval of 5 s.

The DDT-T and DCV test were administered at 50 dB SL (re: SRT). The participants were instructed to repeat the digits and CVs heard, irrespective of the order, and were given practice trials before testing. The participants' responses were noted down and scored as single correct right (SCR), single correct left (SCL) and double correct (DC) scores.

Results & Discussion:

The mean, median and standard deviation of the SCR, SCL and DC scores of the DDT-T were better than the respective scores of the DCV test. Further analysis was carried out to evaluate whether there were significant differences between the scores. A Shapiro-Wilk test of normality showed that the SCL and DC scores of the DCV test followed a normal distribution (p > .05), while the SCR score of the DCV and all the scores of the DDT-T did not (p < .05). Hence, the non-parametric Wilcoxon signed-rank test was used to compare the scores of each ear on the two tests. The results revealed a significant difference between the SCR (Z = -6.72, p < .05), SCL (Z = -6.73, p < .05) and DC scores (Z = -6.74, p < .05) of the two tests. Further, the participants' responses on the DCV test were analysed for place, manner and voicing errors for each ear separately, and the number of errors for each consonant was calculated. The Wilcoxon signed-rank test was used to compare the types of errors within and across the ears, and revealed significant differences (p < .05). The error analysis showed that voicing errors were most frequent, followed by place errors and combined place-and-voicing errors, in both ears. The highest error rate was in the identification of the voiceless consonant /ta/, followed by /ba/, /da/, /ga/, /pa/ and /ka/ in the right ear, and /pa/, /ka/, /da/, /ba/ and /ga/ in the left ear.
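A minimal Python sketch of the kind of error classification described above, simplified to place and voicing for the six stop CVs, is given below; the feature table and the example stimulus-response pairs are illustrative assumptions, not the study data.

```python
# Sketch of the error-pattern analysis described above: classify each DCV
# error as a place error, a voicing error, or both (hypothetical responses;
# the feature table covers only the six CVs used in the test).
FEATURES = {  # consonant: (place, voicing)
    "pa": ("bilabial", "unvoiced"), "ba": ("bilabial", "voiced"),
    "ta": ("alveolar", "unvoiced"), "da": ("alveolar", "voiced"),
    "ka": ("velar", "unvoiced"),    "ga": ("velar", "voiced"),
}

def classify_error(stimulus: str, response: str) -> str:
    if stimulus == response:
        return "correct"
    s_place, s_voice = FEATURES[stimulus]
    r_place, r_voice = FEATURES[response]
    if s_place != r_place and s_voice != r_voice:
        return "place+voicing"
    return "place" if s_place != r_place else "voicing"

# Hypothetical (stimulus, response) pairs for one ear
trials = [("ta", "da"), ("pa", "ba"), ("ka", "pa"), ("ga", "ga"), ("ba", "ta")]
counts = {}
for stim, resp in trials:
    kind = classify_error(stim, resp)
    counts[kind] = counts.get(kind, 0) + 1
print(counts)  # {'voicing': 2, 'place': 1, 'correct': 1, 'place+voicing': 1}
```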

The better performance on the DDT-T compared to the DCV could be due to the greater number of cues available in digits compared to CVs. This is similar to the findings of Speaks, Niccum and Tasell (1985), who administered a dichotic digit test along with dichotic vowel words, consonant words and CV nonsense syllables to individuals with sensorineural hearing loss and found that performance on digits was better than on the other stimuli.

The results obtained from the DCV test in the current study were compared to the normative data for adults developed by Prachi (2000). The performance of the young adults in the current study was poorer, even though all the participants had normal auditory processing abilities and did not report any auditory difficulties. Further, 18 of the 60 listeners had scores more than 1 SD below the norms. This poorer performance could be due to the influence of the listeners' native language and may be wrongly interpreted as a binaural integration deficit based on DCV performance.

The higher voicing errors observed in the study find support from Bhuvaneshwari and Padakannaya (2013), who reported that Tamil consonants are interpreted as either voiced or unvoiced based on context. In the DCV test, the CVs are presented in isolation and lack the contextual cues required by Tamil-speaking listeners to correctly identify the consonants as voiced or unvoiced.

Summary & Conclusion:

The DDT-T proved to be more useful than the DCV test for assessing binaural integration ability in Tamil-speaking individuals. It is recommended for use in the test battery assessment of APD in native Tamil speakers.


  Abstract – AP1084: Effect of Compression Ratio on Consonant Perception among Subjects with Mixed Hearing Loss Top


Rudrasis Swain1, Mrinal Sinha2 & Suresh Thontadarya3

1rudrasis89@gmail.com,2suresh.sphg@outlook.com, &3s.audioprojects@gmail.com

1International Institute Of Rehabilitation Sciences and Research, Odisha - 751030

Introduction:

Hearing impairment is the third most prevalent chronic disability reported by persons over the age of 65 years (NCHS, 1977). The effects of hearing impairment and the need for methods to improve hearing are no longer debatable issues. The World Health Organization (WHO) expects it to be a common problem among people above the age of 60 by 2020, and it has been termed a silent epidemic. With the prevalence of hearing loss at 25-40% for individuals over 65 years of age (Herbst, 1983; NCHS, 1977, 1986), deficits in speech understanding associated with hearing loss represent the major component of elderly persons' communication disability. Accurate perception of speech sounds is important for extracting the meaning and order of words in a narration. Mixed hearing loss represents a major category of hearing loss in adults and the elderly, and both otosclerosis and CSOM are prevalent in the Indian population. While studies have been conducted to determine suitable compression ratios for sensorineural hearing loss, such studies are lacking for mixed hearing loss.

Need for Study:

Several fitting formulas have been developed to prescribe hearing aid gain for different types and degrees of hearing loss and audiogram configurations. In nonlinear hearing aids, gain changes with the input level of sounds, and therefore compression parameters also need to be specified in prescriptive formulae. The relation between the amount of gain and the type of hearing loss is crucial, because for the same degree of hearing loss the required gain differs for conductive (Johnson, 2013) and sensorineural hearing losses (Johnson & Dillon, 2011). Evidence-based prescriptive parameters are available for conductive hearing loss (linear) and sensorineural hearing loss (linear and nonlinear), but such data are lacking for mixed hearing loss.

Aim & Objectives:

The aim of the study was to evaluate the effect of compression ratio on consonant perception in a group of subjects with mixed hearing impairment.

Method:

Inclusion Criteria:

  1. Subjects with Bilateral Mixed hearing loss.
  2. Degree of hearing loss of mild, moderate or moderately severe degree as per Goodman's classification.


Exclusion Criteria:

  1. Subjects with External Otitis, Otalgia, Otorrhea, impacted wax.
  2. Subjects using analogue hearing aids in the recent past.


Hearing aids with 4-6 channels were selected for the study. The hearing aid used for testing had all additional features, such as noise cancellation, switched off, and the microphone directionality was set to omnidirectional.

Listening conditions and test order.

  1. The thresholds, both AC and BC, were entered for each participant in the hearing aid fitting software.
  2. NAL-NL1 was chosen as the fitting formula.
  3. Hearing aids were programmed with the default parameters of NAL-NL1, including the compression ratios for each channel.
  4. Subjects were presented with a consonant identification test. They listened to consonants embedded in an a/C/a (intervocalic) context and repeated them. They were seated comfortably at a distance of 1 metre, at 0-degree azimuth, from the loudspeaker of the audiometer. The stimuli were presented at 65 dB SPL. Before the presentation of the stimuli, the presentation level was monitored with the calibration tone, and during presentation the average deflection on the VU meter was kept within the green range. Each VCV syllable was presented twice in randomized order. A similar procedure was carried out in the presence of noise at +10 dB and 0 dB SNR.
  5. The participant was then asked to fill in a subjective rating scale (used by Neuman et al., 1998 and Schmidt, 2006).
  6. Steps 4-5 were repeated with the compression ratio changed to 1.3.
  7. Steps 4-5 were repeated with a new compression ratio of 2.3.


A gap of one week was provided between compression ratio settings to avoid familiarity with the test stimuli. The order of testing with respect to the compression ratio settings and SNRs was randomized across participants.

Results & Discussion:

Data were analysed using the Statistical Package for the Social Sciences (SPSS). Consonant identification scores (subjective and objective) were calculated for each compression ratio (default CR, CR of 1.3 and CR of 2.3) in each listening environment (quiet, 0 dB SNR and -10 dB SNR).

Descriptive statistical analysis was performed to document the mean and standard deviation of CIS in each setting of Compression Ratio, under different SNR conditions.

Overall consonant identification scores were highest for the compression ratio of 1.3, followed by the default CR and then the compression ratio of 2.3. Consonant identification scores across SNR conditions followed the expected pattern: scores were best in quiet, followed by 0 dB and then -10 dB SNR.

Repeated measures ANOVA was conducted to examine the interaction effect of compression ratio and SNR condition, with paired samples t tests carried out as post hoc tests. The compression ratio effect was more evident in the 0 dB SNR condition than in the other conditions.
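A sketch of how such a two-way repeated measures ANOVA could be run in Python with statsmodels is shown below; the simulated scores, subject count and effect sizes are assumptions for illustration only, not the study data.

```python
# Sketch of the repeated-measures analysis described above using statsmodels'
# AnovaRM (hypothetical consonant identification scores; illustrative only).
import numpy as np
import pandas as pd
from statsmodels.stats.anova import AnovaRM

rng = np.random.default_rng(3)
subjects = range(1, 11)
crs = ["default", "1.3", "2.3"]
snrs = ["quiet", "0dB", "-10dB"]

rows = []
for s in subjects:
    for cr in crs:
        for snr in snrs:
            base = {"quiet": 80, "0dB": 65, "-10dB": 45}[snr]
            bonus = {"1.3": 5, "default": 0, "2.3": -3}[cr]
            rows.append({"subject": s, "cr": cr, "snr": snr,
                         "score": base + bonus + rng.normal(0, 4)})
df = pd.DataFrame(rows)

# Two within-subject factors: compression ratio and SNR condition
res = AnovaRM(df, depvar="score", subject="subject", within=["cr", "snr"]).fit()
print(res)
```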

The compression ratio was kept uniform across all channels, which could also be a factor influencing the results.

Summary & Conclusion:

The present study suggests that individuals with mixed hearing loss may benefit from amplification with a compression ratio lower than the default prescribed by the NAL-NL1 formula for their thresholds. The present study had a majority of subjects with otosclerosis and large air-bone gaps. The study should be repeated with subjects whose mixed hearing loss has causes other than otosclerosis, and the effect of the air-bone gap should also be studied.


  Abstract – AP1087: Definite and Non-Definite Factors for the Accurate Identification of Otosclerosis Top


Indira C P1, Sam Publius2, Shreyank P. Swamy3, Sharath Kumar K. S4 & Sandeep M5

1sharasyar@gmail.com,2sampublius52@gmail.com,3shreyankpswamy@gmail.com,4sharathkumarks08@gmail.com, &5msandeepa@gmail.com

1All India Institute of Speech and Hearing, Mysuru - 570006

Introduction:

Otosclerosis is one of the most investigated middle ear pathologies; in this condition the stapes is fixed to the oval window due to sclerotic growth. It typically results in conductive hearing loss, and Carhart's notch is considered the signature of the condition. Otosclerosis is a stiffness-dominated middle ear pathology, and its development leads to a reduction in the static admittance of the middle ear and the absence of stapedial muscle reflexes. In multifrequency tympanometry, one usually finds an increase in the resonant frequency of the middle ear in these cases.

Need for Study:

Accurate diagnosis of otosclerosis is crucial, as it guides the audiologist and the otologist in deciding the most appropriate management strategy for individuals with the condition. Although Carhart's notch strongly hints at otosclerosis, Kashio et al. (2011) found that only 31% of their 102 confirmed ears with otosclerosis demonstrated a 2 kHz dip in bone conduction. They also showed that a significant number of other middle ear conditions, such as ossicular chain discontinuity and otitis media, may show a 2 kHz dip similar to that in otosclerosis. These findings hint at the low sensitivity and specificity of Carhart's notch in detecting otosclerosis.

Most often, audiologists depend on standard tympanometry and multifrequency tympanometry to infer the presence of otosclerosis. A reduction in static admittance, relative to the premorbid static admittance, is expected in otosclerosis; as a result, one may find either a type 'A' or a type 'As' tympanogram in these individuals. Several studies have shown evidence of higher resonant frequency in ears with otosclerosis compared to normal middle ears. However, Maruthy et al. (2017) showed that the sensitivity of resonant frequency in detecting otosclerosis is poor. This lack of definite indicators makes the diagnosis of otosclerosis a challenge for audiologists in most cases, and it is most often diagnosed by the method of exclusion. It appears that more than one factor needs to be considered for accurate diagnosis of the condition.

Aim & Objectives:

Hence, the present study aimed to identify definite and indefinite characteristics of otosclerosis so as to guide audiologists in its accurate diagnosis.

The objective of the study was to investigate the percentage of occurrence of various demographic, otological and audiological characteristics in individuals with otosclerosis, in order to identify definite and indefinite factors of it.

Method:

The study used retrospective data of 53 individuals with otosclerosis who consulted audiology department of one of the speech and hearing institutions in south India. All of them were diagnosed as otosclerosis collectively by an experienced otologist and an audiologist. The diagnosis was based on the case history, otological signs and symptoms, and audiological findings.

The individual case files were looked into to note down the chronological age, age of onset, symptoms presented in the case history, unilateral versus bilateral presentation, duration of hearing loss, degree of hearing loss, type of hearing loss, configuration of audiogram, the presence of 2kHz dip in bone conduction, reported predisposing factor, the presence of tinnitus, characteristics of tinnitus, diagnosis made by otologist on the first instance, difference in the hearing loss between the two ears, type of tympanogram, and resonant frequency.

The group data were analysed to derive the percentage of occurrence of each of the above factors. The percentages were converted into z scores, and the z scores were compared with each other using the test of equality of proportions. Factorial analysis was carried out to identify the definite factor or group of factors that accurately identifies otosclerosis.
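A minimal Python sketch of an equality-of-proportions comparison of this kind is shown below; the counts are reconstructed from the reported percentages purely for illustration and are not the authors' analysis code.

```python
# Illustrative two-proportion comparison (hypothetical counts derived from
# the reported percentages; not the authors' analysis code).
from statsmodels.stats.proportion import proportions_ztest

n = 53                              # number of cases in the study
count_carhart = round(0.64 * n)     # cases showing Carhart's notch
count_as_tymp = round(0.34 * n)     # cases with an 'As' type tympanogram

stat, p = proportions_ztest([count_carhart, count_as_tymp], [n, n])
print(f"z = {stat:.2f}, p = {p:.3f}")
```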

Results & Discussion:

Results showed that conductive hearing loss in the absence of ear discharge and ear pain, and bilateral presentation, were present in more than 90% of the participants; these can therefore be taken as definite factors for the identification of otosclerosis. The other factors, namely a stiffness-tilt audiogram, the presence of Carhart's notch, an 'As' type tympanogram and a high resonant frequency, were present in only 55%, 64%, 34% and 43% of participants, respectively, and therefore cannot be considered definite factors of otosclerosis. Only 51% of the cases were identified as otosclerosis by the otologist at the first instance, while the others were missed, which highlights the importance of audiological findings in the accurate diagnosis of the condition. The mean age of occurrence of otosclerosis was 27 years, hinting at hormonal changes during pregnancy as an important predisposing factor. Only 47% of the individuals reported tinnitus, and hearing loss was symmetric in the majority (62%).

The current findings support the earlier report by Kashio et al. (2011), in which Carhart's notch was not found to be a definite factor of otosclerosis; in the current study it was found in 64% of the individuals. Carhart's notch is known to be due to the alteration in the resonant frequency of the ossicular chain in otosclerosis (Homma et al., 2009). The large difference in the percentage of occurrence between the current study and that of Kashio et al. suggests that the resonant frequency differs between the Indian and Japanese populations.

Summary & Conclusion:

The study identified definite and indefinite factors of otosclerosis. The findings reveal that Carhart's notch is not a definite factor of otosclerosis, whereas bilateral conductive hearing loss in the absence of ear discharge and ear pain is found in most cases. The study shows evidence of a definite need for audiological results in the accurate diagnosis of otosclerosis and therefore highlights the role of audiologists and the audiological evaluation in it.


  Abstract – AP1088: Effect of Prolonged Occupational Noise Exposure on Hearing Among Indian Armed Forces Top


Ashutosh Maurya1, Aditya Singhal2, Shashank Shekhar3 & Hitarthi Sukhija4

1ashutoshmaurya40@gmail.com,2adi54sin@gmail.com,3shashank8777@gmail.com, &4hitarthisukhija@gmail.com

1Amity University, Haryana - 122413

Introduction:

Occupational noise-induced hearing loss (ONIHL) is hearing loss that develops slowly over a long period (several years) as the result of exposure to continuous or intermittent loud noise, and it is among the most prevalent types of hearing loss (Mather et al., 2000). Besides hearing loss, prolonged noise exposure may cause several other health hazards resulting in adverse socio-economic consequences. Approximately 19% of total disability is attributed to hearing disability alone (Nelson et al., 2005). Over 180 million people develop a disabling hearing impairment (HI) during adulthood, with occupational noise-induced hearing loss (NIHL) estimated to account for 16% of these cases (Sanju & Kumar, 2016). In 2000, 4.1 million disability-adjusted life years were lost to occupational NIHL. The pathogenesis of ONIHL involves a progressive sensorineural hearing deficit resulting from irreversible damage to the sensory hair cells of the cochlea within the inner ear (Nelson et al., 2005). Apart from hearing loss, ONIHL causes overall discomfort, sometimes pain, and reduces quality of life (Liberman, 2017). Armed forces personnel are constantly exposed to high levels of noise, and it is not surprising that ONIHL remains the second most prevalent service-connected disability (Muhr et al., 2016). Noise is also a cause of annoyance and interference with speech and communication, and it ultimately produces psychological effects such as reduced efficiency at work (Sanju & Kumar, 2016).

Need for Study:

To date, no research indexed in PubMed/Google Scholar has examined the impact of prolonged noise exposure on hearing among individuals who served in the Indian Armed Forces. However, there are 11 research papers on the impact of noise exposure on Indian Air Force personnel of technical and non-technical trades (Muhr et al., 2016). In the majority of these studies, hearing thresholds were estimated audiometrically with and without noise exposure. In the present study, in addition to audiometric assessment, static compliance and ear canal volume were assessed using impedance audiometry.

Aim & Objectives:

The objective of this study was to investigate the hearing thresholds of individuals who served in the Indian Army for 18 to 36 years. We also compared the hearing thresholds at 500, 1000, 2000 and 4000 Hz with the 4 kHz response, and investigated middle ear function, especially static compliance (SC) and ear canal volume (ECV), in persons who served in the Indian Army.

Method:

Participants: 20 male participants with a mean age of 55 years (± 6.38) and an age range of 45 to 79 years took part in this study. These individuals had 25.85 (± 5.96) years of experience, ranging from 14 to 36 years, serving in various infantry divisions of the Indian Army. They were Indian Army veterans working at Amity University after retirement. Written consent was obtained from each participant.

The test procedure, timing and test outcomes were explained to the participants. The participants were divided into two categories: (a) up to 18 years of occupational experience and (b) 19 to 36 years of occupational experience. Each participant's designation, specialization, unit and socioeconomic status (education, salary, etc.) were also recorded. Instruments: a HARP INVENTIS dual-channel audiometer was used to evaluate the hearing thresholds of each participant, and tympanometry was conducted using a MAICO 34 impedance audiometer. Procedure: before testing, a detailed history was taken to exclude individuals with a history of ENT disease or medication with ototoxic drugs. The hearing examination included otoscopic evaluation, pure-tone audiometry and impedance audiometry. Audiometric testing was conducted in a sound-treated room at the Department of Audiology & Speech-Language Pathology. Hearing thresholds were evaluated at 250, 500, 1000, 2000, 4000, 6000 and 8000 Hz. A threshold above 15 dB at any of these frequencies was considered to indicate hearing loss. Hearing loss was graded in severity as normal (<15 dB), minimal/slight (16-25 dB), mild (26-40 dB), moderate (41-55 dB), severe (56-90 dB) and profound (>90 dB) (Clark, 1981). An age-related correction of 5 dB per decade was applied for personnel above 50 years to nullify the effects of age-induced hearing loss. Audiogram interpretation was done by trained audiologists of the Department of ASLP.
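A small Python sketch of the grading and age-correction logic described above is given below; the handling of boundary values and the interpretation of the 5 dB per decade correction (subtracted for each decade above 50 years) are the writer's assumptions, not a published rule from the study.

```python
# Sketch of the grading and age-correction logic described above
# (thresholds as stated; helper names and boundary handling are illustrative).
def age_corrected_threshold(threshold_db: float, age_years: int) -> float:
    """Apply an assumed 5 dB per decade correction for personnel above 50 years."""
    if age_years <= 50:
        return threshold_db
    decades_over_50 = (age_years - 50) / 10.0
    return threshold_db - 5.0 * decades_over_50

def grade_hearing_loss(threshold_db: float) -> str:
    if threshold_db <= 15:
        return "normal"
    if threshold_db <= 25:
        return "minimal/slight"
    if threshold_db <= 40:
        return "mild"
    if threshold_db <= 55:
        return "moderate"
    if threshold_db <= 90:
        return "severe"
    return "profound"

# Example: 62-year-old veteran with a 55 dB HL threshold at 4 kHz
corrected = age_corrected_threshold(55, 62)
print(corrected, grade_hearing_loss(corrected))  # 49.0 moderate
```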

Results & Discussion:

The overall hearing thresholds of the participants were compared with the 4 kHz response. Karl Pearson correlation across these two measures showed a significant correlation (r = 0.78, p < 0.001) between the overall hearing threshold and the 4 kHz notch. Overall PTA and 4 kHz notch scores were also compared between participants with up to 18 years and with 19 to 36 years of experience. One-way analysis of variance was conducted at a significance level of p < 0.05 with a 95% confidence interval (CI). No significant difference was observed [F(1, 19) = 2.01, p = 0.44] between the overall PTA and the 4 kHz notch across the up-to-18-years and 19-36-years experience groups. The middle ear status of participants in the two work-experience categories was compared for static compliance and ear canal volume. One-way ANOVA across these two conditions showed no significant group difference for static compliance [F(1, 19) = 0.37, p = 0.55], and no significant difference was observed for ear canal volume either.

The distinct 4 kHz notch found here is, as described by McBride and Williams (2001), characteristic of occupational noise exposure. The overall audiometric threshold (45.90 ± 19.74 dB) differed significantly from the 4 kHz response (60.07 ± 21.09 dB). The present findings suggest that approximately 16 to 18 years of prolonged noise exposure across various arms of the Indian Army, such as infantry and artillery, together with various mechanical and vehicle noise, can induce moderate to severe SNHL with a significant 4 kHz notch. Static compliance (1.22 ± 1.27 ml) differed significantly from the normative value (0.8 ± 0.5 ml), whereas overall ear canal volume (1.09 ± 0.37 ml) differed only marginally from the normative value (1.1 ± 0.4 ml) (Margolis & Heller, 1987). The findings also indicate that prolonged noise exposure largely affects hearing thresholds, producing a characteristic SNHL, but has limited impact on ear canal volume. These results further prompt testing of the effectiveness of ear protective devices and the hearing conservation programme implemented by the Indian Army, especially for personnel in the infantry and artillery divisions. Adding more participants with varying years of experience, especially up to 5 years and 5 to 10 years, would be an interesting direction in which to extend this study.

Summary & Conclusion:

This study was conducted to assess the hearing status of individuals of the Indian Army who worked in the infantry and artillery divisions. It was hypothesized that occupational noise exposure in these specific groups may affect hearing thresholds. The effect of prolonged noise exposure on hearing thresholds and middle ear function was assessed using a cross-sectional design. The results suggest that ONIHL progresses to a severe, non-reversible type of SNHL. At the policy level, various protective measures should be introduced to preserve hearing in the armed forces. The use of ear defenders is considered essential in high-noise environments and has a positive effect on reducing NIHL. Periodic hearing evaluation will help track changes in the hearing status of these professionals who are at auditory risk.


  Abstract – AP1089: Hearing Aid Benefit Questionnaire in School Going Children with Hearing Impairment in Kerala Top


Sneha P1, Devi N2, Chandni Jain3 & Kirti Joshi4

1snehaaiish12@gmail.com,2deviaiish@gmail.com,3chandni_j_2002@yahoo.co.in, &4kirtij@ymail.com

1All India Institute of Speech and Hearing, Mysuru - 570006

Introduction:

The hearing aid is the most important tool in auditory habilitation for a child with congenital hearing loss (Schow & Nerbonne, 1989). It makes use of the residual hearing and provides amplification, thereby giving the child with hearing loss access to sound. Hearing aid satisfaction is defined in terms of hearing aid gain and hearing aid benefit (Giordano et al., 2013). When fitted appropriately, hearing aids in most instances enable the child to use residual hearing, ensuring age-appropriate speech and language development (Cunningham, 2008). Apart from appropriate fitting, hearing aid satisfaction/benefit is also influenced by several factors such as the severity and type of hearing loss, cognitive ability of the patient, age of identification and rehabilitation, expectations from the hearing aid, motivation, and overall quality of life (Boothroyd, 2010; Kollmeier, Lenarz, & Winkler, 2011; Walden, Walden, Summers, et al., 2009). Hearing aid benefit can be assessed through subjective and objective outcome measures, using questionnaires that assess listening skills along with the social, communication, and psychological aspects of an individual. Validation is the process of assessing the benefits and limitations of aided auditory abilities. It is an ongoing process that begins immediately after the fitting and verification of the amplification device (Pediatric Working Group, 1996). Validation includes both subjective and objective outcome measures. The subjective outcome measures that are part of the hearing aid validation process include questionnaires that specifically assess hearing aid benefit. These outcome measures are designed to directly assess treatment efficacy, that is, the benefits perceived by the listener.

Need for Study:

Although various objective validation measures are used in children with hearing loss to assess hearing aid benefit, there is a lack of subjective validation measures using questionnaires. Traditional hearing aid outcome measures obtained in the laboratory give limited information about hearing aid use in everyday listening situations. To quantify the true impact of hearing loss and its associated treatment on activity limitations, lifestyle, and related domains, self-report outcome measures should be used.

In the Indian scenario, subjective evaluation of hearing aid benefit for children in terms of various developmental processes is highly limited. Hence, a standardized questionnaire for the Indian population addressing various communication skills is required.

The hearing aid benefit questionnaire is one such tool that can be used to evaluate the effectiveness of hearing aid use. Hence, this study was undertaken to evaluate hearing aid benefit in children with hearing loss attending a special school in Kasaragod, Kerala, using the hearing aid benefit questionnaire.

Aim & Objectives:

The aim of the study was to assess hearing aid benefit among school-going children with hearing impairment in a segregated setup in the Kasaragod district of Kerala.

Method:

A total of 20 children with hearing impairment studying at Mar Thoma School for the Deaf (segregated education), in the age range of 7-15 years (M = 12.5, SD = 2.29), along with their caretakers/teachers, served as participants of the study. A detailed demographic history was obtained from all participants, including the degree and type of hearing loss, age of identification and rehabilitation, aided scores for tonal perception, duration of hearing aid use, and mode of communication. The severity of hearing loss ranged from moderately severe to profound. The hearing aid fittings were either monaural or binaural, and the hearing aids used by the children were either analog or digital and had been used for more than 6 months. All the caretakers of the children were evaluated using the hearing aid benefit questionnaire. The questionnaire is divided into favourable and unfavourable conditions, and under each condition the questions are divided into subscales such as listening skills, communication skills, and social and emotional skills. It also covers language skills, academic skills, device use, and discomfort or aversiveness.

The hearing aid benefit questionnaire was administered to the caretakers of the children using hearing aids. Each caretaker was asked to rate every question based on the child's response behaviour. Responses were rated on a 5-point scale, A to E, where A = always, B = occasional, C = half of the time, D = rarely, and E = never. Caretakers were instructed to circle the letter between A and E that best described the child's behaviour. Each response was scored as 100% for 'A', 75% for 'B', 50% for 'C', 25% for 'D', and 0% for 'E'. Scores in the with- and without-hearing-aid conditions were then compared.
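A minimal Python sketch of the A-to-E scoring scheme described above is given below; the subscale names and response strings are illustrative placeholders, not the study's data or the questionnaire's actual item structure.

# Mapping of the 5-point ratings to percentage scores, as described in the abstract.
SCORE_MAP = {"A": 100, "B": 75, "C": 50, "D": 25, "E": 0}

def subscale_percentage(responses):
    # Average percentage score for one subscale (a list of A-E ratings).
    return sum(SCORE_MAP[r] for r in responses) / len(responses)

# Hypothetical ratings for two subscales in the aided and unaided conditions.
aided = {"listening": list("AABBC"), "communication": list("ABBCC")}
unaided = {"listening": list("CDDEE"), "communication": list("DDEEE")}

for subscale in aided:
    print(subscale,
          "aided:", subscale_percentage(aided[subscale]),
          "unaided:", subscale_percentage(unaided[subscale]))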

Results & Discussion:

The data obtained were subjected to the Shapiro-Wilk test for normality. The results showed that the data were normally distributed; hence, parametric tests were used. The mean aided and unaided scores were compared for both favourable and unfavourable conditions, using a paired t-test to compare scores with and without the hearing aid. The results showed that all participants performed significantly better in the aided condition under favourable conditions than under unfavourable conditions on all the subscales.

However, there was no significant difference between the aided and unaided conditions under unfavourable conditions. It was also noted that the children had poorer knowledge regarding hearing aid use than their parents.

The results showed a significant difference in mean percentage scores between the unaided and aided conditions only in the favourable condition, indicating observable hearing aid benefit there but limited benefit in difficult listening situations. These results are supported by studies using objective measures that report increased processing difficulty in unfavourable conditions (Souza & Kitch, 2001). The insignificant difference between aided and unaided scores in unfavourable situations could be attributed to aided responses falling outside the speech spectrum (indicating limited benefit from the hearing aid), a delayed age of rehabilitation, or inadequate auditory training and speech-language therapy.

Summary & Conclusion:

Even though the children in this study were identified with hearing loss before the critical age, the results of the administered questionnaire suggest limited benefit from the hearing aids. From these results we can conclude that, along with early identification, early rehabilitation and proper auditory training are crucial for speech and language development.


  Abstract – AP1092: Awareness of Motor Vehicles Act (1988 and 2019) among Bikers Top


Shubham Shaniware1 & Mitali Thakkar2

1shubhamshaniware333@gmail.com &2mitalit151@gmail.com

1KEM Hospital, Pune - 411011

Introduction:

The decadal growth of the urban population in India rose to 31.8% during the last decade (2001-2011). Increased urbanization has led to various public health challenges, one of them being environmental pollution. Noise is regarded as a pollutant under the Air (Prevention and Control of Pollution) Act, 1981. Noise causes a number of short- and long-term health problems; however, these are highly underestimated. It is increasingly becoming a potential hazard to health, physically and psychologically, and affects the general well-being of an individual. It also interferes with communication, which can even endanger life. As a physical pollutant it is not visible, and the damage occurs silently and goes undetected. There are two major settings in which noise occurs: community noise and industrial noise. Community noise (also called environmental noise, residential noise, or domestic noise) comprises noise emitted from all sources except the industrial workplace. Major sources of community noise are automobiles, construction work, loudspeakers, recreational activities, fireworks, etc.

Several studies have been conducted in various parts of the country to assess ambient noise levels. Motor vehicle noise forms a major part of total environmental noise. Daytime noise levels measured along roads between two campuses of a university in Balasore, Orissa, ranged from 70.1 dB(A) to 120.4 dB(A), which is above the permissible limit for road traffic noise (70 dB[A]). Noise generated by different vehicles was also measured, and none of the vehicles emitted traffic noise within the permissible limits. Vehicular air horns emitting loud noise have been reported to be the major contributor to high noise levels. Daily exposure to such noise levels over a long period can have harmful effects on physical health.

Need for Study:

Hearing loss due to community noise pollution is greatly preventable. Awareness of the public and stakeholders is the major component in the process of prevention and control of community noise pollution.

The Motor Vehicles Act, enacted in 1988 and amended in 2019, lays down rules and regulations for reducing the noise emitted by vehicles, covering the types of horns and sirens permitted and even motor noise. It is imperative for bikers to know these rules and regulations so that they abide by them and substantially reduce the community noise generated by their vehicles. Preventing hearing loss due to community noise is part of an audiologist's responsibility, and a first step is to gauge the level of awareness among bike users.

The noise generated by bikes is exacerbated by the use of loud exhausts and modified silencers, which is not typically the case for cars or other four-wheeled vehicles; hence, it is essential to assess awareness among bikers on a priority basis. This content could even be included in the syllabus of driving-licence exams, ensuring that riders receive the information before obtaining their licences.

Aim & Objectives:

Aim: To assess the level of awareness of the Motor Vehicles Act among bikers.

Objectives: (1) To assess the level of awareness of the Motor Vehicles Act among bikers; (2) to increase awareness of the Act among motor vehicle users.

Method:

Participants: The study was conducted on 100 bikers who ride every day for commuting purposes, travelling for at least 2 hours daily.

Tool: A simple questionnaire consisting of 10 questions was developed by the researcher. Questions were of the Yes/No type, and no scores were assigned to individual responses. The questions covered knowledge of the Motor Vehicles Act, rules for bikers regarding permitted horn types, allowable noise limits, punishable offences under the Act, and the amount of penalty charged for them.

Procedure: The questionnaire was provided to the bikers, and it took each respondent around 5-10 minutes to mark Yes/No. The responses were then tallied, and an information leaflet was given to each respondent covering the Motor Vehicles Act 1988 and its 2019 amendment, including the permissible noise levels for bikes, the types of horns and sirens allowed, and the penalties charged for breaking the law.

Results & Discussion:

Results and Analysis: A survey of 100 respondents was carried out, and the analysis was performed in terms of percentages.

Total number of respondents: 100, of whom 75 (75%) were male and 25 (25%) were female.

Awareness about the Motor Vehicles Act was covered by 2 questions, and 60% of respondents were aware of the Act. Awareness about the permissible noise level from a bike was covered by 1 question, and 56% of respondents were aware of it.

Awareness about the types of horns and sirens permitted on bikes was covered by 2 questions, and 55% of respondents were aware of them. Awareness about punishable offences related to bikes and the noise they generate was covered by 2 questions, and 67% were aware of them. Awareness about the penalties associated with violating these laws was covered by 3 questions, and 65% were aware of them.

Discussion: The city of Mumbai has crossed 20 lakh bikers, and the number of bikers has increased by 77% in the past 6 years; it is thus imperative to assess awareness among these bikers about the Acts laid down by the Government to abate the ill effects of community noise. The incidence of noise pollution has also increased in recent years, and community noise has added substantially to total pollution, mainly due to the increasing number of motor vehicles, the growing population, and the use of modern machinery and techniques.

Hearing loss as a consequence of community noise forms a large part of overall noise-induced hearing loss. The level of awareness among bikers observed in this study averages around 50-60%, and collaborative action from the Government, audiologists, and other healthcare professionals is required to increase awareness and thereby curb the ill effects of community noise. The survey was conducted on a small sample of 100 participants, so a larger population needs to be surveyed to obtain a fuller picture of the current scenario. As suggested earlier, it would be worthwhile to include knowledge of these Acts in the syllabus of driving-licence exams.

Summary & Conclusion:

Conclusion: The Motor Vehicles Act, enacted in 1988 and amended in 2019, lays down rules and regulations to monitor the noise generated by motor vehicles. The ever-increasing community noise from motor vehicles calls for greater awareness among bikers and other motor vehicle users. The results of the survey indicate a need to generate awareness of the Act among bikers, which would help reduce the incidence of noise-induced hearing loss attributable to community noise.


  Abstract – AP1094: Utility of Wave V Latency of the Auditory Brainstem Responses in Identifying Suprathreshold Deficits in Older Individuals with Clinically Normal Hearing Sensitivity Top


Sneha Uttakalika1, Varsha M Athreya2, Srikar Vijayasarathy3 & Animesh Barman4

1sneha.u.sah@gmail.com,2varshamathreya.audio@gmail.com,3srkrv.y@gmail.com, &4nishiprerna@yahoo.com

1All India Institute of Speech and Hearing, Mysuru - 570006

Introduction:

Ageing leads to a reduction in absolute auditory sensitivity (Liu et al., 2007) as well as deficits in temporal processing, spectral processing, speech-in-noise perception, and cognition (Rosenhall, 2003; Moore, 2003). Recent findings (Kujawa & Liberman, 2015) indicate that the inner hair cell-auditory type I neuron synapse, rather than the hair cells themselves, may be the most delicate and vulnerable part of the auditory system. Tools to identify and diagnose this hidden hearing loss are critical to the progress of the field.

Need for Study:

Though the characteristics of presbycusis have been well studied, the mechanisms leading to this degradation in performance are not well understood. The variance in performance of individuals with similar audiometric profiles has not been well explained to date. This becomes important since rehabilitation of ageing individuals is one of the foremost challenges faced by hearing health professionals (Moore, 2003). It has recently been reported that cochlear synaptopathy may be one of the causes of suprathreshold deficits (Fernandez et al., 2015) in the ageing auditory system. Identifying clinically relevant tools and parameters sensitive to synaptopathy may be critical for effective management of the hidden hearing loss associated not just with presbycusis, but also with other conditions such as tinnitus, hyperacusis, and noise exposure (Valderrama et al., 2018). In this study, we investigate whether auditory brainstem response wave V latency parameters can be used to identify possible synaptopathy in the ageing ear.

Aim & Objectives:

The study aimed to investigate the sensitivity of auditory brainstem response wave V latency parameters to clicks for identifying possible synaptopathy due to presbycusis. The study also aimed to investigate the correlation between these electrophysiological measures and suprathreshold measures of audition such as speech perception in noise and gap detection threshold.

Objectives

  1. To compare the absolute latency of wave V to clicks at low (11.1/sec) and high repetition rates (90.1/sec) in young and older subject groups
  2. To compare the relative measures of latency of wave V to clicks at low (11.1/sec) and high repetition rates (90.1/sec) in young and older subject groups
  3. To investigate wave V latency shifts at low (50 dB nHL) and high intensity (80 dB nHL) in young and older subject groups
  4. To investigate the correlation between behavioral and electrophysiological measures: Gap detection threshold, Speech in noise perception and measures of wave V latency


Method:

Participants: The control group consisted of 30 young adults (18-25 years) with clinically normal hearing. The clinical group consisted of 26 older adults (50-70 years) with air-conduction thresholds no greater than 20 dB HL at any of the octave frequencies measured clinically. Both groups had acoustic reflex thresholds at normal levels and normal auditory brainstem responses. None of the participants had a history of otological or neurological problems.

Procedure: Behavioral measures of gap detection and speech in noise were obtained monaurally at a comfortable level (60-70 dB SPL) using Sennheiser HDA 200 earphones. Across-channel gap detection (1000-2000 Hz narrow-band noise) was chosen instead of broadband noise to avoid the ceiling effects often seen with the latter. Speech in noise was measured at 0 dB SNR for words in noise (Manjula et al., 2015).

Auditory brainstem responses (ABRs) to clicks were recorded at 11.1/sec and 90.1/sec at 80 dB nHL in both groups as the primary measure. Responses were also recorded at 50 dB nHL as a secondary/confirmatory measure. The stimuli were delivered to the right ear of each participant through ER-3A insert earphones, and responses were acquired through a 4-channel IHS system using conventional ABR recording parameters. Wave V latency was considered for analysis: the absolute latency and the slope of the latency shift with increase in repetition rate. Another relative measure was considered, the complex amplitude ratio (Don et al., 2007), computed as [Latency(90.1/sec) - Latency(11.1/sec)] / Latency(11.1/sec).
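As a hedged illustration, the short Python sketch below computes the latency-shift slope and the relative ratio defined above for a hypothetical pair of wave V latencies; the latency values and function name are placeholders, not measurements from the study.

def latency_shift_measures(lat_low_ms, lat_high_ms, rate_low=11.1, rate_high=90.1):
    # Slope of the latency shift with repetition rate, and the relative ratio from the text.
    slope = (lat_high_ms - lat_low_ms) / (rate_high - rate_low)   # ms per (clicks/sec)
    relative_ratio = (lat_high_ms - lat_low_ms) / lat_low_ms
    return slope, relative_ratio

# Hypothetical example: wave V at 5.6 ms (11.1/sec) and 6.1 ms (90.1/sec).
slope, ratio = latency_shift_measures(5.6, 6.1)
print(f"slope = {slope:.4f} ms per unit rate, relative shift = {ratio:.3f}")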

Results & Discussion:

Behavioral measures: Gap detection thresholds and speech-in-noise scores in the younger group deviated significantly from a normal distribution on the Shapiro-Wilk test (p < 0.05). The younger group had better gap detection thresholds (median = 33.3 ms; IQR = 21.25 ms) than the older group (median = 71.5 ms; IQR = 60.63 ms), a statistically significant difference on the Mann-Whitney U test (z = -4.24; p < 0.001). The same trend held for speech recognition in noise (0 dB SNR): the older group (median = 15; IQR = 13) had significantly poorer scores (z = -6.25; p < 0.001) than the younger group (median = 21; IQR = 1.25).

Electrophysiological measures: The Shapiro-Wilk test revealed that all parameters considered were within the limits of a normal distribution (p > 0.05). Since there were several dependent variables, a MANOVA was conducted to examine their combined effect on age-related group differences; the multivariate effect was not statistically significant (F(3,52) = 1.87, p = 0.146, ηp² = 0.097). Univariate analyses with a Bonferroni correction for multiple comparisons showed that wave V latency was not significantly different between the two groups at either 11.1/sec or 90.1/sec (p > 0.05). However, the slope of the latency shift from 11.1/sec to 90.1/sec was significantly greater (F(1,54) = 4.94, p = 0.03, ηp² = 0.084) in the younger group (mean = 0.007, SD = 0.002) than in the older group (mean = 0.006, SD = 0.002). The complex amplitude ratio of the latency shift was also significantly higher in the younger group (F(1,54) = 5.35, p = 0.025, ηp² = 0.09) and, as predicted, had less variance and a large effect size (d = 0.61). The older group thus showed a degree of saturation in latency (Vijayasarathy et al., 2019) with increasing repetition rate at high intensity, whereas the younger group remained more sensitive. Most importantly, this difference between the age groups was not found at the lower intensity (50 dB nHL): the complex amplitude ratio of the wave V latency shift with repetition rate at 50 dB nHL did not differ statistically (t(42) = 0.056, p = 0.96) between the two groups. This finding of suprathreshold deficits in the absence of, or with relatively reduced, deficits at lower intensities is in line with the cochlear synaptopathy reported in older subjects by various investigators (Fernandez et al., 2015).

Correlation between measures: Pearson product-moment correlations were computed between the behavioral and electrophysiological measures separately for the two groups. In the younger group, wave V latency was positively correlated with gap detection thresholds at both 11.1/sec (r = 0.4, p = 0.027) and 90.1/sec (r = 0.46, p = 0.039), indicating that earlier latencies were associated with better gap detection thresholds (Poth et al., 2001). In the older group, the wave V absolute latencies were positively correlated with the participant's age (11.1/sec: r = 0.42, p = 0.033; 90.1/sec: r = 0.437, p = 0.026), implying an increase in latency with age.

Summary & Conclusion:

Behavioral measures of gap detection and speech perception in noise were poorer in the older group. The wave V absolute latency for clicks, however, was similar in the two groups at both low and high repetition rates. Parameters associated with the latency shift with change in repetition rate, particularly the complex amplitude ratio, differed significantly between the two groups, with younger individuals showing larger shifts than older individuals. This phenomenon was observed only at the higher intensity, not at lower intensities, suggesting a physiological basis for suprathreshold deficits in older individuals. The wave V latency shift with repetition rate, particularly the complex ratio, could be a sensitive tool to identify possible cochlear synaptopathy.


  Abstract – LO133: Aphasia Recovery from the Perspective of Persons with Aphasia Top


S.P. Goswami1 & Aditi Rao2

1goswami16@yahoo.com &2aditi.slpa@gmail.com

1All India Institute of Speech and Hearing, Mysuru - 570006

Introduction:

Aphasia can have a pervasive impact on a person's cognitive, personal, social, and emotional functioning, and its effects are not restricted to the person with aphasia (PWA) but extend to his/her immediate family and friends as well. Beyond the highly visible consequences of aphasia, the effects that are out of sight are much more complex and multi-faceted, including the psychosocial and emotional impact that aphasia has on the individual. To reduce the burden carried by the PWA, several models have been proposed and are widely used to assess and understand the factors that play a role in recovery. The World Health Organisation (WHO, 2001) put forth the International Classification of Functioning, Disability and Health (ICF) model, which attempts to elucidate the concept of accessibility and inclusion of the person with a disability in their environment. The Life Participation Approach to Aphasia (Chapey, Duchan, Elman, Garcia, Kagan, Lyon, & Simmons-Mackie, 2001), on the other hand, refers to a model of service delivery that encourages clinicians and researchers to target real-life goals for PWAs. However, the use of traditional measures to evaluate outcomes in PWAs gives only a shallow understanding of the recovery process. Living with Aphasia: Framework for Outcome Measurement (A-FROM), formulated by Kagan, Simmons-Mackie, Rowland, Huijbregts, Shumway, McEwen, Threats, and Sharp (2008), targets the impact of communication disability caused by aphasia and attempts to represent outcomes in a coherent yet dynamic manner. A-FROM incorporates factors that are often overlooked during outcome measurement, namely quality of life, environmental influences, participation, and personal identity, which often remain inconspicuous but play significant roles in recovery. With the ideologies of these models at the crux of the study, PWAs were interviewed to view the process of recovery from their perspective.

Need for Study:

Over the recent decade, there has been a significant push in developed countries toward improving the quality of life of PWAs, and professionals have actively attempted to address issues of accessibility and inclusion of PWAs in society. However, in the Indian context, advocacy for integrating the requirements of PWAs into society has not received much emphasis. It is, therefore, the need of the hour to understand the various barriers and facilitators in recovery from the stakeholders (PWAs) themselves, and to explore the positive or negative influences in our context that promote or impede recovery. This study is one of a kind in the Indian scenario, as detailed stories of recovery from aphasia were obtained from the stakeholders themselves.

Aim & Objectives:

The aims of this study were to examine the paths of recovery of PWA focusing specifically on their return to work and life activities, as well as changes in communicative, physical and emotional functioning. Using interview data collected from patients, we explored the barriers and supports they experienced along the way, examining paths common to all patients as well as the unique successes and challenges faced by each. A detailed analysis of the factors that facilitate or impede recovery essentially provides professionals with a deeper understanding of the process of recovery from aphasia.

Method:

The study was carried out in six distinct phases. The first phase comprised the preparation of the protocol for a semi-structured interview. The overall framework for three semi-structured interviews with each PWA was designed, with each interview ideally conducted in 45-60 minutes. The first interview was conducted with the PWA and the caregiver, whereas the other two were carried out with the PWA alone. Eighteen PWAs participated in the study, including individuals from diverse ethnocultural groups and differing levels of socio-economic status, to capture the patterns of recovery seen across individuals within our society. Participants were familiarised with the purpose and method of the study, and signed consent was obtained from the PWA and caregiver. Discourse elicitation during the semi-structured interviews was carried out by the researcher in a progressive manner, opening with simple introductory conversation followed by detailed discourse. All interviews were video-recorded, and the PWA was instructed to respond to the best of their abilities using any communicative means they were comfortable with. The interviews were transcribed verbatim, with translations of the communicative productions accompanying the transcriptions. The transcribed data were used to identify narratives produced by the PWAs and to recognise recurring themes across participants in their paths to recovery.

Results & Discussion:

Analysis of the participants' stories yielded valuable information regarding the challenges as well as victories these individuals face on their path to recovery. A number of factors influenced the process of recovery, including family support, their own personality traits, social participation, communicative behaviour, return to work/routine, access to professional services, responsibilities, and finances. Six major themes were observed to play an important role in recovery, namely support, functional independence, significant relationships, attitudes toward communication, communicative environment, and self-motivation or positivity. Traditionally, recovery from aphasia was viewed as dependent on factors such as the extent of brain damage, site of lesion, and age at which aphasia was acquired. However, delving deeper into the stories of these individuals, it is clear that we must adopt a holistic perspective considering the factors that affect the activity and participation of these individuals in our society.

Summary & Conclusion:

The path to recovery for all participants was a long and challenging one; however, with a strong system of support, acceptance, and understanding, their abilities could be maximized, giving them the confidence to be functioning individuals within our society. The primary inference that can be drawn from the data obtained from all participants is that it is highly important for us as professionals to keep in mind the various factors that could facilitate or impede recovery in persons with aphasia.


  Abstract – LO136: Pragmatic Ability of Children with Severe to Profound Hearing Loss Top


Hemangi Vaidya1 & Aarti Waknis2

1hemangivaidya.95@gmail.com &2aartiwaknis1@gmail.com

1Sparsh Clinic, Pune - 411028

Introduction:

The term pragmatics was originally used in 1932 by Peirce and further extended by Morris in 1946. Pragmatics relates to the context in which language is used, including the time, place, speaker, and listener. Pragmatic language impairment can be a disorder by itself, but it is more commonly associated with autism spectrum disorder, Down syndrome, specific language impairment, attention deficit hyperactivity disorder, and hearing impairment.

Hearing plays a very important role in language development (Weisel, 2005). With the help of technological advances, about 96% of children with hearing impairment who are fitted early with an appropriate hearing device can reach normal language development in terms of vocabulary, semantics or syntax (Murria, Guerzonia, Fabrizib, & Marian, 2014). For oral deaf children, 36 months might be a critical age to examine emerging communication with respect to function or modality interactions (Nicholas et al., 1994).

Need for Study:

The prevalence of hearing impairment in India is fairly high. The National Sample Survey 58th round (2002) surveyed disability in Indian households and established that hearing disability was the second most common disability and the leading cause of sensory deficit. Four in every 1000 children suffer from severe to profound hearing loss, and more than 1,00,000 babies are born with hearing deficiency every year.

Children with hearing impairment are known to have a delay in their language and speech development. Hearing loss has an impact on all the parameters of language, including phonology, syntax, semantics, and pragmatics. Development of the structural aspects has been studied quite extensively in India; however, there is a lack of studies on the pragmatic aspects.

Hence, it becomes important to study the pragmatic language of children with hearing impairment in the Indian context.

Aim & Objectives:

The aim of the present research was to study the pragmatic skills of children with severe to profound hearing loss. The pragmatic skills of children with severe to profound hearing loss using hearing devices (hearing aid or cochlear implant) were compared with those of typically developing children under three conditions: when the two groups were matched on expressive language age, on chronological age, and on hearing age. Further, the pragmatic skills of children fitted with hearing aids were compared with those of children who had undergone cochlear implantation.

Method:

Two groups of children participated in the study. These included 40 children with severe to profound hearing loss (Group I) using a hearing device (Hearing aid or Cochlear Implant) having a chronological age range of 6 months to 68 months, with an expressive language age of 6 months to 48 months (as assessed on REELS -Receptive Expressive Emergent Language Scale) and with a hearing age in the range of 12 months to 36 months. Group II consisted of 40 typically developing children with mother tongue Marathi in the age range of 6 months to 48 months (chronological age, expressive language age and hearing age). Children with known syndrome, intellectual deficits or any developmental disorder or visual impairment were excluded from the study. Checklist for Assessment of Pragmatics of Pre-schoolers (Gejji & Waknis, 2015) was administered to children in both the groups. The checklist is divided into 4 sections- Section I: Communication functions, Section II: Response to communication, Section III: Interaction and communication, and Section IV: Contextual variation. There are a total of 44 questions in the checklist. The checklist uses the framework of Pragmatic profile for preschool children (Summers & Dewart, 1988) and the order of responses is as per the normative development in typically developing Marathi speaking children (Thakur & Waknis, 2014). The checklist was administered by interviewing the children's primary caregiver and was confirmed with actual observations of the children.

Observations were marked on the checklist and scoring was done as per the instructions of the authors of the checklist.

Statistical analysis was done with SPSS version 20. Prior to selecting the appropriate statistical tests for comparison of the data, the Shapiro-Wilk test was performed to assess normality of the distribution. The results revealed that the data were normally distributed (p > 0.05); hence, parametric statistical methods were used for analysis. Descriptive statistics and unpaired t-tests were used to compare the mean pragmatic scores of the two groups under the three matching conditions mentioned in the objectives.
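The study used SPSS; purely for illustration, an equivalent normality check and group comparison could be sketched in Python with SciPy as below. The score arrays are hypothetical placeholders, not the study's data.

import numpy as np
from scipy import stats

hi_group = np.array([55, 60, 48, 62, 51, 58, 53, 65, 49, 57], dtype=float)  # hypothetical HI scores
td_group = np.array([78, 82, 75, 88, 80, 85, 79, 90, 77, 84], dtype=float)  # hypothetical TD scores

# Shapiro-Wilk normality check (p > 0.05 taken to indicate a normal distribution).
print(stats.shapiro(hi_group).pvalue, stats.shapiro(td_group).pvalue)

# Unpaired (independent-samples) t-test comparing mean pragmatic scores.
t, p = stats.ttest_ind(hi_group, td_group)
print(f"t = {t:.2f}, p = {p:.4f}")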

Results & Discussion:

Results of the study indicated that the pragmatic skills of children with severe to profound hearing loss using hearing devices (hearing aid or cochlear implant) were significantly poorer than those of typically developing children (p < 0.05) when matched on chronological age as well as expressive language age. However, no statistically significant difference was found between children with severe to profound hearing loss using hearing devices and typically developing children when matched on hearing age. Further, the percentage of children with pragmatic skills above, below, and appropriate to their expressive language skills and chronological age was determined separately for children fitted with cochlear implants and children using hearing aids, by comparing their scores with the norms of the checklist (Gejji & Waknis, 2015). From this comparison it was observed that an almost similar percentage of children in the two groups were delayed in overall pragmatic scores; however, the profiles of the two groups appeared slightly different with respect to the sections of the tool. Nicholas, Geers, and Kozak (1994) reported similar findings: when children with hearing impairment are matched in terms of expressive language age to their hearing counterparts, their pragmatic skills appear to develop in a similar pattern, but with a significant delay. Early intervention and fitting of an appropriate hearing device (hearing aid or cochlear implant) are known to have a significant impact on the language and speech abilities of children with hearing impairment. Thus, in children with hearing impairment, not just the chronological age but also the hearing age (the duration from fitting of the hearing device to the date of assessment) becomes an important variable to consider.

Summary & Conclusion:

Results of the study indicated that when the pragmatic skills of children with severe to profound hearing loss using hearing devices (hearing aid or cochlear implant) were compared with those of expressive-language-age-matched and chronological-age-matched typically developing children, there was a significant difference in the development of pragmatic abilities. However, the pragmatic skills of children with severe to profound hearing loss using hearing devices were similar to those of hearing-age-matched typically developing children.

Thus, it can be concluded that the pragmatic abilities of Marathi-speaking children with severe to profound hearing loss are statistically similar to those of hearing-age-matched typically developing children, but are delayed compared to expressive-language-matched and chronological-age-matched typically developing children. Pragmatic abilities are thus a function of the hearing age of children with severe to profound hearing impairment.


  Abstract – LO137: Estimating the Efficacy of Artificial Neural Network in Diagnosis of Aphasia Top


Eliza Baby1, S.P. Goswami2 & Abhishek B.P3

1elizacm1994@gmail.com,2goswami16@yahoo.com, &3abhiraajaradhya@gmail.com

1All India Institute of Speech and Hearing, Mysuru - 570006

Introduction:

Aphasia is an impairment of language, affecting the production or comprehension of speech and the ability to read or write. Aphasia is always due to injury to the brain, most commonly from a stroke, particularly in older individuals, but brain injuries resulting in aphasia may also arise from head trauma, brain tumors, or infections (National Aphasia Association).

Aphasia can be diagnosed using standardized tests. Some of the standardised screening tools for aphasia include the Aphasia Language Performance Scale (ALPS; Keenan & Brassell, 1975), the Frenchay Aphasia Screening Test (FAST; Enderby et al., 1987), and the Mississippi Aphasia Screening Test (MAST; Nakase-Thompson, 2004); diagnostic tests include the Minnesota Test for Differential Diagnosis of Aphasia (MTDDA; Schuell, 1973), the Boston Diagnostic Aphasia Examination (BDAE; Goodglass, Kaplan, & Barresi, 2001), and the Western Aphasia Battery (WAB; Shewan & Kertesz, 1980). These standardised tests have been translated and standardised in Indian languages; examples include the WAB in Kannada (Shyamala & Vijayashree, 2007) and Telugu (Pallavi & Shyamala, 2010), and the Bedside Screening Test for Aphasics in Kannada (Ramya & Goswami, 2011) and Malayalam (Kanthima & Goswami, 2011). Rehabilitation begins with assessment and diagnosis, and accurate diagnosis is influenced by many variables. The behaviours exhibited by persons with aphasia can be distinct on one hand or overlapping on the other, to the extent of leading to misdiagnosis. The major cause of this is variability, which is the hallmark of aphasic syndromes; therefore, individual differences must be accounted for to prevent such errors. The performance of every PWA differs from that of another, which makes it difficult to reach an appropriate diagnosis using the available subjective assessment tools. The presentation of symptoms may not match the expected clinical manifestation, limiting the efficacy of assessment, and clinical expertise is another major variable influencing diagnosis. Owing to these factors, researchers have employed the artificial neural network (ANN) as an adjunct in diagnosis. The ANN is not a replacement for the speech-language pathologist in the arena of diagnosis; it can be used as a tool to confirm the diagnosis and has been used extensively in the field of speech-language pathology.

Need for Study:

The artificial neural network (ANN) is a recent advance in the field and can be built using MATLAB. ANNs were originally used in the analysis of blood and urine samples, and the same concept has been extrapolated to medicine and allied health disciplines. The published literature shows the utility of ANNs in communication disorders such as stuttering and aphasia. Voice disorders and hypernasality in cleft lip and palate can be diagnosed using perceptual and objective evaluation. Similarly, the behavioural diagnosis of aphasia can be correlated with objective measures such as an ANN in order to arrive at a more confident and robust diagnosis.

Aim & Objectives:

To build an objective tool that provides an assistive objective evaluation, along with a confidence index, for individuals with aphasia.

To train the ANN to be sensitive to the parameters essential for diagnosis, and to check the efficacy of the ANN in diagnosing aphasia.

Method:

The study was carried out in two phases. Phase I involved the development of the tool using MATLAB, with an artificial neural network (ANN) used for data mining. The retrospective data of eight hundred individuals diagnosed with aphasia were used. A language deficit was the only inclusion criterion; post-stroke duration and the severity of the condition were not considered. Cases with subcortical lesions and crossed aphasia were excluded, as the study is in its preliminary phase. The database included clients with all types of aphasia, with a minimum of 26 cases per type (for isolation and conduction aphasia).

A multilayer ANN is used to create models of a system state using nonlinear combinations of the input variables (Bishop, 1995; Duda et al., 2001; Hastie et al., 2001). The ANN employed in this study was a feed-forward network with sigmoid activation functions in the hidden layers and a linear activation function in the output node, implemented in MATLAB (The MathWorks Inc., R2015b). The ANN was trained using a back-propagation algorithm with gradient descent and momentum terms. Appropriate pruning of the network, along with variable node weighting, was applied to obtain the best possible classification.
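The study's network was built in MATLAB; purely as a rough Python analogue of the same idea (a feed-forward classifier with sigmoid hidden units trained by gradient descent with momentum), the sketch below uses scikit-learn. The feature scores, labels, and network size are hypothetical placeholders, not the study's data or architecture, and the class probabilities stand in loosely for the "confidence index" idea.

import numpy as np
from sklearn.neural_network import MLPClassifier

# Hypothetical inputs: scores on fluency, comprehension, repetition, naming; output: aphasia type.
X = np.array([[2, 4, 3, 2], [9, 8, 9, 8], [8, 3, 7, 4], [1, 2, 1, 1]], dtype=float)
y = ["Broca", "Anomic", "Wernicke", "Global"]

# Feed-forward network, sigmoid ("logistic") hidden units, gradient descent with momentum.
net = MLPClassifier(hidden_layer_sizes=(10,), activation="logistic",
                    solver="sgd", momentum=0.9, learning_rate_init=0.01,
                    max_iter=5000, random_state=1)
net.fit(X, y)

new_case = np.array([[2, 5, 2, 3]])
probs = net.predict_proba(new_case)[0]
# The highest class probability can serve as a rough confidence score for the predicted type.
print(dict(zip(net.classes_, probs.round(2))), "->", net.predict(new_case)[0])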

In Phase II, the efficacy of the developed tool was tested. Ninety-seven new cases were fed to the ANN. The diagnoses of these cases were independently determined by three expert speech-language pathologists, each with a minimum of 5 years of experience in the management of aphasia. In addition, the aphasia confidence index, i.e., a score representing the confidence with which a patient is classified into a type of aphasia, was correlated with the diagnoses made by the SLPs. Further analysis was carried out to examine the agreement between the two sets of ratings.

Results & Discussion:

An interactive ANN was built as the result of Phase I of the study. The ANN was tuned by feeding it the scores on domains such as fluency, comprehension, repetition, and naming, together with the diagnosis; the operational criterion for diagnosis was defined as the output. The data of the new cases were then fed to the system in Phase II. Cohen's kappa analysis was carried out to evaluate the agreement between the results obtained from the developed tool and the SLPs' evaluations. An overall kappa coefficient of 0.916 (p < 0.05) was found, indicating strong agreement between the two ratings. The confidence index was directly proportional to the number of cases fed to the system. Reliability was lower for isolation aphasia, and the ANN had slight difficulty differentiating anomic aphasia from normal performance, with a kappa coefficient of 0.88.
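For illustration, the agreement analysis named above can be sketched with scikit-learn as follows; the two label lists are hypothetical placeholders, not the study's 97 cases.

from sklearn.metrics import cohen_kappa_score

# Hypothetical diagnoses from the SLPs and from the ANN for six example cases.
slp_diagnosis = ["Broca", "Anomic", "Wernicke", "Global", "Anomic", "Broca"]
ann_diagnosis = ["Broca", "Anomic", "Wernicke", "Global", "Broca", "Broca"]

kappa = cohen_kappa_score(slp_diagnosis, ann_diagnosis)
print(f"Cohen's kappa = {kappa:.3f}")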

This paper was an extension of preliminary work. The results of the preliminary data showed limited robustness owing to the constraint of sample size; hence, it was decided to use a large sample of cases for training the ANN. However, the number of cases of each variant could not be kept uniform because the data were retrospective and some aphasia types are more prevalent than others. The efficacy of the developed ANN was analysed using 97 new cases, and a high kappa coefficient was obtained when checking the agreement between the diagnoses made by the SLPs and those made by the ANN. The findings show that the ANN can be an effective tool for confirming the diagnosis, thus acting as an adjunct to diagnosis.

Summary & Conclusion:

The present study aimed to build an objective tool that assists professionals by providing a confidence index along with a classification of various aphasic presentations. Eight hundred retrospective case files were used for training in the first phase, and the second phase determined the efficacy of the developed tool by comparing its diagnoses with those made by SLPs. The results revealed good agreement between the diagnoses made by the SLPs and the output of the developed tool. The tool can help guide novice clinicians in decision-making and serve as an assistive tool to help the SLP confirm a diagnosis.


  Abstract – LO140: Investigating Distinct Semantic Processing Ability in Individuals with Dementia using N-Back Task Top


Jesnu Jose Benoy1, Hema N2 & Devi N3

1jesjosben@gmail.com,2hema_chari2@yahoo.com, &3deviaiish@gmail.com

1All India Institute of Speech and Hearing, Mysuru - 570006

Introduction:

Comprehension and functional use of language require cognitive processes such as retrieval, information processing, and maintaining and interpreting information or representations (Martin & Reilly, 2012), which are part of working memory (WM). Various cognitive tasks such as verbal reasoning, learning, mathematics, and language processing are related to working memory (Conway et al., 2005). WM stores and updates relevant information to support goal-directed behaviour (Gajewski et al., 2018).

Aging is often accompanied by WM decline (Salthouse, 2015). This decline is much faster in pathological aging such as dementia or Mild Cognitive Impairment (MCI). Dementia is often characterized by a progressive reduction in memory and/or other cognitive processes including WM (Bragin et al., 2015).

A common paradigm to assess working memory capacity is the n-back task (Kirchner, 1958). The n-back task has also been used in fMRI studies as a test of WM, with activations found in a range of areas including the dorsolateral prefrontal cortex, inferior frontal cortex, anterior cingulate, and posterior parietal cortex (Cohen et al., 1997; Owen et al., 2005). The n-back task has face validity as a WM task as it requires maintaining, continuously updating, and processing information. It shows moderate to good correlation with other measures such as the Stroop task, measures of fluid intelligence, and measures of short-term memory (Gajewski et al., 2018).

Need for Study:

The initial phases of dementia manifest with executive dysfunction and WM impairments along with episodic memory deficits (Kirova et al., 2015). These cognitive deficits arise during MCI and manifest as a sign of progression to dementia.

In the recent past, WM has been investigated in individuals with aphasia using distinct n-back tasks (Deepa & Hema, 2019; Karunika & Hema, 2019). These studies revealed significant differences across individuals with normal aging and individuals with aphasia and have upheld the possibility of an association between WM and linguistic processing ability in individuals with aphasia.

Given that individuals with MCI/dementia have an early and progressive reduction in working memory, the use of an objective computerised task for early detection of cognitive-linguistic changes may aid in faster recognition of MCI and/or dementia (Fleming & Harris, 2008; Harris et al., 2008).

Aim & Objectives:

The aim of the present study was to assess working memory capacity and its effect on linguistic processing ability in adults with and without dementia using n-back task.

The objectives of the study were:

  1. To examine the working memory capacity in individuals with dementia and age matched neuro-typical adults in the n-back task using E-Prime software.
  2. To study the effect of working memory abilities in processing distinct linguistic information (semantic) in the n-back task using E-Prime software.


Method:

The participants were 7 individuals with dementia (IWD) and 10 neuro-typical individuals (NTI) in the age range of 60-85 years. The IWD were diagnosed as having dementia by a neurologist and were evaluated by a speech-language pathologist using the Kannada version of the Montreal Cognitive Assessment (MoCA; Nasreddine et al., 2005) and the Clinical Dementia Rating Scale (CDR; Morris, 1993). All the IWD had very mild to mild dementia as per the CDR scale.

Following the study of Wright et al. (2007), the n-back task was replicated using lexical items (common objects, fruits, and vehicles), letters of the alphabet, and single digits as stimuli. Stimuli were obtained from the Kannada version of the Western Aphasia Battery (WAB-K). Each category contained 10 stimuli, which were randomly arranged to form the sequences for the 1-back, 2-back, 3-back, and 4-back tasks.

Participants were seated comfortably in front of the computer screen, were instructed about the n-back task, and were given a trial with a different set of stimuli before the actual experiment. The experiment was programmed and run using Psychology Software Tools' E-Prime Professional software (version 2.0) on an HP Notebook-15-ac101tu laptop. Within E-Prime, the E-Studio and E-Data Aid modules were used to design the sequence of stimulus presentation with a fixed duration (2000 ms) and inter-stimulus interval (1500 ms). To indicate their response, participants pressed number key 1 or 2 on a standard US keyboard, with 1 for a match between the test and target stimuli at n-back and 2 for a no-match. Responses delayed beyond 5000 ms were not considered. For every n-back level, 5 trials were used, of which 3 were test trials and 2 were catch trials, presented in random order. Correct responses on a minimum of 3 trials within a level determined the participant's level/threshold and response accuracy. Reaction time (RT, in ms) and response accuracy were extracted using the E-Data Aid module within E-Prime 2.0 and imported into SPSS for data analysis.
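The task itself was implemented in E-Prime; the plain-Python sketch below merely illustrates the n-back match/no-match and accuracy logic described above. The stimulus sequence and the participant's responses are hypothetical placeholders.

def nback_targets(sequence, n):
    # For each position from n onward, is the item a match with the one n positions back?
    return [sequence[i] == sequence[i - n] for i in range(n, len(sequence))]

def accuracy(sequence, n, responses):
    # responses: participant's match (True) / no-match (False) judgements, aligned with targets.
    targets = nback_targets(sequence, n)
    correct = sum(r == t for r, t in zip(responses, targets))
    return correct / len(targets)

stimuli = ["mango", "bus", "mango", "car", "car", "apple"]   # hypothetical 2-back sequence
participant = [True, False, True, False]                     # hypothetical key presses (1 = match)
print(accuracy(stimuli, 2, participant))                     # -> 0.75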

Results & Discussion:

The performance of individuals with dementia and neurotypical individuals was analysed on two aspects, viz., level/threshold/accuracy of responses and reaction time.

The performance of IWD with respect to level/threshold/accuracy of responses was scattered. IWD demonstrated a threshold of 4-back for alphabets; 3-back for numbers and the lexical categories "fruits" and "vehicles"; and 2-back for the lexical category "common objects". NTI, however, were able to perform up to the 4-back task for all categories of stimuli. This difference in thresholds could be due to the different processing loads of different semantic categories (Bragin et al., 2015).

With reference to descriptive statistics, the mean RT was better (shorter) for NTI than for IWD at all levels of the n-back task. At the threshold level, the mean ± SD RT for the lexical category "common objects" was 4170 ± 547 ms for IWD and 2859 ± 531 ms for NTI. For "fruits", it was 4379 ± 300 ms for IWD and 2709 ± 307 ms for NTI; for "vehicles", 3627 ± 563 ms for IWD and 2949 ± 498 ms for NTI; for numbers, 4182 ± 528 ms for IWD and 3009 ± 598 ms for NTI; and for alphabets, 4368 ± 598 ms for IWD and 2790 ± 390 ms for NTI. Thus, IWD required longer processing time to access and retrieve information from WM. Similar findings of prolonged RT, together with longer latencies for evoked response potentials in MCI, have been reported by Fraga et al. (2017).

Between-group comparisons using the Mann-Whitney U test revealed significant differences (p < 0.05) between the IWD and NTI groups. This could be attributed to an impaired structure of the semantic system and an impaired ability to access semantic information in IWD (Grossman et al., 1996; Kensinger et al., 2003). Further, within-group comparisons using Friedman's test revealed significant differences (p < 0.05) between the categories of stimuli for both the IWD and NTI groups. Given that the n-back task simultaneously taps both familiarity- and recollection-based processes, familiarity obscures the relation to recall of complex items (Oberauer, 2005). This would lead to varied processing of different categories of stimuli, along with differences in processing load. The present study thus provides evidence that cognitive-linguistic processing ability is affected in individuals with dementia.

Summary & Conclusion:

Results of the present study revealed that NTI had better working memory capacity than IWD, as measured using distinct linguistic processing in the n-back tasks. Category-specific differences (n-back thresholds) were also found in both groups, suggesting that processing differs across semantic categories. An objective testing procedure like the n-back task can aid in faster recognition of MCI and dementia following the routine subjective assessment of dementia.


  Abstract – LO142: International Online Communities of Practice for Speech-Language Pathologists to promote better services for multilingual children with autism: The case of India and the UK Top


Maclanie Graux

mg696@cam.ac.uk

University of Cambridge, United Kingdom

Introduction:

Autism is a neurodevelopmental condition now well established in the research and socio-political agendas in India (Juneja & Sairam, 2018; Subramanyam, Mukherjee, Dave, & Chavda, 2019) and in the United Kingdom (UK) (Autism Act, 2009; Cusack & Sterry, 2016; National Health Service, 2019). Across countries, studies demonstrate that Speech-Language Pathology (SLP) support services represent a promising avenue for real impact on the quality of life and functioning of children with autism (Adams et al., 2012; Batool & Ijaz, 2015; Parsons, Cordier, Munro, Joosten, & Speyer, 2017). However, the literature also overwhelmingly points to significant limitations in the SLP management of these children. Indeed, given the wide spectrum of abilities displayed by children with autism, SLPs have reported a lack of confidence and limited knowledge to best support these children (Mendonsa & Tiwari, 2018). Moreover, SLPs mention significant additional barriers in the management of Linguistically and Culturally Diverse (LCD) children, which is highly problematic when accounting for the diverse demographics SLPs serve (Arias & Friberg, 2017; Chengappa, 2001; Jordaan, 2008; Letts, 2003; Mennen & Stansfield, 2006; Stow & Dodd, 2003; Winter, 1999). Last but not least, the SLP literature harnessing models of Continuing Professional Development (CPD) to support clinicians and evidence-based practice is under-explored. Not only does this limit opportunities to reduce the gap between research and practice, but it also diminishes the potential for SLPs to extend their knowledge base effectively throughout their careers and to provide up-to-date and optimal services.

Need for Study:

In the age of globalisation, there is an urgent need to investigate the necessary and innovative adaptations to SLP practice to best support the LCD community with autism. This is the first study known to the author to explore the potential of an International Online Community of Practice (IOCoP) in the field of Speech-Language Pathology. Considering socio-cultural, linguistic and expertise arguments, this study explores the specific benefits of an IOCoP connecting SLPs based in India and the UK who provide services for LCD children with autism. This innovative research endeavour promoting knowledge exchange has the potential to generate complementary and mutually beneficial CPD opportunities for SLPs and to foster evidence-based practice.

Aim & Objectives:

Overall, this research study aims to investigate an IOCoP for Speech-Language Pathologists (SLPs) practising in India and the UK, and supporting LCD children with autism. Firstly, a hybrid theoretical framework (based on bio-psycho-social and activity theory models) is proposed to outline the critical elements underpinning SLP clinical knowledge and practice in a contextually sensitive manner. Secondly, the pilot study for the first (India-UK) IOCoP in the field of Speech-Language Pathology is trialled. This empirical work enables different stages of data collection and analysis that are based upon a cyclical CPD model, which should ensure the highest impact and clinical relevance.

Method:

Theoretically, two conceptual frameworks are identified as providing a holistic, complementary and contextually sensitive perspective on SLPs' knowledge and practice. Firstly, the International Classification of Functioning, Disability and Health (ICF; World Health Organization, 2001) represents a useful model to elucidate the complex facets of SLPs' knowledge in autism in the context of LCD. Secondly, the Cultural-Historical Activity Theory (CHAT; Engeström, 1987) is a comprehensive model to elucidate the practical elements of healthcare services. This hybrid 'ICF-CHAT' model is critically analysed against the scientific literature on bilingualism, autism, clinical practice and guidelines.

Empirically, this study follows a mixed-methods explanatory sequential design: (1) an online survey [quantitative strand], (2) individual interviews [qualitative strand], and (3) implementation of the IOCoP [action research strand]. Firstly, the survey should document the main trends in SLPs' knowledge, practice and expectations for the IOCoP. Secondly, individual interviews will be organised in India and the UK to gather in-depth insights from SLPs. This information will be used to tailor the aims and practicalities of the IOCoP. Last but not least, the IOCoP will be facilitated through an online platform to trial real-world, practitioner-led knowledge exchange between SLP communities in the UK and in India. This should eventually promote better services for LCD children with autism.

Results & Discussion:

Our analysis shows that the hybrid ICF-CHAT model is a particularly powerful tool to elucidate the complex dynamics of SLPs' knowledge and practice in the context of autism and LCD. On the one hand, the ICF dimension of this model has the potential to capture SLPs' knowledge at the intersection of autism and LCD. Indeed, by including medical, social and functioning concepts, this model allows for a comprehensive representation of the children's experiences (e.g., the impact of an autism diagnosis on children with a multilingual upbringing can influence home language decisions). This, in turn, can be used as a window to elucidate the state of SLPs' knowledge underpinning clinical decisions and rationalising the care pathway for these children. On the other hand, the CHAT dimension of this model is crucial to highlight the practical and contextual facilitators and barriers of practice (e.g., limited resources, presence or absence of adequate tools, impact of institutional and professional guidelines, etc.). Methodical consideration of the literature demonstrated that the ICF-CHAT is a valid holistic tool to explore the continuity between SLPs' knowledge and practice. The ICF-CHAT model is therefore used to inform the empirical IOCoP work of this study.

For ISHACON 2020, the results of the survey (currently in progress) will be discussed. These will account for the main trends in knowledge, practice and expectations for the IOCoP among SLPs practising in the UK and India. The author will also detail the overall methodological underpinnings and rationale for this innovative IOCoP research endeavour. This argumentation will focus on the benefits of embedding CPD and participatory-based approaches in SLP studies to ensure the highest impact and clinical relevance.

Summary & Conclusion:

To conclude, this paper addresses two urgent gaps in the field of SLP. Firstly, there is a need for a holistic conceptual framework that accounts for the continuity between clinicians' knowledge and practice. The hybrid ICF-CHAT model has proven particularly powerful for achieving this goal. Secondly, in the age of globalisation, there is an unprecedented need to investigate the necessary and mutually beneficial collaborations between SLP communities across borders. This is the first study known to the author to explore the potential of an International Online Community of Practice (IOCoP) in the field of Speech-Language Pathology. If this pilot study proves successful, this innovative model of knowledge exchange could significantly change the field of SLP clinical practice and research. Such an approach has the potential to result in (1) opportunities to foster evidence-based practice by bridging the gap between research and practice, (2) increased SLP knowledge and skills, and (3) considerable (indirect) benefits to service users.


  Abstract – LO143: An Indian Corpus of Gestures for Nouns and Verbs Top


Nikitha M1 & S.P. Goswami2

1nikitham25@gmail.com &2goswami16@yahoo.com

1All India Institute of Speech and Hearing, Mysuru - 570006

Introduction:

Gestures are bodily movements that are almost always produced by a speaker along with speech or in isolation. Earlier work on gestures has reported the existence of a potential link and interaction between gestures and speech at various levels of language processing. This evidence has largely been drawn from language evolution studies (Corballis, 2012), developmental studies in children (Iverson & Thelen, 2004), and gesture processing studies in neurotypical (Kelly, Healey, Özyürek & Holler, 2015) and language-impaired adults (Papeo & Rumiati, 2012). However, these studies have yielded heterogeneous findings, showing either a weak or a strong association between gestural and language processing. These varied findings could be attributed to different variables owing to the focus of study (i.e., development, learning, assessment, therapeutics, etc.), the domains studied (i.e., gesture perception, gesture production, or both), and varied methodological paradigms (i.e., behavioural, electrophysiological, neuroimaging, etc.). In addition, the gesture stimuli per se used in these studies have been reported to have contributed to the varied findings (Agostini, Papeo, Galusca, & Lingnau, 2018). There exist differences across cultures (LeBaron & Abu-Nimer, 2003) and languages (Özçalışkan, Lucero & Goldin-Meadow, 2016) in the understanding and use of gestures.

Alongside this, the gesture video properties (i.e., video quality, video noise, background, position, focus, etc.) and the actor attributes (i.e., attire, physical distractors, cultural differences, etc.) could also be sources of variation. Further, the gesture stimuli used in these studies have largely not been rated or validated for their overall comprehensibility. Therefore, these variations in the stimuli per se could have led to the heterogeneity in gesture studies (Agostini et al., 2018).

Need for Study:

The literature has strongly supported the importance of gestures in communication sciences and their close interaction with verbal language.

However, heterogeneity in the findings of gesture processing studies, owing to the diverse gesture stimuli used, has been documented. A comprehensive gesture corpus that parallels the major verbal language system could be beneficial, and there is no reported gesture corpus mimicking the verbal language system available in the Indian context. Nouns and verbs form the major content of one's verbal language system. Nouns are an important part of one's utterances and form the content words in a sentence. Verbs, on the other hand, form the other major part of the sentence, i.e., the predicate, and help in understanding the meaning of the sentence. Thus, a need was identified to develop a stand-alone corpus of gestures for a set of nouns and verbs, in parallel to verbal language, in the Indian context.

Aim & Objectives:

The current study aimed to develop and validate a corpus of gestures for a set of nouns and verbs.

Method:

The current study was conducted in the following three phases: Phase 1: Word list selection:

A set of 310 nouns and 133 verbs was collected from various available sources such as the Boston Naming Test (BNT; Shyamala, Ravi & Vijayetha, 2010), the Action Naming Test (ANT; Girish & Shyamala, 2015) and others. Three speech and hearing professionals served as participants and rated the word list on a 3-point scale (0 - not gestural, 1 - fairly gestural, 2 - highly gestural) to check whether each word could elicit a gesture. A mode-based analysis of the ratings yielded a total of 221 nouns and 122 verbs, which entered the next phase of the study.
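A minimal sketch of how such a mode-based selection could be implemented is shown below. The words, ratings, and inclusion threshold are hypothetical placeholders, not the study stimuli or criteria.

```python
# Illustrative sketch; words, ratings and the cut-off are hypothetical placeholders.
from statistics import mode

# Each word is rated by three professionals on the 0-2 gesturability scale
ratings = {
    "cup":  [2, 2, 1],
    "run":  [2, 2, 2],
    "idea": [0, 1, 0],
}

# Keep words whose modal rating is at least "fairly gestural" (assumed threshold)
selected = [word for word, r in ratings.items() if mode(r) >= 1]
print(selected)  # ['cup', 'run']
```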

Phase 2: Gesture generation

This phase had two participants: a) a 24-year-old male professional actor (PA) and b) a 27-year-old male professional cameraman (PC). As pre-training, the PA was exposed to the existing noun gesture corpus (Safa & Goswami, 2019) and verb gesture corpus (Anusha & Goswami, 2019). The recording was done on an auditorium stage with a fixed white backdrop, white floor and adequate lighting. The PA wore neutral clothes with the least possible accessories and maintained the same attire throughout the days of video recording. The recording was done using a Canon 700D with an effective focal length of 24-105 mm and a maximum aperture of f/4L. The videos were edited using Adobe Premiere Pro CC 2017. All the 221 noun and 122 verb edited gesture videos entered the next phase of the study.

Phase 3: Validation and Rating

A total of 30 neurotypical individuals, grouped as 15 young (age range: 19-38 years; mean age: 28.5 years) and 15 old (age range: 42-69 years; mean age: 55.5 years), participated in two tasks: a) validation and b) rating. In the validation task, the participants named each gesture video in writing, as they perceived it. The purpose of this task was to determine the comprehensibility of the gestures. In the rating task, the participants provided appropriateness ratings on a 3-point scale (0 - poor, 1 - fair, 2 - good) in terms of the simplicity, familiarity and cultural relevance of the gestures. The videos (mp4 format) were presented on an HP laptop (15.6 inches, 1366 x 768 resolution) kept at a distance of 3 meters, for a maximum of two presentations each.

Results & Discussion:

The two groups of participants (i.e., young and old) validated and rated the two gesture corpora (i.e., the noun and verb gesture video corpora). Percentage analysis of the raw data was done using the R programming language (version 3.6.1), a statistical and visual analysis tool. In the validation task, maximum correct responses of 75.79% among the old and 76.56% among the young were elicited for nouns, and 80.22% among the old and 75.96% among the young for verbs. The maximum responses were analyzed for linearity by constructing scatter plots of the raw data. The maximum linearity between the groups was observed at 85%, although linearity was present throughout. Thus, the gesture videos eliciting a correct response (CR) in the validation task and a score of 2 in the rating task, with greater than or equal to 85% agreement among the participants, entered the final corpus. Further, there were alternate responses such as semantically related responses (SR), explanatory responses (ER), visually related responses (VR), irrelevant responses (IR) and no responses (NR). Linearity was also observed for the alternate responses of the validation task and for the responses of the rating task. All the alternate responses were assigned weightages and analyzed for their linearity and agreement. Finally, a total of 86 noun gestures and 60 verb gestures that satisfied the 85% agreement criterion entered the final gesture corpus. However, gesture videos which did not satisfy the 85% agreement criterion were also considered, based on the weightage and agreement criteria.
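To make the agreement criterion concrete, the sketch below shows how the percentage of correct responses for one gesture video could be computed and checked against the 85% cut-off. The response labels are invented for illustration and are not the corpus data.

```python
# Illustrative sketch; the response labels below are invented, not the corpus data.
# CR = correct response; SR, ER, VR, IR, NR are the alternate response types.
responses = ["CR"] * 28 + ["SR", "NR"]   # 30 hypothetical participant responses

agreement = 100 * responses.count("CR") / len(responses)
print(f"Agreement: {agreement:.1f}%")     # 93.3%

# Apply the >= 85% agreement criterion used to admit a video into the final corpus
include_in_corpus = agreement >= 85
print(include_in_corpus)                  # True
```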

Summary & Conclusion:

The aim of this study was to develop a corpus of gestures for a set of nouns and verbs and validate the same in the Indian context. A set of 221 noun and 122 verb gesture videos were enacted and recorded by a professional actor and a cameraman. These videos were validated and rated by 30 neurotypical adults (both young and old). There was maximum linearity between the participants at 85% for both noun and verb gestures. Thus, 86 noun and 60 verb gesture videos entered the final gesture corpus. Further, the varied responses among the participants in interpreting the gestures have been documented.


  Abstract – LO144: Assessing the Comprehension of Figurative Expressions: Using an Explanation Based Task Top


Priyanka Nayak1, Aysha Shaheen2 & Sudhin Karuppali3

1nayakpriyankamlr@gmail.com,2ayshashaheen2028@gmail.com, &3sudhin.karuppali@gmail.com

1Sri Ramachandra Institute of Higher Education and Research, Chennai - 600116

Introduction:

The period of adolescence is an important phase in life for the usage of a range of expressions pertaining to metaphorical or figurative language. Figurative language comprises figures of speech, which are rhetorical devices that use words in distinctive ways to achieve a special effect. These may include similes, metaphors, personifications, paradoxes, metonymy, irony, overstatements, understatements, and allusions. Proverbs are statements that express the shared beliefs, moral concerns, social norms and values, and wisdom of a society, and they serve a variety of communicative purposes. Similarly, idioms are strings of two or more words whose meaning is not derived from the meanings of the specific words comprising the string. While the interpretation of proverbs can be considered a metalinguistic skill that reflects an individual's cultural knowledge, verbal competence, abstract reasoning ability and general intelligence, the interpretation of idioms can be an indicator of one's language and fluency, irrespective of the language spoken. The capacity to interpret such figurative expressions reflects an individual's general intelligence, abstract reasoning ability, verbal competence, and cultural knowledge.

The use of figurative speech helps us to increase the knowledge of vocabulary, memorize and organise new words, and to integrate and improve language awareness and use. The knowledge and understanding of the figurative expressions such as proverbs and idioms begins during the preschool years with subsequent improvement observed throughout adolescence and adulthood. It is during the adolescent period that individuals learn to use more complex language and to communicate differently and effectively depending on the situation. There have been several studies examining the ability of the developing children in the comprehension of the types of figurative language. Such studies indicate that even though the basic ability to comprehend figurative expressions does occur during the preschool years, the refinement of these skills takes place throughout their early adulthood. Being proficient with the usage of figurative language is an important aspect of becoming a socially literate and a linguistically superior person. It has been observed that in comparison to choice based tasks, production based tasks are more demanding and require higher cognitive flexibility. Therefore, the current study is planned to explore the comprehension of figurative expressions using a production based task.

Need for Study:

Though figurative expressions such as proverbs and idioms occur frequently in discourse, these linguistic entities remain under-investigated. There are few studies pertaining to the development of figurative language in adolescents, especially in a multilingual setup like India. The Manipal Manual of Adolescent Language Assessment (MMALA) (Karuppali & Bhat, 2016) is the first Indian test that has solely targeted the assessment of figurative expressions. However, the task used in the test incorporates choices from which the examinee is required to respond, and choice-based tasks are known to be less demanding than production-based tasks. The current study was therefore planned along these lines: it uses the same items that were used in the Proverb/Idiom task of the MMALA, but with a generative rather than a choice-based response method.

Aim & Objectives:

The aim of the study is to assess the comprehension of figurative expressions using an explanation based task in adolescents between 10 and 16 years of age.

  1. To administer the Proverb/Idioms task of MMALA using an explanation based method of response.
  2. To analyse the generated responses of the administered task.


Method:

Participants:

Adolescents within the age range of 10 to 16 years were included in the study. A total of 90 participants were classified into 6 groups (10-10.11, 11-11.11, 12-12.11, 13-13.11, 14-14.11, and 15-15.11 years), with each group consisting of 15 participants. Participants were selected if they fit the age criterion, had good proficiency in English (based on the LEAP-Q), and had age-adequate language skills based on the MMALA.

Procedure:

Informed consent was obtained from all the participants included in the study. The study was carried out in three phases. Phase 1 involved the administration of the Proverb/Idiom task of the MMALA to 5 subject experts in order to obtain possible explanations to help in analyzing the responses. Phase 2 involved the administration of the stimuli. The samples were recorded using an audio recorder (Sony ICD-UX533F/SCE). Each participant required 5-10 minutes to complete the task. Phase 3 involved the transcription of the recorded explanations obtained from the individual participants.

Scoring:

The responses obtained from the subject experts were used to rate the participants' responses. A score of 2, 1, or 0 was given to each response, where 2 indicated the most accurate response explaining the general meaning, 1 indicated a related but incomplete or vague response, and 0 indicated an incorrect response.

Results & Discussion:

The current study focused on assessing the comprehension of figurative expressions using an explanation-based task in adolescents between 10 and 16 years of age. The results are discussed based on the performance exhibited by each group on their comprehension of figurative expressions and the total scores across the six groups. Groups I, II, III, IV, V and VI received mean (SD) scores of 9.40 (2.09), 9.20 (3.25), 15.66 (5.93), 19.60 (6.91), 21.93 (3.88) and 31.00 (5.14) respectively, with a total score of 17.80 (8.93). The results of the one-way ANOVA across the groups [F(5, 84) = 44.320, p = 0.000] showed a significant effect (p<0.05) for the overall total performance. Post-hoc results revealed a significant difference (p<0.05) between Groups II-III, III-IV and V-VI, and no significant difference between Groups I-II and IV-V.
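For illustration, the sketch below shows how a one-way ANOVA of this kind could be computed. The group scores are simulated around the reported means and SDs (15 participants per group), not the actual participant data.

```python
# Illustrative sketch; scores are simulated around the reported group means/SDs,
# not the actual participant data.
import numpy as np
from scipy.stats import f_oneway

rng = np.random.default_rng(1)
group_params = [(9.40, 2.09), (9.20, 3.25), (15.66, 5.93),
                (19.60, 6.91), (21.93, 3.88), (31.00, 5.14)]
groups = [rng.normal(mean, sd, 15) for mean, sd in group_params]  # 15 per age group

f_stat, p_value = f_oneway(*groups)
print(f"F(5, 84) = {f_stat:.2f}, p = {p_value:.4g}")
```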

Based on the mean and SD values for the explanation task, there was a prominent increase in the mean scores from school age to adolescence, and the overall performance showed an increase in task scores with increasing age. Differences in individual scores were due to the individual skills possessed by the children. Proverbs and idioms with lower familiarity attracted poorer responses. Poor responses were associated with the use of nonspecific terms and vague explanations, suggesting poorer lexical storage and retrieval capacity in those individuals.

Though the scoring system used was the same in terms of the scores assigned (2, 1 and 0), in the MMALA these scores indicated figurative, literal and incorrect responses respectively. However, the total score was the same in both studies. Post-hoc analysis revealed a clear significant difference for the explanation-based task. The comparison between the findings of the two studies emphasizes the increased cognitive load placed on participants during an explanation-based task.

Summary & Conclusion:

The findings of the present study help in establishing the level of comprehension of figurative expressions among typically developing adolescents. Assessment should also focus on exploring other modes of response. The use of production-based responses gives a deeper understanding of an individual's figurative language system, and hence further studies should also focus on using production-based tasks.


  Abstract – LO145: Cognitive-Linguistic Intervention for Children at risk for Language-based Learning Disability: A Response-to-Intervention (RTI) Study Top


Jayashree C Shanbal1 & Anagha Balakrishnan M2

1jshanbal@yahoo.co.in &2anaghabm.slp1@gmail.com

1All India Institute of Speech and Hearing, Mysuru - 570006

Introduction:

In India, it is reported that at least five students with Learning Disability (LD) are present in every average-sized classroom (Thomas, Bhanutej, & John, 2003). Around 70 to 80% of children with LD are reported to have Language-based Learning Disability (LLD). LD is usually diagnosed when children show signs of scholastic issues in school, and therefore the average age at which LD assessments are carried out has been reported to be 9 years (Shaywitz, 1998). During the preschool years, however, learning disabilities manifest in children as deficits in speech and language development, reasoning skills and early literacy skills. Early identification and intervention of children at risk for learning disabilities alleviate the undesirable consequences of delayed intervention by providing preventive services as early as possible (National Institute on Health, 2000). In this context, Response to Intervention (RTI) is a method of early identification and treatment of LD without labeling (Fuchs & Fuchs, 2006).

Cognitive abilities such as attention, memory, organization, reasoning, problem solving and metacognitive skills are important for the comprehension and expression of language (ASHA, 1987). Strong correlations exist between several cognitive and linguistic skills that are crucial for developing reading and writing skills in children (Verhoeven, Reitsma & Siegel, 2010). Children with LD have been found to show poor reading and writing skills due to issues in cognitive and linguistic skills (Myklebust, 1981; Farmer & Klein, 1995; Siegel, 1994). A few longitudinal studies have reported the importance of cognitive-linguistic skills in literacy development (Muter, Hulme, Snowling & Stevenson, 2004). It has also been reported that before starting formal education children must develop many linguistic and cognitive skills that are important for academic learning (Entwisle & Alexander, 1993).

Need for Study:

A review of the literature showed a variety of attempts made to improve the academic issues of children at risk for LLD (ARLLD). Most of these studies focus on reading accuracy and other academic skills and do not usually address the additional cognitive and linguistic factors which affect academic skills (Snowling & Hulme, 2014). Shaul (2016) suggested that a combination of cognitive and linguistic factors may be more beneficial to children in remediating academic difficulties. There have been limited attempts to systematically provide a scientific, research-based cognitive-linguistic intervention for children ARLLD at an early age through the RTI method. The purpose of a multi-tiered service delivery model of RTI is to act as a prevention model that limits or prevents academic failure for students who are having difficulty learning, by providing scientific research-based interventions and bringing students up to grade-level achievement. Another purpose of RTI is to serve as part of a comprehensive evaluation for LD.

Aim & Objectives:

The aim of the present study was to study the effect of Cognitive Linguistic Intervention for Children ARLLD through an RTI method. The primary objective of the present study was to develop a cognitive-linguistic intervention module for early service delivery to children ARLLD. The secondary objective of the present study was to investigate the response to intervention of the cognitive-linguistic based intervention model in children ARLLD.

Method:

A multiple-baseline ABA research design was followed in the present study. The participants included ten children identified as ARLLD (3.0 to 5.0 years) by a Speech-Language Pathologist (SLP) and a Clinical Psychologist in a multidisciplinary Learning Disability Clinic, with L1 Kannada and L2 English. The Early Literacy Screening Tool (ELST; Shanbal, Goswami, Chaithra, & Prathima, 2011) was used for identification and diagnosis. A Cognitive Linguistic Intervention Program (CLIP) in English was developed and validated by SLPs, psychologists and special educators. The domains included Memory (M), Conceptual Relationships and Association (CRA), Organization and Categorization (OC), and Problem-Solving and Reasoning (PSR). The activities were designed so that they followed a developmental model of the acquisition of academic language skills in children. After the baseline assessment (A), children attended one 45-minute session each day for 20 sessions (B). The researcher (SLP) carried out the sessions. Individual lesson plans were prepared based on the baseline, including goals for specific skills, and the activities were selected from the material prepared initially. The intervention program was carried out in a clinical setup. Post-testing was done after 20 sessions (A). Maintenance and generalization checks were also carried out after every goal taken up as part of the program. The data included the performance of children on the cognitive-linguistic domains referred to above (memory, conceptual relationships and association, organization and categorization, and problem-solving and reasoning). The data were statistically analyzed using SPSS version 20.0.

Results & Discussion:

A parametric paired t-test with Bonferroni's alpha correction was used to analyze the data of children ARLLD. The results indicated a significant improvement in memory from the pre- to the post-therapy evaluation (t = -32.43, p < 0.0125).

Research has also reported that memory skills help differentiate good and poor readers (Da Fontoura & Siegel, 1995) and early memory intervention will improve the accuracy of reading and comprehension (Farnia & Geva, 2013).

The results indicated that there was a significant difference between pre- and post-therapy CRA (t = 13.25, p < 0.0125). It has been reported that children with LD show immature association skills (Mindell, 1978), that such relations develop at an early age (Giacomo, Federicis, Pistelli, & Passafiume, 2012), and that increasing experience in such tasks reinforces associative strength and thematic orientation (Blaye, 2004), which is important for future academic skills. The results also indicated a significant difference between pre-therapy and post-therapy OC (t = -17.47, p < 0.0125). From these results, it is evident that young children learn how to categorize (Kelman & Elisabeth, 2012) and organize (Bornstein & Arterberry, 2010), enabling them to perform academic skills effectively.

The results also indicated that there was a significant difference between pre- and post-therapy PSR (t = -10.59, p < 0.0125). PSR skills are reported to be very important for curriculum achievement (Meltzer, Solomon, Fenton, & Levine, 1989), but these skills are often neglected in the early stages of the education of children with disabilities (Agran, Blanchard, Wehmeyer, & Hughes, 2002). Reasoning skills are hypothesized to be the basic underlying construct for reading development (Elkind, 1976). Treatment of the core cognitive-linguistic skills in children ARLLD could help improve their academic language skills so that they are able to process information at a higher cognitive level. The results of the present study suggested improvement in all four cognitive-linguistic domains trained, which indicates that it is possible to effectively ameliorate the cognitive-linguistic deficits underlying the academic language deficits in children ARLLD at an early age.
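As a worked illustration of the pre/post comparison with the Bonferroni-corrected alpha (0.05/4 = 0.0125) described above, the sketch below runs a paired t-test per domain. The pre- and post-therapy scores are simulated and do not represent the study data.

```python
# Illustrative sketch; pre/post scores are simulated, not the study data.
import numpy as np
from scipy.stats import ttest_rel

rng = np.random.default_rng(2)
alpha = 0.05 / 4  # Bonferroni correction across the four trained domains = 0.0125

for domain in ["Memory", "CRA", "OC", "PSR"]:
    pre = rng.normal(10, 2, 10)          # baseline scores for 10 children
    post = pre + rng.normal(4, 1, 10)    # simulated post-therapy improvement
    t_stat, p_value = ttest_rel(pre, post)
    print(f"{domain}: t = {t_stat:.2f}, significant = {p_value < alpha}")
```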

Summary & Conclusion:

The present study highlighted the need for early identification and early intervention through a multi-tiered RTI model of intervention in children at risk for LLD. The findings of the study indicated that there was a significant difference in the performance of children with LLD on pre and post-test comparisons across all the domains of cognitive-linguistic skills. The findings suggest that more explicit instruction is likely to accelerate progress in various areas of cognitive-linguistic skills in children at risk for LLD.


  Abstract – LO148: Factors Affecting Task Compliance and Self-Directed Usage of Cues in App Based Tele-Rehabilitation Top


Vimala Jayakrishna1 & S.P. Goswami2

1k.vimalajayakrishna@gmail.com &2goswami16@yahoo.com

1All India Institute of Speech and Hearing, Mysuru - 570006

Introduction:

Tele-practice can be used to overcome barriers of access to services caused by distance, unavailability of specialists and/or sub-specialists, and impaired mobility and offer extended clinical services to remote, rural, and underserved populations (Speech-Language Pathologists Providing Clinical Services via Telepractice: Position Statement, 2005). This has encouraged dedicated research on computerized rehabilitation services to enhance tele- rehabilitation services (Brennan, Tindall, Theodoros, Brown, Campbell, Christiana... & Lee, 2011).

Need for Study:

The current study took its roots from the lack of availability of service providers in the field of speech-language pathology in India. According to the Indian Speech and Hearing Association, there are about 3500 registered professionals in the field of audiology and speech-language pathology in India for a population of 1.3 billion. Hence, various means of implementing tele-rehabilitation were explored in multiple dimensions to overcome the barriers of access. The study utilised Constant Therapy (CT), an iPad (Apple Inc., Cupertino, CA) software platform for therapeutic purposes. Developed by Kiran, Des Roches, Balachandran and Ascenso (2014), it offers an impairment-based, individualized treatment plan for persons with aphasia who have suffered a traumatic brain injury (TBI), stroke or dementia, or for children with learning disabilities or other disorders. The English version of CT was adapted to Hindi for use in the Indian context (Kasturi & Goswami, 2016). Considering that tele-rehabilitation is in its initial stages of implementation in India, it is worth exploring the different dimensions of its implementation, one of them being usage, measured as 'task compliance' in the current study.

Aim & Objectives:

The current study aimed to explore task compliance with app-based tele-rehabilitation and the usage of self-directed cues using CT-Hindi in the Indian context during post-discharge rehabilitation. The association between task compliance and personal factors (such as age and caregiver support) and environmental factors (such as whether the therapy was guided or remotely administered) was explored.

Method:

Task compliance and the usage of self-directed cues were studied in 18 persons with aphasia (PWA) using the Hindi version of Constant Therapy (CT), an iPad (Apple Inc., Cupertino, CA) software platform, after the study was approved by the institutional ethics committee. Each PWA was required to work on five or ten items under each of the 20 selected tasks, either with the guidance of the researcher or independently after a brief period of training by one of the researchers. A total of three to five sessions, each lasting between 30 and 45 minutes, was provided for the completion of the tasks. Task compliance was recorded in a journal by the researcher providing therapy to a particular PWA. The app automatically recorded the responses through an active internet connection. The responses recorded included login credentials, the name of the task, the number of items tested, the latency of response, the accuracy of response, the number of cues used and the type of cues used. In the current study, the discussion is limited to cue usage.

Results & Discussion:

Task compliance was found to be good in younger adults and older adults in comparison to adult PWA. These results were attributed to caregiver support: younger adults and older adults were always accompanied by caregivers, while adults were rarely accompanied by their caregivers. In agreement with earlier research, this study emphasises continued support from caregivers and positive environmental factors, such as continued guidance from a rehabilitation specialist, as driving forces for participation in therapy. The PWA participating in the study were grouped based on their cognitive-linguistic profiles: five PWA with High Language-High Cognition (HL-HC), nine PWA with Low Language-High Cognition (LL-HC) and four PWA with Low Language-Low Cognition (LL-LC). The usage of the cues available on Constant Therapy Hindi was extracted from the app for each PWA. The percentage of cues used varied across the cognitive-linguistic profiles: the highest number of cues was used by the LL-LC group, followed by the LL-HC group, and the least by the HL-HC group.

Summary & Conclusion:

The study yielded certain pertinent findings highlighting the importance of family support and assistance from the therapist in maintaining good compliance with therapy. The study thereby intends to highlight the applicability of the principles of the Life Participation Approach to Aphasia (LPAA) in the purview of tele-rehabilitation. The current study reiterates the ideology of enhancing caregiver and family support to lessen the burden on the PWA and thereby improve their quality of life. The dynamics of the right proportion of support from man and machine in aphasia rehabilitation are further described.


  Abstract – LO151: Analyzing the Type of Acquired Dyslexia in Fluent and Non-Fluent Kannada- English Bilingual Individuals with Aphasia Top


Sunil Kumar1, Gopi Kishore Pebbili2, Shyamala K Chengappa3 & Rashmi J4

1rsunilkumar86@gmail.com,2gopiaslp@gmail.com,3shyamalakc@yahoo.com, &4rashmiyuvashree@gmail.com

1Shravana Institute of Speech and Hearing, Ballari - 583104

Introduction:

Reading is a complex cognitive process which involves decoding a written stimulus into the corresponding spoken units of its language. The fluency of reading depends on several cognitive-linguistic, subject-related and stimulus-related factors. Grapheme-phoneme correspondence (GPC), the process of assigning a phoneme to each grapheme, is one of the prime factors influencing reading, and it varies from one orthographic script to another. Based on GPC, orthographic scripts are classified into transparent and opaque orthographies. Transparent orthographies have consistent grapheme-to-phoneme correspondence; opaque orthographies, in contrast, do not.

Any loss of previously acquired reading skills is known as acquired dyslexia. Acquired dyslexia is classified into central and peripheral dyslexias. The central dyslexias include the deep, surface and phonological types. The peripheral dyslexias are thought to be caused by impairments in the peripheral pathways of the visual processing system involved in reading; these include pure alexia or letter-by-letter reading, attentional dyslexia, and neglect dyslexia. Central dyslexias are commonly seen in individuals with aphasia and also in degenerative conditions like dementia and primary progressive aphasia, while the peripheral dyslexias are mostly reported in individuals with right hemisphere damage. However, most of the acquired dyslexia studies documented in the literature are on individuals with aphasia.

Need for Study:

Applying the rules of the DRC model, Coltheart (1981) gave a procedure to classify different types of dyslexia in alphabetic languages. Attempts have also been made to adapt the DRC model to Kannada, an alphasyllabary language, to explain different types of acquired dyslexia. However, most of these Indian studies are case studies on monolingual individuals with aphasia. Hence, there is a dearth of studies documenting the types of acquired dyslexia exhibited by fluent and non-fluent bilingual individuals with aphasia.

Aim & Objectives:

To analyze the type of acquired dyslexia between fluent and non-fluent Kannada (L1) and English (L2) bilingual individuals with aphasia.

Method:

The study recruited 20 Kannada-English bilingual individuals with aphasia, aged 23-65 years. The participants were assigned to 2 groups: group I consisted of ten individuals with fluent aphasia (FA) and group II consisted of ten individuals with non-fluent aphasia (NFA). Both groups consisted of native speakers of Kannada (L1) with English (L2) as their second language. Only participants with reading abilities at the single-word level and a minimum of vocational proficiency level in the second language on the ISLPR were included in the present study. Informed consent was taken from all the participants before conducting the study.

Materials and procedure: A list of 20 words and 20 non-words in Kannada was selected from the test material Analyzing Acquired Disorders of Reading in Kannada (Karanth, 1984). Also, a list of 20 words and 20 non-words in English was selected from the word list developed by Coltheart (1984). These stimuli were presented on an HP laptop through a PowerPoint presentation. The participants were instructed that the stimuli would be presented one by one and that they had to read each presented stimulus aloud. The verbal responses of the participants in both groups were audio recorded and analyzed qualitatively for reading errors. Further, these responses were analyzed using the procedure followed in the analysis of acquired dyslexia (Coltheart, 1981) to identify the type of acquired dyslexia.

Results & Discussion:

The results of the qualitative analysis revealed that in group I, three out of ten individuals with FA manifested features of acquired dyslexia. Among these three individuals, one exhibited type I and II letter-by-letter reading, surface dyslexia and phonological dyslexia in L2 but not in L1. Of the remaining two, one exhibited type I and II letter-by-letter reading and surface dyslexia in L2, while the other exhibited deep dyslexia and surface dyslexia in L2 alone. In general, reading errors such as visual errors, semantic paraphasias, neologisms and regularization errors were commonly seen in the FA group.

On the other hand, in group II, six out of ten individuals with NFA exhibited acquired dyslexia while the remaining four had normal reading skills. Among these six individuals, two had phonological dyslexia in L2 and sub-lexical dyslexia in L1, while one manifested only phonological dyslexia in L2. Of the remaining three, one disclosed deep dyslexia in L2, another manifested it in L1, and the last exhibited sub-lexical dyslexia in L1 and deep dyslexia in L2. In general, the reading errors in group II were mostly phonemic paraphasias.

Discussion: The above results highlight that individuals in group I (FA) manifested features of acquired dyslexia only in L2. This selective impairment of reading only in English highlights the effect of the orthographic script: Kannada is relatively transparent, with high GPC, compared to English, which has an opaque script. This effect of script might have led to better reading performance in Kannada than in English. It can also be ascribed to the fact that Kannada, being the native and more dominant language in which therapy was first initiated (followed by English), may yield greater lexical strength in L1 compared to L2 in these participants. The overall reading errors in group I are ascribed to comprehension deficits.

On the other hand, most of the individuals in group II (NFA) read words aloud well but showed an impaired ability to read non-words aloud, indicating a preserved lexical route with an impaired sub-lexical route in both Kannada and English. Also, the presence of sub-lexical dyslexia in reading Kannada non-words suggests the existence of a sub-lexical (non-lexical) route even in an alphasyllabary language, and that this route can be impaired. These findings support models of reading such as the DRC and connectionist models, which highlight the existence of different reading routes for words and non-words, and are in coherence with the findings of Krishnan et al. (2013) and Kiran et al. (2017) in the Indian context. The overall reading errors in group II are attributed to deficits in articulatory programming and execution.

Summary & Conclusion:

The present study highlights that, irrespective of the aphasia group, the type of dyslexia is influenced more by the lexical strength of the language and by the orthographic script. Also, the better performance in L1 than in L2 indicates stronger lexical strength in L1, highlighting the existence of separate lexical organization for different languages.


  Abstract – LP573: A Preliminary Study on Central Executive Function and Auditory Working Memory in Children with Hearing Impairment using Wisconsin Card Sorting Test (WCST) and Digit Memory Test (DMT) Top


Mohammed Asif Basha1, Rahamani Shahni2 & Harjeet Singh3

1ashasif555@gmail.com,2rkeibung@gmail.com &3asifbeast555@gmail.com

1Shravana Institute of Speech and Hearing, Ballari - 583104

Introduction:

Cognitive linguistics is the integrating branch of cognitive psychology and linguistics, within which working memory, a term coined by Alan Baddeley and Graham Hitch (1974), has been studied intensively over the past half century. Working memory is the ability to retrieve transiently stored information during cognitive activities. Baddeley and Hitch described working memory as a central executive function encompassing three components: the phonological loop, the visuo-spatial sketchpad and the episodic buffer. Working memory is a sequential process which comprises encoding, retaining and retrieving information (Dolu, Basar-Eroglu, Osezmi, & Suer, 2005). Working memory in typically developing children enhances conversational skills, with better comprehension and expression of thoughts and suggestions (Ibertsson et al., 2013). However, children with specific language impairment show declined performance in working memory tasks compared to normal-hearing children (Ellis Weismer & Evans, 1999). Tests such as the Wisconsin Card Sorting Test (WCST; Grant & Berg, 1948) analyze executive function and task switching by utilizing auditory feedback. Executive functions are the cognitively controlled operations that regulate thoughts and emotions (Diamond, 2013; Miyake et al., 2000). Further, the Digit Memory Test (DMT; Turner & Ridsdale, 2004) analyzes auditory working memory and the larger domain of executive function. The DMT comprises a Forwards Digits and a Backwards Digits task. The Digits Forwards task predominantly taps short-term memory, while the Digits Backwards task analyzes the ability to handle transiently stored verbal information.

Need for Study:

In the early stages of life, the development of language is a critical domain in the overall development of the child. Learning language encompasses behavioural and cognitive activities, such as attention, concentration and memory, which lay a platform for the development of psycholinguistic aspects such as the phonemic, semantic, pragmatic, syntactic, morphemic and lexemic domains. In 1952, J. Piaget proposed four comprehensive cognitive developmental stages: the sensorimotor stage (0-2 years), preoperational stage (2-7 years), concrete operational stage (7-11 years), and formal operational stage (11 years and older). Over the past six decades, each stage has been studied extensively in children with ADHD, LD and ID, but only a few studies have focused on analyzing executive function at each stage in children with hearing impairment. Therefore, the purpose of this study was to analyze executive functioning and higher-order cognitive functions, such as abstract thinking and auditory working memory, in hearing-impaired children during the concrete operational stage.

Aim & Objectives:

The objective of this study is to analyze executive functioning and auditory working memory in normal-hearing and hearing-impaired children by comparing the performance of the two groups on the WCST and DMT.

Method:

Eight children (4 males, 4 females) with normal hearing formed the control group, and eight children (4 males, 4 females) with pre-lingual severe to profound sensorineural hearing loss, using bilateral BTE SP hearing aids for a minimum of 5 years since diagnosis, in the age range of 7.3-10.9 years (M=8.23, SD=1.33), with a minimum of three years of formal speech and language training, receptive and expressive language ages above four years according to the REELS (Bzoch & League, 1971), and comprehension and expression of colors, numbers and shapes, were selected for the study. The participants were seated comfortably in a distraction-free room for data collection. A written consent form was obtained from the guardian. Participants were comprehensively instructed about the Wisconsin Card Sorting Test. Five unrecorded trials were given prior to the actual test to ensure that the participant was familiar with the task. The participant was instructed to pick one card each time and sort it to one of the four key cards. Sixty trials were administered with auditory feedback of 'correct' or 'incorrect' after each sorting, without additional cues. The response for each sorting was recorded under the following parameters: total correct, total errors, perseverative errors, non-perseverative errors, conceptual level responses, categories completed, and failure to maintain set. A time gap of 5 minutes was then given to the child, after which the Digit Memory Test was administered. The Digits Forwards task was performed first, followed by the Digits Backwards task. Each digit was given at a rate of one per second with no repetitions. The child was instructed to repeat the digits in forward order for the Digits Forwards task and in backward order for the Digits Backwards task. The responses obtained were tabulated and statistically analyzed using IBM SPSS (version 26). A paired t-test was performed to analyze the differences between the normal-hearing and hearing-impaired children.
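A minimal sketch of how the forward and backward digit recall described above could be scored is given below. The function name, digit sequences and responses are hypothetical and are not taken from the DMT manual.

```python
# Illustrative sketch of forward/backward digit recall scoring; the sequences and
# responses are hypothetical, not taken from the DMT manual.
def score_digit_trial(presented, response, backwards=False):
    """Return True if the child's response matches the target order."""
    target = list(reversed(presented)) if backwards else list(presented)
    return list(response) == target

# Forward task: repeat the digits in the presented order
print(score_digit_trial([3, 8, 6], [3, 8, 6]))                   # True
# Backward task: repeat the digits in reverse order
print(score_digit_trial([3, 8, 6], [6, 8, 3], backwards=True))   # True
print(score_digit_trial([3, 8, 6], [3, 8, 6], backwards=True))   # False
```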

Results & Discussion:

The paired t-test comparing normal-hearing and hearing-impaired children (both males and females) revealed a significant difference for the total number of correct responses (M=7.063, SD=6.846); [t(15)=4.127, p<0.01]. A significant difference was observed for perseverative errors (M=-8.875, SD=3.324); [t(15)=-10.679, p<0.01], whereas no significant difference was observed for non-perseverative responses (M=1.9375, SD=5.7209); [t(15)=1.355, p>0.05]. Significant differences were observed for conceptual level responses (M=16.688, SD=8.228); [t(15)=8.113, p<0.01] and for failure to maintain set (M=-2.500, SD=2.503); [t(15)=-3.995, p<0.01]. The results of the Digit Memory Test comparing the control group and the hearing-impaired children revealed significant differences for the Forward score (M=2.750, SD=1.125); [t(15)=9.774, p<0.01] and the Backward score (M=3.063, SD=1.806); [t(15)=6.782, p<0.01]. Similarly, a significant difference was revealed for the Percentile Equivalent scores (M=59.62, SD=16.54); [t(15)=14.412, p<0.01].

The results of the Wisconsin Card Sorting Test revealed that children with severe to profound hearing loss show significant deficits in executive function and working memory tasks compared to normal-hearing children, as reported earlier (Delis et al., 2007), suggesting poor abstract reasoning and cognitive flexibility in hearing-impaired children (Heaton et al., 1993). The hearing-impaired children performed poorly on set shifting during sorting, which is considered a central cognitive function (Barcelo, 2001). The Digit Memory Test showed significantly lower Forwards Digits scores in hearing-impaired children, suggesting deficits in attention and auditory working memory, and significantly lower Backwards Digits scores, suggesting deficits in higher-order cognitive functions. In this study, both the normal-hearing and hearing-impaired children remembered two digits more in the forwards condition than in the backwards condition (Turner & Ridsdale, 2004), because the forwards task taps short-term working memory while the backwards task involves complex higher-order cognitive functions for processing the information, activating working memory, and further analyzing and manipulating the information (van den Berg, 2008; Mammarella & Cornoldi, 2005).

Summary & Conclusion:

The observations of the current study show substantial hindrance in the executive functioning and auditory working memory of hearing-impaired children. Further, the ease of administration of the WCST and the reliable results obtained from it make it an efficient daily task in a child's rehabilitation process. Future studies can focus on analyzing the effectiveness of using the WCST and DMT as a test battery for assessing executive function and auditory working memory in other groups, such as individuals with autism spectrum disorder, learning disability and intellectual disability.


  Abstract – LP575: Relationship between Listening, Speech and Language, Cognition and Pragmatic Skill in Children with Cochlear Implant Top


Divya Rawat1, Himanshu Kumar Sanju2, Arun Kumar3 & Vijay Kumar4

1divya.rawat20182019@gmail.com,2himanshusanjuaslp@gmail.com,3aykumararun@gmail.com, &4vkumar@ggn.amity.edu

1Amity University, Haryana - 122413

Introduction:

Recent investigations have revealed that early identification and management of hearing loss has a positive effect on speech, language and communication.1-4 Young children with hearing impairment are fitted with traditional hearing devices (hearing aids) or cochlear implants. Previous research has shown that children using cochlear implants perform better in speech and language skills than children using traditional amplification devices.3-4 A study by Yoshinaga, Sedey, Wiggin and Mason in 2018 reported that earlier cochlear implant activation had a positive and direct outcome on speech and language skills.5 Dettman et al. in 2016 concluded that cochlear implantation in children younger than 12 months of age has a positive effect on language skills and articulation accuracy.6 Recently, Cejas, Mitchell, Hoffman and Quittner in 2018 revealed similar intelligence quotients (IQ) between typically developing children and children with cochlear implants.7 Cochlear implants provide children with hearing impairment the opportunity to develop speech and language, whereas intensive intervention must be given to improve pragmatic skills among these children.8 However, in another study, the authors reported no difference in pragmatic skills between children using cochlear implants and hearing aids.9 Ibertsson, Hansson, Torkko, Svensson and Sahlen in 2007 reported that children with CI performed as equally responsible conversational partners as normal-hearing teenagers.10

Need for Study:

From the previous literature, it can be observed that there is a lack of investigation regarding the correlation between listening, speech and language, cognition and pragmatic age in children with cochlear implants.1-4 There is therefore a need to study the correlation between listening, speech and language, cognition and pragmatic age in children with cochlear implants.

Aim & Objectives:

The aim of the study was to investigate the correlation between listening, speech and language, cognition and pragmatic age in children with cochlear implants. The first objective was to determine the listening, speech and language, cognition and pragmatic age of children using cochlear implants. The other objective was to check the correlation between the listening, speech and language, cognition and pragmatic ages of these children.

Method:

A total of 20 children in the age range of 1.5 to 5 years who were using the Nucleus cochlear implant sound processor were recruited from a private audiology clinic based in New Delhi, India. All children belonged to families with good socioeconomic status. The parents of all children were well educated, with a minimum of a graduate degree. Informed written and oral consent was taken from the parents of all children who participated in the study. In the present study, the Integrated Scale of Development from Listen, Learn and Talk was used to calculate the listening, speech and language, cognition and pragmatic age of children with cochlear implants. Data were collected by two qualified audiologists with master's degrees in audiology. Means and standard deviations were calculated using SPSS 20. Spearman correlation analysis was done to check for correlations between the different variables, i.e., listening, speech and language, cognition and pragmatic age.
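For illustration, the sketch below shows how a Spearman correlation between two of these developmental-age variables could be computed. The ages are hypothetical values for 20 children, not the study data.

```python
# Illustrative sketch; the ages below are hypothetical, not the study data.
import numpy as np
from scipy.stats import spearmanr

# Hypothetical developmental ages (in months) on two ISD domains for 20 children
listening_age = np.array([12, 18, 24, 30, 36, 15, 20, 28, 33, 40,
                          14, 22, 26, 31, 38, 16, 21, 27, 35, 42])
pragmatic_age = np.array([10, 17, 22, 29, 35, 13, 19, 26, 31, 39,
                          12, 20, 25, 30, 37, 15, 19, 26, 33, 41])

rho, p_value = spearmanr(listening_age, pragmatic_age)
print(f"Spearman rho = {rho:.2f}, p = {p_value:.4f}")
```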

Results & Discussion:

The findings of the present study revealed very strong, significant positive correlations between listening and receptive language age (r=0.95, p<0.05), listening and expressive language age (r=0.98, p<0.05), listening and speech skill age (r=0.93, p<0.05), listening and cognition age (r=0.89, p<0.05), and listening and pragmatic age (r=0.95, p<0.05). Similarly, the current study also revealed very strong, significant positive correlations between receptive and expressive language age (r=0.97, p<0.05), receptive language and speech skill age (r=0.93, p<0.05), receptive language and cognition age (r=0.93, p<0.05), and receptive language and pragmatic age (r=0.94, p<0.05). Very strong, significant positive correlations were also observed between expressive language and speech skill age (r=0.95, p<0.05), expressive language and cognition age (r=0.90, p<0.05), and expressive language and pragmatic age (r=0.94, p<0.05). An interesting finding of the present study was the very strong, significant positive correlation between cognition and pragmatic age (r=0.94, p<0.05). The outcome of the present study suggests that enhancement in one domain of communication skill leads to improved performance in other domains. The findings also indicate that improvement in receptive and expressive language skills is associated with improved cognition and pragmatic skills in children using cochlear implants.

Summary & Conclusion:

The outcome of the present study showed very strong positive correlations between listening, speech and language, cognition, and pragmatic age in children with cochlear implants. The findings suggest that enhancement in one domain of communication skill improves performance in the other domains.


  Abstract – LP576: Maternal interactions during shared book reading in children with Cochlear Implant and their Typically Developing Peers: A comparative study Top


Sneha Mareen Varghese1, Jeffy Philip2 & Manasa R B3

1snehamv2002@yahoo.com,2ncphilip@gmail.com, &3manasa_rb11@yahoo.co.in

1Dr S R Chandrasekhar Institute of Speech and Hearing, Bengaluru - 560084

Introduction:

Shared book reading plays a significant role in fostering language and literacy skills in children. Book-reading episodes provide an opportunity for adults and children to co-construct knowledge in a social setting and negotiate meanings together. Numerous studies have shown that early readers come from homes where adults read to them regularly and where books and reading materials are readily available (Bus, van IJzendoorn, & Pellegrini, 1995). Questions remain, however, about the specific characteristics of these interactive sessions that may promote children's literacy and language development. It is not only the frequency with which a parent reads to a child that affects the child's success; what the parent does during shared reading and how he or she mediates the shared text is important as well. One of the few studies of home literacy practices in India reported that bilingual children schooled in English medium have home environments defined by two distinct domains: book-reading practices and teaching practices (Kalia & Reese, 2009). Correspondingly, Khurana & Rao (2008) reported that 87% of parents read story books to their pre-school children. Although this is important for a diverse country like India, research on home literacy practices as a whole is scant, and shared book reading among the Indian population has not been explored. Parents who merely read to their children may not take full advantage of the opportunity to introduce meaning, concepts, and vocabulary. Thus, with backing from the supporting literature, it can be hypothesized that the complexity of maternal utterances as well as the diversity of maternal vocabulary may be related to children's early language.

Need for Study:

There exists a strong need to explore the interactions during shared book reading among disordered populations, in particular children with hearing loss, who are at increased risk of experiencing difficulties during the period of language development. Research has consistently shown that literacy outcomes among children with hearing loss are notoriously poor compared to those of their same-aged typically developing peers. In a study by Freeman and Werfel (2014), print-referencing behaviors (drawing attention to the print) were used less frequently by parents of children with hearing loss than by parents of normal-hearing children. However, that study compared print-referencing behaviors only as a whole and did not compare the different types of verbal and non-verbal interactions that happen around a book. Additionally, interactions during book sharing may vary according to whether the book is picture salient or print salient: picture-salient books contain mostly illustrations that direct attention to the pictures, whereas print-salient books direct the focus to the print aspects of the book. Therefore, conclusions based on the frequency of the activity alone may underestimate the overall benefits of shared book reading with young children, and perhaps the benefits of particular types of interactions. It is also established that fine-grained analysis of interaction is possible only through video recordings and not through a questionnaire. Thus, the present study was designed to bridge these gaps in the existing literature.

Aim & Objectives:

The aim of the study was to compare the different verbal strategies and non-verbal references used in shared book reading by mothers of children with Cochlear Implant (CI) and mothers of typically developing children.

Therefore, the objectives of the study were:

  1. To compare the frequency and type of verbal and nonverbal referencing behaviors used by mothers of children with cochlear implant and mothers of typically developing children
  2. To investigate the frequency and type of verbal and nonverbal referencing behaviors with respect to print-salient and picture-salient books.


Method:

A total of 15 mothers of children with hearing impairment (cochlear implant users) and 20 mothers of typically developing (TD) children who met the inclusion criteria participated in the study. The language proficiency of the mothers was kept uniform, and the participants were blind to the purpose of the study. Each mother was given a print-salient book and a picture-salient book to read to her child in the order of her choice. Prior to the recording, the mothers were given 4-5 minutes to read through the content of the books for familiarity. The book-reading interaction was video-recorded using a digital camera with a built-in microphone in a well-lit room with minimal background noise. The video recordings of the mothers engaged in the book-reading activity were coded for the different verbal and non-verbal behaviors that occurred during the activity using the EUDICO Linguistic Annotator (ELAN) software (Lausberg & Sloetjes, 2009). Inter-rater and intra-rater reliability were computed.

Results & Discussion:

The coded data obtained from the software were subjected to statistical analysis. The verbal interactions used by the mothers included immediate talk, non-immediate talk, eliciting text, providing text, concepts of print, written text, engaging with the book, book-handling concepts, and non-verbal references during the shared book reading activity. The results of the Mann-Whitney U test indicated that the mothers of the CI users showed a greater frequency of verbal behaviors such as immediate talk, non-immediate talk, eliciting text, providing text, concepts of print, and engaging the child with the book during the activity (p < .01). This finding suggests that these mothers simplified their utterances to accommodate their children's perceived language delay by asking more questions and providing more text, thereby promoting learning; the mothers of typically developing children, on the other hand, interacted less. It was also noted that the mothers of the CI children used more non-verbal references than the mothers of the TD children (p < .01), which may imply that mothers of CI children additionally use their children's intact visual modality to facilitate better comprehension during shared book reading. The results are in dissonance with the study reported by Freeman and Werfel (2016). Similarly, the type of book had an influence on the kind of interactions. On analysing the data with the Wilcoxon signed-rank test, verbal behaviors such as non-immediate talk for picture-salient books, and written text and engaging with the book for print-salient books, showed significant differences in the TD group (p < .01). However, among the mothers of CI users, only non-verbal references were greater for print-salient books than for picture books (p < .01). This could indicate that these mothers frequently needed to direct the child's attention to the text of the book, which is abundant in a print-salient book.
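As an illustration of the group comparison described above, a minimal Python sketch of a Mann-Whitney U test is shown below; the behaviour counts are hypothetical and do not reproduce the study's data.

  # Illustrative sketch only: comparing hypothetical per-mother counts of one
  # coded behaviour (e.g., immediate talk) between the CI and TD groups.
  from scipy.stats import mannwhitneyu

  ci_group = [12, 15, 9, 14, 11]   # hypothetical counts, mothers of CI users
  td_group = [7, 6, 9, 5, 8]       # hypothetical counts, mothers of TD children

  u_stat, p_value = mannwhitneyu(ci_group, td_group, alternative="two-sided")
  print(f"U = {u_stat}, p = {p_value:.3f}")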

Summary & Conclusion:

The results of the study indicated that the mothers of CI users exhibited a greater frequency of both the verbal and the non-verbal behaviors that occur during shared book reading. Additionally, it was evident that both verbal and non-verbal interactions varied with the type of book. The findings hold implications for facilitating de-contextualized language during the use of either type of book. Future research can focus on the effect of these interactions on the child's language and literacy skills.


  Abstract – LP577: Testing the Hypotheses of Lexical Retrieval for Sentence Production Top


Rushali Hemant1, Tanvi Sanghavi2 & Abhishek B. P3

1rushalihthakar@gmail.com,2tanvisanghavi@gmail.com, &3abhiraajaradhya@gmail.com

1All India Institute of Speech and Hearing, Mysuru - 570006

Introduction:

Lexical access is defined as the retrieval of the most appropriate word from the lexicon. The words in the lexicon are assumed to be arranged in a typical pattern, with words belonging to the same lexical category arranged together. Based on this notion of lexical arrangement, two hypotheses on lexical access/retrieval, namely the facilitation and suppression/inhibition hypotheses, have been formulated. The facilitation hypothesis (Krill, 2002) derives its roots from shared lexical representation, which states that if a sentence with an identical or related noun phrase or verb phrase in relation to a target sentence is used as a prime, naming becomes easier: a prime with a semantically related noun phrase or verb phrase facilitates lexical-semantic activation of the noun phrase or verb phrase in the target sentence. The other hypothesis, in consensus with distinct lexical representation, is the suppression hypothesis (Gould, 2004). Proponents of the suppression hypothesis believe that presentation of a neutral sentence (with a distinct noun phrase and verb phrase) enables easier retrieval of the target sentence, as the distinct noun phrase and verb phrase do not interfere with lexical-semantic activation of the target sentence.

The two hypotheses can be tested by employing a picture-word interference paradigm. Using tasks involving the picture-word interference effect, in which pictures are displayed with semantically related or unrelated precursors, experimental evidence for the facilitation or suppression hypothesis can be derived. The facilitation hypothesis may be assumed to hold when the reaction time or accuracy scores obtained for stimulus pairs under the semantically related noun phrase and semantically related verb phrase conditions are better than in the unrelated condition. The interference hypothesis may be presumed to hold when the reaction time or accuracy scores obtained for stimulus pairs preceded by an unrelated noun and verb phrase in the picture-word paradigm are better than in the related condition.

Need for Study:

Most of the evidence pertaining to the facilitation and suppression hypotheses has been derived through priming experiments. These experiments operate at the word level; hence the findings cannot be generalized to sentence-level retrieval. The effect of word class on lexical retrieval with regard to the facilitation and suppression hypotheses has not been explored much. These factors paved the way for the current study.

Aim & Objectives:

Aim: To test the two hypotheses regarding lexical retrieval in the context of sentence naming.

Objectives: To compare the accuracy scores for sentences preceded by unrelated and related noun and verb phrases.

Method:

A total of 40 participants (11 males and 29 females) in the age range of 18-25 years took part in the study. The LEAP-Q (Ramya & Goswami, 2009) was administered before testing to confirm the participants' language proficiency in English. The Action Naming Test (Girish & Shyamala, 2015) was used to test lexical retrieval. The test material contained 57 stimuli, each consisting of a noun phrase and a verb phrase. The target sentences were preceded by precursors presented in the auditory modality, recorded in a female voice at an appropriate rate of speech. Of the 57 pictures, 19 were preceded by a semantically related noun phrase (where the noun of the precursor was semantically related to the noun in the target), 19 were preceded by a semantically related verb phrase (where the verb of the precursor was semantically related to the verb of the target), and the remaining 19 were preceded by unrelated precursors (where both noun and verb were distinct). The auditory precursor was presented simultaneously with the visual picture stimulus, and each stimulus lasted 3 seconds. The participants were instructed to name the pictures in full sentences as they were presented, in the presence of the auditory stimulus, and to ignore the precursor and concentrate on the target only.

Results & Discussion:

Analysis: A response was considered correct only when it was complete and contained the right noun and verb. Each correct response was given a score of 1, while no response, an incorrect response, or a partially correct response was given a score of 0. The scores for targets preceded by unrelated, semantically related noun, and semantically related verb phrases were tabulated separately; the maximum score for each of these three categories was 19. Testing was carried out in English.

Results:

All participants were highly proficient, with a speaking proficiency rating of 4 (on a 5-point scale, where 4 was the highest level recorded). The mean score for stimuli preceded by an unrelated precursor was 8, the mean score for stimuli preceded by a related noun precursor was 10, and the mean score for targets preceded by semantically related verb precursors was 12. The ranges of scores for the three sets, in the same order, were 6 to 13, 8 to 15, and 7 to 17. The standard deviation was highest for targets preceded by related verbs. The data were subjected to statistical analysis and were found not to be normally distributed.

Friedman's test was used to compare the accuracy scores for targets preceded by unrelated phrases, related noun phrases, and related verb phrases. The obtained χ2 was 3.93, and the corresponding p value indicated a significant difference. Hence, Wilcoxon's signed-rank test was used for pairwise comparisons. On comparing the accuracy scores of targets preceded by unrelated phrases with related noun phrases, related verb phrases with related noun phrases, and unrelated phrases with related verb phrases, Z scores of 3.54, 2.62, and 3.85 were obtained. The corresponding p values showed significant differences when the unrelated condition was compared with related verb phrases and when related noun phrases were compared with related verb phrases. No statistically significant difference was seen when the accuracy scores of the unrelated condition were compared with those of related noun phrases.
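A minimal Python sketch of this non-parametric workflow (a Friedman test followed by pairwise Wilcoxon signed-rank tests) is shown below; the accuracy scores are hypothetical, not the study's data.

  # Illustrative sketch only: hypothetical per-participant accuracy scores
  # (maximum 19) for the three precursor conditions.
  from scipy.stats import friedmanchisquare, wilcoxon

  unrelated    = [8, 7, 9, 6, 10, 8]
  related_noun = [10, 9, 11, 8, 12, 10]
  related_verb = [12, 11, 14, 10, 13, 12]

  chi2, p = friedmanchisquare(unrelated, related_noun, related_verb)
  print(f"Friedman chi-square = {chi2:.2f}, p = {p:.3f}")

  # Pairwise post-hoc comparisons with Wilcoxon signed-rank tests
  pairs = [("unrelated vs related noun", unrelated, related_noun),
           ("unrelated vs related verb", unrelated, related_verb),
           ("related noun vs related verb", related_noun, related_verb)]
  for label, a, b in pairs:
      stat, p_pair = wilcoxon(a, b)
      print(f"{label}: p = {p_pair:.3f}")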

The stimuli used were from the Action Naming Test, in which the emphasis is on the retrieval of verbs and the noun phrases are relatively unimportant. The noun phrase was the same across many target items, and the participants did not rely on the noun phrases for sentence production; hence, the noun phrase evoked neither facilitation nor inhibition. The related verbs facilitated retrieval: participants found it easier to retrieve sentences preceded by a related verb phrase, as the related verb phrase facilitated retrieval. Thus, the facilitation hypothesis was more applicable in this context. The accuracy scores were poorest for the unrelated condition, as the unrelated precursor played no role in retrieval of the target.

Summary & Conclusion:

The study was carried out with the aim of testing the facilitation and inhibition hypotheses with regard to sentence production. Forty participants were recruited and asked to name 57 picture stimuli borrowed from the ANT. Nineteen of these sentences were preceded by unrelated precursors, 19 by semantically related noun phrases, and 19 by semantically related verb phrases. Accuracy scores were better for related than for unrelated targets, favouring facilitation; the semantically related verb phrases facilitated the target.


  Abstract – LP581: Variations in Home Literacy Environment (HLE) of Children with Communication Disorders and their Typically Developing Peers Top


Sneha Mareen Varghese1, Anna Kariatty2 & Manasa R B3

1snehamv2002@yahoo.com,2ncphilip@gmail.com, &3manasa_rb11@yahoo.co.in

1Dr S R Chandrasekhar Institute of Speech and Hearing, Bengaluru - 560084

Introduction:

Emergent literacy describes the period from birth to about the end of preschool in which children achieve their earliest literacy abilities (Justice & Ezell, 2004). This period helps children achieve emergent literacy skills such as advancement in oral language competence, awareness of print, concepts related to book print, letter knowledge, phonological awareness, control of reading and writing, and matching of speech to print (Lipson & Wixson, 1991). A large body of literature indicates that these pre-literacy skills are predictors of children's later literacy achievements (Justice & Ezell, 2002; Bird, Bishop, & Freeman, 1995; Storch & Whitehurst, 2002), because effective acquisition of emergent and early literacy skills is significantly linked to later reading outcomes (Walpole, Chow & Justice, 2004; Chaney, 1998).

The home environment plays a major role in developing emergent literacy skills, as pre-schoolers may have opportunities at home to become accustomed to literacy materials, engage in shared literacy activities with other people, and benefit from the print-referencing behaviors that family members use during shared book reading. Thus, the home literacy environment (HLE) is presently viewed as a multidimensional construct consisting of various interconnected aspects associated with literacy and language development (Leseman & De Jong, 1998; Bus et al., 1995). Consequently, children who possess an active home literacy environment may enter school with well-developed emergent literacy skills, whereas children who lack a rich home literacy environment may face long-standing reading and writing deficits (Bird et al., 1995; Catts et al., 2001). Notably, the majority of the literature concerning the HLE has focused on preschool children with typical language development; exploring the HLE of children with communication disorders is therefore a critical area for research. Children with communication disorders are considered to be at increased risk for literacy difficulties. Accordingly, investigating the extent to which children with communication disorders experience different types of home literacy activities is essential to broaden Speech-Language Pathologists' (SLPs') understanding of how the HLE influences these children.

Need for Study:

It has been well documented that children with speech and language impairments are at risk of not acquiring the age-appropriate pre-literacy skills essential for later reading acquisition (Bishop et al., 2009). A possible reason is that these children may be exposed to less print and fewer early literacy activities than their typical peers because of their communication deficits. If this is the case, the home environment may intensify the language and literacy deficits of these children (Boudreau & Hedberg, 1999; Justice & Ezell, 2000).

Despite its growing significance, only a few studies have been carried out to determine the home literacy environment of children with communication disorders. In India, a few researchers have studied the home literacy environment of preschool children with typical development (Khurana and Prema, 2009). However, no studies focusing on the home literacy environment of children with communication deficits have been conducted, and the multidimensional nature of the home literacy environment was not considered in the existing studies. Recognizing this, the present study attempted to understand a broadly defined home literacy environment of children with typical development and children with communication disorders.

Aim & Objectives:

The current study was designed to examine the home literacy environment of children with communication disorders. A questionnaire that assesses the broad definition of home literacy environment was compiled for the study. The objectives of the study were as follows:

  1. To understand the variations in the components of home literacy environment of typically developing children and children with communication disorders.
  2. To explore the variations in home literacy environment of children with respect to the type of communication disorders.


Method:

The participants included 100 mothers of typically developing children (Group I) and 95 mothers of children with communication disorders (Group II). The communication-disordered group included children with autism, spoken language disorder, and hearing impairment. The questionnaire used to study the home literacy environment was based on the one used by Peeters et al. (2009) to assess the home literacy environment of children with cerebral palsy, with modifications to meet the objectives of the present study. All the items pooled were grouped under five subsections: parental activities for the child's literacy development, parental beliefs, parental expectations, materials for the child's literacy development available at home, and parents' own literacy materials and activities. The questionnaire was given for content validation to four speech-language pathologists with nearly 5 years of experience in the field. Mothers who met the inclusion criteria were instructed to complete the questionnaire. Once the completed questionnaires were received, the responses were analysed and tabulated. A score of 0-5 was assigned based on the response to each item, and the scores were subjected to statistical analysis using SPSS version 16.

Results & Discussion:

The questionnaire consisted of five domains: parental activities for the child's literacy development, parental beliefs about the HLE, parental expectations, materials for the child's literacy development available at home, and parents' own literacy materials and activities. Descriptive statistics (percentage frequency of behaviors) and inferential statistics (t-test) were applied to the collected data. Item-wise analysis (percentage of responses) of the parental activities revealed that children with communication disorders have fewer opportunities and less exposure to literacy activities at home than typically developing children. The results also suggest that more mothers of typically developing children performed HLE activities 'always' with their children than mothers of children with communication disorders. Taken together, there were statistically significant differences for a few of the items in the questionnaire. Significant differences were noted on the t-test (p < .05) for items such as providing cues to the child, waiting for the child to fill in the story, rephrasing the content of the story, and playing rhyming games with the child. These results are in dissonance with the previous findings of Boudreau (2005), which could be attributed to methodological variations between the studies. Additionally, no major differences in parental beliefs about the HLE and literacy behaviours were noted. Notably, mothers of typically developing children had more literacy materials than mothers of children with communication disorders. When examining the differences in the HLE of children with various communication disorders, the results revealed differences in a few of the items.
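As an illustration of the item-level group comparison described above, a minimal Python sketch of an independent-samples t-test is shown below; the item scores are hypothetical.

  # Illustrative sketch only: comparing hypothetical scores (0-5) on one
  # questionnaire item between the two groups of mothers.
  from scipy.stats import ttest_ind

  td_mothers = [5, 4, 5, 3, 4, 5, 4]   # hypothetical, typically developing group
  cd_mothers = [3, 2, 4, 2, 3, 3, 2]   # hypothetical, communication-disorder group

  t_stat, p_value = ttest_ind(td_mothers, cd_mothers)
  print(f"t = {t_stat:.2f}, p = {p_value:.3f}")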

Summary & Conclusion:

The purpose of the study was to determine the variations in the home literacy environment of typically developing children compared to children with communication disorders. Results revealed differences in the frequency of occurrence of a few parental activities between typical children and children with communication disorders, as well as differences in the HLE across disorders. The results indicate that exploring the home literacy experiences of children with communication disorders during assessment may eventually improve SLPs' abilities to incorporate literacy goals and activities during intervention. Additionally, information regarding the HLE will aid clinicians during counseling.


  Abstract – LP582: Effects of Vocal and Sub vocal Rehearsal on Recall Top


Nahida C K1, Nidha Fathima2, Shana Yasmin3 & Malavi Srikar4

1nahidayoosuf@gmail.com,2nidhapadiyath@gmail.com,3shanayasmin97@gmail.com, & 4p_srikar@rediffmail.com

1All India Institute of Speech and Hearing, Mysuru - 570006

Introduction:

Memory is a mental faculty that enables retention of information over time. Three stages are conceptualized in memory: encoding, storage, and recall. Encoding refers to the mental process of receiving, processing, and combining information; storage refers to creating a record of the encoded information; and recall refers to the act of retrieving this stored information. Information may be stored in short-term memory or long-term memory. Numerous studies have revealed that rehearsal techniques facilitate better retention and recall of information (Miller, McCulloch & Jarrold, 2015). Rehearsal can be divided into two variants based on modality, namely vocal and subvocal rehearsal.

Vocal rehearsal involves repeating the target items aloud. Subvocal rehearsal, on the other hand, involves the movement of the muscles associated with speaking without any verbal output; this internal speech cannot be detected overtly and may not be noticeable to speakers themselves. Immediate recall tasks have been widely employed to tap individuals' short-term memory abilities. The phonological loop is a construct implicated in maintenance rehearsal that also serves short-term memory. It comprises two components: a subvocal rehearsal mechanism called the articulatory rehearsal loop and a short-term store called the articulatory store. Subvocal rehearsal is said to constantly refresh the memory traces of the items held in the short-term store, thus facilitating better performance in recall (Baddeley, 2015).

Research has indicated that subvocal rehearsal procedures produce better recall performance than vocal rehearsal (Fischler, Dewey & Atkinson, 1970). Some authors, on the other hand, report better improvement in recall abilities with overt or vocal rehearsal procedures (Bebko, 1979), whereas other studies suggest that rehearsal itself plays little role in the development of verbal short-term memory performance (Jarrold, 2013). Hence, there is a lack of consensus in the literature regarding the effects of vocal and subvocal rehearsal procedures on verbal recall.

Need for Study:

Rehearsal is known to facilitate recall, but the benefit varies as a function of the mode of rehearsal. Some studies in this direction have indicated that subvocal rehearsal is better than vocal rehearsal, while a few others negate this view and indicate that vocal rehearsal is better than subvocal rehearsal. The current study investigates the effect of the mode of rehearsal on recall in typically developing adults.

Aim & Objectives:

Aim: To investigate if the mode of rehearsal influences recall.

Objectives: To compare the performance of group 1 participants on recall for set 1 and set 2. To compare the performance of group 2 participants for set 1 and 2.

Method:

Forty neurotypical adults served as participants, divided into two groups designated group 1 and group 2. Group 1 consisted of 20 individuals aged 19-25 years, while group 2 consisted of 20 individuals aged 55-60 years. An equal number of males and females was included in each group.

Stimulus: The stimuli consisted of 96 pictures, divided into two sets of 48 each, designated set 1 and set 2. The 48 pictures of set 1 were presented in six stimulus sets of 8 pictures each; set 2 consisted of the same number of stimulus sets with the same number of pictures. The pictures were presented through a PowerPoint presentation. The participants were shown the pictures and asked to rehearse the first set of pictures (set 1) through the vocal mode of rehearsal and the second set (set 2) through the subvocal mode.

Procedure: The participants carried out a recall task in which they were asked to view the picture stimuli, remember the pictures, and rehearse the picture names through the vocal mode for set 1 and through the subvocal mode for set 2. Vocal rehearsal referred to overtly naming the labels, while subvocal rehearsal referred to silently rehearsing the item labels. The participants rehearsed vocally or subvocally between the picture presentations.

Scoring: Serial recall was used, with participants asked to adhere to the order of stimulus presentation. Each correctly recalled picture was given a score of 1. The maximum score for each stimulus set was 8, giving a maximum of 48 each for set 1 and set 2 (8 x 6 stimulus sets). Scores were tabulated separately for set 1 and set 2.

Results & Discussion:

Group 1 participants secured a score of 46 for set 1 and 45 for set 2, while group 2 participants secured a score of 34 for set 1 and 38 for set 2. The scores were further analysed separately for males and females in each group. In group 1, males secured a score of 45 and females a score of 47 for set 1, while both males and females secured 45 for set 2. In group 2, males secured scores of 32 and 40 for the two stimulus sets respectively, and females secured scores of 36 and 36.

The objectives of the study were to compare the performance on recall for set 1 and set 2 in group 1 and group 2 participants separately; between-group comparisons were not carried out as they were not part of the stated objectives. As the data were not normally distributed, Wilcoxon's signed-rank test was used. The Z scores obtained by comparing recall on set 1 and set 2 were 1.72 for group 1 and 3.12 for group 2, and the corresponding p values showed a significant difference only for group 2.
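A minimal Python sketch of the Wilcoxon signed-rank comparison described above is shown below; the serial-recall totals are hypothetical, not the study's data.

  # Illustrative sketch only: hypothetical per-participant serial-recall totals
  # (maximum 48 per set) for the vocally and subvocally rehearsed sets.
  from scipy.stats import wilcoxon

  set1_vocal    = [46, 44, 47, 45, 43]
  set2_subvocal = [45, 45, 46, 44, 44]

  stat, p_value = wilcoxon(set1_vocal, set2_subvocal)
  print(f"Wilcoxon statistic = {stat}, p = {p_value:.3f}")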

Group 2 comprised the older participants, and this group performed better on set 2, i.e., on items rehearsed through the subvocal mode, than on set 1, indicating that subvocal rehearsal was better than vocal rehearsal for them. When items are rehearsed overtly, the lexical node may impede recall of the next item, and this phenomenon would be more pronounced in older adults, owing to which recall would have varied as a function of the mode of rehearsal especially for this group. The younger individuals, on the other hand, may not have experienced inhibition for items rehearsed through the vocal mode; as a result, their scores did not show a significant difference.

Summary & Conclusion:

The study was carried out with the aim of investigating whether the mode of rehearsal influences recall. The participants were divided into group 1 (19-25 years) and group 2 (55-60 years), with 20 participants in each group, and were asked to recall pictures presented in two sets. Items in the first and second sets were rehearsed through the vocal and subvocal modes of rehearsal, respectively. Performance did not vary as a function of the mode of rehearsal for the first group, while for the second group performance was better for items rehearsed through the subvocal mode.


  Abstract – LP586: Collaboration between Speech Language Pathologists and Teachers in Mainstream and Special Education in schools- A survey Top


Sanya Modi1 & Pooja Thakkar2

1sanyamodi2@gmail.com &2poojarajivthakkar@gmail.com

1The Gateway School of Mumbai, Mumbai - 400088

Introduction:

Communication is an essential skill right from childhood, and it supports as well as impacts a child's language, social, emotional, and academic development. A speech-language pathologist (SLP) is trained to work with children having communication difficulties and combines students' communication goals with academic and social goals. Research suggests that collaboration between school teachers and SLPs is beneficial for school-going children with communication disorders. Studies show that successful collaboration between school teachers and SLPs promotes a holistic approach to meeting the child's communication needs (Wright & Kersner, 2004). Many studies have also suggested that teachers who work jointly with SLPs are more aware of the impact of communication disorders on their students' academic and social success and make more appropriate classroom adaptations.

Need for Study:

The traditional model of speech therapy service delivery in the Indian context has largely been that of an SLP in a clinic implementing direct one-on-one intervention with a child. However, best-practice research shows that the consultative and collaborative model, in which the SLP is seen as the expert or specialist who regularly advises (consults with) the classroom teacher and is in direct contact with the child in the classroom, is more effective. In the Indian context, however, there is little clarity on the knowledge, attitude, and awareness that teachers in mainstream and special needs schools have regarding the effectiveness of collaboration with SLPs for children with communication difficulties in school settings.

Aim & Objectives:

This study aims to explore perceptions, knowledge, and attitudes of school teachers who are involved with children having communication difficulties, regarding their views on collaboration with the SLPs. The objective of the study is to explore issues and consider recommendations for change in the service delivery provided by SLPs to school-going children with communication disorders.

Method:

A list of mainstream and special needs schools in the city of Mumbai was made. From the 10 schools that agreed to participate in the study, a validated questionnaire was given to a group of 10-12 randomly selected teachers from each school. A survey questionnaire was used as it was believed to be the best method of reaching the selected sample within a limited time span. The final sample comprised 50 participants. The content of the questionnaire was developed in consultation with 3 SLPs and 2 educators and by reviewing the literature to develop the themes to be explored. The questionnaire sought data relating to:

  1. Participant demographic information (gender, job title, years of experience)
  2. Awareness of SLPs and their services
  3. Awareness and attendance of training courses provided by SLPs
  4. Perceived importance of collaboration between teachers and SLPs
  5. Desired frequency and format of collaboration of SLPs with teachers
  6. Satisfaction of working collaboratively
  7. Suggested improvements to collaborative working.


Both quantitative and qualitative data were obtained, using a combination of five-point Likert scale items and three open questions at the end of the questionnaire to allow for narrative answers. The inclusion of open-ended questions (e.g., their views on current collaboration with an SLP and changes they would like to see in the future) gave a holistic view of what recommendations could be made for the future and allowed participants to add comments or highlight issues not covered by the other questions. Percentage analysis was done to examine the results.
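As an illustration of the percentage analysis mentioned above, a minimal Python sketch is shown below; the Likert responses are hypothetical.

  # Illustrative sketch only: percentage of responses at each Likert rating
  # (1 = 'not important at all' ... 5 = 'very important'), hypothetical data.
  from collections import Counter

  responses = [5, 4, 5, 3, 5, 4, 5, 5, 4, 3]
  counts = Counter(responses)
  total = len(responses)
  for rating in range(1, 6):
      pct = 100 * counts.get(rating, 0) / total
      print(f"Rating {rating}: {pct:.1f}%")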

Results & Discussion:

Based on the responses received, it was found that 88% of the participants were aware of SLPs and their services, while only 41% were aware of when to refer a child to an SLP. The lowest awareness and attendance (35%) was reported for the workshops and seminars conducted by SLPs for training teachers. When the participants were asked how important they consider collaboration between teachers and SLPs to be, no participant rated it as 'not important at all' or 'not important', and 52.1% rated it as 'very important'. The most preferred method of collaboration with an SLP was meeting in person (62%), with a written format the second choice (23%). Less than half (48%) of participants were satisfied with the current standard of collaboration, 20% were dissatisfied, and the rest were undecided. Most participants (58%) preferred meeting with the SLPs more than once per term.

More than half (56%) of the teachers found the speech and language assessment and therapy reports useful for setting the child's IEP. When asked about the purpose of collaboration with the SLP, the main themes that emerged were the benefit of the child; providing strategies for teachers to support children's speaking, listening, and language skills in class; exchanging information; monitoring progress; and setting specific goals. Some participants found it effective to discuss intervention techniques and IEP targets and to share concerns regarding the child's communication skills. However, when asked about their views on current collaboration, some teachers felt the information was unavailable or insufficient. They expressed a lack of practical advice from SLPs to implement in class, a lack of support from the SLPs in intervening with the child, and a lack of feedback after sessions. Some teachers reported that the SLPs did not consistently discuss intervention techniques or programs with them, and hence they undervalued the importance of collaboration with the SLPs.

It was suggested that there should be an increase in SLP time in schools, continuous and regular contact, and mandatory teacher training programs conducted by SLPs. Teachers also responded that SLPs are crucial in language classes and should be more involved in conducting programs with students in school rather than just giving recommendations and advice. They also suggested the appointment of SLPs in mainstream and government schools, especially for primary-year programs, to build the foundation of language and communication skills.

Summary & Conclusion:

There is a need for greater provision of information regarding SLP service delivery, creating awareness and ongoing examination of collaboration between the SLP and teachers for best meeting a child's communicative needs. As children with different types of communication problems are now commonly integrated within a mainstream classroom setting, it has become important to provide SLP services within school settings with the collaboration of classroom teachers for better generalization of communication skills. Since there is a lack of awareness among teachers about the training programs provided by SLPs, these could be publicized more widely. Research has shown that few mainstream school teachers receive any information on speech and language impairments as part of their initial training (Sadler, 2005). RCI can emphasize the need for teachers to be trained in working with children who have communication difficulties. Teachers and SLPs should observe and review school programs where joint collaboration is successful to highlight good practice. These schools could then be used as a model to be implemented in the schools where collaboration is reported to be poor. SLPs and school teachers should be seen as equals so that the information flow is bi-directional.


  Abstract – LP590: Highlighting the Role of Speech Language Pathologist in Assessment and Management of Post Japanese Encephalitis: A Single Case Study Top


Nidhi Thomas1, Bhupendra Kurmi2, Priyanka3 & Garima Dixit4

12571665@gmail.com,2tularamkurmi17@gmail.com,3dpriyanka006@gmail.com, &4garimadixit73@gmail.com

1Sri Aurobindo Institute of Medical Sciences, Indore - 453555

Introduction:

Japanese encephalitis, known as the 'plague of the Orient', is the commonest cause of epidemic viral encephalitis globally (Rashmi Kumar, 2014). In India, JEV activity was first noticed in Nagpur, Maharashtra, in 1952. Japanese encephalitis virus (JEV) is a flavivirus related to the dengue, yellow fever, and West Nile viruses and is spread by mosquitoes. Rapid onset of high fever, headache, stiffness of body parts, insensibility, unconsciousness, disorientation, sometimes a state of coma, seizures, spastic paralysis, cognitive issues, and inability to respond and speak are commonly seen in these cases. According to WHO (2019), 20-30% of patients with Japanese encephalitis suffer from severe speech and language disorders.

Need for Study:

Japanese encephalitis is the most prevalent and significant mosquito-borne viral encephalitis, with an estimated 30,000 to 50,000 cases and 15,000 deaths annually (Gitalim Kakoti et al., 2013). About 20% to 30% of JE cases are fatal, and many survivors continue to have long-term neurological, psychiatric, speech, language, or cognitive problems. Children under 15 years of age are more prone to the infection.

Early detection of speech, language, and cognitive issues is therefore most important, because early assessment and intervention can have a significant impact on the long-term prognosis of many children with Japanese encephalitis. The need for the present study was to increase experts' knowledge about children affected by Japanese encephalitis and to highlight the speech and language characteristics that strongly impact the overall quality of life of these children.

Aim & Objectives:

The study was undertaken to better understand and determine the clinical profile, highlighting the clinical features related to speech, language, hearing, and cognition, of a case of Japanese encephalitis reported at the Department of Speech and Hearing, Sri Aurobindo Institute of Medical Sciences (SAIMS), Indore.

Method:

We describe a five-year-old male child who reported to SAIMS with a chief complaint of reduced speech and language skills.

Assessment was completed by a pediatrician, a speech-language pathologist, a psychologist, and an audiologist. The pediatrician clinically examined the detailed spectrum of the disorder.

The psychologist assessed intellectual ability and social skills using the Wechsler Intelligence Scale for Children-Revised (WISC-R; 1974) and the Indian adaptation of the Vineland Social Maturity Scale (VSMS; Malin, 1992), respectively.

The speech-language pathologist administered the Receptive Expressive Emergent Language Scales (Kenneth R. Bzoch), assessed attention level (Reynell, 1978), and administered the Speech and Language Development Chart, 2nd Ed. (Gilman & Gorman) and the 3-Dimensional Language Acquisition Test (Geeta Harlekar). The audiological evaluation, carried out by the audiologist, included tympanometry and Brainstem Auditory Evoked Potential (BAEP) testing.

Results & Discussion:

The present case showed the classic description of illness across all three stages of Japanese encephalitis, characterized by fever, headache, vomiting, altered sensorium, seizures, disorientation, focal weakness, choreoathetosis, facial grimacing, and lip smacking with ataxia. With these clinical features, the pediatrician diagnosed the case as Japanese encephalitis.

The case history revealed that the child was the first baby of a consanguineous marriage, delivered by cesarean section at 39 weeks of gestational age with a normal birth weight (2.8 kg). The post-natal history indicated Japanese encephalitis at the age of 19 months; the child was kept in the NICU for 2 months owing to a comatose condition. After treatment for Japanese encephalitis, the child showed regression in speech, language, and cognitive skills, including no verbal utterances, lack of understanding of familiar commands, and failure to recognise family members. He received physiotherapy for 2 years but did not receive speech therapy during this period. After a gap of two years, the parents came to our department (Sri Aurobindo Institute of Medical Sciences) with a chief complaint of no verbal utterances and limited understanding of speech and language.

Speech Language Evaluation:

Speech and language analysis indicated regression in speech and language skills. On the Receptive Expressive Emergent Language Scales, the receptive language age was 20 to 24 months and the expressive language age 3 to 4 months. Attention level was evaluated and suggested level 1 (1-2 years). The speech and language development chart revealed the following ages: phonology, 0 to 3 months; semantics, 3 to 6 months; play development, 0 to 3 months; and pragmatics, 3 to 6 months. The 3D-LAT revealed a comprehension age of 18 to 24 months, an expression age of less than 9 months, and a cognition age of 9 to 11 months. Memory was assessed informally and demonstrated lack of identification of family members, common objects, and body parts. A detailed oral peripheral mechanism examination showed weakness and reduced strength, tone, accuracy, and range of motion of all articulators (lips, tongue, jaw, palate) with and without resistance, with absent reflexes (jaw jerk, rooting, sucking, and smacking).

In the audiological evaluation, tympanometry and reflexometry revealed bilateral 'A'-type tympanograms with acoustic reflexes present, suggestive of no middle ear pathology. ABR testing indicated well-formed responses to click stimuli, suggestive of bilateral hearing sensitivity within normal limits. The psychological evaluation revealed a mild intellectual deficit.

Intervention in this case comprised 12 sessions, which resulted in marked improvement in speech, language, and cognitive skills. The following ages illustrate the child's improvement. The Receptive Expressive Emergent Language Scales indicated a receptive language age of 24 to 27 months and an expressive language age of 12 to 14 months. Attention level was level 2 (2-3 years). The speech and language development chart revealed the following achieved and emerging ages: phonology, emerging 18 to 24 months; semantics, 12 to 18 months; play development, 9 to 12 months; syntax and morphology, emerging 12 to 18 months; and pragmatics, 12 to 18 months. The 3-Dimensional Language Acquisition Test results were a comprehension age of 24 to 26 months (emerging), an expression age of 12 to 14 months, and a cognition age of 15 to 17 months. Informal memory assessment revealed that the child was able to identify family members, common objects, and body parts.

Summary & Conclusion:

Assessment of speech and language skills in this case of Japanese encephalitis showed regression of speech, language, and cognitive skills after the viral infection. Continued and appropriate intervention produced marked improvement in speech, language, and cognitive skills. This report can serve as a stepping stone for exploring assessment and therapeutic intervention in Japanese encephalitis, and adequate awareness and counselling of parents and professionals can help improve the overall quality of life of the child.


  Abstract – LP593: Play and Social Skills Development and Correlation in Children Top


Ujjwal Gaurav1, Nirbhay2 & Irfana M3

1ujjwalgaurav00@gmail.com,2nirbhaykumar495@gmail.com, &3fanairfana@gmail.com

1Netaji Subhash Chandra Bose Medical College, Jabalpur - 482003

Introduction:

Play skills are part of the developmental pattern of a child and are a collective result of cognitive, communicative, and motor development. Previous studies have reported improved executive functions and self-regulatory abilities in preschool children attending play-based curricula (Diamond, Barnett, Thomas & Munro, 2007; Hyson, Copple & Jones, 2007). Similarly, another study reported better short- and long-term academic, motivational, and well-being outcomes in children who were active on the playground (Marcon, 2002).

Need for Study:

As seen in the review of literature, there is a positive relation between cognitive, social, and communication abilities in children who have adequate play skills. Play behaviours and environments vary based on place, economic status, and the presence of siblings. Similarly, the social setting in which the child grows up is a major factor in their overall development.

Aim & Objectives:

The study aimed to understand the developmental pattern of play skills and the correlation between social and play skills across the ages of 1 to 5 years.

Method:

Participants: There were 47 participants (21 males and 26 females), all in the age range of 1 to 5 years: 15 participants in the first group (1 to 2 years), 11 in the second group (2 to 3 years), 16 in the third group (3 to 4 years), and 5 in the fourth group (4 to 5 years). All were informally assessed as having no speech, language, hearing, or neurological deficit. All participants were from middle socioeconomic status; 36 had siblings and 11 did not.

Material: Two checklists were used to assess social and play skills in children. The Social Skill Development Checklist (Kid Sense Child Development, 2017), covering the age range 0 to 7 years, was used to assess social skills; this checklist is categorised into eight age groups, and all questions have a binary choice (yes or no). The Play Checklist (Sandra & Hewitt, 2010), consisting of close-ended questions with a varied number of answer options across questions, was used to analyse play skills development in the participants.

Procedure: Written consent was obtained from parents prior to the interview and observation of the child. Each child was observed and the parents were interviewed separately, and the sessions were recorded for further analysis.

Results & Discussion:

The results showed that children in the first age group (1 to 2 years) were interested in playing with objects substituted for other objects; this was the same across all age groups, and no developmental pattern was seen for this behaviour in the present study. No role playing was seen in the first age group; however, children started using one sequence of play by the second age group, and in the third and fourth age groups a combination of sequences in role playing was seen. Verbalization during the play scenario appeared to develop from the first age group onwards: the first group did not use pretend words while playing, and words to describe substitute objects appeared from age 3. Regarding entry into a play group, the first group did not attempt to enter a play group, while the other groups asked to enter. In problem solving, the first age group sought adult assistance, while the other groups attempted to solve problems on their own.

Correlation coefficients showed a significant correlation between social and play skills according to age (p < 0.05). The correlation coefficients varied from 0.56 to 0.87 and were positive in direction.
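As an illustration of the age-wise correlation analysis reported above, a minimal Python sketch is shown below; the abstract does not specify which coefficient was used, so a rank-based (Spearman) correlation is assumed, and all scores are hypothetical.

  # Illustrative sketch only: correlating hypothetical social and play scores
  # within each age group.
  from scipy.stats import spearmanr

  scores_by_group = {                      # (social_score, play_score) pairs
      "1-2 years": [(10, 8), (12, 11), (9, 9), (14, 12)],
      "2-3 years": [(15, 13), (17, 16), (14, 15), (18, 17)],
  }                                        # remaining groups omitted

  for group, pairs in scores_by_group.items():
      social, play = zip(*pairs)
      rho, p = spearmanr(social, play)
      print(f"{group}: rho = {rho:.2f}, p = {p:.3f}")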

Summary & Conclusion:

The present study showed an improvement in the developmental pattern of play and social skills from ages 1 to 5 and a significant correlation between the two domains. This is congruent with previous studies that showed better social outcomes in children interested in play (Diamond et al., 2007; Hyson et al., 2007; Marcon, 2002). Children were using speech output while playing by the second age group, which is earlier than reported in a previous study (Lenormand, 1986).

The present study highlights the importance of play skills in children and describes their developmental pattern in the current Indian scenario. However, a longitudinal study would have served better, given the focus on the developmental pattern.


  Abstract – LP598: Effect of Distance between the Marker Agreement Dependencies: Grammatical Judgement of Sentences Top


Darshan H.S

darshanhs23@gmail.com

All India Institute of Speech and Hearing, Mysuru - 570006

Introduction:

Rules and regularities are embedded in all language structures, and extracting them helps in speech-language acquisition and processing (Saffran, Aslin & Newport, 1996). A wide range of information, such as lexical-semantic, phonetic, morpho-syntactic, and contextual cues, plays a crucial role in processing language in real-life situations. Sentence processing relies on the transitional probability of the dependencies present within the sentence.

A recent topic of debate is how individuals utilize the covert structures in a sentence in order to process it. The majority of studies deal in particular with sentences containing relative clauses that modify the head noun phrase. Processing of subject and object relative clauses has been studied extensively. Both types of relative-clause sentences involve a non-adjacent dependency between the head noun and the main verb across the embedded clause. In addition, the object relative clause involves back-tracking of a non-local dependency (between an embedded verb and its antecedent object); because of this structure, object relative clauses are complex in nature. It has been concluded that object relative sentences are complex and more difficult to process than subject relative sentences (Gibson, 1998). These well-established results have been obtained using tasks such as reading aloud, response accuracy to probe questions, and online lexical decision.

Another set of researchers has considered that information from the inflectional morphemes marking dependencies within a sentence is required for processing the complete sentence. These studies examined how learning/training occurs when this information is manipulated in artificial languages that mimic natural language. Tracking short- and long-distance dependencies with respect to marker agreement in a sentence has been less studied.

Need for Study:

For successful sentence processing, available information about the dependencies present within the sentence has to be used.

Probabilistic information about dependencies helps in sentence comprehension. In the literature, many studies have been carried out on different relative-clause sentences in natural language, learning in artificial languages, and the adjacency effect of dependencies. These studies have been carried out in inflectional and non-inflectional languages, mainly in the Western context; in the Indian context, there is a dearth of research on this topic. Dependency distance and marker agreement (PNG markers and tense markers) can be considered as variables to study the role of transitional probabilistic information in the sentence comprehension process using an online sentence judgment task in a South Indian language, i.e., Kannada.

Aim & Objectives:

Aim: The present study aimed at investigating the effect of the distance between agreement dependencies in sentences in neurotypical individuals.

Objectives: To compare the reaction time and accuracy scores across three types of sentences.

Method:

Participants: Twenty neurotypical individuals aged 18-40 years were recruited for the study. Only those free from any speech, language, or hearing disorder were selected. All participants were native Kannada speakers with a minimum of 10 years of formal education who were able to read Kannada sentences.

Materials: The sentences used in this task were manipulated based on the distance between the dependencies of the marker agreement. The examples in English (1 & 2) and Kannada (3 & 4) are as given below.

Example 1: The boy is sleeping; Example 2: The boys are sleeping

From examples 1 and 2 above, the auxiliary verb (is/are) depends on the plural marker or inflectional morpheme of the word 'boy/boys'.

Example 3: Non-anomalous sentence: /avanu malagidanu/ (he slept): short-distance gender agreement. Anomalous sentence: /avanu malagidalu/: short-distance gender agreement.

Example 4: Non-anomalous sentence: /Avanu manchchada mele malagidanu/ (he- cot-on- slept): long distance gender agreement. Anomalous sentence: /Avanu manchchada mele malagidalu/.

A total of 60 Kannada sentences were used, grouped into three categories: short sentences with a short distance between dependencies (SSD), longer sentences with a long distance between dependencies (LLD), and longer sentences with a short distance between dependencies (LSD). The agreement markers in the sentences were manipulated with respect to the distance between them and grouped into adjacent (short-distance) and non-adjacent (long-distance) sentence types.

Procedure: Participants were seated in front of the laptop screen in a room free from noise and visual distracters. The participants were instructed to read the sentence and judge whether it is grammatically correct or not by pressing the key corresponding to 'yes' or 'no' on the keyboard.

Scoring: A correct judgment was scored as 1 and an incorrect judgment as 0, giving a maximum score of 20 for each sentence type (each type consisted of 20 sentences).

Results & Discussion:

Accuracy and reaction time measures were derived for each sentence type. The mean accuracy score was 19.50 for SSD sentences, 19 for LLD sentences, and 18.50 for LSD sentences. The mean reaction time was 2028 ms for SSD, 2631 ms for LLD, and 3287 ms for LSD sentences. The data were subjected to a normality test and were found not to be normally distributed; hence, non-parametric tests were applied. To verify the difference between accuracy scores of the three sentence types, the Friedman test was applied, and a statistically significant difference was found (p < 0.05). Further, the Wilcoxon signed-rank test was used to examine differences across all possible pairs; statistically significant differences were found for SSD vs LLD (|z| = 2.27, p < 0.05), SSD vs LSD (|z| = 2.91, p < 0.05), and LLD vs LSD (|z| = 2.44, p < 0.05). To verify the difference in reaction time across the three sentence types, the Friedman test was again applied and a statistically significant difference was noted (p < 0.05). Pairwise Wilcoxon signed-rank tests showed significant differences for SSD vs LLD (|z| = 2.41, p < 0.05), SSD vs LSD (|z| = 3.29, p < 0.05), and LLD vs LSD (|z| = 3.29, p < 0.05).
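
As a rough illustration of the analysis pipeline described above, the sketch below runs a Friedman test followed by pairwise Wilcoxon signed-rank tests in Python with SciPy. The score arrays are hypothetical placeholders (not the study's data), and SciPy reports the Wilcoxon W statistic rather than the |z| values quoted in this abstract.

```python
# Illustrative only: hypothetical per-participant accuracy scores (out of 20)
# for the three sentence types; the study's actual data are not reproduced.
import numpy as np
from scipy.stats import friedmanchisquare, wilcoxon

rng = np.random.default_rng(0)
ssd = rng.integers(18, 21, size=20)   # placeholder SSD scores
lld = rng.integers(17, 21, size=20)   # placeholder LLD scores
lsd = rng.integers(15, 20, size=20)   # placeholder LSD scores

# Omnibus comparison across the three related samples
stat, p = friedmanchisquare(ssd, lld, lsd)
print(f"Friedman chi-square = {stat:.2f}, p = {p:.3f}")

# Post-hoc pairwise comparisons with the Wilcoxon signed-rank test
pairs = {"SSD vs LLD": (ssd, lld), "SSD vs LSD": (ssd, lsd), "LLD vs LSD": (lld, lsd)}
for name, (a, b) in pairs.items():
    w, pw = wilcoxon(a, b)
    print(f"{name}: W = {w:.1f}, p = {pw:.3f}")
```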

Better performance was observed for the SSD type compared to the other two types. This can be attributed to the shorter length of the sentence, leading to better accuracy and faster judgment: the shorter the sentence, the lower the load on working memory and the faster the reading and processing. Better accuracy scores and faster reaction times were also seen for the LLD type than for the LSD type. It had been hypothesized that processing would be more difficult for the LLD type than the LSD type, on the assumption that a greater distance between dependencies requires more processing time because of the greater demand on working memory to hold the information. However, this hypothesis was not supported. The poorer performance on the LSD type of sentence can instead be attributed to its sentence structure.

Summary & Conclusion:

The present study aimed at understanding the effect of the distance between agreement dependencies in a sentence in neurotypical individuals. A significant difference was found across sentence types for both accuracy and reaction time measures.

Better performance was seen for SSD compared to LLD and LSD. The LSD type yielded lower scores than the other two types, which can be attributed to its sentence structure.


  Abstract – LP599: Effectiveness of Semantic Based Treatment in Persons with Aphasia Top


Deepak P

deepakdeepu9327@gmail.com

All India Institute of Speech and Hearing, Mysuru - 570006

Introduction:

Semantic treatment is considered one of the salient approaches for treating word-retrieval deficits in persons with aphasia (PWA). A semantic-based approach also has the virtue of strengthening semantic relationships, which in turn improves processing and naming. These approaches have shown robust findings when treating nouns and action verbs, and may therefore have a prominent role in the rehabilitation of aphasia.

Most left-hemisphere strokes affect the cortices connecting the semantic system, that is, the interface that connects perisylvian language cortices and association cortices. Hence, it becomes important to address the association cortices of the right hemisphere, where semantic knowledge is assumed to be preserved, and training these areas becomes pivotal using a semantic-based approach. Although semantic treatment can be delivered in several ways, certain approaches stand out, for instance Semantic Feature Analysis (SFA) and Verb Network Strengthening Treatment (VNeST). The former helps elicit names of nouns and verbs through pictures using the semantic features of the particular word, whereas the latter uses verbs as the core element to elicit nouns relevant to the particular verb.

The paradigm of VNeST (Edmonds, Nadeau & Kiran, 2009) was used in the current study. While retaining the basic principles and steps of VNeST, the researchers designed a slightly different protocol titled Semantic Cueing for Verb and its Thematic Roles (SCVTr), in which all the steps of VNeST were carried out but with slight changes: semantic cues were introduced, graded from more complex to less complex, and visual cues were used to train the assigned verbs and their thematic roles. The premise is that a protocol using multimodal cues facilitates word retrieval with greater ease. With these modifications, the current study was carried out.

Need for Study:

Research on semantic-based treatment in aphasia rehabilitation is imperative because such approaches are amenable to systematic application, and it is hypothesized that improvement might generalize beyond the trained probes, owing to the involvement of a systematic process of word retrieval in general. However, previous research along these lines has yielded equivocal findings. The SCVTr approach used in the current study takes verbs as the core element of training, since verbs play a pivotal role in sentence formation and carry critical meaning in sentences. Researchers have argued that when a verb is heard or read, it activates generalized situation knowledge, and this activated knowledge encompasses the agent and patient associated with the verb. The SCVTr approach thereby bolsters the semantic network and activates the surrounding semantic network via spreading activation. In this way, PWA are likely to improve their retrieval abilities across words, sentences, and discourse. Thus, the present study has a crucial role in aphasia rehabilitation. Additionally, word retrieval is influenced by the nature of the language used; hence, investigating this question in the Indian scenario becomes a fundamental goal of the researcher.

Aim & Objectives:

The current study was conducted with the intent of understanding the effectiveness of Semantic Cueing of Verbs and its Thematic role (SCVTr) on word retrieval abilities in PWA.

  1. To investigate the effectiveness of SCVTr approach, on word retrieval abilities (Agent+patient+verb) of trained and untrained conditions.
  2. To compare the discourse abilities across baseline, mid and post treatment session.
  3. To determine the error patterns across baseline, mid and post treatment session.


Method:

Participants

Three participants diagnosed with aphasia were recruited for the study. All were native speakers of Kannada. The participants were diagnosed with Broca's aphasia; participant one (P1) was aged 64 years, P2 was aged 42 years, and P3 was aged 31 years.

Materials and Stimulus

Nouns and verbs for the study were selected from different resources, such as the mental lexicon of nouns and verbs in adult speakers of Kannada, and were validated. From the set of validated stimuli, 20 verbs and 60 nouns were employed, along with flash cards and written cue cards. The 20 verbs were randomly assigned as 10 trained and 10 untrained stimuli. The ANT, BNT, and a discourse analysis scale were used for pre-, mid-, and post-treatment assessment.

Procedure

The present study was carried out in two phases. Phase 1 involved assessing noun retrieval, verb retrieval, discourse quotient, and aphasia quotient. Phase 2 involved delivering the SCVTr approach, which consisted of a series of steps. In step 1, the PWA was asked to retrieve the verbs with the help of semantic cues (minimal cues), varying from broader to more specific cues. If the PWA failed to retrieve the verb, a visual cue along with a written cue, followed by a maximum cue (one target card along with three foils), was given. The PWA was then encouraged to retrieve the agent and patient associated with each verb, using the same cueing strategy as in verb training; the researcher had the PWA elicit a minimum of 2-3 agent-patient pairs for each verb. In step 2, the pairs were read aloud. In step 3, questions pertaining to the elicited pairs were asked. In step 4, semantic judgment for the pairs was carried out. In step 5, the trained verb was retrieved with or without prompting. In step 6, all the steps were carried out without cues. The researcher provided 20 therapy sessions (60-80 minutes each).

Results & Discussion:


Summary & Conclusion:

Although preliminary, the SCVTr approach seems conceivable, because it draws on the notions of semantic expansion, Hebbian learning, and so on. This study attempts to comment on the prominence of semantic cues in word-retrieval treatment in general.


  Abstract – LP600: Dysexecutive Aphasia, a Better Aphasia Taxonomy: A Case Study of a Left-Handed Young Woman who Underwent Clipping of an Aneurysm in the Right Frontotemporal Area Top


Farheen Karim1 & Sonal V Chitnis2

1farheenkhoja@gmail.com &2sonalc123@gmail.com

1School of Audiology and Speech language Pathology, Bharati Vidyapeeth (Deemed to be) University, Pune - 411030

Introduction:

The incidence rate of stroke in India is 119-145 per 100,000, and there is a rise in incidence among young adults (Mukesh Kumar Sharma & Naveen Meena, 2019). Available research on the epidemiology of stroke in India and on post-stroke speech, language, swallowing, physical, and cognitive morbidity is scarce. Post-stroke aphasia in dextrals is well investigated and reported; however, aphasia in young sinistrals and their recovery are less reported, even in the Western literature. Female survivors of stroke face varied consequences, especially in the Indian scenario, where women take a leading role in household responsibilities, on the cultural front, and in connecting the whole family and society at the intra- and inter-individual level. Dominant-hemisphere stroke may lead to serious language and communication impairment. With current advances in early identification and timely intervention of stroke, mortality has decreased, but stroke-related morbidities have not been eliminated completely. Among the important factors determining stroke intervention outcome with respect to neural plasticity, young patients present with a varied clinical picture of recovery based on disrupted versus preserved neural networks in the brain. However, only short-term prognosis has been widely evaluated, and there have been few investigations of long-term functional recovery in young adults who experience ischemic stroke. The heterogeneous clinical picture of the cognitive-behavioural profile in left-handed stroke aphasia is interesting and less explored, as many such patients do not fit into the traditional Boston aphasia classification, presenting with mixed fluent aphasia characteristics. Language organization depends on hemispheric dominance, and the recovery pattern after brain damage is also influenced by it to some extent. The apraxia profile varies between right-handers and left-handers in some domains (Goldenberg, 2013). There is a scarcity of literature on the limitations of the Boston aphasia classification in charting the spontaneous recovery pattern and evolution of aphasia among young left-handed persons with respect to handedness, dominance, language lateralization, reorganization, and outcome measurement across microvascular surgical intervention versus traditional conservative stroke management groups, particularly in the Indian bilingual scenario.

Need for Study:

Post-stroke intervention is less studied with reference to speech, language, cognition, motor skills, and communicative effectiveness in young patients with cerebrovascular disorders, particularly left-handed persons with aphasia. There is a need to reconsider differences and integrities when classifying aphasia in the traditional system across right-handers and left-handers (Basso & Zanobio, 1990). Further studies are essential for robust diagnosis and intervention in view of the cognitive-linguistic-communicative outcome of post-stroke aphasia in sinistrals and dextrals (Patidar, Gupta, Khwaja, Chowdhury, Batra & Dasgupta, 2013; Bhan & Chitnis, 2010). The current study focuses on profiling the progress of an apractic, aphasic, left-handed, literate, bilingual woman who underwent clipping of an aneurysm and was followed up for speech-language therapy. It also examines the recovery pattern of speech, language, communication, and cognition with ongoing intervention, using subjective evaluation without objective pre- and post-intervention neuroimaging correlates of language.

Aim & Objectives:

To investigate speech, language, communication, and cognition in a left-handed, literate, bilingual woman who presented with loss of language following clipping of an aneurysm at the anterior branch of the right MCA, and to investigate the pre-, mid-, and post-therapy language, cognition, and communication profile of this young left-handed woman after acute right MCA stroke.

Method:

A 38-year-old left-handed female presented with sudden left-sided weakness, difficulty in speaking, throbbing headache, and inability to walk, and was admitted to the emergency room of Bharati Hospital on 20th April 2018. Detailed medical investigation was carried out: MRI of the brain revealed an acute non-haemorrhagic infarct in the right MCA territory and a well-ballooned aneurysm at the right MCA branch. After mechanical clipping of the aneurysm, the patient was stable but presented with loss of speech and left hemiplegia. Detailed assessment included a bedside evaluation while the patient was in hospital, an apraxia and bedside swallowing protocol, the Western Aphasia Battery (WAB) Marathi adapted version, Addenbrooke's Cognitive Examination-III (ACE-III) Marathi adapted version, a Test of Communicative Effectiveness, and verbal DDK to assess the patient's language and cognition.

Results & Discussion:

Pre-intervention, mid-therapy, and post one-year therapy findings are summarized to chart neural recovery and cerebral reorganization in left-handed aphasia in a subjective manner. Thorough assessment 4 days post-onset revealed the following scores: the Western Aphasia Battery gave an aphasia quotient of 18.3, which classifies her as having Broca's aphasia. The ACE-III revealed a total score of 9/100 (Attention 2/18, Memory 0/26, Fluency 2/14, Language 5/26, Visuospatial 0/16). The score on the Test of Communicative Effectiveness (CE) (Rao, Karbhari-Adhyaru & Chitnis, 2016) was at the 30th percentile. Verbal DDK was impaired: SMR 3 per 5 seconds, AMR 1 per 5 seconds. Based on this clinical profile, a provisional diagnosis of bilingual dysexecutive aphasia was made, as the clinical correlation of spontaneous speech, naming, fluency, repetition, and much better reading aloud at the word level contraindicated the WAB-based diagnosis of Broca's aphasia under the Boston classification. Premorbid and postmorbid handedness are the same; however, the patient is unable to write voluntarily, can only write her name, and can copy. The therapy plan included a goal of improving reasoning and evaluative thinking up to 70% in clinical and extra-clinical situations, and a cognitive-linguistic-based aphasia intervention plan was carried out accordingly. The client has improved up to 50% but still needs some cues for syntactic structure. HELPSS (Helm Elicited Language Program for Syntax Stimulation; Helm-Estabrooks & Nicholas, 1995) showed a 70-80% improvement in syntax. Modified verbal DDK as a therapeutic tool (Chitnis, 2018) and the VNeST program (Edmonds, 2009) demonstrated 70% progress in verb naming and use. The apractic component was reduced with the help of the 8-step continuum program, showing 40-50% improvement, and CE improved to the 65th percentile. Rhythm training was attempted, but the client did not respond well. Post 6 months, the bedside Western Aphasia Battery screening tool revealed anomic aphasia, and the patient had significantly improved speech and language skills. Post 1 year, the bedside Western Aphasia Battery revealed no aphasic component; the ACE-III revealed a total score of 77/100 (Attention 15/18, Memory 20/26, Fluency 10/14, Language 20/26, Visuospatial 12/16). The patient had significant difficulties only in generative naming. Left-handed persons with aphasia have shown different and faster language recovery than right-handed individuals with aphasia; the literature supports bilateral asymmetrical language representation in left-handed individuals, which may be the reason for better collateral language recovery. The Boston classification has many shortcomings when it comes to atypical aphasia presentations, such as aphasia in sinistrals or crossed aphasia. Ardila (2010) gives better insight into post-stroke aphasia recovery in a young left-handed patient who shows dysexecutive dysfunction in verbal cognitive characteristics, which the Boston classification is unable to depict. After 1 year of follow-up, the patient is non-aphasic with mild hemiparesis evident, able to speak fluently, but has poor verbal fluency on controlled oral word association, a task sensitive to generative naming and the prime verbal executive function essential for complex sentence construction in spoken and written discourse.

Summary & Conclusion:

The recovery pattern and effectiveness of therapeutic strategies in this sinistral patient are difficult to fit into the current aphasia classification nomenclature. Advanced stroke intervention can help minimize stroke-related morbidities and improve QOL if intensive neurorehabilitation is carried out. It is essential to follow up cognitive-communicative functions throughout the recovery process every 3 to 6 months, along with neurological examination, to track vascular cognitive health in stroke patients. More research is required on cognitive-linguistic and communicative-based therapeutic planning, recovery patterns in stroke in right- versus left-handers, and language recovery. Vascular cognitive impairment (VCI) in stroke in the young is less explored in the Indian scenario, and the speech-language pathologist plays an important role in the assessment and rehabilitation of VCI of mild to moderate degree.

  Abstract – LP601: Evaluating Spoken Language Output using Mean Length of Utterance among Hearing Impaired Children using Behind the Ear Hearing Aids, Children with Cochlear Implants and Children with Normal Hearing in Bangla Language: A Comparative Study Top

Vijaya Sinha1, Shalini Kumari2, Pamela Samaddar3 & Sujoy Kumar4

1piyush.sinha12345@gmail.com,211shalinisharma@gmail.com,3pamelas@rediffmail.com, &4kumarmakar@yahoo.com

1Ali Yavar Jung National Institute of Speech and Hearing Disabilities (Divyangjan), ERC, Kolkata - 700090

Introduction:

Language is a significant means of communication in everyday life; its development begins at birth, even before children are able to speak words.

Language has five major domains, namely phonology, morphology, syntax, semantics, and pragmatics, and language skills are acquired and develop at different rates (Steph Klark, 2017). Morphological development is analyzed by computing a child's Mean Length of Utterance (MLU). MLU is a valid and reliable measure of morphosyntactic complexity up to school age (Mimeau et al., 2015). Acquisition of syntax was found to be directly proportional to MLU (Deepak and Karanth, 2009). Narrative speech elicited longer MLU than conversation and free play (Rezapour et al., 2011). An increase in sentence complexity is bound to occur with an increase in MLU score (Brown, 1998; Hammer, 2010). Brown found that counting MLU in morphemes (MLU-m) was the most sensitive measure of language development for determining increased utterance length. Brown (1973) also divided MLU-m into six stages: Stage I, 12-26 months, MLU-m 1.0-2.0; Stage II, 27-30 months, 2.0-2.5; Stage III, 31-34 months, 2.5-3.0; Stage IV, 35-40 months, 3.0-3.75; Stage V, 41-46 months, 3.75-4.5; and Stage V+, 47+ months, 4.5+. Significant delays in spoken language acquisition have been found for children with hearing impairment (Nicholas et al., 2006). BTE hearing aids are beneficial for people who require significant amplification across a number of frequencies due to moderate to severe hearing loss. The CI is beneficial for children with severe to profound hearing loss who do not benefit from hearing aids despite having an intact auditory nerve. MLU in words improved after 18 months of implant experience (Bollard et al., 1999).

Children who receive cochlear implants before a considerable delay in spoken language develops (i.e., between 12 and 16 months of age) have a high probability of achieving age-appropriate spoken language (Nicholas & Geers, 2006). Children with CI show a lag at onset but appear to improve over time and may, in many cases, reach levels approximating those of peers with typical hearing (Faes & Gillis, 2015).

Need for Study:

There is no study in the Bangla language highlighting differences in speech-language output, as measured by MLU, among hearing impaired children using different amplification devices (BTE hearing aid and cochlear implant) and children with normal hearing. The need of this study, therefore, is to document the emergence of utterance length (in both words and morphemes) in hearing impaired children who use hearing aids and cochlear implants.

Aim & Objectives:

This study aims to predict and compare the differences in MLU (in both words and morphemes) among the children with hearing impairment using BTE hearing aid and cochlear implant and children with normal hearing.

Method:

PARTICIPANTS: A total of 60 participants who were native Bengali speakers aged between 6 and 10 years were selected. Participants were divided into three groups: Group I comprised 20 normal hearing children with a mean age of 8.1 years (SD = 1.41), Group II comprised 20 hearing impaired children using BTE hearing aids with a mean age of 7.9 years (SD = 1.29), and Group III comprised 20 hearing impaired children using cochlear implants with a mean age of 7.6 years (SD = 1.23).

TOOLS: 1. Topic cards (including topics like the picnic, the park, the bus street, the market, etc.). 2. Brown's (1973) stages of MLU assessment tool and its guidelines. 3. SALT (Systematic Analysis of Language Transcripts), Miller and Iglesias, 2016.

PROCEDURE: Step 1: Data recording was done in a quiet situation; each child was tested to produce 100 utterances (50 each for spontaneous and elicited speech) and the responses were audio-taped. Step 2: Transcription of the data was done using the SALT 16 software. Step 3: Data analysis was based on the first 50 complete utterances from each spontaneous and elicited speech sample. Step 4: MLU values in words and morphemes were computed separately for spontaneous and elicited utterances. Step 5: Average MLU in words and in morphemes was computed using the formula (MLU for spontaneous utterances + MLU for elicited utterances) / 2. Step 6: MLU (in words and morphemes) was compared across all three groups (I, II, III). Step 7: MLU-w and MLU-m were compared within each group.
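
The averaging in Step 5 is simple enough to sketch in code. The following Python fragment is illustrative only: the utterances are toy romanized strings with hypothetical hyphen-marked morpheme boundaries, whereas the study used SALT for transcription and segmentation.

```python
# Minimal sketch of the MLU computation described above; transcripts here are
# hypothetical toy examples, with '-' marking one extra bound morpheme per word.
def mlu(utterances, unit="word"):
    """Mean length of utterance in words (MLU-w) or morphemes (MLU-m)."""
    total = 0
    for utt in utterances:
        words = utt.split()
        if unit == "word":
            total += len(words)
        else:  # morphemes: one free morpheme per word plus one per hyphen
            total += sum(1 + w.count("-") for w in words)
    return total / len(utterances)

spontaneous = ["ami khel-i", "baba ja-chhe bajar-e"]          # toy samples
elicited = ["chhele-ta boi por-chhe", "ma rann-a kor-chhe"]   # toy samples

# Average MLU = (MLU for spontaneous + MLU for elicited) / 2, as in Step 5
for unit in ("word", "morpheme"):
    avg = (mlu(spontaneous, unit) + mlu(elicited, unit)) / 2
    print(f"Average MLU-{unit[0]}: {avg:.2f}")
```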

SPSS (Statistical Package for the Social Sciences, version 16) on a personal computer was used for statistical analysis.

Results & Discussion:

Statistical analysis was done to compare MLU-w and MLU-m across the three groups. The results show an increasing trend in mean MLU score with increasing chronological age for all three groups, i.e., normal hearing (Group I), hearing impaired using BTE hearing aids (Group II), and hearing impaired using cochlear implants (Group III), which means that as the children grew up they produced a higher proportion of longer utterances than their younger peers (Mimeau et al., 2015). MLU-m scores were greater than MLU-w scores by an average of 29% (range 12% to 42%) at the higher age range of 8 to 10 years. The correlation between MLU-m and MLU-w was 0.727 for Group I, 0.989 for Group II, and 0.987 for Group III, all significant at the 0.01 level. Inter-group comparison showed that the mean difference between Group I and Group II was 1.73350 (p = 0.000) for MLU-w and 3.60800 (p = 0.000) for MLU-m, between Group I and Group III was 1.25350 (p = 0.000) for MLU-w and 2.89250 (p = 0.000) for MLU-m, and between Group II and Group III was -0.48000 (p = 0.026) for MLU-w and -3.60800 (p = 0.260) for MLU-m, at the 0.05 level of significance. These findings could be explained by the fact that normal hearing children used a greater number of complex words, i.e., words containing bound morphemes, than their hearing impaired peers using BTE hearing aids and cochlear implants. The MLU score (in both words and morphemes) was highest for the normal hearing participants and lowest for the BTE users, whereas children with cochlear implants had MLU scores greater than BTE users but lower than normal hearing children, which aligns with the study of Nicholas et al. (2006) on delayed language learning in children with CI. It was also observed that children from all groups produced complete sentences for the topic cards used to elicit speech, whereas they produced shorter utterances when engaged in simple conversation with their mothers.

Summary & Conclusion:

This study provides documentation of the effect of different amplification devices, hearing aid (BTE) or cochlear implant, on speech-language output in terms of MLU development in deaf children. Children with cochlear implants (Group III) show a lagged onset but appear to improve over time and may, in many cases, catch up with peers with normal hearing. The expressive vocabulary of cochlear implant users was found to be larger, and their utterances more grammatically constructed, compared to their BTE-user peers. Children with CI exhibited the use of a greater number of bound morphemes compared to children using BTE hearing aids. In spite of using BTE hearing aids or cochlear implants from an early age (3 years or below), the children were still delayed in learning spoken language.


  Abstract – LP602: Relation between Phonological Working Memory and Phonological Awareness Skills in Kannada English Bilingual children Top


Sai Samyuktha1, Ashwini N2, Saraswathi Thupakula3 & Sunil Kumar4

1samyukthav408@gmail.com,2venkatasatyanp.16ec@saividya.ac.in,3saraswathi.aslp@gmail.com, &4rsunilkumar86@gmail.com

1Shravana Institute of Speech and Hearing, Ballari - 583104

Introduction:

Phonological awareness skills refer to the ability of children to decode the sounds in a word and are crucial for the development of reading abilities among school-going children. Language processing and acquisition involve several cognitive processes such as working memory, attention, and executive control. Working memory helps us store information temporarily and is involved in manipulating that information.

Need for Study:

Among the three subcomponents of working memory, the phonological loop (referred to as phonological working memory, PWM) (Baddeley & Hitch, 1974) is an important component for processing auditory/sound information, including identification, discrimination, and understanding of sounds, and is known to play a crucial role in the acquisition and processing of various language components. However, the relation between PWM and phonological awareness skills in bilingual children is not well understood.

Aim & Objectives:

The present study was conducted with an aim of comparing the differences between Kannada and English on phonological awareness skills and to explore the relation between PWM and phonological awareness skills in Kannada-English bilingual children of 4th, 5th and 6th grades.

Method:

A total of 60 typically developing Kannada-English bilingual children (20 from each of the 4th, 5th, and 6th grades), with Kannada as L1 and English as L2, were randomly selected from a regular school. All the children belonged to lower and upper-middle socioeconomic status on the Modified Kuppuswamy Socioeconomic Scale (Saleem, 2019), and all participants were screened informally to rule out speech, language, and hearing problems. All the children participated in phonological awareness tasks such as rhyme recognition, syllable deletion, syllable oddity (words), syllable oddity (non-words), phoneme stripping, and phoneme oddity in Kannada and English. The Kannada stimuli were selected from the Reading Acquisition Profile in Kannada (Prema, 1974), and the English stimuli from the Phonological Awareness Skills Test (PAST; Catts & Scott, 1994). For assessing phonological working memory, non-word repetition (NWR) and backward digit span (BDS) tasks were used. The stimuli for the non-word repetition task in Kannada were selected from the Word and Nonword Repetition Test in Kannada (WNRT-K; Swapna & Shylaja, 2012), and the English stimuli from the Test of Early Nonword Repetition - Revised (Stokes & Klee, 2011). All participants were seated comfortably and were instructed about the tasks before assessment of phonological awareness skills and phonological working memory abilities. The responses were audio recorded and subjected to further analysis using SPSS v20.0 software.

Results & Discussion:

The data were analyzed using paired-samples t-tests to compare the performance between L1 and L2 for each of the phonological awareness tasks among the three groups. Among 4th grade children, the results of statistical analysis revealed significant difference on rhyme recognition task between L1 (M=7.15, SD=1.80) and L2 (M=8.7, SD=1.18), t(19)=-3.829, p<0.05; on syllable deletion task between L1 (M=7.5, SD=1.70) and L2 (M=9.5, SD=0.60), t(19)=-5.210, p<0.05; on syllable oddity (nonwords) between L1 (M=6.90, SD=2.19) and L2 (M=7.95, SD=1.95), t(19)=-2.465, p<0.05; on phoneme stripping task between L1 (M=7.80, SD=1.70) and L2 (M=8.90, SD=1.97), t(19)=-2.496, p<0.05; however, no significant difference was found on phoneme oddity between L1 (M=6.60, SD=2.06) and L2 (M=6.25, SD=0.71), t(19)=0.709, p>0.05; syllable oddity (words) between L1 (M=7.55, SD=1.79) and L2 (M=7.90, SD=1.73), t(19)=-1.116, p>0.05. Among 5th grade children, the results of statistical analysis revealed significant difference on syllable deletion task between L1 (M=8.45, SD=1.19) and L2 (M=9.30, SD=0.80), t(19)=-2.817, p<0.05; on syllable oddity (words) task between L1 (M=6.60, SD=1.53) and L2 (M=7.45, SD=1.05), t(19)=-2.203, p<0.05; and on phoneme oddity between L1 (M=7.20, SD=1.85) and L2 (M=5.70, SD=1.21), t(19)=2.727, p<0.05; however, no significant difference was found on rhyme recognition task between L1 (M=8.40, SD=1.78) and L2 (M=8.45, SD=1.35), t(19)=-0.123, p>0.05; syllable oddity (non-words) between L1 (M=6.40, SD=2.72) and L2 (M=6.75, SD=1.20), t(19)=-0.497, p>0.05; and on phoneme stripping task between L1 (M=7.65, SD=2.34) and L2 (M=7.75, SD=2.44), t(19)=0.916, p>0.05. Among 6th grade children, the results of statistical analysis revealed significant difference on rhyme recognition task between L1 (M=8.25, SD=1.65) and L2 (M=9.45, SD=0.82), t(19)=-3.335, p<0.05; on syllable oddity (words) task between L1 (M=7.60, SD=1.46) and L2 (M=5.85, SD=2.25), t(19)=2.87, p<0.05; and on syllable oddity (nonwords) between L1 (M=7.80, SD=1.76) and L2 (M=6.55, SD=1.84), t(19)=2.190, p<0.05; on phoneme stripping task between L1 (M=8.85, SD=1.59) and L2 (M=9.85, SD=0.48), t(19)=-2.814, p<0.05; and on phoneme oddity (nonwords) between L1 (M=7.65, SD=2.43) and L2 (M=6.10, SD=1.99), t(19)=2.443, p<0.05; however, no significant difference was found on syllable deletion task between L1 (M=9.7, SD=0.73) and L2 (M=9.90, SD=0.30), t(19)=-1.073, p>0.05.

Correlation analysis was carried out using Pearson's correlation coefficient and the results revealed moderate correlation between NWR and syllable deletion tasks (r=0.409); syllable oddity (nonwords) (r=0.482) in L1; and between NWR and syllable deletion task (r=0.475) in L2 among 4th grade children. Among 5th grade children, moderate correlation was found between phoneme stripping and BDS task (r=0.589) in L1, and in L2, moderate correlation was found between BDS task and rhyme recognition (r=0.434), syllable oddity (words) (r=0.362), phoneme stripping (r=0.551). Among 6th grade children, moderate correlation was found between BDS task and syllable oddity (r=0.462) in L1, and in L2, moderate correlation was found between BDS task and rhyme recognition (r=0.342).
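
To make the analysis pipeline concrete, here is a minimal Python/SciPy sketch of a paired-samples t-test and a Pearson correlation of the kind reported above. The arrays are hypothetical placeholders generated around two of the reported means; they are not the study's data.

```python
# Illustrative sketch only: hypothetical scores standing in for the study's data.
import numpy as np
from scipy.stats import ttest_rel, pearsonr

rng = np.random.default_rng(1)
l1_rhyme = rng.normal(7.15, 1.80, size=20)   # e.g., rhyme recognition in L1
l2_rhyme = rng.normal(8.70, 1.18, size=20)   # e.g., rhyme recognition in L2

# Paired-samples comparison of L1 vs L2 for one task
t, p = ttest_rel(l1_rhyme, l2_rhyme)
print(f"Paired t-test: t(19) = {t:.3f}, p = {p:.3f}")

# Correlation between a PWM measure and a phonological awareness measure
nwr = rng.normal(20, 3, size=20)             # placeholder non-word repetition
syll_del = rng.normal(8, 1.5, size=20)       # placeholder syllable deletion
r, p_r = pearsonr(nwr, syll_del)
print(f"Pearson correlation: r = {r:.3f}, p = {p_r:.3f}")
```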

Discussion: The results of the present study revealed a developmental trend from 4th to 6th grade in both phonological awareness skills and phonological working memory skills, which continue to develop until 6th grade and beyond in both L1 and L2, with better performance in English than in Kannada. The better performance in English may be attributed to the training on phonological awareness skills given in school in the English language. The results contradict the notion that the transparency of a language affects the development of phonological awareness skills, as reported by Anthony et al. (2011), Durgunoglu et al. (1993), and Gorman and Gillam (2003), among others. The results also indicate that literacy instruction is an important factor in developing phonological awareness skills (Bruck & Genesee, 2008) and support the notion that teaching phonological awareness skills in both L1 and L2 is important, as it will help children acquire reading abilities with much greater ease (Stewart, 2004). The present study further revealed a moderate correlation between phonological working memory and a few of the phonological awareness skills in bilingual children, as also reported by Gindri et al. (2007), Adams (1995), Oakhill and Kyle (2000), and Milwidsky (2008).

Summary & Conclusion:

The present study revealed better performance in L2 than in L1, which can be attributed to language exposure through formal training in school. However, the effects of the phonological complexity of the language, language proficiency, and level of language exposure on the acquisition of phonological awareness skills in bilingual children are not well understood. More studies on larger populations are required in future to generalize these results and to identify the role of other cognitive factors affecting performance in L1 and L2.


  Abstract – LP603: Knowledge, Attitudes, and Practices of Healthy Adults towards Cognitive Communicative Well Being Top


Annu Maria Thomas1 & Gagan Bajaj2

1annutom2525@gmail.com &2gagan.bajaj@manipal.edu

1Kasturba Medical College, Mangalore - 575001

Introduction:

The World Health Organization estimated that, as of 2015, 46.8 million people were living with age-linked cognitive communicative disorders such as dementia. These numbers are expected to double every 20 years, reaching 74.7 million by 2030 and 131.5 million by 2050. In world economies like India, where the elderly population is growing rapidly, a higher increase in the incidence of dementia is expected (Prince, 2015). The dementia report of India projected 3.7 million Indians as having dementia in 2010, with a total societal cost of 147 billion Indian rupees, and further predicted that these numbers might double by 2030 while costs would increase three-fold (Shaji et al., 2010). This demands timely action to prevent or delay the onset of the disorder (Das, Pal, & Ghosal, 2012). One of the primary preventive measures in this direction would be the improvement of cognitive communicative reserves among healthy aging adults.

However, people would make efforts towards improvement of cognitive wellbeing, only if they possess the knowledge, attitude and practices in the right direction. Therefore, the first step for primary prevention of age linked cognitive communicative disorders would be to explore the KAP of healthy aging adults towards cognitive communicative wellbeing.

Anderson et al. (2009) reviewed the national status of public perceptions about cognitive health and Alzheimer's disease in the U.S. population, assessed in four domains: knowledge, beliefs, concerns, and sources of information among healthy adults. The findings indicated that most adults are aware of Alzheimer's disease but lack precise information about the disease and specific information about cognitive health. Another KAP-related study by Tan et al. (2012) found that the public may not be ready for screening initiatives and early dementia diagnosis. Along similar lines, there is limited Indian data on public perceptions of cognitive communicative wellbeing.

Need for Study:

Considering the dearth of research data highlighting the KAP of healthy adults in India towards cognitive communicative wellbeing, and the national urgency of primary preventive measures for promoting cognitive communicative wellbeing among healthy aging adults, it is important to conduct research that focuses on extracting information about what people know, think, and practice regarding their cognitive communicative wellbeing. The present research is an initiative in this direction.

Aim & Objectives:

The present research aimed to explore the knowledge, attitudes, and practices of healthy adults towards cognitive communicative wellbeing.

Method:

The research followed a cross-sectional study design and convenience sampling to recruit the participants. A total of 45 healthy adults were recruited, fifteen each from young, middle-aged, and older adult groups. The participants were at least graduates educated in the English medium and belonged to non-health professional backgrounds. Participants with a history of any psychological or neurological disorder were excluded. The study assessed the participants' knowledge, attitudes, and practices towards maintaining cognitive communicative wellbeing using an open-ended knowledge, attitude, and practice (KAP) questionnaire developed by the investigators.

The term 'brain fitness' was used operationally to represent 'cognitive communication', as this terminology is easier for the general public to understand and follow. The knowledge domain was assessed using questions pertaining to definitions of brain fitness, the functions that determine brain fitness, and the factors that affect it. Attitudes were probed with questions about which medical professional deals with the rehabilitation of brain fitness, the importance of brain fitness check-ups, and participants' satisfaction with their own brain fitness. Questions were also asked about the activities that are good for brain fitness and the quality time spent engaging in these activities, in order to evaluate the practices of these individuals.

Qualitative analysis was performed on the data gathered through the KAP questionnaire. An inductive approach was followed, wherein two researchers read the narratives several times and each independently coded the data into themes.

Quotations representative of the theme were highlighted in the text. The two researchers compared the list of themes, and derived a common list of themes by mutual consensus.

Results & Discussion:

Healthy adults in the various age groups had different perspectives on the term 'brain fitness'. Young adults related brain fitness mostly to thinking ability, middle-aged adults defined it as acting normally, while older adults considered it mental fitness. Thinking, memory, and leading a proper life were reported as the brain functions determining brain fitness by young, middle-aged, and older adults, respectively. Most of the young and middle-aged adults chose the internet as their primary source of information on brain fitness, while older adults relied on doctors. Doctors were most frequently named by young and middle-aged adults as the professional who provides information about brain fitness, while older adults preferred psychologists.

Every participant agreed that brain fitness can be improved. Middle-aged and older adults cited reading as one of the best-known activities for brain fitness, while young adults preferred meditation. Meditation, puzzles, and reading were the activities young adults reported using to keep their brains fit. Like middle-aged adults, older adults also mentioned reading as one of their most preferred activities, and engaging in spiritual activities such as prayer was also considered an activity among older adults. Older adults spent the greatest amount of quality time (more than 2 hours per day) engaging in the above-mentioned activities, followed by young adults (1 hour per day) and middle-aged adults (30 minutes per day).

All three groups had concerns pertaining to memory issues. For mental wellbeing, young and middle-aged adults considered brain fitness and physical fitness to be equally important, whereas older adults gave more weight to brain fitness. All the adults gave uniform importance to brain fitness and physical fitness check-ups. Middle-aged participants often exchanged views on brain fitness with their family, friends, and colleagues, whereas young and older adults reported no such discussions. None of the participants was aware of the availability of various brain-exercising apps.

Overall, although participants were concerned about their own cognitive wellbeing and aware of the cognitive decline associated with ageing, they were neither aware of nor had initiated any measures for its prevention.

Summary & Conclusion:

The findings of the present research highlight the lack of appropriate knowledge, attitudes and practices among the healthy aging adults. It is extremely important for professionals like speech language pathologists to take community based initiatives to spread awareness about the age linked cognitive communicative decline and ways to enhance the cognitive reserves. This would be a significant national and global contribution in view of the alarming predictions about the incidence of age linked cognitive communication disorders.


  Abstract – LP606: Combined Effect of Transcranial Direct Current Stimulation (TDCS) and Naming Therapy to Treat Word Retrieval Deficits in Kannada-Tulu Bilingual Persons with Aphasia Top


Wasim Ahmed1, Shivani Tiwari2 & Gopee Krishnan3

1wasim.9@gmail.com,2tiwarishivani.2009@gmail.com, &3brain.language.krishnan@gmail.com

1Manipal College of Health Professions, Manipal, 576104

Introduction:

Bilingualism is a fascinating cognitive skill humans possess. More than half of the world's population is bilingual and bilingualism is the norm (in evolution) rather than an exception (Ansaldo et al., 2008). While most people enjoy the social and vocational benefits of being bilingual, stroke could potentially hamper the affected person's ability to use either language, leading to bilingual aphasia.

While conventional speech-language therapy (SLT) can improve cortical neuroplasticity, this effect may be further enhanced by the application of neuro-stimulation techniques (Sanders et al., 2016). Transcranial Direct Current Stimulation (TDCS) is an established technique for stimulating cortical neural structures associated with language processing.

Differing views exist on the roles of the left hemisphere (LH) and right hemisphere (RH) in post-stroke recovery. According to one view, subsequent to LH injury, the homologous areas of the RH take over the lost LH functions (Warburton et al., 1999). In contrast, better recovery has been reported when perilesional areas take over the compromised functions. While neuroimaging evidence has been reported for both claims, the emerging consensus is that RH activation after stroke is detrimental and maladaptive and hinders recovery of the lost functions (Saur et al., 2006; Rosen, 2000).

When the LH is damaged, the intact RH is disinhibited (it is generally under inhibition by the intact LH); this is known as transcallosal disinhibition (Martin et al., 2009). Several studies have provided supportive evidence for increased RH activation following LH damage, especially in the right inferior frontal gyrus (IFG) and superior temporal gyrus (STG), compared to healthy controls. Right IFG activation has been associated with errors of omission and semantic paraphasias during picture naming (Perani et al., 2003). The rationale of TDCS here is primarily to suppress the excess RH activity and thus maximize LH functioning. In sum, TDCS has led to long-lasting beneficial language outcomes when the right frontal regions are inhibited through stimulation.

In the Indian context, TDCS has been applied only in the investigations on a few psychiatric conditions (Praharaj, 2011; Chatterjee, Kumar, & Jha, 2012; Kumar, 2011). To date, no studies have been reported on the application of TDCS in the rehabilitation of (bilingual) aphasia in the Indian context.

Need for Study:

TDCS is a promising, noninvasive neuro-stimulation approach successfully applied in monolingual persons with aphasia. However, the facilitatory therapeutic effects of this technique in Bilingual persons with aphasia (BPWA) remains unexplored.

Considering the growing proportion of BPWA, it is imperative to investigate the effect of novel approaches such as TDCS in rehabilitation of word-finding difficulties in this population. This has considerable implication in a dominantly multi-lingual country like India. Beneficial effect (if any) would help reduce the dual impact of bilingual aphasia in terms of the time, personnel, as well as the financial burden of the affected persons.

Aim & Objectives:

To investigate the facilitatory effect of TDCS when clubbed with behavioural speech-language therapy to treat word retrieval deficits in Kannada-Tulu bilingual persons with aphasia.

Method:

Participants:

Five participants with post-stroke chronic aphasia were recruited for the present study. Time after onset of aphasia varied from 1 year to 20 years. All subjects were males (mean age 62.6 years), had suffered a left middle cerebral artery infarction, were right-handed and had full awareness of the deficits. All participants were native Kannada-Tulu bilinguals. Two of the subjects were graduates with good socioeconomic status whereas the rest had education only up to high school and belonged to poor socioeconomic status.

Procedure:

As this data was obtained as part of an ongoing project, Institutional Ethics Committee clearance, CTRI registrations and Neurologist's approval were obtained before recruiting any participant for the trial.

For baseline assessment, a trial naming test with 20 line-drawn pictures from the Snodgrass and Vanderwart (1980) corpus was developed. This 260-picture corpus was standardized in Kannada by Ahmed and Krishnan (2008). Name agreement scores for this picture corpus in Tulu were obtained from 10 native Tulu speakers. Of the 260 pictures, 80 were found to be non-cognate words in the Kannada-Tulu language pair. Ten pictures with high name agreement scores and 10 pictures (5 + 5) with moderate and low name agreement scores were selected as stimuli for this test. This naming test obtained a mean score of 98.1 from 10 normal participants.
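
The abstract does not spell out how name agreement was quantified; one common convention is the percentage of raters who produce the modal (most frequent) name for a picture. The sketch below illustrates that convention with toy rater responses; the labels and the formula are assumptions, not the study's reported method.

```python
# Hypothetical sketch: name agreement as the percentage of raters who give the
# modal (most common) name for a picture. Toy data only.
from collections import Counter

def name_agreement(responses):
    """Percentage of raters giving the most common name for one picture."""
    counts = Counter(responses)
    modal_count = counts.most_common(1)[0][1]
    return 100.0 * modal_count / len(responses)

# Toy responses from 10 raters for one picture (placeholder labels)
raters = ["katti", "katti", "katti", "katti", "katti",
          "katti", "katti", "kathi", "katti", "katti"]
print(f"Name agreement: {name_agreement(raters):.0f}%")  # prints 90%
```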

TDCS Procedure:

Participants were seated comfortably on a chair in a well-lit, ventilated room, and it was ensured that none had applied any oil or cream on the scalp. The region of stimulation (F8) on the right hemisphere was identified using the 10-20 montage system, with measurements taken using a flexible centimetre tape. First, the length from the nasion to the inion was measured, and the 10% marks above the nasion (FPz) and inion (Oz) were identified. The FPz-to-Oz circumference was then measured, and a point 5% of this circumference to the right of FPz was marked (true FP2). From this true FP2, a further 10% of the total circumference to the right was identified as F8, the site for cathode placement. The anode was placed on the contralateral supraorbital region. A total stimulation time of 20 minutes, excluding a ramp-up period of 1 minute, was given to each client once a day, for a total of 10 sessions.
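
The electrode-placement arithmetic above reduces to a few percentages of two head measurements. The following minimal Python sketch computes the tape distances under those percentages; the example head measurements are hypothetical, and only the 10%/5%/10% proportions follow the text.

```python
# Minimal sketch of the scalp-measurement arithmetic for locating F8 (cathode).
# Example measurements are hypothetical; only the percentages follow the text.
def f8_offsets(nasion_inion_cm, head_circumference_cm):
    """Return tape distances (cm) used to locate FPz, true FP2, and F8."""
    fpz_above_nasion = 0.10 * nasion_inion_cm        # FPz: 10% above the nasion
    fp2_right_of_fpz = 0.05 * head_circumference_cm  # true FP2: 5% right of FPz
    f8_right_of_fp2 = 0.10 * head_circumference_cm   # F8: a further 10% right
    return fpz_above_nasion, fp2_right_of_fpz, f8_right_of_fp2

fpz, fp2, f8 = f8_offsets(nasion_inion_cm=35.0, head_circumference_cm=56.0)
print(f"FPz is {fpz:.1f} cm above the nasion")
print(f"True FP2 is {fp2:.1f} cm right of FPz along the circumference")
print(f"F8 is {f8:.1f} cm further right of true FP2")
```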

SLT procedure:

Ten colour picture cards consisting of 5 categories of nouns, with two pictures per category, were chosen for training. The therapy incorporated Semantic Feature Analysis (SFA) and Phonological Component Analysis (PCA) for noun retrieval training; SFA was the first stage, followed by PCA, in keeping with the cueing hierarchy. If a person could successfully name a picture card consistently for 3 consecutive sessions, that picture was replaced with another from the respective category. It was ensured that none of the pictures resembled the concepts used in the outcome measure tool. The SLT session lasted 40 minutes every day, uninterrupted, for 10 days. After the 10th session, the naming test was re-administered.

Results & Discussion:

A paired-samples t-test showed a significant difference [t(4) = -5.184, p = 0.007] in the pre-post scores for Kannada (L1), whereas for Tulu (L2) no significant difference [t(4) = -0.837, p = 0.450] was observed. It can be inferred that in bilingual persons with aphasia, TDCS combined with SLT (in L1) may result in improved word retrieval skills in that specific language. Also, the baseline scores on the naming test showed no significant difference [t(4) = 1.423, p = 0.228] between L1 and L2, suggesting that a similar level of word-finding difficulty existed in both languages for all subjects.

At the same time, the post-trial scores on the naming task also showed no significant difference [t(4) = -1.082, p = 0.340] between L1 and L2, indicating that the combined treatment program (TDCS and SLT in L1) had a positive effect on naming abilities in Kannada as well as in the untrained Tulu. This suggests the possibility of a cross-linguistic generalization that needs to be studied systematically.

Summary & Conclusion:

Though SLT has shown potential in treating word-finding difficulties in the past, the introduction of a non-invasive neuromodulation technique like TDCS as an adjunct to behavioural SLT may help maximize treatment outcomes in BPWA.


  Abstract – LP607: The Language Screening Test in Tamil for Patients with Acute Stroke Top


Vasupradaa M1 & Rajasudhakar R2

1vasupradaa.1995@gmail.com &2rajasudhakar.aiish@gmail.com

1All India Institute of Speech and Hearing, Mysuru - 570006

Introduction:

Aphasia following an acquired neurological insult requires a very brief assessment of the primary language symptoms at an acute stage.

There is a marked lack of aphasia screening tools for bedside evaluation. Several authors have pointed out the need for language-specific tools for the assessment of aphasia. A number of aphasia assessment tools have recently been developed abroad, but some of these scales are not suitable for patients with acute stroke. The Language Screening Test (which includes two parallel versions, a/b) in French has proven to be an effective and time-saving aphasia screening scale for early-stage stroke patients. Of the few available screening tools for aphasia diagnosis, the Language Screening Test (LAST; Flamand-Roze, 2011) is a bedside tool suitable for use in the emergency setting. It comprises 5 subtests and a total of 15 items and has proved to be one of the most comprehensive bedside screenings for describing the aphasia symptom complex. However, in Tamil, one of the most widely spoken languages of South India, no formal language screening tool for aphasia was available to date.

Need for Study:

Although there are many tests that assess one or more aspects of language disturbance in brain-damaged aphasic individuals, relatively few have been adequately standardized. LAST is one of the most useful bedside tests for the early assessment of aphasia. However, the majority of the screening tools used in emergency acute stroke settings have been published in English and other Western languages, and no studies have been done on LAST in Dravidian languages. This would be the first study to develop, validate, and standardize the language screening test in a Dravidian language, Tamil.

Aim & Objectives:

The present study aimed to adapt the original English LAST to the Tamil language and validate it.

Method:

The study was conducted in two phases: development and administration of the language screening tool in Tamil.

Participants

  1. Group I -Control group (Normal): - 100 (50 males and 50 females) normal subjects in different age groups (20-70 years).
  2. Group II-Clinical group (Aphasics): 30 participants diagnosed with various types of aphasia (Broca's/Anomic/Wernicke's/Global/Conduction/TMA/Isolation) in the age range of 30-70 years.


The participants of the clinical group were diagnosed with aphasia (of various types) by neurologists and speech-language pathologists. They did not have any associated disorders such as dementia or other psychological illnesses. All participants were native speakers of Tamil, with or without knowledge of any other language. The study was conducted in two phases:

  1. Development of the test material
  2. Administration of the test


Development and features of the LAST-T:

The adaptation was based on the two original versions, and the items in LAST-T are modifications of LAST-A and LAST-B.

Modifications of sub-items (similarities and differences between LAST and LAST-T).

The original LAST provided considerable guidance, including for translation, back-translation, and cross-cultural adaptation of the items. Following the design principles of LAST, and considering differences in everyday familiarity (subjective verbal frequency) between the languages, we made some modifications based on the syntactic and semantic rules of Tamil. Based on familiarity ratings given by five neurologists and five speech-language pathologists who were native Tamil speakers, a percentage was calculated for each stimulus. The picture cards were rated with respect to the size of the picture, colour, appearance, arrangement, and iconicity. Stimuli with a familiarity rating of 80% and above were considered for the final test material. Each subsection is provided with instructions to be followed and an appropriate picture manual for administering the test.

Administration of the test:

The Tamil version of LAST was administered on 100 normal subjects and 30 individuals with aphasia who were native speakers of Tamil. The scores obtained by the subjects on LAST-T (administered by a speech-language pathologist) were considered for interpretation. The scores were coded and then subjected to statistical analysis. From the scores, a mean cut-off was calculated across all the sub-tests to compare the performance of the two groups (normal adults and individuals with aphasia).

Results & Discussion:

The results of the present study revealed that the mean cut-off scores for reception and expression were similar across the five age groups in both males and females. Though there were no differences, consistently correct responses were noticed for all the groups, as they were healthy adults. The mean cut-off scores were 9.97 ± 0.17 for the expressive index and 10.0 for the receptive index, and the total mean cut-off score across both tasks was 19.97 ± 0.17. In the fifth group (age range 60-70 years), a few incorrect responses were observed on the complex repetition tasks, but the difference was not statistically significant. The Mann-Whitney U test was administered to check whether there was a difference between the performance of controls and individuals with aphasia on LAST-T. On the expressive index, the mean cut-off scores were 9.97 ± 0.17 for healthy adults and 5.46 ± 3.1 for individuals with aphasia; on the receptive index, healthy adults obtained a mean cut-off score of 10.0 and individuals with aphasia 6.46 ± 3.8. The median scores of the aphasia group were 9 for the receptive index and 5 for the expressive index. Hence, the results clearly suggested a significant difference (p < 0.01) between the performance of the controls and the individuals with aphasia on the different tasks. That is, healthy controls were more accurate in performing complex tasks than individuals with aphasia. Further pairwise analyses between controls and individuals with aphasia across different age and gender groups were not done because of the small number of participants with aphasia.
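
A minimal Python/scipy sketch of the group comparison described above is given below; the scores are fabricated for illustration, and the study's own analysis was carried out with conventional statistical software rather than this code.

```python
# Illustrative sketch: comparing LAST-T totals of controls and aphasics with
# the Mann-Whitney U test, then expressing a cut-off from the control group.
import numpy as np
from scipy.stats import mannwhitneyu

controls = np.array([20, 20, 19, 20, 20, 20, 19, 20])  # near-ceiling totals (max = 20)
aphasics = np.array([12, 9, 15, 7, 11, 16, 8, 10])     # reduced totals

u_stat, p_value = mannwhitneyu(controls, aphasics, alternative="two-sided")
print(f"U = {u_stat:.1f}, p = {p_value:.4f}")

# A cut-off can then be expressed as mean ± SD of the control group.
print(f"cut-off = {controls.mean():.2f} ± {controls.std(ddof=1):.2f}")
```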

Discussion:

This tool was developed and validated as a brief language screening scale in Tamil (LAST-T) for patients with acute stroke. It provides a quantitative clinical language examination in the emergency setting. The scale has good validity in the Tamil population and is quick to complete. Importantly, LAST does not need to be administered by a speech and language therapist. With a cut-off score of 19.97 ± 0.17 out of a maximal score of 20, the screening tool showed good sensitivity in diagnosing Tamil-speaking individuals with aphasia, thus helping to identify the language impairments of acute stroke patients early. It may also help to begin language rehabilitation early, and the benefit of language therapy can be monitored, which may optimize long-term rehabilitation. LAST-T detected a language impairment in 30 of the 50 patients admitted to the hospital, i.e., aphasia was present in only 60% of the acute stroke group, which shows that the tool helps in differentiating non-aphasic (false-positive) patients.

Summary & Conclusion:

Thus, the proposed new validated language screening tool for Tamil-speaking patients with acute stroke can be administered at the bedside in approximately 5 minutes. The screening tool has high sensitivity for use in the emergency setting, and early testing (within 24 hours of admission) helps to diagnose patients quickly so that they can go on to recover rapidly.


  Abstract – LP608: Gesture Identification in Children with Hearing Impairment (HI): A Comparative Study Top


Ankit Anand1, Mehulla Jain2, Deepshikha Kujur3 & Nikitha M4

1anandankit68@yahoo.com,2mehullaj@gmail.com,3deepshikhakujur056@gmail.com, &4nikitham25@gmail.com

1All India Institute of Speech and Hearing, Mysuru - 570006

Introduction:

Human communication is multifaceted in nature and comprises both verbal and non-verbal modes of information exchange. Verbal communication involves the use of spoken language (McDuffie, 2013), while non-verbal communication includes anything not expressed using speech, e.g., gestures, signs and orthography (Hess, 2016). Gestures are used during verbal communication naturally and spontaneously by individuals across ages, cultures and backgrounds (Kelly, Manning & Rodak, 2008). Gestures and speech have a strong association and share properties across temporal (Chui, 2005), structural (Kita & Ozyurek, 2003) and meaning aspects (McNeill, Bertenthal, Cole & Gallagher, 2005). Gesture, speech and language show tight developmental and neurological integration and interaction (Bates & Dick, 2002).

Gestures have been studied across varied populations, including hearing-impaired individuals, to further understand the gesture-speech link. Gestural use in HI toddlers has been found to be on par with that of normal-hearing counterparts, revealing that gestural ability does not depend on auditory ability, whereas spoken language does (Ambrose, 2016). However, a potential disconnect between the gestural and speech modalities in children with hearing impairment has also been reported (Zaidman-Zait & Dromi, 2007). Children with hearing impairment create their own gestural system to communicate in the absence of formal sign language (Goldin-Meadow, Butcher, Mylander, & Dodge, 1994; Goldin-Meadow & Morford, 1985). The gestures they use closely parallel the verbal language forms of their hearing counterparts, as evidenced in the major word classes (i.e., nouns and verbs) (Goldin-Meadow et al., 1994). In contrast, children with hearing impairment have also demonstrated delays in both gesture and spoken language development, wherein the delays in gestural language predicted their speech delays (Vohr et al., 2008; Vohr et al., 2011). Further, gesture-based auditory-verbal therapy for children with hearing impairment has facilitated learning (Zamani, Weisi, Ravanbakhsh, Lotfi & Rezaei, 2016), indicating the potential use of gestures in this population. Therefore, the importance of gestural studies in children with hearing impairment is evident.

Need for Study:

Literature has shown a pronounced link between gesture and speech across varied studies, and gestural benefit has been reported in both normal and HI children. However, there have been both convincing and contradictory findings on the potential link between gestures and speech in them. So, there exists a need to further study gestural ability in HI individuals, who are unable to process verbal language in the same way as hearing individuals. Further, the literature has reported a similarity of the gestural system to the verbal language system in HI children, and it is a known fact that nouns and verbs form the major content of one's verbal language system. Gesture decoding abilities across word classes in normal children and children with HI have not been explored extensively, which necessitated the current study.

Aim & Objectives:

To determine the gesture identification abilities of normal and HI children for a set of noun and verb gestures.

Method:

59 neuro-typical children and 47 HI children served as participants. The neuro-typical children were in the age range of 7-13 years from a State Board school, and the HI children were in the age range of 7-14 years from a special school. The language abilities of the children with HI were poorer than those of the neuro-typical children. The neuro-typical participants were studying in the 3rd, 4th and 5th grades with Hindi as their native language, while those with HI were studying in the 2nd, 3rd, 4th, 5th, 6th and 7th grades with Kannada as their native language. The children with HI had severe to profound hearing loss.

Stimulus: A total of 30 stimuli were used for the study, presented through gestures. For the validity check, 36 stimuli (18 nouns and 18 verbs) were used. Age-specific nouns and verbs were taken from Vandert's list, and these nouns and verbs were enacted by a trained classical dancer. The stimuli were video recorded, and the videos were circulated to three judges, including 2 SLPs and a sign language teacher. The judges were asked to give their opinion on the gestures, considering the age range of the participants, on a 3-point Likert scale (Very Appropriate, Appropriate and Not Appropriate). Three nouns and three verbs that were rated not appropriate were excluded from the final list.

Procedure: Stimuli were presented one at a time, and the order of presentation was counterbalanced across participants. Participants were asked to name the nouns and verbs (presented through gestures) in one word or to indicate their response in writing. Each correct response was scored 1, and incorrect responses (including no response and semantically unrelated responses) were scored 0. Responses were tabulated separately for nouns and verbs; the maximum score was 15 each.

Results & Discussion:

Neuro-typical participants scored a mean of 10.67 for nouns and 13 for verbs, while HI children scored 7 for nouns and 9 for verbs. As the data were non-normal, Wilcoxon's signed-rank test was used to verify whether there was a significant difference between nouns and verbs in the neuro-typical group.

Wilcoxon's signed-rank test was run thrice (for each grade separately). The Z scores obtained for the grade-based groups were 3.16, 2.98 and 2.93, and the corresponding p values showed a significant difference between nouns and verbs for all grades. Mean scores for verbs were better than for nouns in all sub-groups of the neuro-typical group.

Further, to verify any difference in the identification of nouns and verbs in the hearing-impaired group, Wilcoxon's signed-rank test was used; the Z score was 2.64 and the corresponding p value was 0.04, showing a significant difference.

To verify whether there was a significant difference between the children with hearing impairment and the neuro-typical group (with a combined median value for all its 3 sub-groups), a non-parametric test was used, since the data did not follow a normal distribution. The Z scores obtained from the Mann-Whitney U test for nouns and verbs were 3.13 and 3.11, and the corresponding p values showed a significant difference for both nouns and verbs.
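
The within-group and between-group comparisons described above can be sketched as follows; this is an illustrative Python/scipy example with made-up scores, not the authors' analysis.

```python
# Illustrative sketch: Wilcoxon signed-rank within a group (paired noun vs.
# verb scores) and Mann-Whitney U between groups (independent samples).
from scipy.stats import wilcoxon, mannwhitneyu

nt_nouns = [11, 10, 12, 9, 11, 10, 12, 11]   # neuro-typical, max 15 each
nt_verbs = [13, 12, 14, 12, 13, 13, 14, 13]
hi_nouns = [7, 6, 8, 7, 6, 7, 8, 7]           # hearing-impaired
hi_verbs = [9, 8, 10, 9, 8, 9, 10, 9]

# nouns vs. verbs within the neuro-typical group (paired, non-normal data)
print(wilcoxon(nt_nouns, nt_verbs))

# HI vs. neuro-typical, separately for nouns and verbs (independent groups)
print(mannwhitneyu(nt_nouns, hi_nouns, alternative="two-sided"))
print(mannwhitneyu(nt_verbs, hi_verbs, alternative="two-sided"))
```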

The literature on gesture identification suggests that gesture decoding abilities would be better in children with HI, or at least on par with typically developing children. However, the finding was strikingly different, as the children with HI performed poorly compared to normal children. This could depend directly on the degree of HI and the method of communication adopted in day-to-day communication. The children considered for the study were trained using written language and sign language; the gestures used in the study were conventional gestures that differed from the signs they were trained with, so the children with HI were not able to decode them. There was no individual difference in gesture identification: all the participants erred on the same items. It was also seen that neuro-typical children performed better on verbs, whereas children with HI performed better on nouns.

Summary & Conclusion:

The aim of the study was to investigate gesture identification in typically developing children and children with HI. 15 nouns and 15 verbs depicted through gestures were used as stimuli, and the task of the participants was to identify these gestures. HI children performed poorly compared to normal children, as the gestures used were not familiar to them. They also performed better on nouns than on verbs, which was not the case for the neuro-typical children, who performed better on verbs than on nouns.


  Abstract – LP614: Alternate Fluency Skills as a Function of Age: An Exploratory Study Top


Haleema Shalbiya1, Nadba P2, Febida M K3 & Deepak P4

1haleemashalbiya@gmail.com,2nadbapoonthala@gmail.com,3febidamk@gmail.com, &4deepakaryan064@gmail.com

1All India Institute of Speech and Hearing, Mysuru - 570006

Introduction:

Executive functioning is considered a higher-level cognitive process that requires a great deal of cognitive flexibility, whereas other cognitive processes such as single-word repetition and digit recall use cognition at a surface level (Stuss & Benson, 1986). Verbal fluency is one of the most extensively used executive-functioning paradigms for assessing cognitive processes in both typically developing and atypical individuals. This paradigm requires the retrieval of words under stringent criteria based on phonemic and semantic aspects of the lexicon. In general, fluency tasks such as phonemic or semantic fluency involve diverse search strategies: the former may involve accessing words from the phonological output lexicon (POL), while the latter may involve searching the lexicon through the semantic system. In addition, to understand the nature of higher-level cognitive processes and search strategies, an alternating fluency paradigm can be used, in which subjects are asked to retrieve words starting with /pa/ and /ka/ alternately. Similarly, when assessing semantic fluency, subjects need to switch between different categories of lexical items, such as food items and animals. According to Downes et al. (1993), there are two major aspects to be considered when assessing alternating fluency: firstly, intra-dimensional shifting, which requires the individual to alternate between probes within the same domain, and secondly, extra-dimensional shifting, which refers to generating probes across different domains, for example, a semantic word followed by a phonemic word or vice versa. Different rule strategies may be used for these two aspects: the intra-dimensional task requires the individual to alternate between rules within the same domain, whereas extra-dimensional shifting requires alternating between rules across different domains. Thus, the former task involves switching within an algorithm, wherein the individual retrieves words within the domain, while the latter may require two algorithms because of the switch between two different domains. Along these lines, some research has been carried out.

Delis et al. (2001) and Parkin et al. (1995) compared phonemic fluency, semantic fluency and semantic intra-dimensional alternating fluency tasks. These studies revealed that older adults performed more poorly than younger adults on the semantic intra-dimensional alternating fluency task compared to the other tasks. On similar lines, Parkin and Java (1999) compared three groups, i.e., young, young-old and old-old, on different executive-function tasks. The study revealed significant differences in executive-functioning tasks across the age groups; interestingly, these differences were not present for the phonemic fluency task.

Need for Study:

There are various cognitive tasks that are used to assess executive functioning in typically developing younger individuals. However, the alternating fluency task stands out as one of the pivotal tasks because it may help tap both surface-level and higher-level cognitive processes, and the nature of the task helps comment on the different search strategies used to search the lexicon. In addition, there is a dearth of literature in the Indian scenario using the alternating fluency task as an executive-functioning measure in typically developing younger individuals. For this reason, the present study incorporated the task mentioned above.

Aim & Objectives:

The aim of the study was to investigate alternating semantic fluency and alternating phonemic fluency tasks in three different age groups.

Method:

Sixty individuals served as participants and were divided into three groups based on age.

The first group comprised 20 children in the age range of 8 to 11 years, the second group comprised 20 individuals in the age range of 18 to 25 years, and the third group comprised 20 individuals in the age range of 50 to 75 years. Participants were selected on a random basis. Alternating semantic fluency and alternating phonemic fluency tasks were administered to the participants. In the alternating semantic fluency task, the participants were expected to alternate between lexical categories (e.g., common object vs. vehicle, bird vs. vegetable, and animal vs. fruit). In the alternating phonemic fluency task, the participants were asked to alternate between different syllables (e.g., /ka/ vs. /dza/, /pa/ vs. /ra/, and /tha/ vs. /ha/) in the Malayalam language. A score of 2 was given when the participant produced correct responses in both slots, partially correct responses were given a score of 1, and incorrect responses were given a score of 0.

Results & Discussion:

The completely correct and partially correct responses were analysed separately. On the alternating semantic fluency task, group 1 participants secured an average score of 9, group 2 participants a score of 12, and group 3 participants a score of 8. On the alternating phonemic fluency task, groups 1, 2 and 3 secured scores of 9, 14 and 10, respectively. The partially correct scores for the three groups on the alternating semantic fluency task were 7, 9 and 6, respectively, while the partially correct scores on the alternating phonemic fluency task were 7, 9 and 7, respectively. Statistical analysis was applied only to the completely correct scores. The Kruskal-Wallis test followed by the Mann-Whitney U test was used for comparing across groups; the Z scores obtained on comparing group 1 vs. group 2, group 1 vs. group 3, and group 2 vs. group 3 were 3.13, 1.92 and 2.98, respectively. The corresponding p values showed a significant difference when group 1 and group 2, and group 2 and group 3, were compared. A statistically significant difference was not seen when group 1 and group 3 were compared. All three groups performed better on alternating phonemic fluency than on the semantic fluency task, as the task complexity was greater for the semantic fluency task. Wilcoxon’s signed-rank test was employed to verify whether there was any statistically significant difference between the alternating phonemic and semantic fluency tasks within groups 1, 2 and 3. The Z scores for the three groups were 2.33, 1.98 and 3.18, and the corresponding p values showed a significant difference only for group 3. Group 3 was able to alternate between phonemes, but alternating between lexical categories was complex for this group. Group 2, consisting of young neuro-typical adults, could alternate between categories and phonemes easily.
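
As an illustrative sketch only (hypothetical scores; the study's analysis was not carried out with this code), the Kruskal-Wallis test followed by pairwise Mann-Whitney U comparisons described above could be run as follows.

```python
# Illustrative sketch: omnibus Kruskal-Wallis across three age groups, then
# pairwise Mann-Whitney U tests for each group pair.
from itertools import combinations
from scipy.stats import kruskal, mannwhitneyu

groups = {
    "children (8-11 y)":    [9, 8, 10, 9, 7, 9, 10, 8],
    "young adults (18-25)": [12, 13, 11, 12, 14, 12, 13, 12],
    "older adults (50-75)": [8, 7, 9, 8, 7, 8, 9, 8],
}

print(kruskal(*groups.values()))  # overall group effect

for (name_a, a), (name_b, b) in combinations(groups.items(), 2):
    stat, p = mannwhitneyu(a, b, alternative="two-sided")
    print(f"{name_a} vs {name_b}: U = {stat:.1f}, p = {p:.4f}")
```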

Summary & Conclusion:

The study was carried out with the aim of examining alternating semantic fluency and alternating phonemic fluency tasks in children, young neuro-typical adults and older adults. In the alternating semantic fluency task, the participants were asked to alternate between two lexical categories, while in the alternating phonemic fluency task they were asked to alternate between two phonemes. Young neuro-typical adults performed better than children and older adults; the difference between the scores was statistically significant only when group 2 (young adults) was involved in the comparison. The children could not perform on par with young neuro-typical adults as cognitive flexibility is still emerging, while the older adults could not perform well as cognitive flexibility declines with age. On comparing the semantic and phonemic fluency tasks, all three groups performed better on alternating phonemic fluency, as the task was simpler than the alternating semantic fluency task.


  Abstract – LP615: Working Memory in Children with Cochlear Implants- An Emerging Prerequisite to Cope Up with Normal Peers Top


Shruti Gupta1, Diya Nair2 & N Banumathy3

1shruti4386@gmail.com,271.aarti@gmail.com, &3banupallav@gmail.com

1Post-Graduate Institute for Medical Education and Research, Chandigarh - 160012

Introduction:

Working memory (WM) is the capacity of an individual to pay attention to and process novel information even in the presence of distractions. Working memory is said to differ considerably between listening and reading tasks (Geers et al., 2013). The effective and appropriate use of working memory is one of the most important aspects of language acquisition in normal-hearing children. WM is thus very important for storing short-term information, retrieving words and meanings, and especially for good communication skills. A period of auditory deprivation can therefore compromise WM skills such as speed of processing, attention, learning and executive control. Hearing loss thus has a heavy impact on speech, language and neurocognitive development in children, as these domains are vital for efficient speech perception and language outcomes.

Although cochlear implantation is a boon to children with profound sensorineural hearing loss, there is a lot of variability in speech perception, language comprehension, speech intelligibility and reading comprehension when outcome measures are evaluated post-implantation. These unexplained variances in outcomes could be attributed to differences in underlying neurocognitive processes in children with varying degrees of implant use.

Pisoni et al. (2017) reported individual variability in the outcomes of cochlear implantees (CIs), consideration of pre-implant predictors of outcomes, and formulating unique therapy modalities for those with poor outcomes as three major challenges faced by audiologists while working with paediatric implant users. Although product factors (device failure or medical and demographic variables) have been extensively studied in relation to varying outcome measures, these have failed to explain certain individual differences. Hence, process measures such as verbal rehearsal, scanning and retrieval of items in verbal short-term memory, and the rate of encoding phonological and lexical information in verbal working memory have been analysed to account for the variance among CI users and to help examine the underlying elementary information-processing mechanisms used to perceive and produce spoken language in CIs.

Need for Study:

Mere detection of auditory information is not sufficient for adequate speech and language development post cochlear implantation. It is essential to investigate how the working memory of cochlear implantees differs from that of children with normal hearing in order to adequately explain the neural connections between pre-attentive sensory memory and the acquisition of higher language functions. Thus, investigating deficits in cognitive processing, thinking and temporary storage is necessary to explore the WM skills of paediatric CI users. This, in turn, will enable clinicians to consider working memory training as a priority among target goals for rehabilitation.

Aim & Objectives:

  1. To investigate working memory skills in paediatric cochlear implant users in comparison with normal-hearing children.
  2. To explore for differences in working memory skills in CI users with respect to implant age.


Method:

SUBJECTS

40 subjects, in the age range of 5-15 years, were enrolled in the study with 20 participants in each of the experimental (CI users) and control (an age matched control group of typically developing normal hearing children) groups.

Selection Criteria: Unilateral CI users with a minimum implant age of 1 to 2 years, who were enrolled in a regular rehabilitation program, served as subjects for the study. The implant age of the children varied from two to five years (minimum usage of 10 hours per day). All the subjects had normal cognition and did not present with any inner-ear anomalies or co-morbid disorders.

MATERIAL USED AND PROCEDURE

A battery of assessment tools was used for subjects in both the experimental and control groups. Skills of audition, receptive language, expressive language, speech, cognition and pragmatics were assessed using the Integrated Scales of Development (Cochlear Ltd.) in the experimental group, and the Linguistic Profile Test (Pratibha Karanth, 1990) was used for assessing language in the control group. Cognition was assessed using the Wechsler Intelligence Scale for Children-III.

Hearing of the typically developing children was assessed using an audiological test battery, viz., pure-tone audiometry (at 250, 500, 1000, 2000, 4000 and 8000 Hz), impedance audiometry and OAEs, to rule out any hearing-related difficulties.

Working memory was assessed in both groups using the Digit Span Test (DST), originally devised by Hebb (1961), as it is reported to have a close linkage to language-learning abilities and verbal phonological memory. The DST comprises the digit span forward (DSF) and digit span backward (DSB) subtests.

A written informed consent was taken from the parents of all CI and normal-hearing children enrolled in the study. The audiological test battery was carried out in a sound-treated, two-chamber setup with permissible ambient noise levels. The stimuli were presented via live voice by a female speaker with normal articulation, voice and fluency. A two-channel audiometer (AC 40 by Interacoustics) was used, coupled with a condenser microphone and a VU meter to monitor the live-voice levels. Canton CD 220 speakers were used for stimulus presentation in the free field, positioned at 45 degrees azimuth. The subjects were seated comfortably at a distance of 1 m from each of the speakers.

The investigator presented a series of numbers, starting with 3 digits per set at a rate of 1 digit per second and increasing by 1 digit per set up to a 9-digit set. The subject was instructed to repeat the numbers heard in the same order of presentation for the forward digit span (DSF) test and in the reverse order for the backward digit span (DSB) test. Successful repetition at each level was scored as 1, and failure to retrieve the digits was scored as zero. The maximum length of the digit set retrieved was established as the longest length recalled. The test was terminated when two consecutive sets were retrieved incorrectly.
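
A minimal sketch of the digit-span scoring logic described above is given below; the `recall_fn` argument and the simulated participant are hypothetical stand-ins for the live-voice presentation and the child's response, which in the study came from the examiner rather than from code.

```python
# Illustrative sketch of digit-span administration and discontinuation rule.
import random

def digit_span(recall_fn, direction="forward", start_len=3, max_len=9):
    longest, failures = 0, 0
    for length in range(start_len, max_len + 1):
        digits = [random.randint(0, 9) for _ in range(length)]
        target = digits if direction == "forward" else list(reversed(digits))
        if recall_fn(digits) == target:   # success at this level: score 1
            longest, failures = length, 0
        else:                             # failure: score 0
            failures += 1
            if failures == 2:             # stop after two consecutive failures
                break
    return longest                        # longest length recalled

# toy simulated participant who recalls sets of up to 5 digits correctly
simulated = lambda digits: digits if len(digits) <= 5 else []
print(digit_span(simulated))              # -> 5
```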

Results & Discussion:

The stimuli were presented to both groups, and the responses were scored as 0 and 1 for incorrectly and correctly retrieved sets, respectively. The raw scores were collated and analysed statistically using SPSS version 20. The Kolmogorov-Smirnov test of normality was administered to check the distribution of the variables. Since a few of the variables were found to be non-normally distributed, the non-parametric Mann-Whitney U test was adopted for analysis. Results of the forward digit span test were significant (p = 0.01; p < 0.05) for the set with a length of 7 digits between the normal-hearing and experimental groups, while in the backward digit span test a significant difference (p < 0.05) between the control and experimental groups was observed for digit-span sets of 4, 5 and 6 digits. CI users had a mean score of 5.25 in DSF and 2 in DSB, whereas the control group had mean scores of 7.08 and 5.91 in DSF and DSB, respectively.

Analysis was also carried out for variations in WM skills with implant age. A 75% retrieval score was observed for 8% (1/2), 33% (4/7) and 16% (2/3) of children with implant ages of 2, 3, and 4 years and above, respectively.

The greater the implant age, along with intensive therapy, the greater the scores on the digit span test, attributable to increased phonological awareness and vocabulary skills. Variability in this study could be due to the wide age range of the subjects selected. Also, with an increase in implant age, a trend of improved digit-span retrieval scores was observed for both DSF and DSB. The poorer scores obtained by the paediatric implantees in DSF and DSB indicate deficits in the underlying phonological representation loop, which in turn has a critical effect on the development of verbal phonological memory.

Summary & Conclusion:

This study aimed at measuring verbal working memory in CI users as compared to normal-hearing children. The results revealed that the CI users performed similarly to the normal-hearing group up to a 4-digit set length and showed a significant difference with further increases in the length of the digit set.

A clinical observation that almost every audiologist and speech language pathologist encounters is the striking disparity of results after regular and intensive speech and language therapy amongst children implanted with cochlear implants.

These deficits can be addressed if clinicians target working memory training during the rehabilitation program for paediatric CI users.


  Abstract – LP616: Effect of Neurological Integration on the Linguistic Patterns of Impaired Language Learners Top


Swapna N1, Kavya Vijayan2, Mekhala V G3 & Animesh Barman4

1nsn112002@yahoo.com,2kavya.vijayan@gmail.com,3mekhalavg@gmail.com, &4nishiprerna@yahoo.com

1All India Institute of Speech and Hearing, Mysuru - 570006

Introduction:

The integrity of the nervous system is essential for processing and learning various aspects of language. A delay in the maturation of the nervous system can lead to corresponding deficits in language development. Language disorders in children are one of the most frequent causes of difficulties in communication, learning and academic achievement. Specific Language Impairment (SLI) is one such prominent language disorder seen in pediatric population wherein age-appropriate linguistic skills are not acquired in spite of having no obvious deficits in hearing abilities, non-verbal intelligence and cognitive abilities.

Children with SLI struggle increasingly to use language efficiently as they grow older and present predominant difficulties with syntax and morphology as well as related deficits in semantics and pragmatics (Bishop & Donlan, 2010; Tsimpli, Peristeri & Andreou, 2016; Tiwari, Karanth & Rajashekar, 2017; Priyadarshini, 2018). Previous evidence suggests that cortical areas such as Broca's area are activated during the continuous integration between language and speech gestures. In children with SLI, malformations in cortical development have been reported (Nishitani, Schurman, Amunts & Hari, 2005), while a few other studies have found only soft neurological signs indicating delays or reduced cortical neural connectivity (Van der Lely, 1997).

Need for Study:

Though studies have reported neurological issues in children with SLI, limited attempts have been made to assess the link between these neurological issues and language and literacy deficits. There also have been limited attempts in the Indian context to systematically document the linguistic deficits in the different domains of language including pragmatics in young Kannada speaking children with SLI.

Aim & Objectives:

The aim of the present study was to investigate the linguistic and neurological deficits in children with SLI using standardized tests. The objectives of the study were as follows:

  1. To compare the various language domains such as morphology, syntax and semantics between children with SLI and typically developing children.
  2. To compare the verbal and non-verbal pragmatic abilities between children with SLI and typically developing children.
  3. To compare the phonological awareness abilities between the two groups.
  4. To evaluate the effect of neurological integration of sensory and perceptual-motor skills in the SLI group and its effects on language.


Method:

The present study consisted of a total of 12 participants. Six Kannada speaking children diagnosed as SLI in the age range of 4 to 6 years formed the clinical group and 6 typically developing children who were matched for chronological age, gender and socio-economic status (SES) formed the control group. The participants were assessed for their language abilities by using Clinical Evaluation of Language Fundamentals (CELF-Preschool 2) for Kannada-speaking English language learners (Priya, 2017). This test consists of ten subtests to assess linguistic domains such as morphology, syntax, semantics and pragmatics. Various language measures such as Core Language Score, Receptive Language Index (RLI), Expressive Language Index (ELI), Language Content Index (LCI), Language Structure Index (LSI) and Descriptive Pragmatic Profile (DPP) score were estimated by calculating the total from the respective subtests. The Phonological Awareness (PA) subtest was administered to assess the knowledge of sound structure. In order to explore the effect of neurological and sensory integration on language patterns, Quick Neurological Screening Test (QNST) (Mutti, Sterling & Spalding, 1978) was administered which contains 14 subtests and provides information regarding the way in which the child integrates sensory cues and organizes voluntary motion from auditory, visual, tactile, proprioceptive and kinesthetic sources. The data was then subjected to appropriate statistical analysis.

Results & Discussion:

The Mann-Whitney U test indicated a significant difference in overall performance between the TD children (median = 149) and the children with SLI (median = 58), wherein the TD children showed better performance. With respect to the language test, a significant difference was seen on 9 of the 10 subtests (p < 0.05); only the sentence structure subtest did not differentiate the groups. The core language score (|z| = 2.9; p = 0.004) and indices such as RLI (|z| = 2.7; p = 0.006), ELI (|z| = 2.9; p = 0.004), LCI (|z| = 2.4; p = 0.016) and LSI (|z| = 2.9; p = 0.004) showed a significant difference between groups. All the language parameters depicted a corresponding increase with age, as reflected in the mean scores, and the patterns of language development are discussed in detail in the full-length paper. The domain of pragmatics assessed using the DPP also indicated a significant difference between the two groups (|z| = 2.1; p = 0.04), wherein the non-verbal communication skills showed a smaller difference than the conversational skills. It is interesting to note that the PA subtest did not reveal a significant difference between the groups (|z| = 1.18; p = 0.24). With respect to the QNST, 7 of the 14 subtests, including figure recognition, sound pattern identification, double simultaneous stimulation, tandem walk, balancing on one leg and left-right discrimination, showed a significant difference between groups (p < 0.05). The Spearman correlation coefficient revealed a high negative correlation between CELF scores and QNST for both the clinical (ρ = -1.00, p = 0.00) and control groups (ρ = -0.868, p = 0.025). No correlation was seen between PA and QNST for either group.
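
For illustration, the Spearman correlation between CELF and QNST scores described above could be computed as in the following sketch; the scores shown are fabricated, and the study's own analysis was done with standard statistical software rather than this code.

```python
# Illustrative sketch: Spearman correlation between total CELF language
# scores and QNST scores within one group.
from scipy.stats import spearmanr

celf_sli = [52, 58, 61, 55, 60, 63]   # hypothetical clinical group (n = 6)
qnst_sli = [48, 44, 40, 46, 42, 38]   # higher QNST = more neurological signs

rho, p = spearmanr(celf_sli, qnst_sli)
print(f"rho = {rho:.3f}, p = {p:.3f}")  # a negative rho mirrors the reported trend
```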

The present study revealed deficits in the linguistic areas of children with SLI. The reduced indices indicate difficulties in receptive and expressive development, semantic concepts, and word and sentence structure, specifically in applying word-structure rules, using pronouns, and referential naming, among others. This finding is in consonance with other studies which also found that children with language disorders show discrepancies in the syntax and semantic domains with respect to noun phrase and verb morphology, plural inflections, verb retention, and knowledge of the relationships between words (Rice, Wexler & Hershberger, 1998; Mainela-Arnold et al., 2010; Sheng & McGregor, 2010). Pragmatic deficits in expressing intentions and verbal contextual communication were also detected, indicating some social communication deficits. This finding is supported by previous studies which suggest that conversational abilities are affected while non-verbal communication is relatively preserved (Osman, Shohdi & Aziz, 2011).

The PA subtest in CELF assesses the awareness of words, syllables and phonemes in children. The relation between phonological awareness and pre-reading abilities has been clearly established (Berninger & Abbott, 2003). Since the children of both groups were at the beginning stage of acquiring knowledge about phonemes, there was no significant difference, although the control group performed better. With respect to the neurological aspects, the poor scores of children with SLI on the QNST indicated an underlying deficit in the sensory, perceptual and motor areas, which corroborates the findings of previous studies (Krishnan et al., 2013). The results also clearly indicate that poorer scores on the QNST signal linguistic deficits. The findings support previous evidence which suggests that there is a continuous integration between language and gestures, and that this neurological integration of sensorimotor and perceptual-motor abilities aids in learning speech, language, behaviour and literacy skills.

Summary & Conclusion:

The present study highlights the patterns of linguistic and neurological deficits in a small group of preschoolers with SLI in comparison with a control group. The study also stresses the need for implementing neurological screening tests as part of the assessment protocol for young children with SLI, as deficits in sensory-motor and perceptual areas could be worked upon as part of early intervention in this population.


  Abstract – LP617: Amount of Media Exposure and Risk of Autism in Young Children Top


V Swati1 & S Jothi2

1swatibaslp2020@gmail.com &2jothi.slpa@gmail.com

1Holy Cross College, Trichy - 620002

Introduction:

Today's children and adolescents are immersed in digital media. Television (TV) is a common form of electronic screen media, generally used as a source of entertainment and information for older children and adults, or even as a babysitter in the household when caregivers have to do household chores or their own work. Research on traditional media such as television has recognized health concerns and negative outcomes that correlate with the duration and content of viewing. Over the past decade, the use of digital media, including interactive and social media, has increased, and research evidence suggests that these newer media offer both benefits and risks to the health of children and teenagers. These media have become an important environmental factor in the household that can have an impact on the daily lives of young children, because children often begin to watch television very early, by 3 months of age. In addition, more time is spent in front of the screen, with an average of approximately 3-4 hours a day, than on any other leisure activity besides sleeping.

Autism spectrum disorder (ASD) is a complex group of heterogeneous neurodevelopmental disorders with significant impairments in social and communication domains in addition to restricted, repetitive and stereotyped behaviors. It is an urgent public health concern in many countries.

About 1 in 100 children in India under age 10 has autism, and nearly 1 in 8 has at least one neurodevelopmental condition. The estimates are based on the first rigorous study of its kind in the country (Aug 27, 2018), yet the cause or causes of the condition are not well understood.

One of the current theories concerning the condition is that among a set of children at risk to develop the condition because of their underlying genetics, the condition manifests itself when such a child is exposed to a (currently unknown) environmental trigger.

Need for Study:

The effect of media on children's mental and social health is yet to be explored, and awareness among parents about the negative effects of television exposure on children's health is limited. Hence, the current study focuses on evaluating the risk of autism spectrum disorder (ASD) with the increasing extent of television viewing among children and on creating awareness among parents.

Aim & Objectives:

To evaluate the attitude of the parents towards exposing their children to digital media.

To examine the pattern and extent of television viewing in children and the risk of autism spectrum disorder (ASD).

Method:

A total of 50 parents of typically developing children aged 2-5 years were considered for the study. The attitude of the parents towards their child watching television was assessed using a standardized questionnaire, Attitude of Parents towards Television Viewing by Ramin Ardalan. The questionnaire consisted of 10 questions focusing on the child's exposure to television. The responses were elicited using a Likert scale (strongly agree to strongly disagree).

A questionnaire was framed with reference to the DSM-5 criteria for autism to examine the pattern and extent of television viewing in children and the risk of autism spectrum disorder (ASD). A total of 22 questions were included, and the parents were asked to respond to each question on a Likert scale ranging from strongly agree to strongly disagree.

The results were analyzed using SPSS version 21. Spearman's correlation test was done to find out the correlation between the amount of exposure to television and the risk for autism spectrum disorder.
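
The correlation analysis described above can be sketched as follows; the hours and risk scores are made up for illustration, and the study itself used SPSS rather than this code.

```python
# Illustrative sketch: correlating daily screen time with a summed Likert
# score from the DSM-5-based risk questionnaire.
from scipy.stats import spearmanr

hours_per_day = [2.0, 5.5, 4.0, 6.0, 1.5, 3.0, 5.0, 4.5]
# each risk score = sum of 22 Likert items (strongly disagree = 1 ... strongly agree = 5)
risk_score = [30, 68, 55, 72, 26, 41, 63, 58]

rho, p = spearmanr(hours_per_day, risk_score)
print(f"Spearman rho = {rho:.2f}, p = {p:.4f}")
```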

Results & Discussion:

The results were based on a pilot study of 16 parents who had children aged 2 to 5 years. The mean age of the typically developing children in the pilot study was 4.0125 years. The average duration of the children watching television/mobile phones was 4.94 hours per day and 14.67 hours per week. 37.6% of parents agreed that television is a medium of instruction for children, whereas 50.1% disagreed. 37.6% agreed that television provides the benefit of exposure to children, while 43.8% disagreed. 37.6% agreed that viewing television is a positive activity for children's development, whereas 50% disagreed. 60.87% of the parents disagreed that their child often shows reduced sharing of interests and emotions, whereas 21.74% agreed. 56.52% of the parents disagreed that their children often fail to initiate or respond to social interactions, while 34.78% agreed. 17.74% of the parents agreed that their child finds it difficult to adjust behaviour to suit various social contexts, whereas 60.87% disagreed. 34.78% of the parents agreed that their child shows stereotyped or repetitive motor movements (e.g., simple motor stereotypies, lining up toys or flipping objects), whereas 56.52% disagreed. 30.43% of the parents agreed that their child often has a visual fascination with lights or movement, whereas 39.13% disagreed.

There was a strong positive correlation between the duration of the child's exposure to television and the questionnaire focusing on autism characteristics according to DSM-5; that is, the child's amount of exposure to television has an effect on the risk of developing autism spectrum disorder.

The prevalence of electronic screen media use is high among children below 3 years and has tended to increase within the past decade. Some studies suggest that increased screen time in young children is associated with negative health outcomes such as decreased cognitive ability, impaired language development, mood problems, and autistic-like behaviour including hyperactivity, short attention span and irritability (Bedrosian & Nelson, 2017). There is a relationship between early onset and high frequency of TV viewing and language delay (Chonchaiya & Pruksananonda, 2008). Too much TV time for toddlers may trigger autistic features in them, according to a study by Cornell business professors. Currently, children all over the world spend more time with electronic screen media compared to children in the past, who were more engaged socially.

Surprisingly, almost all parents proudly reported that their child aged below 2 years has been able to use and enjoys electronic media on a regular basis (Hermawati, Rahmadi, Sumekar & Winarni, 2018). Early exposure to mobile and television screens can cause neurochemical and anatomical brain changes; significantly reduced melatonin concentrations have been found in individuals exposed to screens (Figueiro, Wood, Plitnick & Rea, 2011).

Summary & Conclusion:

The current study reveals that if children are exposed to television at early stages of life, it may lead to a high risk of developing autism spectrum disorder. Educating and creating awareness among parents of young children regarding the negative impact of exposing their children to digital media may help prevent children from developing autism spectrum disorder.


  Abstract – LP618: New Word Learning: Nouns Vs Verbs Top


Irene Tom1, Kripa Maria2 & Jyotsna K3

1irenetomanna@gmail.com,2kripamaria1702@gmai.com, &3jyotsna.k7@gmail.com

1All India Institute of Speech and Hearing, Mysuru - 570006

Introduction:

Novel word learning has been a topic of interest for linguists and speech pathologists alike. A prominent phase of new word learning in infants is fast mapping to comprehend new words (Bloom, 1973). This learning in the developmental years has been documented to happen in a systematic manner; the idea that simpler words precede more complex ones, and that the more essential word classes, i.e., nouns, occur before fringe classes such as verbs and adjectives, is widely supported (Strokel, 2003). Further support for the word-class effect has been garnered from studying the neural bases of word acquisition: nouns have been found to activate the frontal region, while tasks associated with verbs are seen to trigger the temporal region (Yu, Law, Han, Zhu, and Bi, 2012). This underlines the importance of studying the distinction in the acquisition of word classes.

The existing literature suggests that new word learning in children is variable (Golinkofe, Hirsh-pasek, Bailey, and Wenger, 1992). There has been a long-standing debate regarding which of these two word classes is acquired first. Older schools of thought claim that nouns trump verbs as the easier word class and are acquired first (Gentner, 1982). Piccin and Waxman (2007) made a strong case for the precedence of nouns over verbs due to numerous factors, including the reduced complexity and effort required to learn nouns, given the smaller number of resources that need to be allotted. The other side of the debate has also garnered support over the decades: works such as those by Tardif (1996) and Tomasello, Akhtar, Dodson and Rekau (1997) attribute the possible discrepancy in acquisition to the plausible variance in the definition of the two word classes in different contexts. Thus, it is essential to investigate these findings, which are scattered across the literature.

Need for Study:

Exhaustive literature on the retrieval of nouns and verbs is available globally as well as in the Indian context, but studies on new word learning in children are limited. The current study is a preliminary attempt at investigating new word learning in children.

Aim & Objectives:

Aim of the study: To assess the effect of word class on new word learning in children.

Objective: To compare the number of nouns and verbs learnt post training.

Method:

A total of 20 children from grade three were selected as participants on a random basis from an English-medium CBSE school, with an equal number of boys and girls. The operational definition of a new word in this study was a word that was novel to the children and not present in their vocabulary. 20 nouns and 20 verbs were considered initially; these words were taken from textbooks of higher grades, and baseline testing was carried out to ensure that the words were not in the children's vocabulary. It was noticed that a few children knew certain words, so 15 nouns and 15 verbs that were not known by any of the children were shortlisted for training.

The participants were given training on the 15 nouns and 15 verbs. The duration of each stimulus was thirty seconds: in the first five seconds the picture of the noun/verb was presented, followed by a recorded voice naming it for five seconds; a live voice naming the noun/verb was then given for five seconds, followed by its orthographic form for ten seconds, and again a live voice naming the noun/verb for five seconds. The training was given in English in order to further constrain performance. Following training, after a gap of three days, a naming task assessing retrieval of the nouns and verbs was carried out. In this phase, the participant was asked to retrieve the label of the corresponding noun or verb. Each correct response was given a score of 1, and an incorrect response was given a score of 0.

Semantically unrelated responses and phonemically unrelated responses were given a score of 0, much like the no and incorrect responses.

Results & Discussion:

The objective of the study was to compare the number of nouns and verbs learnt after training. Training was imparted through the fast-mapping method, wherein the participants were exposed to the stimulus for a short duration followed by exposure to its label and orthographic form. The baseline scores were 0 for all the participants. The mean score for nouns was 9 and the mean score for verbs was 6; the individual differences were very wide and the standard deviation was high. The data did not abide by the properties of a normal distribution; hence non-parametric statistics were used. Wilcoxon's signed-rank test was used, the Z score obtained on comparison was 3.22, and the corresponding p value showed a statistically significant difference. The results indicated that there was an effect of word class on new word learning.
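
A minimal sketch of the paired noun-verb comparison described above is given below, with hypothetical per-child scores; the study's own analysis was not performed with this code.

```python
# Illustrative sketch: paired comparison of nouns vs. verbs learnt per child
# (max 15 each), using Wilcoxon's signed-rank test.
from scipy.stats import wilcoxon

nouns_learnt = [12, 9, 10, 7, 11, 8, 13, 6, 9, 10]
verbs_learnt = [8, 5, 7, 4, 9, 6, 10, 3, 6, 7]

stat, p = wilcoxon(nouns_learnt, verbs_learnt)
print(f"W = {stat}, p = {p:.4f}")   # a significant p would indicate a word-class effect
```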

The participants could name the nouns better than the verbs; this could be because nouns are easier to comprehend due to their imageability, so the participants could remember the nouns better than the verbs. In addition to the quantitative analysis, a qualitative analysis was carried out. For both nouns and verbs, 'no response' was the most common error; the distinction between the word classes emerged on probing the errors beyond this level. The order of errors in the case of verbs was: 1 - semantically related errors and 2 - incorrect responses, while in the case of nouns it was: 1 - incorrect responses and 2 - semantically related responses. This result showed that the participants erred more on verbs than on nouns, as they were not able to retrieve the correct label. Thus, the results of the current study are in line with Gentner's (1982) view: nouns are seen to have precedence over verbs and other word classes due to the ease with which they are acquired.

Semantically related errors indicated that the children could map to the concept but failed to retrieve the right label. The data were collected 3 days post training to counteract any momentary learning effect. The number of words learnt signified that words could also be learnt through fast mapping.

Summary & Conclusion:

The study was carried out with the aim of assessing the effect of word class on new word learning. 20 children of grade 3 served as participants; all were bilingual, with Malayalam as L1 and English as L2. The testing was carried out in L2 to constrain performance. 15 novel nouns and 15 novel verbs were used for the study, and training was given through fast mapping. Post-training assessment was carried out after 3 days, and the task of the participants was to name the nouns and verbs. It was found that the participants could name nouns better than verbs, indicating that the number of words learnt can vary as an effect of word class.


  Abstract – LP619: Role of Syntactic Complexity Resolution in Indexing Cognitive-Linguistic Flexibility: An Exploratory Study Top


Shreya Mahesh1, Divya Rawat2 & Vijay Kumar3

1shreyamahesh5795@gmail.com,2divya.rawat20182019@gmail.com, &3vkumar@ggn.amity.edu

1Amity University, Haryana - 122413

Introduction:

A hallmark of human intelligence is flexible cognition. This is enhanced and expressed by language, which permits the encoding and expression of innovative representations of present, absent and imagined events, entities and relations (Deák, 2003). Cognitive flexibility is the activation and modification of dynamic processes such as working memory, attention and response selection in response to the information (i.e., similarities, cues, relations) selected from the task demands of a linguistic and non-linguistic environment (Eslinger & Gratten, 1993). Cognitive-linguistic flexibility (CLF) is rarely defined beyond a general statement about the ability to shift cognitive-linguistic sets. Flexible language processing develops concurrently with language. CLF requires selecting and encoding information from a dynamically changing environment, based on contextual demands that must be periodically evaluated and updated (Deák, 2003).

Recognizing the importance of assessing CLF, Cragg and Chevalier (2012) investigated CLF with tasks such as attention switching in typically developing pre-schoolers, revealing greater perseveration errors between the ages of 3 and 5 and the best scores between the ages of 7 and 9, at which cognitive flexibility was inferred to be at its peak. Similarly, Chapey (1994) hypothesised that this ability diminishes in persons with aphasia and parkinsonism, and also in the geriatric population, using reactive and spontaneous flexibility tasks. Moreover, in the Indian context, CLF was tested by Kumar and Rao (2007) in typically ageing individuals, who found significant differences across the aged population. Summarising these findings, it can be interpreted that the assessment and enhancement of CLF is an important strategy for estimating and enhancing a person's cognitive and linguistic profile. It is notable that studies on developing cognitive flexibility have only been reported since the 1990s and have assessed English-, French- and Dutch-speaking populations, whereas such work is inconspicuous in populations speaking Indian languages.

Need for Study:

This study is motivated by the findings of Kumar and Rao (2007), who evaluated CLF using a picture-naming task in an ageing population across intervals of 10 years. Their task required identification of the non-matching lexical item out of four, where three of them were from the same lexical category. In this study, we plan to capture CLF using more complex tasks, hypothesising that CLF is a considerably more complex construct.

Context, experimental, personal and environmental variability further complicate the assessment of CLF. Therefore, we realise a need for CLF assessment using an appropriately graded simple-to-complex task design.

Aim & Objectives:

The primary objective of this study was to investigate the developmental and maturational pattern of CLF from the paediatric to the geriatric population. The second objective was to test which of the two measures, the syntactic complexity task or the Wisconsin Card Sorting Test, is the more sensitive predictor for indexing CLF.

Method:

48 healthy participants with no past history of neurological illness, all residing in Gurugram, Haryana, were included in the study; 16 were children aged 5-15 years (Mean = 9.56 ± 3.01), 16 were college students aged 20-30 years (Mean = 23.42 ± 3.50), and 16 were older adults aged 60-85 years (Mean = 73.31 ± 9.57). All participants could fluently speak and write Hindi and had received formal education in English-medium institutions. Cognitive flexibility is most often examined using task-switching paradigms, measuring the ease of switching between different sets of sorting rules. Reaction time (RT), the time taken for a response to occur on presentation of the picture, is also an effective and widely used measure of neurophysiological integrity. The participants were asked to perform two computer-based tasks in a quiet environment, sitting at a distance of 2 feet from the laptop screen.

  1. Picture naming task: this was developed using 20 syntactically complex line drawings adopted from 'With a Little Bit of Help': Early Language Training Aids (Karanth et al., 2007), programmed in the DmDx software (Jonathan and Kenneth Forster) and presented with a presentation time of 3500 ms and an interval of 2500 ms between pictures. 10 relatively simple line drawings were chosen as practice trials. The participants were asked to describe the pictures in full sentences, and the RT to the verbal response was recorded.
  2. Wisconsin Card Sorting Test (WCST) (Grant and Berg, 1948): in the WCST, the subjects have to classify cards according to three different criteria, 1. shape, 2. colour and 3. number, and the only feedback received is whether the classification is correct or not. The classification rule changes every 10 cards, and the participant has to figure out the rule by trial and error. The task measures how well people can adapt to changing rules, and thus cognitive flexibility. 60 sorting trials were provided, and the total number of errors, perseverative errors and non-perseverative errors were noted (a scoring sketch is given after this list).
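
As referenced in item 2 above, a toy Python sketch of how total, perseverative and non-perseverative errors could be counted is given below; this is a simplified operationalisation for illustration only, not the published WCST scoring procedure used in the study.

```python
# Toy sketch of WCST error counting: a response is an error when it does not
# match the active rule; it is counted as perseverative when it still matches
# the previously correct rule.
RULES = ["shape", "colour", "number"]

def score_wcst(responses):
    """responses: list of dicts mapping each rule name to True/False,
    i.e., whether the participant's sort matched that dimension on a trial."""
    total = persev = non_persev = 0
    for trial, resp in enumerate(responses):
        active = RULES[(trial // 10) % len(RULES)]       # rule changes every 10 cards
        previous = RULES[(trial // 10 - 1) % len(RULES)] if trial >= 10 else None
        if not resp[active]:                             # wrong sort -> an error
            total += 1
            if previous is not None and resp[previous]:
                persev += 1                              # stuck on the old rule
            else:
                non_persev += 1
    return total, persev, non_persev

# one illustrative trial sequence: always sorting by shape for 20 trials
trials = [{"shape": True, "colour": False, "number": False}] * 20
print(score_wcst(trials))  # -> (10, 10, 0): shape sorts become perseverative after the rule shifts
```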


All data were statistically analysed using the SPSS® statistical package, version 20.0 (SPSS Inc., Chicago, IL, USA) for Windows®.

Results & Discussion:

Response times obtained with the DMDX software for syntactically complex sentences were compared across the categories (i) children, (ii) adults and (iii) older individuals. One-way analysis of variance (ANOVA) was used to test significance at p = 0.05 with a 95% CI. A significant difference, F(2,45) = 154.883, p < 0.001, was observed for RT across the categories. Tukey HSD post hoc analysis was carried out to examine variability between groups, and every category differed significantly from the others.
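As a hedged illustration of this analysis pipeline (one-way ANOVA followed by Tukey HSD post hoc comparisons), the sketch below uses small arrays of hypothetical reaction times rather than the study's data; group sizes and values are purely illustrative.

```python
# Sketch of the reported analysis: one-way ANOVA followed by Tukey HSD.
import numpy as np
from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# Hypothetical reaction times (ms) for the three age groups.
rt_children = np.array([1450, 1510, 1620, 1390, 1480])
rt_adults   = np.array([1320, 1290, 1410, 1360, 1300])
rt_older    = np.array([1980, 2050, 1890, 2110, 1940])

# One-way ANOVA across the three groups (alpha = 0.05).
f_stat, p_value = stats.f_oneway(rt_children, rt_adults, rt_older)
print(f"F = {f_stat:.3f}, p = {p_value:.4f}")

# Tukey HSD post hoc comparisons between every pair of groups.
rts = np.concatenate([rt_children, rt_adults, rt_older])
groups = (["children"] * len(rt_children) + ["adults"] * len(rt_adults)
          + ["older"] * len(rt_older))
print(pairwise_tukeyhsd(endog=rts, groups=groups, alpha=0.05))
```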

Similarly, the scores obtained from the WCST for the above-mentioned categories were analysed using one-way ANOVA, which revealed that the total error count [F(2,45) = 25.28, p < 0.001] and non-perseverative errors [F(2,45) = 17.114, p < 0.001] differed significantly across groups.

Tukey HSD post hoc tests likewise showed significant differences between groups.

From the outcome of the study, it is evident that RT develops in a systematic, chronological order like other cognitive-linguistic abilities: RT for adults was 84.67 ms shorter than for children, and adults responded around 634.39 ms faster than older individuals. These findings follow the pattern reported by Kumar and Rao (2007); however, the between-group differences observed here were stronger than in the earlier study. The total error counts obtained from the WCST for children were lower by 9.56 compared to older individuals, and for adults the errors were lower by 13 compared to older individuals. One reason for this finding could be the range of tasks selected for this study.

Summary & Conclusion:

The morphosyntactic and semantic complexity of the stimuli used in the study appeared sufficient to gauge CLF. The outcomes of this study should be further tested with a larger sample, tasks of more diverse complexity and additional age groups. Future work could assess both reactive and spontaneous CLF in tasks involving higher-level language functions and analyse CLF deterioration in clinical populations.


  Abstract – LP621: Role of Word Imageability on Reading Performance in Kannada-English Bilingual Individuals with Fluent aphasia Top


Sunil Kumar1, Gopi Kishore Pebbili2, Shyamala K Chengappa3 & Rashmi J4

1rsunilkumar86@gmail.com,2gopiaslp@gmail.com,3shyamalakc@yahoo.com, &4rashmiyuvashree@gmail.com

1Shravana Institute of Speech and Hearing, Ballari - 583104

Introduction:

Aphasia is a multifaceted disorder which affects multiple domains of speech, language, reading and other cognitive functions. Reading disorders are commonly reported in persons with aphasia; however, the type and severity of these reading impairments vary depending upon several factors, including the severity of aphasia, type of aphasia, extent of lesion, premorbid reading abilities, bilingualism, orthographic properties of the language, lexical organization of semantic features, and linguistic properties of the target words such as simple versus complex words, imageability of words, high versus low frequency words, and many others. Among the linguistic factors, the imageability of words is reported to have a greater influence on lexical access. Imageability is defined as "the ease with which a word gives rise to a sensory mental image" (Paivio, Yuille & Madigan, 1968). Words can be classified into high imageable words (HIW) or low imageable words (LIW) in a specific language.

Need for Study:

Individuals with fluent type of aphasia (FA) mainly exhibit comprehension deficits indicating poor lexical strength in these individuals.

Reading errors in these individuals are also attributed to an impaired lexical or semantic system. Imageability might also be a contributing factor for lexical access in individuals with FA compared to individuals with non-fluent aphasia. A few studies on monolingual populations have reported that imageability plays a major role in the reading abilities of individuals with aphasia. However, there is a dearth of studies in the Indian context on the role of imageability on reading performance in both monolingual and bilingual individuals with fluent aphasia.

Aim & Objectives:

The present study was conducted with an aim of investigating the Effect of word imageability on reading in Kannada (L1) and English (L2) bilingual individuals with fluent aphasia.

Method:

The study recruited 10 Kannada-English bilingual individuals with fluent aphasia (FA) in the age range of 23-65 years, with Kannada (L1) as their native language and English (L2) as their second language. Only participants with reading abilities at the single-word level and a proficiency above the vocational level in the second language on the International Second Language Proficiency Ratings (ISLPR; Wylie & Ingram, 2006) were included. Informed consent was obtained from all participants before the study.

Instrument and procedure: The study included a high imageable synonym matching (HISM) task and a low imageable synonym matching (LISM) task. A list of 20 high imageable words (10 synonym pairs and 10 non-synonym pairs) and 20 low imageable words (10 synonym pairs and 10 non-synonym pairs) in Kannada was selected from the test material Analyzing Acquired Disorders of Reading in Kannada (Karanth, 1984). Similarly, a list of 20 high imageable words (10 synonym pairs and 10 non-synonym pairs) and 20 low imageable words (10 synonym pairs and 10 non-synonym pairs) in English was selected from the word list developed by Coltheart (1984). The stimuli were programmed on an HP laptop using DMDX software, which was used for visual presentation of the stimuli and to measure the accuracy and reaction times of the responses. Presentation duration of each stimulus was set to 8000 ms with an inter-stimulus interval of 3000 ms. The participants were instructed to press the right arrow key for a correct response (synonym pair) and the left arrow key for a wrong response (non-synonym pair). Accuracy and reaction time (RT) were measured and analyzed using SPSS V.21.0. Mean accuracy and mean reaction times for each group on each task were calculated, and further statistical analysis was done to compare performance in the two languages and across tasks.
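For illustration, a minimal sketch of how such trial-level output might be summarised into per-participant accuracy and mean RT is shown below. The tabular format and column names (participant, language, task, correct, rt_ms) are assumptions; DMDX output files would first need to be parsed into this form, and the values are invented.

```python
# Sketch of summarising trial-level data into accuracy and mean RT per task.
import pandas as pd

trials = pd.DataFrame({
    "participant": [1, 1, 1, 1, 2, 2, 2, 2],
    "language":    ["L1", "L1", "L2", "L2", "L1", "L1", "L2", "L2"],
    "task":        ["HISM", "LISM", "HISM", "LISM", "HISM", "LISM", "HISM", "LISM"],
    "correct":     [1, 1, 1, 0, 1, 0, 1, 1],
    "rt_ms":       [3820, 4310, 4190, 4620, 3950, 4400, 4280, 4510],
})

# Per participant, language and task: number of correct responses and mean RT (all trials).
summary = (trials.groupby(["participant", "language", "task"])
                 .agg(accuracy=("correct", "sum"), mean_rt=("rt_ms", "mean"))
                 .reset_index())
print(summary)
```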

Results & Discussion:

The results revealed that mean accuracy scores were higher for the HISM task (16.10 ± 4.14) than for the LISM task (15.20 ± 4.84) in L1. Similarly, mean accuracy scores were higher for the HISM task (13.80 ± 3.22) than for the LISM task (11.50 ± 3.37) in L2. The Wilcoxon signed rank test on accuracy measures revealed no significant difference in performance between the LISM and HISM tasks in L1, |Z| = 0.938, p > 0.05; however, in L2 there was a significant difference between the LISM and HISM tasks, |Z| = 2.506, p < 0.05. Comparing the performance of the participants between L1 and L2 revealed a significant difference on the LISM task, |Z| = 2.507, p < 0.05, and a significant difference was also found between L1 and L2 on the HISM task, |Z| = 2.363, p < 0.05.

The mean RT was shorter for the HISM task (3894.92 ± 1131.53 ms) than for the LISM task (4302.48 ± 1254.19 ms) in L1. Similarly, the mean RT was shorter for the HISM task (4258.45 ± 1273.89 ms) than for the LISM task (4569.43 ± 1110.38 ms) in L2. The Wilcoxon signed rank test on reaction time measures revealed a significant difference in performance between the LISM and HISM tasks in L1, |Z| = 2.090, p < 0.05, while there was no significant difference between the LISM and HISM tasks in L2, |Z| = 1.580, p > 0.05. Comparing the performance of the participants between L1 and L2 on the LISM task revealed no significant difference, |Z| = 1.070, p > 0.05; similarly, no significant difference was found between L1 and L2 on the HISM task, |Z| = 1.274, p > 0.05.
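A hedged sketch of this within-participant comparison, a Wilcoxon signed-rank test on paired HISM and LISM scores, is given below; the ten score pairs are invented for illustration and are not the study's data.

```python
# Sketch of a Wilcoxon signed-rank test contrasting paired HISM and LISM scores.
from scipy.stats import wilcoxon

# Hypothetical paired accuracy scores (one pair per participant).
hism_accuracy = [16, 14, 12, 15, 13, 17, 11, 14, 12, 14]
lism_accuracy = [13, 11, 10, 14, 11, 15, 10, 12, 10, 11]

statistic, p_value = wilcoxon(hism_accuracy, lism_accuracy)
print(f"Wilcoxon W = {statistic}, p = {p_value:.3f}")
```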

Discussion: The above findings indicate that, although there was no consistent significant difference on the reaction time measures, accuracy measures revealed a significant difference between HIW and LIW and also between L1 and L2. The higher accuracy and shorter RT for HIW suggest an additional unit for storing and processing images of the words alongside their semantic unit, leading to faster processing of these words among the participants. In contrast, as LIW lack the property of imageability, one has to depend only on the semantic unit for processing the words, resulting in slower processing. The results are consistent with the literature and with dual coding theory (Paivio, 1986). The results also revealed better performance in Kannada than in English, which is attributed to differences in orthography. Kannada has a transparent orthography with direct grapheme-to-phoneme correspondence, whereas English orthography is non-transparent, with less consistent grapheme-to-phoneme correspondence. These differences may affect reading performance in persons with aphasia, as transparent orthographies are easier to read; this is supported by the orthographic depth hypothesis (Frost, Katz, & Bentin, 1987).

Summary & Conclusion:

The present study highlights that imageability has an effect on reading performance in individuals with FA. These results suggest that including a greater number of high imageable words during the management of these individuals may improve their lexical strength and promote faster lexical access. The study also discusses the application of the orthographic depth hypothesis in the Indian context, focusing on alphasyllabary versus alphabetic structures. Further studies are required to explore the effect of other lexical factors on reading performance in persons with fluent aphasia in the Indian context.


  Abstract – LP622: Effect of Language Input on the Language Skills of Children in Orphanages Top


Dhruvi Narsana1, Harshada Mali2 & Navya Rathi3

1dhruvicnarsana.aslp@gmail.com,2harshada0117@gmail.com, &3navyarathiaslp@gmail.com

1Ali Yavar Jung National Institute of Speech and Hearing Disabilities (Divyangjan), Mumbai - 400050

Introduction:

The population of children in India is 20 million, 4% of whom are orphans (International Children's Charity, 2011). Children raised in orphanages appear to be deprived in various aspects of life, making them vulnerable to negative cognitive, social and behavioural outcomes (Beckett et al., 2002). These factors may also affect speech-language development and skills.

The Sarva Shiksha Abhiyan of the Indian government makes it mandatory for children in the age range of 6 to 14 years to undergo elementary education. Basic education requires adequate speech-language development and appropriate reading and writing skills. A number of factors affect speech-language development, e.g., environmental influences, linguistic background and cognitive skills. One important factor influencing language development is the amount and nature of language input. Children from orphanages may also be adopted into foster homes and may attend regular schools, where they receive adequate speech-language stimulation from teachers and peers. Their vocal hygiene is also a consideration.

Need for Study:

There is considerable evidence that a less stimulating environment and the lack of warmth, security and family affect the global development of a child, including speech-language development, especially when the child is raised in adverse conditions during the early stages of life (Kirekar, 2006). Lack of adequate language stimulation or input and poor vocal hygiene are important causes of speech-language impairment and voice disorders.

A study by Mokashi (1999) indicates that institutionalized children show developmental lags in social, cognitive and language skills. The results of a study by Routray, Mehaer, Tripathy, Parida, Mahilary and Pradhan (2015) indicated language developmental delay in 52.1% of 188 children from three orphanages in Bhubaneswar. Sayegh (1965) studied 3-year-old children from a Romanian orphanage and observed that children who received standard care in the orphanage had limited expressive language and comprehension. According to Goldfarb (1945), children in the age range of 9 to 15 years living in orphanages in the US were unable to recognize common objects. It was anticipated that children growing up in orphanages would show delays across lexical-grammatical and phonological aspects of language.

However, children who were given increased stimulation by trained caregivers were reported to be good verbal imitators, socially responsive and to have adequate receptive semantic skills. As far as second language acquisition is concerned, children who spend relatively less time in their adoptive families have less exposure to the second language, and language delays are therefore observed. Thus, it is observed that, in both the Western and the Indian context, the primary emphasis of studies has been on the effects of deprivation on the global and speech-language development of children from orphanages, and attention to their vocal hygiene is also essential. Hence the need for the present study.

Aim & Objectives:

The present study aims to study the influence of language input on language skills and voice quality of children from orphanages.

The objectives are to study the influence of language input on (a) language skills, (b) the ability to learn an additional language, (c) reading and writing skills, and (d) voice quality.

Method:

30 children (6 to 10 years of age; 11 girls, 19 boys) from 3 selected orphanages participated in the study. They did not have any behavioural problems or physical and/or mental handicap as per the report of their supervisor. The children attended regular school, and regular input was given by their caretakers and tuition teachers. Initially, a general speech-language assessment of the children was done using the speech-language proforma of the Institute. The language skills of the children were assessed using the Receptive-Expressive Emergent Language Scale (REELS) for children of 6 years of age and the Milestones of Early Communication Development (MECD) for children between 6 and 10 years. The outcomes of the REELS were computed in terms of Receptive Language Age (RLA) and Expressive Language Age (ELA), whereas those of the MECD were noted in terms of ages for semantics, syntax, pragmatics and phonology. A questionnaire comprising 10 open-ended questions was constructed by the researchers and validated by 5 SLPs. These questions were designed to elicit responses such as the number of caretakers, the speech-language stimulation and its nature given by caretakers, peers and teachers in school, awareness of a vocal hygiene programme, and age of admission to the orphanage/school.

Results & Discussion:

  1. Language skills: 60%, 55%, 70% and 65% of the children had age-appropriate semantic, syntactic, pragmatic and phonological skills, respectively. This is in congruence with the study by Sayegh (1965), which showed that when language input is adequate, children have age-appropriate language skills. From the questionnaire it was observed that the orphanages provided weekly outdoor visits, daily reading of holy books, etc., which appears to have provided additional language input. However, 40% of the children were delayed in semantics, which can be attributed to factors such as a mismatch between mother tongue and medium of instruction in school and admission to the orphanage at a later age. The 45% delay in syntax can be attributed to the fact that these children were not given additional input in terms of tuition.

     The delays in semantics and syntax are consistent with the studies of Goldfarb (1945) and Routray et al. (2015) on children in orphanages. 30% and 35% of children had delays in pragmatics and phonology, respectively; other contributing factors need to be assessed.
  2. Ability to learn an additional language: All children were able to learn an additional language apart from their mother tongue. Children are exposed to 2-3 languages.

  3. Reading and writing skills: Adequate language input is given to the children in the area of reading and writing, with exposure to 2-3 languages. However, some children showed difficulties, which can be attributed to the factors mentioned above.

  4. Voice quality: 30% of the children had a hoarse voice quality, and acoustic parameters were assessed on MDVP.


Summary & Conclusion:

Speech and Language skills of 30 children (6-10 years of age) from 3 orphanages were assessed. The results were analyzed in terms of their scores on MECD and REELS. Results indicated that adequate language input leads to appropriate language skills (approximately 60% of the children). Lack of language input in terms of various aspects has resulted in delayed language skills, reading and writing (approximately 40% of children). Voice quality of the children was assessed perceptually and instrumentally where some children showed a deviated voice quality from the normal ranges (30% of children).

Hence, the present study enables us to conclude that adequate language input leads to adequate, age-appropriate speech-language and reading/writing skills and the ability to learn an additional language. Also, the lack of vocal hygiene led to voice disorders in children in the orphanages.


  Abstract – LP625: Single-Word Picture Responses from Two-Year-Old Tamil Speaking Children with and without Language Delay: Profile of Response Types and Phonetic Inventory Top


Rosanna Ecclescia1, Harshini Subbusamy2, Roshini L L3, Adhirai Garibaldi4 & Lakshmi Venkatesh5

1petite.rosanna@gmail.com,2harshi2953@gmail.com,3roshinijl10@gmail.com,4adhiraigaribaldi@gmail.com, &5lakshmi27@gmail.com

1Sri Ramachandra Institute of Higher Education and Research, Chennai - 600116

Introduction:

Single-word picture articulation tests which require a child to produce single words spontaneously in response to picture stimuli have been an integral part of articulation and phonological assessment in children. Typically, children's responses may be analysed in relation to adult target productions or independently to profile the phonological skills of a child. (Bauman-Waengler, 2009). Several single word articulation tests have been developed in Indian languages including the Test of Articulation in Kannada (Babu, Bettagiri, & Rathna, 1972) standardized by Tasneem Banu (1977), Tamil Articulation Test (TAT, Usha, 1986), Telugu Test of Articulation and Discrimination (TTAD, Padmaja, 1988), Telugu Test of Articulation and Phonology (TTAP, Vasanta, 1990), articulation test battery in Malayalam (Maya, 1990) among others. The last decade has witnessed attempts at re-standardization of articulation tests in Kannada (Deepa, 2010) and Malayalam (Divya, 2010; Neenu, 2011; Vipina, 2011; Vrinda, 2011).

Recent work in Tamil by Perumal (2019) reported on the acquisition of Tamil consonants in children aged between 2-5 years of age. Data from 75 children in the youngest age range of 2 years to 2 years; 5 months provided norms for acquisition of different consonants in Tamil.

Children’s naming responses to picture stimuli or repetition of words and sentences/phrases were analyzed for accuracy of production of different target sounds. Majority of the sounds were produced by children in the youngest age range.

Need for Study:

Although single-word picture stimuli have been popular for articulation and phonological analyses in young children, there is a need to analyse the types of responses obtained from children to better understand the context of the samples collected for speech analyses. This is especially applicable when single-word picture stimuli are used to assess young children as early as two years of age. There is an increasing need for data on speech sound production among younger children speaking Indian languages, both typically developing and those with language delay. Stoel-Gammon (2011) described bi-directional relationship between phonological development and lexicon acquisition.

Indeed, children with relatively more extensive phonetic inventories produced larger vocabularies than children with limited phonetic inventories (Thal, Oroz & McCaw, 1995). Similarly, an increase in phonetic inventories of children with language delay was noted when the parent-based intervention focused on strengthening the lexical acquisition by mere stimulation by frequent presentations of target words without requiring responses (Girolametto, Pearce, & Weitzman, 1997).


Aim & Objectives:

The current study aimed at (1) characterizing the nature of responses of two-year-old Tamil-speaking children to single-word picture stimuli and (2) comparing the phonetic inventories of two-year-old children with and without language delay.

Method:

A group of 192 children visiting the clinic at around 24 months of corrected age for an ongoing research project served as participants for the current study. All children were exposed only to the Tamil language at home. Exposure to English was limited to loan words commonly used in conversation (e.g., car, bike). Children were assessed for language using the Modified 3-Dimensional Language Acquisition Tool (3-D LAT; Vaidyanathan, personal communication), a criterion-referenced checklist to profile receptive and expressive language skills. Children with a delay in both receptive and expressive language skills were grouped into the language delay (LD) group.

Children with age appropriate receptive and expressive language skills were grouped into Language Normal (LN) group. A group of 140 children with typical language development (LN) and 53 children with language delay (LD) were enrolled for the study.

Procedure: Children were assessed by two speech-language pathologists (SLPs). Written informed consent was obtained from the caregivers of the children participating in the study. Phonological skills were assessed using a list of 50 words in Tamil, which included the word list developed by Perumal (2019) with seven additional words. Children were shown the pictures one at a time on a tablet PC screen and were asked to name each picture. Spontaneous responses obtained in Tamil or English were marked as Tamil-spontaneous and English-spontaneous, respectively. Following spontaneous English responses or incorrect Tamil responses, prompts to say the word in Tamil or semantic and gestural cues were given by the clinician to elicit target responses in Tamil. Responses obtained through such elicitation were marked as elicited responses. When targets in Tamil could not be obtained despite various cues, children were asked to imitate the word produced by the clinician; such responses were marked as imitated responses. When responses were not obtained even on the imitation task, a 'no response' label was given. The entire assessment session was video recorded for offline analysis.

Analysis: The number of responses obtained from the children ranged from 5 to 47 words. For the purpose of obtaining a phonetic repertoire, only samples containing at least 10 responses were considered for analysis; 22 samples were excluded based on this criterion. Samples of 15 children who could not complete the task due to fatigue or disinterest were also excluded. The resulting 165 samples (125 LN; 30 LD) were subjected to further analyses.

The response for each target word obtained for a picture stimulus was parsed and transcribed into the Phon software (Rose & Stoel-Gammon, 2015) using International Phonetic Alphabet (IPA) symbols representing the different phones of the Tamil language. Independent phonetic analysis was carried out. The size of the consonant inventory, i.e., the number of different consonants present in a child's inventory, was calculated by the software. A consonant was included if it was observed two or more times in different contexts (Stoel-Gammon, 1987).
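To make the inclusion criterion concrete, here is a minimal sketch of how an inventory could be derived from transcribed responses. The transcriptions, the partial consonant set and the interpretation of "different contexts" as "different words" are illustrative assumptions, not the Phon software's implementation.

```python
# Sketch of an independent-analysis consonant inventory: a consonant counts only if
# it occurs in at least two different words (one reading of the >= 2 contexts rule).
from collections import defaultdict

def consonant_inventory(transcriptions, consonants):
    """Return the set of consonants that occur in two or more different words."""
    contexts = defaultdict(set)
    for word, phones in transcriptions.items():
        for phone in phones:
            if phone in consonants:
                contexts[phone].add(word)
    return {c for c, words in contexts.items() if len(words) >= 2}

# Hypothetical IPA-transcribed responses (word -> list of phones), illustrative only.
sample = {
    "sa:vi":  ["s", "a:", "v", "i"],
    "kai":    ["k", "a", "i"],
    "kappal": ["k", "a", "p", "p", "a", "l"],
    "pu:nai": ["p", "u:", "n", "a", "i"],
}
tamil_consonants = {"k", "p", "s", "v", "n", "l", "t", "d", "m"}  # partial, illustrative

inventory = consonant_inventory(sample, tamil_consonants)
print(f"Inventory size = {len(inventory)}: {sorted(inventory)}")
```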

Statistical Analysis: Mann-Whitney U test was used to examine the group differences in phonetic inventories.

Results & Discussion:

Analyses of the responses of children to the 50 picture-word stimuli revealed that only two words, /sa:vi/ and /kai/, resulted in spontaneous naming responses in Tamil, from 54% and 51% of the children, respectively. Eighteen words were produced spontaneously or on elicitation by nearly 30-50% of the children. The other 25 words were produced spontaneously or on elicitation by around 10-30% of children. Five pictures resulted in a spontaneous or elicited response in less than 5% of children. Four out of the 50 words elicited an English-spontaneous response from nearly 25% of the children, e.g., /kaʈ/ for /pu:nai/, /bu:ʈ/ for /kappal/. Among 155 children, only 19 produced more than 50% (25 words) of the words either spontaneously or through elicitation, 100 children produced 30% (15 words) of the words spontaneously or through elicitation, and the remaining 36 children produced less than 20% of the words spontaneously or through elicitation.

The Mann-Whitney U test revealed that the number of different consonants produced by the LD group (Mdn = 8, IQR = 6.5-10) was significantly lower than that of the LN children (Mdn = 13.8, IQR = 12-14), U = 202.0, p < 0.001, r = 0.86. The consonant inventory size of the LN group ranged from 10 to 17 different consonants; 55 of 155 subjects in the LN group demonstrated the median inventory size of 14 or more. The consonant inventory size of the LD group ranged from 4 to 12, and only 16 out of 30 children demonstrated the median inventory size of eight or more. Nine of the 16 LD children who had an inventory size of more than 10 consonants produced imitated responses for 90% of the total words.
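A hedged sketch of this group comparison is shown below, using scipy's Mann-Whitney U test on hypothetical inventory sizes, with an approximate effect size r = |Z|/√N derived from the normal approximation (no tie correction); none of the values are the study's data.

```python
# Sketch of a Mann-Whitney U comparison of consonant-inventory sizes with effect size r.
import math
from scipy.stats import mannwhitneyu

ln_inventory = [13, 14, 15, 12, 14, 16, 13, 14]   # hypothetical LN inventory sizes
ld_inventory = [7, 8, 6, 9, 10, 8]                # hypothetical LD inventory sizes

u_stat, p_value = mannwhitneyu(ln_inventory, ld_inventory, alternative="two-sided")

# Approximate z from U via the normal approximation, then r = |z| / sqrt(N).
n1, n2 = len(ln_inventory), len(ld_inventory)
mean_u = n1 * n2 / 2
sd_u = math.sqrt(n1 * n2 * (n1 + n2 + 1) / 12)
z = (u_stat - mean_u) / sd_u
r = abs(z) / math.sqrt(n1 + n2)
print(f"U = {u_stat}, p = {p_value:.4f}, r = {r:.2f}")
```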

Overall, the findings reveal a significantly reduced number of spontaneous or elicited responses and a preponderance of imitated responses to the picture stimuli. These findings have implications for understanding the context in which phonological samples are obtained from young children. Similar distributions in the size of consonant inventories were reported by Carson et al. (2003) among 24-month-old children with typical language and with language delay: children with typical language development produced a mean of 15 different consonants (range 12-21), in contrast to children with language delay, who demonstrated a mean of 9.5 phonemes (range 2-14).

Summary & Conclusion:

Single-word responses to picture stimuli among two-year-old Tamil-speaking children showed increased imitated responses in comparison to spontaneous naming or elicited responses. Tamil-speaking children with language delay showed significantly reduced phonetic inventories in contrast to children with typical language development. The current study adds to the much-needed data on early phonological development from a large sample of Tamil-speaking children. Such information has clinical implications for identifying atypical development and for planning interventions for children with language delay.


  Abstract – LP626: Stress, Anxiety, Quality of Life and Optimism Index among Parents of Children with Intellectual Disability Top


Vijay Kumar1 & Samiksha Gaur2

1vkumar@ggn.amity.edu &2samikshagaur11@yahoo.com

1Amity University, Haryana - 122413

Introduction:

Health is a state of complete physical, mental and social well-being and not merely the absence of disease or infirmity (WHO, 2007). Quality of life refers to how well we live, i.e., the general well-being of people and societies; it is the standard of happiness, comfort and health that a person or group of people experience. Various tools exist to assess quality of life, and to assess the quality of life of parents of children with intellectual disability we used the following tests. WHOQOL-BREF: given by the WHO (1991), it analyses physical health, psychological health, social relationships and environment. Scoring is done by calculating four domain scores, which denote an individual's perception of quality of life; domain scores are scaled in a positive direction, so higher scores denote higher quality of life, and the mean score of the items within each domain is used to calculate the domain score. DASS-21: given by Lovibond and Lovibond (1995), it helps in analysing stress, anxiety and depression; scores for depression, anxiety and stress are calculated by summing the scores of the relevant items. PEROMA: developed by Banerjee and Swati (2016), it helps in finding the optimism index; the test contains 60 questions, and the sum of all the questions gives the PEROMA score. Intellectual disability is a disability characterized by significant limitations in both intellectual functioning and adaptive behaviour, which covers a range of everyday social and practical skills (American Association on Intellectual and Developmental Disabilities, 2007). Among the Indian population, based on IQ level, around 2% accounts for mild mental retardation and 0.5% for severe mental retardation (Srinath & Girimaji, 1999). The high prevalence of intellectual disability among children compelled us to investigate various psychological parameters in their parents.

Need for Study:

As per the literature available on PubMed, Google Scholar and other indexing sources, there are numerous studies based on the DASS-21 and WHOQOL-BREF. However, studies using these tests in the Indian population are fewer compared to foreign populations.

Moreover, no empirical data are available on the optimism index. On further review, no literature was found in which the DASS, WHOQOL and optimism index have been studied in parents of children with intellectual deficit and language delay. This justifies the need for assessing quality of life with the DASS-21, WHOQOL-BREF and PEROMA in parents of children with intellectual deficit.

Aim & Objectives:

The primary objective of the study was to test, using the DASS-21, whether the stress, anxiety and depression of parents of children with intellectual deficit and language delay differ from those of parents of typically developing children. Physical health, psychological health, social relationships and environment were compared using the WHOQOL-BREF, and the optimism index was obtained and compared using PEROMA between parents of children with intellectual deficit and language delay and parents of typically developing children. The final objective was to investigate quality of life, optimism index, stress, anxiety and depression in parents of children with intellectual deficit and language delay with respect to parents of typically developing children.

Method:

Participants: 15 parents of children with intellectual deficit and language delay and 15 parents of typically developing children participated in this research, and the final results were obtained from a comparative study between these two groups. Tests used: The DASS-21, WHOQOL-BREF and PEROMA were used to assess quality of life, optimism index, stress, anxiety and depression in parents of children with intellectual deficit and language delay with respect to parents of typically developing children. Procedure: The DASS-21, with 21 questions, was administered to investigate stress, anxiety and depression. Questions were asked of the parent and rated from 0 to 3 on the basis of the response obtained; each question contributes to a different component, and the sum for each component was multiplied by 2 to obtain the final score. The WHOQOL-BREF, with 26 questions, was administered to analyse physical health, psychological health, social relationships and environment; each domain score was obtained by taking the mean of the item scores within that domain, and a higher domain score indicates higher quality of life. PEROMA, with 60 questions, was administered to find the optimism index of the parents; ratings ranged from 1 to 5, where 5 stands for strongly agree and 1 stands for strongly disagree, and the sum of all responses was taken as the final score.
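The scoring rules described above can be sketched as simple sums and means. In the sketch below, the subscale and domain item keys are indicative examples only and should be verified against the published DASS-21 and WHOQOL-BREF manuals; all response values are invented.

```python
# Sketch of the scoring rules described in the procedure (illustrative item keys).

def dass21_score(responses, subscale_items):
    """Sum the 0-3 ratings of the items in a subscale and multiply by 2."""
    return 2 * sum(responses[i] for i in subscale_items)

def whoqol_domain_score(responses, domain_items):
    """Domain score = mean of the item ratings within the domain (higher = better QOL)."""
    return sum(responses[i] for i in domain_items) / len(domain_items)

def peroma_score(responses):
    """Sum of all 60 items rated 1 (strongly disagree) to 5 (strongly agree)."""
    return sum(responses)

# Illustrative use with hypothetical ratings (dict: item number -> rating).
dass_responses = {i: 1 for i in range(1, 22)}      # 21 items rated 0-3
stress_items = [1, 6, 8, 11, 12, 14, 18]           # assumed stress-subscale items
print("Stress score:", dass21_score(dass_responses, stress_items))

whoqol_responses = {i: 4 for i in range(1, 27)}    # 26 items rated 1-5
physical_items = [3, 4, 10, 15, 16, 17, 18]        # assumed physical-health domain items
print("Physical health domain:", whoqol_domain_score(whoqol_responses, physical_items))

print("PEROMA score:", peroma_score([3] * 60))
```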

Results & Discussion:

Stress, anxiety and depression were compared between parents of children with ID and language delay and parents of typically developing children. A two-way analysis of variance was performed across the conditions mentioned above, and a significant difference was observed, F(1,29) = 6.6, p = 0.02. The WHOQOL-BREF was used to assess physical health, psychological health, social relationships and environment; a two-way analysis of variance revealed a significant difference, F(1,29) = 3.08, p = 0.037. Further descriptive analysis revealed that, of the various domains of QOL, psychological factors were the most affected across the two groups. The optimism index based on the PEROMA score was also analysed; a two-way analysis of variance revealed no significant difference, F(1,29) = 13.2, p = 0.64.

In this study, the DASS-21 and WHOQOL-BREF emerged as robust tools for assessing differences between parents of typically developing children and parents of children with special needs. These findings suggest that the DASS-21 and WHOQOL-BREF may have greater sensitivity and content validity compared to the other tool used. Being the first study of its kind, the outcomes could not be compared with normative values. The existence of condition-specific batteries such as the QOL for Aphasia (LaPointe, 1999) justifies the need for the development and validation of similar tools for parents of children with special needs such as ID, autism and language delay.

The present findings also strengthen the case for developing a sensitive tool for assessing psychological factors along with quality of life and optimism in various clinical populations, especially in the context of the socio-cultural diversity and regional dynamics within India. Psychometric validation of such a tool is warranted.

Summary & Conclusion:

A comparative study was done among parents of children with intellectual disability and language delay using the DASS-21, WHOQOL-BREF and PEROMA, and it is inferred from the results that the DASS-21 and WHOQOL-BREF appeared to be more sensitive tools compared to PEROMA. Such findings can be applied in clinical practice to investigate these psychological factors before intervention in target populations with communication disorders.


  Abstract – SO160: To Compare the Relative Efficacy of the Pause and Talk and Prolonged Speech Techniques in School-Age Children Who Stutter Top


Rakesh C V1 & Santosh Maruthy2

1cvrakesh.sphg@gmail.com &2santoshm79@gmail.com

1All India institute of Speech and Hearing, Mysuru - 570006

Introduction:

Stuttering is a complex, speech motor disorder involving core behaviours such as repetitions, blocks, prolongations, and secondary behaviours. The onset of stuttering is typically seen in early childhood between 2 to 5 years of age, and its incidence is highly widespread across children and adults. The literature reports the highest prevalence rate in preschool children (3.4%) followed by school-going children (0.84%) and adults (1-2%). More considerable gender differences have been reported in children and younger adults (4.6:1).

Since the onset of stuttering is during early childhood, stuttering has a negative effect on quality of life. Children who stutter (CWS) may experience bullying and teasing by peers, struggle to make friends, have difficulty fitting in at school, and experience limited social acceptance. CWS are also reported to have negative speech-associated attitudes, a poorer quality of mood, and experiences of embarrassment and shame. Studies report that CWS exhibit loss of self-confidence, depression and low self-esteem, and have social, emotional and behavioural difficulties to a greater extent than their fluent peers, with an onset as early as 3 years of age. In addition, CWS are reported to have elevated anxiety. Hence, early intervention for stuttering is needed to minimize its impact on quality of life. Early intervention is important because the success rate is significantly higher in CWS (>90%) than in adults (50%-70%); additionally, for CWS the duration of treatment is shorter, and the relapse rate is close to 0%, compared to about 50% in adults.

In the current study, we aimed to investigate the relative efficacy of the Pause and Talk (PT) and Prolonged Speech (PS) treatments. The PT procedure employs response contingencies to induce fluency, whereas PS induces fluency through a prolonged, slow rate of speech. Both procedures follow a direct approach philosophy to successfully reduce or eliminate dysfluencies in CWS, and children are reported to maintain fluency better than adults following such treatment. However, these direct procedures lack evidence supporting their effectiveness in the school-aged population (6 to 12 years).

Need for Study:

As mentioned earlier, stuttering has a negative impact on the quality of life of CWS, and early identification and treatment are essential to minimize it. Extensive research has been undertaken with preschoolers and adults; however, there is a dearth of research-based evidence focused on building fluent speech in the school-aged population. Though clinical impressions suggest that PS and PT are promising in maintaining fluency, there is no evidence-based research to support this. Hence, it is crucial to evaluate the treatment efficacy of the PS and PT procedures in school-aged CWS and to generate an evidence base for the same.

Aim & Objectives:

To compare the relative efficacy of the PS and PT treatment procedures in school-aged CWS.

Method:

The present study used a modified pre- and post-treatment design to compare treatment efficacy between the PS and PT techniques. Twelve Kannada-speaking CWS (6 to 12 years) participated in the study. Following purposive sampling, sets of six participants were randomly assigned to the PS and PT treatment procedures. From each child, speech sample recordings were carried out at four time points: (a) pre-treatment base rating, (b) at the end of the treatment programme, (c) at follow-up one month after discharge, and (d) at follow-up three months after discharge. Each child was discharged on achieving less than 3% dysfluency in the clinical setting, less than 5% dysfluency in natural settings, and a severity rating of 1 or 2 on a 10-point rating scale for two to three consecutive sessions. At each time point, within-clinic and extra-clinic audio-video recordings of story narration samples were obtained. A minimum of 300 to 350 syllables of speech was considered for analysis after orthographic transcription. Primary outcomes included the percentage of syllables stuttered (PSS), stuttering severity rating (SSR), rate of speech (SPM), speech naturalness rating (NR), and Stuttering Severity Instrument score (SSI). The secondary outcomes comprised the Kannada version of the Communication Attitude Test (CAT-K) and the Speech Situation Checklist-Emotional Reaction-Kannada (SSC-ER-K).
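For clarity, the two sample-level fluency measures can be computed directly from syllable counts and speaking time. The sketch below uses an invented 320-syllable sample and an invented duration purely for illustration.

```python
# Sketch of the sample-level fluency measures: percentage of syllables stuttered (PSS)
# and speech rate in syllables per minute (SPM), from an orthographically transcribed sample.

def percent_syllables_stuttered(total_syllables: int, stuttered_syllables: int) -> float:
    """PSS = stuttered syllables / total syllables * 100."""
    return 100.0 * stuttered_syllables / total_syllables

def syllables_per_minute(total_syllables: int, speaking_time_sec: float) -> float:
    """SPM = syllables produced per minute of speaking time."""
    return total_syllables * 60.0 / speaking_time_sec

# Example: a 320-syllable narration containing 9 stuttered syllables, lasting 140 s.
print(f"PSS = {percent_syllables_stuttered(320, 9):.2f}%")   # ~2.81%, below the 3% criterion
print(f"SPM = {syllables_per_minute(320, 140):.1f}")
```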

Results & Discussion:

Repeated measures ANOVA was done with the four time points (pre-therapy, immediate post-therapy, 1-month and 3-month follow-up) and setting (within-clinic and beyond-clinic) as within-subject factors and treatment technique (PS vs. PT) as the between-subject factor. The results indicated a significant main effect of time point {PSS [F(3,30) = 25.905, p < 0.001, ηp² = 0.721], SSR [F(3,30) = 87.201, p < 0.001, ηp² = 0.897], NR [F(3,30) = 55.290, p < 0.001, ηp² = 0.847], SSI [F(3,30) = 134.744, p < 0.001, ηp² = 0.931]}. Bonferroni post hoc analysis suggested that, for the PSS, SSR, NR and SSI measures, pre-therapy scores differed significantly from the scores at the other three time points, whereas there was no significant difference among the remaining time points. There was no significant (p > 0.05) main effect of setting (within- vs. extra-clinic), and no significant interaction effect (p > 0.05) between any of the factors. For SPM, there was no significant main effect or interaction for any factor.

A significant main effect (p < 0.05) of treatment technique was found only for the SSR [F(1,10) = 7.125, p = 0.024, ηp² = 0.416] and NR measures [F(3,30) = 55.290, p < 0.001, ηp² = 0.847]. Hence, separate repeated measures ANOVAs were done for the PS and PT groups. For SSR, the results indicated a significant difference across time points in both PT [F(3,15) = 27.705, p < 0.001, ηp² = 0.847] and PS [F(3,15) = 72.199, p < 0.001, ηp² = 0.935]. For NR, the results again indicated a significant difference across time points in both the PT [F(3,15) = 23.415, p < 0.001, ηp² = 0.823] and PS [F(3,15) = 32.374, p < 0.001, ηp² = 0.866] groups. Consistent with the overall main effect of time point, in both treatment techniques a significant difference (p < 0.05) was found between pre-therapy scores and the scores at the other time points.

Paired-samples t-tests were done to check for differences in CAT-K (attitude) and SSC-ER-K (anxiety) scores in CWS between pre- and post-therapy. The results indicated a significant difference for the attitude [t(11) = 18.353, p < 0.001] and anxiety [t(11) = 5.921, p < 0.001] measures; both speech-associated negative attitude and anxiety scores decreased from pre- to post-therapy.
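A hedged sketch of this pre/post comparison with scipy's paired-samples t-test is given below; the twelve score pairs are invented for illustration and are not the study's data.

```python
# Sketch of a paired-samples t-test on pre- and post-therapy attitude scores.
from scipy.stats import ttest_rel

cat_k_pre  = [24, 27, 22, 25, 26, 23, 28, 24, 25, 27, 23, 26]   # hypothetical CAT-K pre-therapy
cat_k_post = [12, 14, 11, 13, 15, 12, 16, 13, 12, 15, 11, 14]   # hypothetical CAT-K post-therapy

t_stat, p_value = ttest_rel(cat_k_pre, cat_k_post)
print(f"t({len(cat_k_pre) - 1}) = {t_stat:.3f}, p = {p_value:.4f}")
```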

Thus, the results indicate that the PS and PT techniques are effective in reducing disfluencies in school-age CWS and in maintaining that reduction for up to 3 months. Furthermore, there was a reduction in negative attitude and anxiety in both groups following fluency therapy. The effectiveness of PT was similar to results reported in laboratory studies of time-out. Both treatment techniques were effective to a similar extent in school-age CWS.

Summary & Conclusion:

The preliminary results of the current study highlight that both the PS and PT treatment procedures enhanced fluency to a similar extent in school-age CWS. The achieved fluency was maintained at the one-month and three-month follow-up visits. Both treatment techniques also reduced speech-associated negative attitude and anxiety in our CWS. Overall, the PS and PT techniques are effective in reducing stuttering-like disfluencies and maintaining fluent speech. As the sample size was small (n = 12), further studies with larger samples are needed, and studies may be carried out to investigate the long-term maintenance of fluency in school-age CWS.


  Abstract – SO162: Communication Concepts in Ancient Tamil Epic 'Thirukkural' and its Relevance for Today's Healthy Life Top


Narendiran K

kovairehab@yahoo.com

1Kovai Rehabilitation Center, Coimbatore - 641002

Introduction:

Though many languages share linguistic universals, differential growth patterns are also seen among them (Friedman et al., 2006:456). The Thamizh (Tamil) language has much fine literature, such as the Thirukkural, which consists of 1330 poems distributed across 133 chapters with ten poems on each topic. Each poem contains just seven words aligned in two lines, and the work was written about two thousand years ago by Thiruvalluvar. It explores all facets of human existence, including communication concepts such as speech, language, hearing and sensory inputs. This study was undertaken to find the relevance of the communication concepts in the Thirukkural for present-day healthy life.

Need for Study:

Though the Thirukkural addresses many communication topics, no known modern research has explored these concepts. Savithri, S.R. (1978:113) also indicated that many present-day speech and hearing professionals are still unaware of the scientific information available in Indian languages. Hence, this study was undertaken to highlight Indian ancestral wisdom about communication science and to examine the relevance of the Thirukkural for today's healthy life.

Aim & Objectives:

To test the following hypotheses-

  1. The Thirukkural explains scientific facts about communication science.
  2. Selected Thirukkurals are equally familiar among cross section of population.
  3. The applicable familiar Thirukkurals are equally relevant among the subjects.
  4. Thirukkural concepts help to achieve a healthy life and human excellence.


Method:

Thirukkural poems considered to have direct or implied reference to communication concepts were chosen and presented with Tamil meanings and English translations. For easy reference, they were regrouped under modern scientific topics, and their explanations were compared with current scientific concepts. To avoid researcher bias and to project the comparative opinion of the public, two survey studies were undertaken using simple random sampling. Using different questionnaires, the level of familiarity and the relevance of the selected Kurals were determined. The familiarity study had three response choices: more familiar, less familiar and not familiar. The relevance study had five options: strongly relevant, relevant, undecided, not relevant and strongly not relevant. The subjects were ten categories of Tamil-speaking people, such as doctors, special educators, speech-language pathologists, patients and others familiar with communication concepts. The obtained rating-scale data, in percentages, along with demographic details were analyzed. The results gave objective information and showed how the Thirukkural concepts support or differ from modern scientific viewpoints.

Results & Discussion:

One hundred and twenty-five Thirukkurals focused on communication, covering listening, multi-sensory stimulation, sensory deprivation, the development and power of speech, social and pragmatic language, teaching skills, reading, etc. Kural familiarity ranged from 4.25% to 100%, and the pattern revealed that many of the subjects were not very familiar with the given poems. For those Thirukkural poems familiar to fifty percent or more of the subjects, relevance levels were determined across the different categories. A little over eighty-six percent of subjects stated that the Thirukkural is still relevant to today's healthy life.

Since Thiruvalluvar's concepts are practical, relevant and common in nature, they are valuable for human excellence. Though a few poems received lower relevance scores, they cannot be rejected outright as irrelevant to modern life; recent technical advancements may have influenced this altered perception. For example, Thirukkural 420 questions the living status of a person with hearing impairment. Though the modern view is different, it is interesting that other contemporary civilizations, such as the Roman, Egyptian, Greek and European, held the same notion.

Summary & Conclusion:

Of the total 1330 Thirukkural poems, 125 couplets explained different concepts of modern communication science. Kural Chapter 42, titled Listening, and Chapter 65, titled Speech, along with many others, dealt well with speech and hearing subjects. Kural 411 tells of the importance of hearing. Poems 1261 and 9 deal with sensory deprivation, while 1101 explains multi-sensory stimulation. Couplet 66 talks about babbling, and 645 emphasizes semantic structure. Poem 191 speaks about appropriate pragmatics and the effects of its failure. Kurals 1, 392, 643, 648 and 642 tell of the importance of letters, numbers, speech and language. Non-verbal communication is explained in Kurals 1100, 1274, 1271, 701, 1253, 1203, 1312, 1317, 1318, 271, 1040 and 621. Kural 354 explains the importance of cognition and insight-knowledge of past experiences in the regulation of sensory input.

A vast majority of survey subjects (81.45%) gave familiarity scores below 50%. On the basis of the merit and content value of the Thirukkural, measures should be taken to familiarize the public with these poems. In the relevance study, 86.38% of survey subjects (49.77% strongly relevant and 36.61% relevant) stated that the Thirukkural is still relevant for today's healthy life. Even though there was no significant negative deviation, a few poems, such as Kural 420, may be disputed; the reduction in relevance scores for certain Kurals may be due to modern advancements in science and technology.

It is amazing that Thirukkural poems containing just seven words convey such substantial concepts on communication. In many poems, the analogies given for different expressions are not only apt but also striking. In addition to scholarly contributions on the medical, psycho-social, linguistic, technical and political aspects of communication science, the contents are enriched by moral precision and ethical values. Even after 2000 years of existence, 'Kural' concepts empower us to tackle many challenges of communication and remain useful for maintaining a healthy life; the work has thus stood the test of time, proving its merit and usefulness. Hence, knowledge of the Thirukkural will enhance inter-disciplinary thinking and improve the academic, clinical and research work of speech and hearing professionals. Though the 'Thirukkural' reflects the cultural heritage of the ancient Tamil community, it has an international outlook and can be useful for universal peace and global harmony. Since the 'Thirukkural' concepts on communication are highly relevant even today, they need to be familiarized not only among speech and hearing professionals but also among the general population. Kural concepts will definitely help to achieve a healthy society.


  Abstract – SO163: Coarticulation in Children with Cochlear Implant: A Locus Equation Study Top


Ravali P. Mathur1, Sarita Rautara2 & Smriti Upadhyay3

1ravali.p@rediffmail.com,2srautara@yahoo.com, &3smritiaslp2395@gmail.com

1Ali Yavar Jung National Institute of Speech and Hearing Disabilities (Divyangjan), Mumbai - 400050

Introduction:

In normal speech, sounds occur as integral elements of a multisegmental utterance, each influencing adjacent elements within the utterance; what is perceived is a continuous stream of articulatory movements rather than isolated segments or phonemes. Each segment acts as a target to be reached by the articulators; however, targets are not necessarily reached in a uniform manner, as one target may be abandoned quickly (within milliseconds) in anticipation of the next one. During the production of one articulatory movement (related to one segment), another movement (related to an adjacent segment) begins. This overlapping of articulatory movements across discrete elements of speech is referred to as coarticulation. Coarticulation is described as the process by which consecutive sounds are produced with overlapping movements caused by anticipating and planning the succeeding sounds (Gick, Wilson, & Derrick, 2013). The overlap of one speech sound or phoneme with another is considered a result of the way the articulators are sequenced and organized to efficiently produce consecutive consonants and vowels in fluent speech.

According to Recasens (1999), high/low and front/back tongue displacement dimensions are the essence of lingual coarticulation; for example, a back tongue body position is required for a brief period of time for the articulation of [k], which is then overlaid by a front tongue body position for [l].

Coarticulation can be quantified statistically using locus equations, in which a straight-line regression is fit to coordinates formed by plotting F2 at the onset of the transition against the coarticulated F2 at the vowel midpoint, i.e., the target frequency (Sussman et al., 1993). Each production of a CV syllable yields one data point consisting of an F2 value measured at vowel onset and an F2 value measured in the middle of the vowel. The maturity of the motor speech planning system can be inferred from measures of coarticulation (Barbier, Perrier, Ménard, Payan, Tiede, & Perkell, 2013; Zharkova & Hewlett, 2009; Zharkova, Hewlett, & Hardcastle, 2011).
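As a concrete illustration of a locus equation fit, the sketch below regresses F2-onset values on F2-midpoint values for several hypothetical productions of one consonant; the frequencies are invented and are not measurements from this study.

```python
# Sketch of a locus equation: linear regression of F2 at vowel onset (Hz) on F2 at the
# vowel midpoint (Hz) across CV productions of a single consonant.
import numpy as np
from scipy.stats import linregress

# Hypothetical (F2 midpoint, F2 onset) pairs for several CV productions of /d/.
f2_mid   = np.array([1750, 1200, 1000, 2100, 1400, 1850])   # vowel target F2 (Hz)
f2_onset = np.array([1800, 1500, 1400, 2000, 1600, 1850])   # F2 at vowel onset (Hz)

fit = linregress(f2_mid, f2_onset)
print(f"slope = {fit.slope:.2f}, intercept = {fit.intercept:.1f} Hz, "
      f"R^2 = {fit.rvalue ** 2:.2f}")
# A steeper slope (closer to 1) indicates greater CV coarticulation; a flatter slope
# indicates more context-independent (less coarticulated) F2 onsets.
```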

Need for Study:

Evaluation of coarticulation through locus equation slope and intercept gives information about the degree of coarticulation and the acoustic representation of place of articulation. Prelingual hearing impairment, especially of severe to profound degree, has a huge impact on the speech sound production of children, leading to poor speech intelligibility (Monsen, 1976). Studies indicating that coarticulation is affected in children with hearing impairment are ample in Western countries; however, in India there is a dearth of studies on coarticulation in children with hearing impairment. Therefore, studying coarticulation in children with hearing impairment using cochlear implants will provide clinically applicable information. Hence, the need was felt to study coarticulation in children with cochlear implants.

Aim & Objectives:

To evaluate and compare coarticulation at the syllable level across the vowels /a, i, u/ in children using cochlear implants. The objective is to compare the effect of coarticulation in the voiced stops /b, d, g/ across the vowels /a/, /i/ and /u/.

Method:

A total of 20 participants (10 males and 10 females) fitted with cochlear implants, with a mean age of 77.22 months and a mean of 36.88 months of implant use, participated in the study. Samples were collected using a Sony digital recorder in a sound-treated room; one child at a time was given a demonstration and asked to produce CVCV words, where C was bilabial /b/, dental /d/ or velar /g/ and V was /a/, /i/ or /u/. Hence, a total of 9 samples was obtained from each child. The recorded samples were converted into waveforms, and acoustic analysis was done using the PRAAT software to measure F2 onset and F2 midpoint, applying the locus equation method for each place of articulation as a function of the three vowels in order to evaluate and compare differences in coarticulation.

Statistical Analysis: The data were subjected to descriptive analysis and locus equation measures, i.e., slope, regression analysis and R² values.

Results & Discussion:

The slope values of the locus equations obtained from children using cochlear implants for voiced stop syllables with the vowel /a/, i.e., /ba/, /da/ and /ga/, were 0.10, 0.60 and 0.62, indicating that the degree of coarticulation was in the decreasing order /ga/ > /da/ > /ba/. The R² values for /ba/, /da/ and /ga/ were 0.009, 0.362 and 0.108. Higher R² values indicate a fairly high degree of coarticulation, again suggesting maximum coarticulation for the velar /ga/ and minimum for the bilabial /ba/. This result can be compared with the study by Sreedevi et al. (2014), indicating reduced coarticulation relative to the slope value of 0.82 for /ba/ in typically developing children. Sreedevi et al. (2014) found a slope of 0.55 for /da/ in typically developing children, which is similar to the value of 0.60 for children with cochlear implants in the present study, whereas /ga/ in typically developing children had a slope of 0.58, similar to the value found here. A possible reason for /ga/ showing higher coarticulation than /ba/ could be that /a/ is a low back vowel and /g/ is a velar, so the transition from consonant to vowel is much easier than from a labial to a low back vowel.

The slope values of the locus equations obtained for syllables with the vowel /i/, i.e., /bi/, /di/ and /gi/, were 0.96, 1.23 and 0.45. The R² values for /bi/, /di/ and /gi/ were 0.622, 0.91 and 0.43, indicating that the dental stop /d/ was most coarticulated, followed by /b/ and then /g/, for the vowel /i/ (/di/ > /bi/ > /gi/). For /u/, the R² values for /bu/, /du/ and /gu/ were 0.006, 0.481 and 0.13, suggesting that coarticulation of the dental /du/ was greater than that of the velar /gu/, followed by the bilabial /bu/ (/du/ > /gu/ > /bu/), where the slope values for /bu/, /du/ and /gu/ were 0.1, 0.83 and 0.23. Hence, for the vowels /i/ and /u/, the highest coarticulation was for the dental /d/, indicating a greater influence of these vowels on the articulation of the dental consonant compared to the others.

However, there are no studies evaluating the coarticulation effect using slope values for the vowels /i/ and /u/ with bilabial, dental and velar stops. Hence, there is a need for more in-depth studies to identify and evaluate the coarticulation effect across all vowels.

Summary & Conclusion:

The present study aimed to evaluate and compare the coarticulation effect in syllables formed by the bilabial (/b/), dental (/d/) and velar (/g/) stops with the vowels /a/, /i/ and /u/ in children using cochlear implants. The study used the locus equation method to measure slope and R2 values for the syllables across the vowels /a/, /i/ and /u/. The results suggested that coarticulation was greater in velars than in dentals, followed by bilabials, for /a/, whereas for /i/, dentals showed more coarticulation than bilabials, followed by velars. For /u/, dentals had the highest slope values while bilabials had the lowest. In the Indian context, studies are available for vowel /a/ across the stops, whereas no studies are available to support the results for the vowels /i/ and /u/, which indicates the need for further studies across all the vowels.


  Abstract – SO164: Construction and Validation of a Short Version of the Impact Scale for Assessment of Cluttering and Stuttering (ISACS-s) Top


Pallavi Kelkar1, Jyotsna K2 & Santosh Maruthy3

1pallavi101185@gmail.com,2jyotsna.k7@gmail.com, &3santoshm79@gmail.com

1All India Institute of Speech and Hearing, Mysuru - 570006

Introduction:

Stuttering and cluttering are both complex and multifaceted fluency disorders that affect all areas of life (Yaruss, 2010). The mosaic of factors related to fluency disorders varies across situations, speech content, and communicative partners, even for the same individual. It would therefore be beneficial to have a tool to quantify environmental, cognitive and personal variables while assessing persons with fluency disorders (PWF).

The advent of the International Classification of Functioning, Disability, and Health (ICF) (WHO, 2001) aided the development of impact assessment tools for stuttering around the world (Yaruss & Quesal, 2006). However, none of these included cluttering.

Though culture highly influences speech and language assessment (Ndungau & Kinyua, 2009), no standardized impact assessment tools were developed in India in the context of fluency disorders.

In view of the above, the Impact Scale for Assessment of Cluttering & Stuttering (ISACS) (Kelkar & Mukundan, 2015) was developed to measure the impact of stuttering and/or cluttering on PWF and their significant other persons (SOP). Part (A) of the ISACS assesses impact as perceived by the PWF, while Part (B) assesses it from the point of view of the SOP. The extensive 100-item Likert-type scale explores body functions, personal contextual factors, activities, environmental factors, participation and quality of life.

Need for Study:

Assessment of fluency disorders would be incomplete and lack face validity unless it includes quantification of impact using a culture specific tool. While the ISACS is certainly a step in this direction, its length might prove to be a drawback for its use in India owing to a low doctor-patient ratio (Deo, 2013) and poor public awareness resulting in low motivation for repeated assessments (TISA & Speak: Stammering Foundation, 2018).

The original ISACS, though exhaustive, might induce response fatigue, especially in adolescents (Kelkar, 2017) or persons with cluttering (PWC), who might have a short attention span (Daly & Burnett, 1999).

Creating a short version of the ISACS could overcome these barriers; making the ISACS a quicker tool to administer and significantly increasing its utility in India while continuing to retain the positive aspects of the original version (Ware, Kosinski & Keller, 1996).

Aim & Objectives:

The primary aim of the present study was to convert the ISACS into a shorter version. The secondary aim was to examine the psychometric properties of the shorter version.

Objectives:

  1. Creation of a short version (ISACS-s)
  2. Establishing equivalence of both versions
  3. Assessing reliability and validity of the ISACS-s
  4. Exploring trends in responses


Method:

Creation of the short version:

An exploratory factor analysis based on the responses of 100 participants was conducted for the items of the original version of the ISACS (A). Items that did not load on the factors identified were removed. The remaining items were regrouped based on the factors that they clustered on, so that the four subscales measured mutually exclusive but related constructs. The resultant ISACS-s consisted of 25 statements, from the initial 100.
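
A minimal Python sketch of this kind of factor-analytic item reduction is given below, assuming the 100-item responses are available as a participants-by-items matrix and that items are retained when their largest loading exceeds the 0.60 cut-off reported in the Results; the simulated data and names are illustrative only.

  import numpy as np
  from sklearn.decomposition import PCA

  rng = np.random.default_rng(0)
  responses = rng.integers(1, 6, size=(100, 100)).astype(float)  # 100 participants x 100 Likert items (simulated)

  # Standardize items so that loadings can be read on a correlation metric.
  z = (responses - responses.mean(axis=0)) / responses.std(axis=0)

  pca = PCA(n_components=4)  # four principal components, as reported in the Results
  pca.fit(z)

  # Loadings = eigenvectors scaled by the square roots of their eigenvalues.
  loadings = pca.components_.T * np.sqrt(pca.explained_variance_)

  # Retain items whose largest absolute loading on any component exceeds 0.60.
  retained = np.where(np.abs(loadings).max(axis=1) > 0.60)[0]
  print(len(retained), "items retained:", retained)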

The equivalence of the ISACS-s (A) to the original ISACS (A) was determined by administering both versions to each of three persons with stuttering (PWS) and three typical speakers and comparing the means of the two versions using the Wilcoxon signed ranks test.

Psychometric evaluation of the short version:

The ISACS-s (A) was administered to a purposive sample of 97 PWF (95 PWS, 2 PWC; 88 males, 9 females) within the age range of 13- 45 years (mean= 22.5 years), and the ISACS-s (B) to 34 of their SOPs with their informed consent. The ISACS-s (A) was also administered to 58 typical speakers. Survey research was employed for data collection. After responding to the ISACS-s (A), each PWF was asked to give a usefulness rating to the tool on a five point scale, where 1=not at all useful, and 5=extremely useful.

Internal consistency (Cronbach's alpha) and split-half reliability (Spearman-Brown coefficient) of the ISACS-s (A) were assessed. Discriminant analysis, cross validation and a comparison of ISACS-s (A) scores of typical speakers and PWF through an independent samples t test were carried out to ascertain the construct validity of the ISACS-s. A mean usefulness rating was computed to estimate face validity.
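
A minimal Python sketch of the two reliability estimates named here, assuming a respondents-by-items score matrix; the simulated responses and the odd-even split used for the half scores are illustrative assumptions, not the study's procedure.

  import numpy as np
  from scipy import stats

  rng = np.random.default_rng(1)
  scores = rng.integers(1, 6, size=(97, 25)).astype(float)  # 97 respondents x 25 items (simulated)

  # Cronbach's alpha: k/(k-1) * (1 - sum of item variances / variance of the total score).
  k = scores.shape[1]
  item_var = scores.var(axis=0, ddof=1)
  total_var = scores.sum(axis=1).var(ddof=1)
  alpha = (k / (k - 1)) * (1 - item_var.sum() / total_var)

  # Split-half reliability: correlate the odd- and even-item half scores, then apply
  # the Spearman-Brown prophecy formula to estimate full-length reliability.
  odd_half = scores[:, 0::2].sum(axis=1)
  even_half = scores[:, 1::2].sum(axis=1)
  r_halves, _ = stats.pearsonr(odd_half, even_half)
  spearman_brown = 2 * r_halves / (1 + r_halves)

  print(f"Cronbach's alpha = {alpha:.3f}")
  print(f"Spearman-Brown coefficient = {spearman_brown:.3f}")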

Corresponding ISACS-s (A) and (B) scores for the same individual were compared using a paired t test. Trends in ISACS-s (A) scores across the degree of severity of stuttering (0=very mild; 4=very severe) were explored using one way ANOVA.

Results & Discussion:

Creation of the short version:

An exploratory factor analysis using the Principal Component Analysis (PCA) method yielded four principal components. Items which clustered on any one of these components and showed correlation coefficients of >0.60 were retained.

A Wilcoxon signed ranks test revealed no significant differences (Z= 0.000, p= 1.000) between long and short versions administered to the same individual, suggesting that the two versions were equivalent.

Psychometric evaluation of the short version:

Cronbach's alpha and Spearman-Brown split-half reliability coefficients for both the PWF (r=0.749; rkk=0.778) and typical speaker groups (r=0.74; rkk=0.779) indicated good reliability. Discriminant analysis revealed that the ISACS-s can successfully discriminate 97.9% of PWF and 98.3% of typical speakers. Discriminant analysis correctly classified 98.1% of the sample, while cross validation of the groups correctly classified 92.9%, indicating high construct validity. Construct validity was also evaluated through the independent samples t test, which revealed that the mean impact scores of PWF (M=57.19; SD=13) were significantly higher [t (153) = 8.13, p<0.001] than the mean impact scores of typical speakers (M=40.14; SD=11). The mean usefulness rating for the ISACS-s was 3.89, indicative of high face validity.
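
One possible way such a discriminant classification with cross-validation could be run is sketched below in Python, assuming only total ISACS-s (A) scores and group labels are available; the simulated scores and the 5-fold cross-validation scheme are illustrative assumptions, not the study's exact procedure.

  import numpy as np
  from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
  from sklearn.model_selection import cross_val_score

  rng = np.random.default_rng(2)
  pwf = rng.normal(57.2, 13.0, size=(97, 1))       # simulated total impact scores, PWF group
  typical = rng.normal(40.1, 11.0, size=(58, 1))   # simulated total impact scores, typical speakers
  X = np.vstack([pwf, typical])
  y = np.array([1] * 97 + [0] * 58)                # 1 = PWF, 0 = typical speaker

  lda = LinearDiscriminantAnalysis().fit(X, y)
  print("classification accuracy on the full sample:", round(lda.score(X, y), 3))

  # Cross-validated accuracy as a check on how well the discriminant function generalises.
  print("5-fold cross-validated accuracy:", round(cross_val_score(lda, X, y, cv=5).mean(), 3))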

While a paired t test across the mean ISACS-s (A) scores (M=55.39, SD=11.56) and ISACS-s (B) scores (M=53.60, SD=14.36) revealed no statistically significant difference between the scores of the PWF and SOP [t (66) = 0.563, p=0.575], the trend was the same as that seen for the original ISACS, with (A) scores being higher than (B) scores (Kelkar, 2013). This reaffirmed the equivalence of the short version to the original, a finding in line with previous evidence of short versions of tools accurately replicating longer ones (Ware, Kosinski & Keller, 1996). Although the mean impact scores did show an upward trend with an increase in stuttering severity, a one-way ANOVA revealed no statistically significant difference in impact scores across degrees of severity [F (2, 92) = 2.300, p=0.106], a finding similar to that of Chun, Mendes, Yaruss & Quesal (2010), suggesting that severity alone may not give a complete picture of the disorder.

Summary & Conclusion:

The ISACS-s appears to be a reliable and valid tool to measure the impact of fluency disorders, with the added advantage of indigenous normative data. While retaining the qualities of the original ISACS, it could make the tool easier to administer, score and interpret (Millard & Davis, 2016), thus adding efficiency to the assessment of fluency disorders.

Implications of the tool extend from its use in assessment camps where large numbers need to be assessed in a limited amount of time; to assessments done during tele-rehabilitation or long distance surveys, where the short length of the tool would reduce the number of non-respondents (Chlan, 2004). Future research could work towards adding data from PWC, as well as validating the tool across other languages in India.


  Abstract – SO165: The Cluttering Stereotype and its Origins through Anchoring and Adjustment: Preliminary data Top


Pallavi Kelkar1, Bhargavi Atre2, Snehal Purkar3 & Urvi Mahajani4

1pallavi101185@gmail.com,2bjatre98@gmail.com,3snehalpurkar23@gmail.com, &4urvismahajani@gmail.com

1School of Audiology and Speech language Pathology, Bharati Vidyapeeth (Deemed to be) University, Pune – 411030

Introduction:

Cluttering is a fluency disorder often accompanied by poor self-monitoring, inattentiveness, or impulsivity. This might lead to persons who clutter (PWC) being perceived as confused, stressed, insecure, scared or unintelligent - a possible cluttering stereotype (Weiss, 1964).

Stereotypes directly impact the target population in the form of reduced opportunities (McKinnon, Hall & McIntyre, 2007) and indirectly as stereotype threat (Steele & Aronson, 1995). Many hypotheses have been put forth regarding formation of the stuttering stereotype, prominent among them being the anchoring-adjustment hypothesis (Guntupalli, Kalinowski, Nanjundeswaran, Saltuklaroglu & Everhart, 2006). In this two-step process, a typical speaker first anchors his judgement based on his own disfluent experiences. He then adjusts his perceptions to make them less negative when he realizes that stuttering in the person who stutters (PWS), unlike his own disfluency, is chronic in nature.

McKinnon et al. (2007) tested the anchoring- adjustment hypothesis for PWS. Analysing ratings obtained for 25 adjectival pairs, they confirmed the presence of a stuttering stereotype reflected in significantly negative ratings given to a PWS as compared to a typical speaker. They also found that ratings given to PWS were similar to those given to a person experiencing temporary disfluency, though slightly less negative, thus confirming the anchoring- adjustment hypothesis for stuttering.

Need for Study:

While stereotypical perceptions towards PWS have been researched extensively, few studies till date have explored the cluttering stereotype (Farrell, Blanchett & Tillery, 2015), and none from India.

In addition to the degree of self-awareness of cluttering (Reichel et al., 2014), an important determinant of the need to seek intervention would be the extent to which cluttering is stigmatized or considered unacceptable. This makes it necessary to explore the presence and nature of a cluttering stereotype.

Given the dire need for public education about cluttering (St Louis et al., 2010), a baseline for the present level of awareness among the general population can be obtained by investigating the cluttering stereotype.

Aim & Objectives:

The primary aim of the present study was to investigate the presence, strength and nature of the cluttering stereotype. A secondary aim was to evaluate if the anchoring- adjustment process can account for the formation of such a stereotype.

Objectives:

  1. To compare respondents' perceptions of a typical male and a PWC (male)
  2. To correlate respondents' perceptions of a PWC (male) and a male who clutters temporarily
  3. To compare respondents' perceptions of a PWC (male) and a male who clutters temporarily


Method:

Participants:

The study design replicated that of McKinnon et al.'s (2007) study on the stuttering stereotype through anchoring-adjustment. Sixty students (27 males, 33 females; 16-24 years) from fields excluding speech-language pathology consented to participate in the study. Participants were randomly assigned to Group I (N=30; mean age=19.8 years; SD=0.48; 14 males, 16 females) and Group II (N=30; mean age=19.6 years; SD=1.8; 13 males, 17 females).

Material:

A 25-item, 7-point bipolar adjectival pair scale (Woods & Williams, 1976) was used to elicit participant ratings for an adult male with an uncontrollable clutter (trait cluttering; TC) (Cronbach's alpha=0.80); a normally fluent adult male speaker who suddenly begins to clutter for a short time (state cluttering; SC) (Cronbach's alpha=0.81); and a typical male (TM) (Cronbach's alpha=0.78). The scale was a balanced Likert-type scale, with 1 and 7 signifying the extreme ratings on each adjectival pair (e.g. extremely shy at one end), while a rating of 4 was considered neutral.

Procedure:

Group I rated the TC and SC presented in randomized order, while Group II rated the TM. The only addition to the original study design was a short paragraph on cluttering which was given to participants in Group II before the task, to briefly orient them about the symptomatology of cluttering.

Statistics:

A comparison was made between mean scores obtained for each of the 25 adjectival pairs for TM and TC using a multivariate independent samples t test to investigate if a cluttering stereotype exists. After repolarising the scales to minimize variances, Pearson's product moment correlation coefficient was computed for the ratings given to the SC and TC to test the anchoring phase of the hypothesis. Finally, ratings for SC and TC were compared using a multivariate paired t test to evaluate the adjustment phase of the hypothesis.
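
The univariate parts of this analysis (the anchoring correlation and the per-trait adjustment comparison) could be sketched in Python as below; the multivariate tests (Pillai's trace) reported in the Results are not reproduced here, and the simulated ratings are illustrative only.

  import numpy as np
  from scipy import stats

  rng = np.random.default_rng(3)
  n_raters, n_traits = 30, 25
  tc = rng.integers(1, 8, size=(n_raters, n_traits)).astype(float)   # trait-cluttering ratings (simulated)
  sc = tc + rng.normal(0.0, 0.5, size=(n_raters, n_traits))          # state-cluttering ratings (simulated)

  # Anchoring: correlate the mean rating per adjectival pair for SC and TC.
  r, p = stats.pearsonr(tc.mean(axis=0), sc.mean(axis=0))
  print(f"anchoring check: r = {r:.3f}, p = {p:.4f}")

  # Adjustment: per-trait paired t tests with a Bonferroni-adjusted alpha.
  alpha = 0.05 / n_traits
  for trait in range(n_traits):
      t, p = stats.ttest_rel(sc[:, trait], tc[:, trait])
      if p < alpha:
          print(f"trait {trait + 1}: significant adjustment (t = {t:.2f}, p = {p:.4f})")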

Results & Discussion:

The multivariate independent samples t test revealed a significant difference between the ratings given to the TC and the TM [Pillai's trace=0.824; F (25, 34)=6.38; p<0.001, partial eta2=0.824]. On univariate t tests after applying a Bonferroni adjustment, a statistically significant difference between the TC and TM was seen on 19 of the 25 traits (p<0.002). The TC was thus perceived as significantly less confident, and more nervous, tense, self-conscious, shy, withdrawn, hesitant, insecure, uncooperative, unfriendly, unintelligent, careless, rigid, introverted and unpleasant. These traits might be grouped under three broad themes: high anxiety, low likeability, and poor capability, and termed the cluttering stereotype; a finding in agreement with previous literature (Kvenseth, 2007; St Louis et al., 2010) and in contrast to the stuttering stereotype, which comprised largely anxiety-related traits (McKinnon et al., 2007).

A correlation analysis between the ratings for the TC and SC revealed a strong positive (r=0.925) and statistically significant (p<0.001) correlation, suggesting that the SC and TC were rated as very similar. This could represent the anchoring portion of the anchoring-adjustment hypothesis. However, a multivariate paired t test revealed no significant difference between the SC and TC ratings [Pillai's trace=0.90; F (25, 5)=194.82; p<0.001, partial eta2=0.99], indicating that no adjustment in perceptions had occurred.

This could be due to a lack of adequate knowledge about cluttering, which might have led to the adjustment process stopping too soon (Epley, Keysar, Van Boven & Gilovich, 2004), so that the SC and TC ratings were almost similar. Alternatively, the respondents may not have considered cluttering to be as unacceptable as stuttering (Kelkar & Mukundan, 2016) and therefore did not feel the need to adjust for a chronic condition, leaving the trait ratings almost the same as the state ratings. Analogous to the Anticipatory Struggle theory (Bloodstein, 2005) for stuttering, then, PWC might not feel motivated for therapy, probably because their speech is not considered unacceptable.

A third possible reason for these results could be that unlike stuttering, respondents had never experienced cluttering in their own speech, because of which they could not anchor their judgments based on their own experiences. In other words, though the cluttering stereotype does exist, it might not form through the anchoring- adjustment process.

Summary & Conclusion:

The study provides evidence of a strong cluttering stereotype involving negative judgements about capability, likeability and confidence. Its formation through anchoring-adjustment, unlike stuttering (McKinnon et al., 2007), however, cannot be unambiguously confirmed based solely on the findings of the present study.

Future studies could explore if variables like gender, culture, socioeconomic status, or exposure to educational material on cluttering affect the cluttering stereotype. Educating the target population about cluttering more adequately than the present study and testing the anchoring- adjustment hypothesis for specific professional or cultural groups in which these traits are likely to be considered unacceptable could take researchers a step closer to concluding about the contribution of anchoring-adjustment in stereotype formation for cluttering.


  Abstract – SO166: Voice and Speech as an Early Biomarker in Assessing Changes in Cognition in Elderly: A Pilot Study Top


Anindita Arun1 & Gayatri Hattiangadi2

1aninditabanik20@gmail.com &2gayatrislp@gmail.com

1Topiwala National Medical College, Mumbai - 4000008

Introduction:

The word dementia describes a group of symptoms that may include memory loss; difficulties with thinking, problem solving or language; and often changes in mood, perception and behavior: multiple cognitive deficits causing significant impairment in social and occupational functioning. The most common types are Alzheimer's disease (AD), vascular dementia, dementia with Lewy bodies, frontotemporal dementia, mixed dementia and mild cognitive impairment (MCI). A biomarker is defined as a characteristic that is objectively measured and evaluated as an indicator of normal biological processes, pathogenic processes, or pharmacological responses to therapeutic intervention (Ther, 2001). Patients also have a range of communicative impairments, from cognitive and language impairments to speech impairments, affecting one or more processes of speech.

Need for Study:

There are a range of changes in speech and voice which happen in various cases with dementia. Hence, the need is to evaluate and understand the changes in voice and speech in patients with acquired cognitive impairment and consider if voice and speech could be a biomarker to cognitive decline.

Aim & Objectives:

To evaluate speech and voice as a biomarker in individuals with cognitive changes, i.e. patients with MCI and dementia.

Method:

This was a cross-sectional observational study.

Subject selection criteria

Inclusion criteria for Dementia and MCI patients and healthy controls:

Males and females aged 55 years or above, educated up to the 8th grade or beyond, were included. Dementia (major neurocognitive disorder) and MCI (mild neurocognitive disorder) were diagnosed as per the DSM-5 criteria. A similar age range was considered for the healthy controls.

Exclusion criteria for all participants:

Patients with acute psychiatric illness, those not willing or able to participate in the study, and those with language comprehension deficits (e.g. Wernicke's or Broca's aphasia), traumatic brain injury, right hemisphere impairment, subcortical aphasia, or primary progressive aphasia were excluded.

Tools used:

  1. Case record form.
  2. Clinical dementia rating scale (CDRS) - for the evaluation of staging severity of dementia.
  3. ACE-III - Addenbrooke's Cognitive Examination, which encompasses tests of 5 cognitive domains: attention/orientation, memory, language, verbal fluency and visuospatial skills.
  4. Montreal cognitive assessment (MOCA) - It is a rapid screening instrument for mild cognitive dysfunction.
  5. For perceptual analysis - GRBAS scale and CAPE V. These are scales used to evaluate the voice characteristics perceptually by the clinician.
  6. For acoustic analysis- Computerised Speech Lab- Multidimensional Voice Program (MDVP), Motor Speech Profile (MSP).


Procedure: It was a collaborative study between the Psychiatry unit (geriatric) and the Audiology and Speech Therapy department in a multidisciplinary tertiary care hospital. Participants with dementia or MCI and healthy controls, along with their relatives, were approached; the study was explained to them, and those who met the inclusion criteria were included after written informed consent. The interviewer from the psychiatry department performed the cognitive examination to diagnose the severity of dementia clinically using the MOCA, ACE-III and CDRS. After the diagnosis by the psychiatry department, speech and voice assessments were carried out using perceptual scales and acoustic analysis. Perceptual analyses were done using the GRBAS scale and CAPE-V, and acoustic analyses were done using the computerized speech lab software, i.e. the Multidimensional Voice Program and Motor Speech Profile.

Statistical analysis method:

Data analysis was done with the help of SPSS software version 21. Qualitative data were analysed using descriptive statistics, and appropriate parametric and non-parametric tests of association, such as one-way ANOVA and correlation analysis using Pearson's correlation, were carried out.
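
As an illustration of the group comparison described here, a minimal Python sketch of a one-way ANOVA with a simple Bonferroni-corrected pairwise follow-up is given below, assuming one acoustic value (e.g., mean Fo in Hz) per participant in each of the three groups; the numbers are illustrative, not the study's data.

  import numpy as np
  from scipy import stats

  healthy  = np.array([160.2, 155.8, 171.4, 149.9, 162.3, 158.7, 166.1, 153.4, 159.0, 164.5])
  mci      = np.array([141.0, 147.2, 139.5, 150.1, 143.8, 138.9, 145.6, 142.3, 140.7, 144.0])
  dementia = np.array([188.4, 179.6, 192.1, 181.3, 190.7, 184.2, 186.9, 183.5, 189.0, 185.8])

  # One-way ANOVA across the three groups.
  f_stat, p_value = stats.f_oneway(healthy, mci, dementia)
  print(f"one-way ANOVA: F = {f_stat:.2f}, p = {p_value:.4f}")

  # Simple pairwise follow-up with a Bonferroni correction, as one possible post hoc.
  pairs = {"healthy vs MCI": (healthy, mci),
           "healthy vs dementia": (healthy, dementia),
           "MCI vs dementia": (mci, dementia)}
  for name, (a, b) in pairs.items():
      t, p = stats.ttest_ind(a, b)
      print(f"{name}: t = {t:.2f}, Bonferroni-corrected p = {min(p * 3, 1.0):.4f}")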

Results & Discussion:

Socio demographic profile:

There were 30 participants in total, with 10 in each of the 3 groups, i.e. healthy controls, MCI and dementia. Pearson's chi-square test showed a significant association between age and severity of cognitive decline among the MCI and dementia groups (p=0.045). There was no significant association with marital status, education, working status, hobbies, or exercise.

Gender comparison with cognition:

There was a significant association on Pearson chi square (p= 0.015) amongst the 3 groups.

Population distribution:

Based on Pearson's correlation test, significant associations were observed for the ACE, MOCA and CDRS scores. The age of the participants ranged from 55 to 81 years, with mean ages of 59, 63 and 65 years for the healthy, MCI and dementia groups respectively. Seventeen participants were female and 13 were male. All the participants were married; 13 were educated up to the 8th grade and the remaining up to the 10th grade and above. There were more individuals with dementia above 70 years of age and more with MCI between 60 and 64 years of age. ACE scores identified the participants (n=20) who were likely to have dementia. As per CDRS scores, 24 patients had questionable impairment, 3 had very mild dementia, and 1 each had mild and moderate dementia. The ACE-III showed a significant association for individuals who were likely to have dementia. On the MOCA, educational status showed a significant association in the dementia group.

Acoustic parameters:

On MDVP, significant differences were found in Fo, jitter percent and syllabic rate. One-way ANOVA showed a significant difference (p=0.02907) between MCI (mean Fo=143.090 Hz) and dementia (mean Fo=186.485 Hz), with a higher mean Fo in the dementia group. There was a significant difference in jitter percent (p=0.0167) between MCI (mean=2.919%) and dementia (mean=2.701%), with a higher jitter percent in the MCI group. Differences were also observed clinically in soft phonation index, noise-to-harmonics ratio, voice turbulence index, degree of voiceless and degree of voice breaks; however, these were not statistically significant.

On MSP, there was a statistically significant difference (p=0.04232) in syllabic rate, with means of 4.157/s in MCI and 5.882/s in dementia, both lower than in the healthy group (mean=6.018/s). Clinical differences were also seen in the values of F2 magnitude, F2 rate and the intonation profile in running speech; however, these were not statistically significant. Clinically, the lower F2 rate and longer duration of F2 magnitude indicate slower tongue motility.

Post hoc analysis of the significant parameters revealed that average Fo differed significantly between the healthy and MCI groups (p=0.038); jitter percent differed significantly between the healthy and MCI groups (p<0.05) and between the healthy and dementia groups (p<0.05); and syllabic rate differed significantly between the healthy and MCI groups (p<0.05).

Perceptual analysis:

On the GRBAS, no significant findings or correlations were observed among the three groups. On the CAPE-V, there was a significant finding for the roughness parameter (p=0.013). Post hoc analysis of the roughness scale showed a p value of 0.036 on applying the Pearson chi-square test. The roughness scale of the CAPE-V is a perceptual correlate of irregularity in vocal fold vibration.

The change in Fo with ageing is due to changes in laryngeal structures. Compared with the healthy population, the MCI patients' Fo was low, which can help in very early detection of dementia. The higher the jitter percent, the greater the cycle-to-cycle variation in the frequency of the voice; this was highest in the MCI group. The significant difference in jitter percent therefore suggests that a slight decline in cognition (MCI) can be picked up with the help of this acoustic parameter. Syllabic rate had the lowest value in the MCI group compared to the healthy and dementia populations; since it is lower than in the healthy population, these participants took more time to produce syllables at the sentence level. Similar findings were reported by Juan J et al. (2012): variables relating to F0, the main parameter for analysing the intonation and melodic curve of any vocalization, as well as fluctuations in voice frequency and amplitude (jitter), correlated with the cognitive status of the participants. An increase in voiceless segments is also related to AD and to some of the cognitive impairments associated with it.

The voiceless segments in speech have recently been used as one of the most important parameters in distinguishing between normal voice and voice pathology (Paulraj et al, 2010).

The speech of elderly individuals is perceived as slower and more imprecise, with long pauses, less intense, lower in pitch, and more hoarse and shaky (Linville, 2001), which can be correlated with the lower syllabic rates in MCI and dementia, along with fluctuations in the amplitude of the fundamental frequency (shimmer) and spectral noise (noise-to-harmonics ratio) (Brückl & Sendlmeier, 2003).

Summary & Conclusion:

We can infer that acoustic parameters are of significant value in diagnosing voice and speech pattern changes in the elderly population transitioning from healthy cognition to cognitive impairment. These parameters can aid the detection of early cognitive impairment, leading to early intervention and thereby improved quality of life, and hence can be used as early clinical biomarkers.


  Abstract – SO167: Evidence based Dysphagia Management Post Neck and Cardiac Surgery: A Retrospective Case Series Top


Anindita Arun1 & Gayatri Hattiangadi2

1aninditabanik20@gmail.com &2gayatrislp@gmail.com

1Topiwala National Medical College, Mumbai - 4000008

Introduction:

Swallowing is a highly complex neuromuscular event. Frequent causes of dysphagia include stroke, neurodegenerative disorders, chemoirradiation, cricopharyngeal dysfunction, malignancies, esophageal stenosis, neck surgeries and traumatic brain injuries (Papadopoulou et al, 2012). Vocal fold paralysis following recurrent laryngeal nerve (RLN) injury after thyroidectomy and cardiac surgery causes dysphagic symptoms such as coughing, choking, and aspiration pneumonia. However, non-surgical options such as speech and swallowing rehabilitation help in improving voice and swallowing functions in post-surgical cases.

Need for Study:

Dysphagia management requires intensive therapy in severe cases of dysphagia post neck and heart surgeries. The SLP needs to give such a therapy strategy a fair trial on a case by case basis and thus collect ongoing evidence base of a therapy regimen. This is important in the Indian context where there is a paucity of evidence based studies.

Aim & Objectives:

To administer traditional dysphagia therapy and to compare the pre and post effects of traditional dysphagia therapy in individuals post neck and cardiac surgery.

Method:

This is a retrospective study of 7 cases who were admitted to an acute care set-up for neck and cardiac surgeries, discharged when stable, and referred for speech and swallowing therapy. The patients were assessed for speech and language skills. Swallowing assessment to determine severity was done using the Nair Hospital Bedside Swallowing Assessment (NHBSA) and the Nair Hospital Swallowing Ability Scale (NHSAS). Severity was measured before and after therapy. FEES/barium swallow was done after completion of therapy, not before therapy, owing to the severity and risk of aspiration. The therapy plan included a first few (4-5) sessions focused only on improving the swallowing mechanism on dry swallow, followed by trial feeds. Session-wise reports were based on the case history, therapy goals, number of sessions, and the effect of traditional therapy on the swallowing mechanism. The parameters assessed and compared were the duration of swallows using the Four Finger Test for swallowing and the ability to swallow/duration of swallow for different consistencies of food, i.e. solids, semisolids, thin liquids and dry swallow. Comparisons of swallowing skills were made before and after therapy.

Results & Discussion:

Pre-therapy assessment on the NHSAS and NHBSA for all cases revealed moderately severe to complete pharyngeal dysphagia. Those with severe to complete dysphagia had difficulty swallowing any consistency, were on Ryles tube feeds, and showed inadequate hyolaryngeal excursion, a wet, gurgly voice, coughing before and after the swallow, no volitional attempts to clear the throat, and poor management of secretions. Those with moderately severe dysphagia had difficulty swallowing with audible aspiration, aspiration despite modification of consistency, secretions managed with difficulty, inadequate hyolaryngeal excursion, a wet, gurgly voice, and few volitional attempts to clear the throat. The therapy regimen included rehabilitative maneuvers, compensatory strategies, oromotor exercises, and the pushing/pulling technique for vocal cord adduction. Electrical stimulation was not done due to abnormal ECG. Post therapy, these patients were decannulated from tracheostomy and the Ryles tube was removed.

Case 1: N.S., 35-year-old female, with bilateral vocal cord palsy post total thyroidectomy, on tracheostomy and Ryles tube feeds. She received a total of 20 sessions of traditional dysphagia therapy. After the 5th session, she could produce voicing with the pushing technique, with an MPD of 8 seconds. From the 6th session onwards, there was complete hyolaryngeal excursion on dry swallow. There was a reduced amount of secretions, the frequency of suctioning had reduced, and there was no wet, gurgly voice and no secretions. After the 15th session, she was able to take all consistencies orally, and swallowing duration was within normal limits. FEES showed no pooling of saliva, bilateral vocal cords fixed, and no evidence of aspiration or penetration.

Case 2: H.H., 27-year-old female, with left vocal cord palsy post total thyroidectomy, on Ryles tube feeds. A total of 7 sessions helped in improving voice and the swallowing mechanism. With pushing and pulling, she could produce voicing with an MPD of 10 seconds and could speak in sentences. There was complete hyolaryngeal excursion on dry swallow with the Masako maneuver, and she was able to swallow thin and thick liquids, semisolids and solids in full-spoon quantities with chin tuck and head rotation to the better side. There was no aspiration, and swallowing duration was within normal limits.

Case 3: R.J., 48-year-old female, with left vocal cord palsy post total thyroidectomy for papillary carcinoma of the thyroid, on Ryles tube feeds. A total of 9 sessions of therapy helped in improving loudness of voice using the pushing/pulling technique; she could phonate up to 10 seconds with no breathiness. After the 3rd session, she could swallow all consistencies in full-spoon quantities with head rotation to the better side and a chin tuck posture, with complete hyolaryngeal excursion with the Masako maneuver, no aspiration and no wet, gurgly voice. Post therapy, FEES showed bilateral vocal cords mobile and no aspiration, and she was eventually on oral feeds for nutrition.

Case 4: C.P., 25-year-old female, with left vocal cord palsy post open heart surgery, a known case of rheumatic heart disease with severe cardiovascular dysfunction due to mitral stenosis, on Ryles tube feeds. After the 5th session, she could produce voicing with the pushing technique and her MPD increased. There was complete hyolaryngeal excursion on dry swallow with the Masako and guided Mendelsohn maneuvers; after the 8th session she was able to swallow all consistencies in full-spoon quantities without any maneuver and with no aspiration, and swallowing duration was within normal limits. FEES showed no pooling of saliva, bilateral vocal cords fixed, and no evidence of aspiration or penetration.

Case 5: P.P., 18-year-old male, with restricted mobility of the right vocal cord post open heart surgery for an atrial septal defect, on Ryles tube feeds. After the 3rd session, he could swallow all consistencies in full-spoon quantities with head rotation to the better side and a chin tuck posture, with increased loudness of voice using the pushing technique, no aspiration and complete hyolaryngeal excursion. Post therapy, 70° scopy showed bilateral vocal cords mobile and no pooling of saliva, and he was eventually on oral feeds for nutrition.

Case 6: S.B., 35-year-old female, with bilateral vocal cord palsy post open heart surgery, a known case of congestive heart failure and rheumatic heart disease, on tracheostomy and Ryles tube feeds. She attended a total of 20 sessions of traditional dysphagia therapy. After the 5th session, she could produce voicing with an MPD of up to 8 seconds. From the 9th session onwards, there was complete hyolaryngeal excursion on dry swallow with the Masako and Mendelsohn maneuvers. After the 15th session, she was able to take all consistencies orally, and swallowing duration was within normal limits. FEES showed no pooling of saliva, bilateral vocal cords fixed, and no evidence of aspiration or penetration. Barium swallow showed rapid passage of barium into the esophagus.

Case 7: R.S., 23-year-old male, with right vocal cord palsy post open heart surgery (mitral valve replacement and peridirectomy) and post intubation, on Ryles tube for nutrition. A total of 10 sessions helped him produce voicing using the pushing exercise, with increased loudness and no breathiness. After the 4th session, he could swallow all consistencies in full-spoon quantities with head rotation to the better side and a chin tuck posture, with no aspiration and swallow duration within normal limits. Post therapy, 70° scopy showed bilateral vocal cords mobile.

Injury to the recurrent laryngeal nerve is a major concern in thyroid and parathyroid surgery (Hazem et al, 2011). Mechanisms of injury to the nerve include complete or partial transection, traction or handling of the nerve, contusion, crush, burn, clamping, misplaced ligature, and compromised blood supply (Steurer et al, 2002). In unilateral vocal cord injury, traction injury of the nerve and damage to axons may result in dysphonia lasting up to 6 months. Bilateral RLN injury is much more serious, because both vocal cords may assume a median or paramedian position and cause airway obstruction, and tracheostomy may be required (Marcus et al, 2003). This may explain why the 2 cases with bilateral vocal cord palsy, who were on tracheostomy and Ryles tube feeds, had severe to complete dysphagia, whereas the others, who were on Ryles tube feeds alone, had moderately severe dysphagia.

The left side is usually more affected than the right in view of its long intrathoracic segment. A few cases of right vocal cord paralysis post open-heart surgery have been reported (Hamdan et al, 2010), similar to this study, leading to dysphonia and dysphagia. Holistic management using vocal cord adduction techniques improves voicing, while swallowing rehabilitative maneuvers such as the Masako help to increase the range of motion of the tongue base and throat muscles, and the Mendelsohn helps to close the airway at the vocal fold level before and during the swallow, to increase tongue base retraction and pressure generation, to clear residue after the swallow, and to improve hyolaryngeal excursion (Michelle, 2012). In the present study, a combination of all these swallowing exercises helped in improving the swallowing mechanism and eventually enabled all patients to take oral feeds successfully.

Summary & Conclusion:

There is an advantage in utilizing traditional dysphagia therapy in surgical cases, and it is imperative to establish whether or not it has greater efficacy. The emergence of data from more rigorous and well-designed clinical outcome studies will surely advance our understanding of this technique and contribute to the collection of data towards evidence-based therapy.


  Abstract – SO169: Immediate Effects of the Straw Phonation Exercise on Vocal Loading in Carnatic Classical Singers Top


Devika Vinod1, Usha Devadas2 & Santosh Maruthy3

1devikavinod1624@gmail.com,2usha.d@manipal.edu, &3santoshm79@gmail.com

1All India Institute of Speech and Hearing, Mysuru - 570006

Introduction:

In the past, several attempts have been made to understand the anatomical and physiological changes that occur in the larynx after prolonged and consistent use of the voice. One of the approaches used in these studies, called vocal loading, involves challenging the optimal functioning of the larynx. Vocal loading typically refers to any condition that stresses or challenges the optimal functioning of the laryngeal system and pushes the system towards the limits of its functional and physiological range (Fujiki & Sivasankar, 2017). Just as persons are made to run on a treadmill to stress their physiological system and assess their cardiac functioning, the larynx can also be stressed by manipulating both internal and external factors. The most commonly used vocal loading task is loud reading with various intrinsic manipulations, such as reading at different intensity levels, using forced mouth-breathing while reading, loud reading using a pressed voice quality at high and low pitch range, and reading aloud for 40 minutes in multi-talker babble noise. Other tasks used in the literature are repetition of sustained vowels, 45 minutes of child-directed speech in the presence of 65 dB multi-talker babble background noise, phonating vowels, and singing.

Need for Study:

Since singers are elite vocal performers and singing demands a healthy vocal mechanism, it is necessary to study the effect of vocal loading in singers. In doing so, we can identify those singers who are susceptible to developing voice problems. As their vocal folds have to regularly traverse up and down the scale and sustain high intensities, we hypothesize that singers, in our case Carnatic classical singers, may show different vocal loading effects when compared to other professional voice users and non-singers.

Likewise, the regular and disciplined practice of vocal warm-up exercises is an essential component of singing. Several studies in the literature have reported, semi-occluded vocal tract exercises (SOVTs) such as straw phonation and humming yield optimal vocal fold adduction and improve singing voice. However, the immediate effects of SOVTs such as straw phonation on vocal loading in singers are not well understood.

Aim & Objectives:

  1. To compare acoustic, electroglottographic, aerodynamic and self-perceptual measures before and after vocal loading task in Carnatic classical singers
  2. To investigate the immediate effects of straw phonation on vocal loading in Carnatic classical singers.


Method:

A total of 11 active Carnatic classical singers (2 males & 9 females), between the age range of 17-45 years, participated for two sessions. Counterbalancing technique was followed for the present experimental study, where each of the two sessions represented a non-treatment (Day-1) and treatment (Day-2) condition. The sessions included a vocal loading task of continuous singing for an hour in the presence of background noise (multi-talker babble of 65-70dB), without water intake. During the treatment condition, just before the vocal loading task, the participants followed an eight-minute straw phonation exercise as demonstrated by the investigator, while for the non- treatment session participants were subjected to the vocal loading task without any treatment.

All the participants were evaluated before and after the vocal loading challenge for acoustic, electroglottographic, aerodynamic, and self-perceptual voice measures and identical data collection procedures were carried out for both sessions. Phonation and reading samples were obtained from each participant on both occasions. Among the acoustic measures; Fundamental Frequency (F0), Lowest Fundamental Frequency (LF0), Highest Fundamental Frequency (HF0), Absolute Jitter, Jitter Percent, Shimmer in dB, Shimmer Percent and noise to the harmonic ratio (NHR) were considered. Among the electroglottographic measures; Contact Quotient (CQ), Open Quotient (OQ) and Speed Quotient (SQ) were considered. The Cepstral Peak Prominence (CPP) and smoothened Cepstral Peak Prominence (sCPP) were the included Cepstral measures. The Perceived Phonatory Effort (PPE), Perceived Vocal Tiredness (PVT), and the Evaluation of the Ability to Sing Easily (EASE) were the self perceptual rating scales included in the study. The Maximum Phonation Duration was calculated for /a/, /i/, /u/, /s/ and /z/, as aerodynamic parameters.

SPSS version 21 was used, and descriptive statistics were done to obtain the mean and standard deviation of the extracted parameters. As the data were non-normally distributed, Wilcoxon signed ranks test was carried out to find out the significant changes, if any, on the extracted acoustic, electroglottographic, aerodynamic and self-rated voice measures.
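
A minimal Python sketch of this kind of pre/post comparison, assuming paired self-ratings (e.g., PPE) from the 11 singers before and after the loading task; the values are illustrative only.

  import numpy as np
  from scipy import stats

  pre  = np.array([2.0, 3.0, 2.5, 1.5, 2.0, 3.5, 2.0, 2.5, 3.0, 2.0, 1.5])  # 11 singers, pre-loading (simulated)
  post = np.array([4.0, 4.5, 3.0, 3.5, 4.0, 5.0, 3.0, 4.5, 4.0, 3.5, 3.0])  # same singers, post-loading (simulated)

  # Non-parametric paired comparison of the pre- and post-loading ratings.
  statistic, p_value = stats.wilcoxon(pre, post)
  print(f"Wilcoxon signed-rank test: W = {statistic:.1f}, p = {p_value:.4f}")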

Results & Discussion:

The first objective of the present study was to compare acoustic, electroglottographic, aerodynamic, and self-perceptual measures before and after the vocal loading task in Carnatic classical singers. None of the acoustic and electroglottographic parameters showed a significant difference (p>0.05) before and after the vocal loading task. This included F0, F0-related measures, shimmer, NHR, cepstral, and electroglottographic measures. The results are in agreement with earlier studies on vocal loading. Previously, authors have noted parameters like jitter, shimmer (Verstraete, Forrez, Mertens, & Debruyne, 1993) and CPP (Buekers, 1998; Gorham-Rowan et al., 2016) to have limited sensitivity to laryngeal changes induced by a vocal loading task. Studies have also shown no change in individuals' F0 with vocal loading tasks (De Bodt et al., 1998; Neils & Yairi, 1987).

However, the self-perceptual measures PPE (Z=-1.95, p=0.02), PVT (Z=-2.04, p=0.04), and EASE (Z=-2.67, p=0.00) were noted to increase with vocal loading, showing a significant difference at p<0.05. The current results are in consensus with several other studies in which PPE is considered a major measure sensitive to vocal loading (Södersten, Ternström & Bohman, 2005; Enflo et al., 2013; Chang & Karnell, 2004). Maximum phonation duration of /a/ alone was significantly reduced (Z=-2.13, p=0.03).

The second objective was to investigate the immediate effects of straw phonation on vocal loading in Carnatic classical singers. It was observed that, except for PVT (Z=-2.40, p=0.01), no measures showed a significant difference before and after the vocal loading task. That is, when the singers performed an eight-minute straw phonation exercise before the vocal loading task, there was no change in the voice measures except PVT. In other words, there was an immediate effect of the straw phonation exercise on vocal loading among Carnatic singers. This could be attributed to the various benefits of straw phonation into water. With straw phonation, the vertical laryngeal position is known to lower, which is considered to reduce muscle tension (Wistbacka, Sundberg, & Simberg, 2016). Through straw phonation, bubbles are produced at the surface of the water; this, in turn, generates a pulsating oral pressure which could act as a massage to the laryngeal and pharyngeal tissues (Guzman et al., 2017).

Summary & Conclusion:

The results of the present study highlight the immediate effect of straw phonation on vocal loading. Straw phonation could thus be considered as a warm-up exercise among singers before they participate in a concert, which could also be considered as a vocal loading task. However, the results of the present study need to be strengthened considering larger sample size and across different vocal loading tasks.


  Abstract – SO170: Acoustical Analysis of Vowel Production in Native Kannada Speaking Individuals with Aphasia Top


Vasupradaa M1 & Hema N2

1vasupradaa.1995@gmail.com &2hema_chari2@yahoo.com

1All India Institute of Speech and Hearing, Mysuru - 570006

Introduction:

Aphasia is an impairment of language affecting the production or comprehension of speech and the ability to read or write, due to brain injury such as stroke, head trauma, brain tumors, or infections. It mainly affects aspects of language use; however, multiple aspects of speech are also impaired, such as prosody. Speech production in aphasia is described as effortful, nonfluent and dysprosodic (Goodglass & Kaplan, 1972). Acoustic parameters like fundamental frequency, formant frequencies, duration, and intensity are the correlates of the perceived melodic contour and rhythm of speech. Several of these temporal parameters are affected in disordered speech, contributing to poor speech intelligibility, and they can be assessed through vowel production. Vocal tract shaping filters the glottal source of sound to produce different vowel sounds. Different speakers, with different-sized vocal tracts and different articulatory habits, produce different formant frequencies and different durational patterns. Specifically, the centre frequency of the lowest resonance of the vocal tract, called the first formant frequency or F1, corresponds closely to vowel height. The second formant frequency, F2, reflects the placement of the tongue during production of the vowel. F3 is more responsive to front and back constriction. With the speech intelligibility of individuals with aphasia in mind, the present study investigates formant frequencies and vowel durations in Kannada-speaking individuals with aphasia in comparison with neuro-typical speakers.

Need for Study:

Only a few major studies have examined the acoustic analysis of vowel production in aphasia. There is a need to identify and understand the major acoustic aspects of vowel production in Kannada-speaking individuals with aphasia, which would further pave the way for their better assessment and management. Hence the present study examined the acoustics of vowel production in Kannada-speaking individuals with aphasia.

Aim & Objectives:

The aim of the present study is to investigate the acoustical aspects of vowels in Kannada speaking individuals with aphasia.

Objectives

  1. To identify and compare the vowel duration between individuals with aphasia and the neuro-typical.
  2. To compare the duration of vowels preceding voiced and voiceless consonants in the target words produced by individuals with aphasia.
  3. To identify and compare the formant frequencies (F1, F2 & F3) of vowels (/a/, /i/,/u/) in the target word productions of individuals with aphasia in comparison with neuro-typical.


Method:

Five individuals with a diagnosis of aphasia (3 with Broca's aphasia and 2 with anomic aphasia) and five neuro-typical speakers in the age range of 20-50 years participated in this study. All participants were asked to phonate the vowels /a/, /i/ and /u/ comfortably in a relaxed position, to obtain the formant frequencies, and were also asked to read target words consisting of short Kannada vowels (/i/, /a/, /u/) preceding the voiced consonant /g/ and the voiceless consonant /k/ in a CVCV combination (Kapali & Savithri, 2011). The words were printed on 4 x 6-inch cards in a large font and presented three times each in quasi-random order. The initial vowel in each word was truncated and analyzed to obtain vowel duration, whereas the second vowel was considered a supporting vowel. If a participant was not able to read the target word, they were asked to repeat the target word read by the investigator. An average of 10 words (out of 18) was repeated for the individuals with aphasia. All the tasks (phonation, reading or repetition) were audio recorded using an Olympus LS-100 voice recorder. From these audio-recorded speech samples, the formant frequencies and durations of the short Kannada initial vowels /i/, /a/ and /u/ preceding the voiced /g/ and voiceless /k/ consonants in the CVCV combinations were truncated, measured and analyzed using Praat software. The effect of voicing on vowel duration was also investigated.
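
One possible way to script measurements of this kind is sketched below using the praat-parselmouth Python wrapper (the study itself used Praat directly); the file name and vowel boundary times are hypothetical placeholders, assuming the initial vowel has already been segmented manually.

  import parselmouth

  snd = parselmouth.Sound("cvcv_token.wav")   # hypothetical recorded token
  vowel_start, vowel_end = 0.120, 0.245       # hypothetical boundaries of the initial vowel (seconds)

  # Vowel duration is the interval between the marked boundaries.
  vowel_duration_ms = (vowel_end - vowel_start) * 1000.0
  print(f"vowel duration = {vowel_duration_ms:.1f} ms")

  # Formant frequencies (F1-F3) at the vowel midpoint, via Burg analysis.
  formants = snd.to_formant_burg()
  midpoint = (vowel_start + vowel_end) / 2.0
  for n in (1, 2, 3):
      value = formants.get_value_at_time(n, midpoint)
      print(f"F{n} at vowel midpoint = {value:.0f} Hz")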

Results & Discussion:

Descriptive statistics showed that mean vowel duration was longer for vowels preceding voiced consonants than for vowels preceding voiceless consonants in both participant groups, and longer for individuals with aphasia than for neuro-typical individuals.

Further statistical analysis was carried out using the Mann-Whitney U test to compare the groups. The results showed a significant difference (p<0.05) between the individuals with aphasia and the neuro-typical speakers in the duration of vowels preceding a voiceless consonant, while vowels preceding a voiced consonant did not show any statistically significant difference between the groups. Thus, the finding for the first objective is a longer vowel duration in individuals with aphasia compared to neuro-typical speakers. The Wilcoxon signed-rank test was used to check, within the aphasia group, the difference in duration between vowels preceding voiced versus voiceless consonants; the results showed no significant difference, answering the second objective. The descriptive mean values showed that F2 of /i/ was higher in controls than in individuals with aphasia, whereas the F1 and F2 values of /u/ were relatively higher in the aphasia group than in controls. The Mann-Whitney U test showed no significant difference between the two groups for the formant frequencies (F1, F2 & F3) of the vowels /a/, /i/ and /u/, except for F1 of /u/ (p<0.01) and F2 of /i/ (p<0.01) and /u/ (p<0.01).

Discussion:

The results reveal that individuals with aphasia had more variable vowel durations than neuro-typical individuals speaking Kannada. Specifically, this result is in agreement with Ryalls (1986), who found a main effect for vowel duration means and standard deviations, with values significantly greater for anterior aphasics than for normal speakers, but not significantly different from posterior aphasics. When the effect of voiced versus voiceless consonants was compared, the results revealed a significant effect on vowel duration in Kannada-speaking individuals with aphasia. The third objective revealed no significant difference in vowel formant frequencies between controls and individuals with aphasia, except for F1 of /u/ and F2 of /i/ and /u/. This result is in line with the study by Haley and colleagues (2001), which demonstrated vowel formant variability and greater variance in participants with aphasia compared to controls, though not statistically significant.

Summary & Conclusion:

Finally, this study draws conclusions about the acoustic aspects of timing deficits in individuals with aphasia, which would contribute to overall speech intelligibility. The findings may also have clinical applications, with speech intelligibility considered as a separate goal alongside everyday speech and language therapy.


  Abstract – SO171: Acoustic and Videolaryngoscopic Changes Post Vocal Abuse at College Fest Top


Anuradha1 & Sanjay Kumar2

1anuradha2ks@yahoo.com &2sanjaymunjal1@hotmail.com

1 Post-Graduate Institute for Medical Education and Research, Chandigarh - 160012

Introduction:

Straining or continuous shouting leads to trauma to laryngeal structures. Higher vocal demand leads to structural as well as physiological alterations of the vocal folds, in turn resulting in various vocal pathologies such as nodules, haemorrhagic cysts, edema and phonatory gap.

Previous researchers have established that the duration and intensity of voice usage are the most critical loading factors for the voice. In addition to vocal demands, various other specific risk factors such as loud background noise, poor posture and vocal cord dryness increase the chances of a vocal disorder. College fests are a feast for students, who tend to make sudden, excessive and unrestricted vocal cord contact, which results in phonotrauma.

Need for Study:

To the best of our knowledge, there are no studies in the literature reporting the effect of short-term vocal abuse among youth during concerts and college fests and its repercussions on the voice.

Aim & Objectives:

The present study aimed to study the changes in vocal characteristics following a vocally demanding situation (college fest).

Objectives:

  1. To determine whether short term vocal abuse has an impact on the voice, via video laryngoscopic changes.
  2. To assess the Acoustic changes in vocal parameters pre and post college fest.
  3. To record any perceptual voice changes on GRBAS after excessive vocal usage at fest.


Method:

In this study, a total of 30 undergraduate students within the age range of 18-27 years were included. All the participants were female students of paramedical courses running at the tertiary health care centre. Students with a history of any vocal pathology, GERD, hearing loss, systemic disease or drug use were excluded from the study. All the participants attended the college fest and were observed to be shouting/hooting at the fest. A videolaryngoscopic examination was done prior to the fest to ensure the absence of any pre-existing laryngeal pathology. Vocal fold analysis in terms of the vocal fold free edge, periodicity, glottal closure and symmetry was carried out. A pre-fest voice evaluation with the GRBAS scale, along with acoustic analysis using the Multi-Dimensional Voice Program (MDVP) in CSL software (CSL Model 4500), was carried out. Subjects were asked to phonate the vowel /a/ for 5 seconds from a distance of around 15 cm from the microphone. Three trials were taken, and the best of the three was considered. Pre and post analysis of perceptual parameters was also carried out. Following the college fest, the students underwent repeat GRBAS, MDVP analysis and videolaryngoscopic examination to measure any changes.

Data analysis: Three experienced judges analyzed the perceptual data of participants' voice quality following the GRBAS scale. Independent rating was given by every member of panel of listeners for each parameter of GRBAS scale.

Data Processing: The pre- and post-test data were processed by comparing the scores obtained for each participant. Comparison of pre- and post-test median values for the acoustic parameters was carried out using the sign test (Dixon and Mood, 1946). A p-value was calculated for each parameter to determine whether the change that occurred was statistically significant.
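
A minimal Python sketch of the sign test used here, assuming paired pre- and post-fest values for one acoustic parameter; the numbers are illustrative, and the exact binomial p-value is computed from the counts of positive and negative differences.

  import numpy as np
  from scipy import stats

  pre  = np.array([0.61, 0.82, 0.55, 0.74, 0.90, 0.68, 0.79, 0.58, 0.66, 0.71])  # e.g., jitter percent (simulated)
  post = np.array([0.95, 1.10, 0.60, 1.02, 1.25, 0.70, 1.15, 0.88, 0.92, 1.05])

  diff = post - pre
  n_pos = int(np.sum(diff > 0))
  n_neg = int(np.sum(diff < 0))

  # Under the null hypothesis the signs are equally likely, so the number of
  # positive differences follows Binomial(n, 0.5); two-sided exact p-value.
  p_value = stats.binomtest(n_pos, n_pos + n_neg, 0.5, alternative="two-sided").pvalue
  print(f"positive differences: {n_pos}, negative: {n_neg}, sign-test p = {p_value:.4f}")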

Results & Discussion:

The laryngovideoscopic (LVS) findings showed that, out of 25 subjects, 3 were diagnosed with vocal nodules following vocal trauma. LVS assessment showed a change in glottic closure (incomplete closure) in 13% of participants, asymmetric vibrations of the two sides in 10%, and aperiodicity in 18% of subjects. The difference between pre and post observations of periodicity and vocal pathology was statistically significant.

The GRBAS parameters in which a change occurred were grade (hoarseness), which increased in 36.67% of participants, breathiness in 33.34%, roughness in 20%, asthenia in 10%, and strain in 30%. The p-value was statistically significant only for the hoarseness and breathiness parameters.

Acoustic analysis: p-value estimation for all the measured acoustic parameters revealed a significant difference in jitter percent, shimmer percent, soft perturbation index, and harmonics-to-noise ratio.

Pinarbasli et al. (2019) conducted a study on soccer fans and reported differences in jitter, shimmer, and normalised noise energy pre- and post-match (7). The literature also includes several studies on teachers; one of them, by Baiba (2017), indicated a high risk of voice disorders among Latvian teachers (8).

In our study group, the increase in post-fest jitter and shimmer values could be attributed to a change in the vibration characteristics of the vocal folds due to phonotrauma, and less likely to increased vocal fold mass.

Summary & Conclusion:

It can be concluded that students who use their voices beyond their limits by shouting, hooting or cheering during college fest days are exposed to voice changes, and if the abuse persists, it may cause permanent changes in vocal physiology.

The noise levels at fests are very high, which worsens the vocal abuse, as people tend to speak louder than normal in the presence of loud ambient noise. Further studies could be conducted with a larger sample size and with a follow-up voice evaluation after one month to observe the recovery pattern in these subjects.


  Abstract – SO172: Speech and Language Profile of Acquired Perisylvian/ Opercular Syndrome following Perinatal Asphyxia: Case Series Top


Preethi R1 & Srinivasaraghavan Rangan2

1preethiaslp@gmail.com &2drsrirag86@gmail.com

1Christ Medical College, Vellore - 632004

Introduction:

The perisylvian-opercular cortex, which lies on either side of the Sylvian fissure and encloses the insula, is the area on the motor homunculus responsible for motor control of the face and mouth. The region is also referred to as the operculum (Latin for 'lid' or 'cover'). Perisylvian syndrome, or opercular syndrome, is a rare cortical form of pseudobulbar palsy due to bilateral lesions of the perisylvian cortex. In children, the lesion can be of developmental origin due to a neuronal migration disorder, or acquired due to injury in the perinatal or postnatal period.

Involvement of the perisylvian regions results in paralysis of the facial, pharyngeal, masticatory, tongue, laryngeal, and brachial muscles. The clinical features are lack of speech, severe drooling, difficulty with vegetative skills such as biting, chewing, sucking, blowing and swallowing, and decreased muscle tone in the face and tongue (Darras et al., 2015; Desai et al., 2013). These symptoms may also be accompanied by motor difficulties, epilepsy and intellectual disability.

Need for Study:

Although this syndrome is relatively common among children with perinatal complications, there are only a few descriptive studies of the condition, and there is a paucity of information on its speech and language aspects (Braden et al., 2019). Moreover, perisylvian syndrome must be distinguished from other conditions that affect speech and language, such as akinetic mutism, oral-buccal apraxia, Broca's aphasia and bulbar palsy. Early recognition of the disorder is essential for appropriate management.

Aim & Objectives:

To describe the speech and language profile and oromotor functions of children with acquired perisylvian syndrome following perinatal asphyxia.

Method:

This was a cross-sectional study of children who had difficulties in speech, language and oromotor functions, with a history of perinatal complications. The study was done in the Developmental Paediatrics Unit of a large tertiary care hospital. All the children were initially assessed by developmental paediatricians. Perisylvian syndrome was diagnosed based on the essential criteria described by Kuzniecky et al.: (i) oropharyngoglossal dysfunction, (ii) moderate to severe dysarthria, and (iii) bilateral perisylvian lesions on neuroimaging. The children were then referred for further evaluation by the speech-language pathologists and the developmental psychologists.

The participants underwent a detailed evaluation by the speech-language pathologists. Language was assessed using the Receptive-Expressive Emergent Language Scales (REELS). Oromotor functions were assessed using the COMDEALL Oromotor Checklist. The Modified Drooling Checklist (Job et al., 2018) was used to assess the severity of drooling. The AYJNIHH speech intelligibility rating scale (developed by the Speech Language Pathology department, AYJNIHH, 1984) was used to rate speech intelligibility (Vaswani et al., 2015). In addition, the children underwent a detailed cognitive assessment by developmental psychologists using standardized developmental or intelligence tests. Each child was seen over 5 to 10 therapy sessions (each lasting approximately 45 minutes) over two weeks. Along with the assessment, therapeutic intervention and parental counselling were also provided. Oromotor and feeding strategies were emphasized. Where appropriate, Augmentative and Alternative Communication (AAC) strategies were introduced through low- and high-technology devices and aids.

Results & Discussion:

Seventeen children with a median age of 76 months (48-88 months) were studied between January 2019 and August 2019. Among the 17 participants, 10 were non-verbal. Among the verbal children, the median receptive language age was between 54-60 months and the median expressive language age was between 27-30 months. For the non-verbal participants, the median receptive language age was between 20-22 months and the median expressive language age was between 9-10 months. The median AYJNIHH score for the verbal children was 3, indicating slurred speech with sub-optimal intelligibility. All the participants had drooling, and the median drooling score was 26 (range 18-30), indicating significantly severe drooling. The median OPME score was 15 (range 8-30), indicating poor oromotor functions and reduced range of motion of the articulators. In all the children, cognition was mildly to moderately affected.

Lesions that involve the perisylvian cortex, subcortical areas or pyramidal pathways bilaterally affect the voluntary use of the facial, pharyngeal and masticatory muscles. In our study, all children with perisylvian syndrome had severe dysarthria with difficulties in chewing and swallowing and with drooling. The majority of the children had impairment in their expressive language abilities, with receptive language skills well ahead of expressive abilities. Children who were non-verbal were able to communicate using gestures. Even among those who could speak, the speech was slurred and slow with reduced clarity. Drooling was severe in most children and had a significant impact on quality of life; all parents felt it was a major issue. Most of the children had reasonably good comprehension, but because of the impairment in expressive language they were often misjudged as having severe intellectual disability.

Intensive therapy for oromotor and speech functions was provided; however, no major gains were noted over the course of the two weeks of intensive training. Most of the children were started on Augmentative and Alternative Communication (AAC) to enhance their communicative functions, and acceptance of AAC was good amongst the non-verbal children.

Summary & Conclusion:

Perisylvian syndrome should be considered when children present with dysarthria and difficulties in oromotor functions and there is a history of perinatal complications. The diagnosis has to be confirmed radiologically. The speech-language pathologist must be aware of this condition in order to distinguish it from other disorders like apraxia and mutism so that appropriate management can be provided. Since the chance of recovery of speech function is very poor in this condition, early introduction of AAC must be considered.


  Abstract – SO173: Vocal Fatigue in Beat Boxers: A Preliminary Study Top


Thejaswi D1, Andria Johnson2 & Arpitha Mariyam3

1thejaswi07@gmail.com,2navamikrishna6@gmail.com, &3arpithamariam369@gmail.com

1NITTE Institute of Speech And Hearing, Mangalore - 575018

Introduction:

Beatboxing is a modern art form of vocal imitation of musical instruments. In beatboxing culture, the performer, popularly known as the beatboxer, mimics a virtually limitless range of sounds, including but not restricted to those of musical instruments, humans, animals, and machines (De Torcy et al., 2014). Beatboxing is emerging as a new cultural currency and music frontier among the youth of Indian metropolitan cities, and its roots are spreading into semi-urban cities. The commercial success of social media platforms (Instagram & Facebook) and video-sharing websites (YouTube) has brought beatboxers name, fame, and financial stability (Greenburg, 2012). Hence, numerous people have moved from beatboxing as a passion to beatboxing as a profession.

Beatboxing and its underlying mechanism have intrigued researchers. Studies performed to date have used functional endoscopic analysis (Sapthavee et al., 2013; De Torcy et al., 2013), imaging studies (Proctor et al., 2013) and acoustic analysis (Stowell & Plumbley, 2008) to understand the intricate mechanism of beatboxing. Interestingly, beatboxers manipulate both inhaled and exhaled air to produce polyphonic sounds from multiple vibratory sources (Stowell & Plumbley, 2008). Overall, beatboxing is complex and requires controlled manipulation of the speech sub-systems.

Need for Study:

Beatboxing is often self-taught, and the beatboxer masters the art by trial and error, interaction with fellow beatboxers and watching YouTube tutorials. Such unconventional methods of learning a vocal art, without awareness of vocal care and hygiene, expose beatboxers to the voice problems encountered by other professional voice users, like vocal fatigue, and may ultimately result in phonotrauma (Bastian & Thomas, 2016). Vocal fatigue is a self-perceived increase in phonatory effort that escalates throughout voice use and recovers with sufficient voice rest (Solomon, 2007). The Vocal Fatigue Inventory (Nanjundeswaran, Jacobson, Gartner-Schmidt, & Verdolini, 2015) is a well-accepted self-report rating scale proposed to quantify vocal tiredness.

Unlike ordinary conversational speech, beatboxing places high vocal demands on the performer, and mastery of this vocal art sets beatboxers apart from non-professional voice users. However, research on practitioners of this young vocal art is scanty. Researchers have mainly adopted visualization techniques, like real-time MRI and functional endoscopy, to objectively understand the production of beatboxing sounds. To our knowledge, no study has assessed vocal fatigue in beatboxers, although beatboxers are at high risk for vocal tiredness and vocal fold injury. Hence, the present study was undertaken.

Aim & Objectives:

The objectives of the study were to measure vocal fatigue in professional beatboxers using the Vocal Fatigue Inventory (VFI) and to compare their ratings of vocal tiredness with those of age- and gender-matched untrained singers and non-singers.

Method:

The present study was a cross-sectional survey using snowball sampling. Professional beatboxers, untrained singers, and non-singers aged 18-30 years were contacted personally, through email and social media platforms, and briefed on the outline of the study. Out of 170 probable participants, 117 (37 beatboxers, 40 untrained singers and 40 non-singers) consented to participate in the survey. These 117 participants also met the following inclusion and exclusion criteria: professional beatboxers and untrained singers had at least one year of experience and were literate in English; participants with a history of substance abuse, head and neck surgery, or sensory-motor ailments, and those undergoing voice therapy, were excluded.

The online survey consisted of a subject information form and the standardized VFI. The Google Forms platform was used to create and circulate the online survey. The subject information form comprised demographic data, experience, intensity of training, details of stage performance, and medical history. Within the VFI, nineteen questions are subdivided into three factors: eleven items on vocal tiredness and voice avoidance (factor 1), five items on physical discomfort related to voice production (factor 2), and three items on improvement in voice after a period of voice rest (factor 3). Participants rated how frequently they experience each symptom on a 0-4 rating scale.
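
The factor-wise scoring described above can be illustrated with a short sketch; the item ordering and the example response vector below are assumptions for illustration only, not the inventory's published key.

```python
# Sketch of per-factor VFI scoring as described above: 19 items rated 0-4,
# grouped as 11 items (factor 1), 5 items (factor 2) and 3 items (factor 3).
# The responses below represent a hypothetical participant, not study data.
responses = [2, 1, 3, 2, 2, 1, 0, 2, 3, 1, 2,   # factor 1: tiredness / avoidance
             1, 2, 1, 0, 1,                     # factor 2: physical discomfort
             3, 2, 3]                           # factor 3: improvement with rest

factor_sizes = {"factor1_tiredness": 11,
                "factor2_discomfort": 5,
                "factor3_rest_recovery": 3}

scores, start = {}, 0
for name, size in factor_sizes.items():
    scores[name] = sum(responses[start:start + size])   # per-factor total
    start += size

print(scores)
```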

Responses of the 117 participants were tabulated and statistically analyzed using SPSS (Version 17). Descriptive statistics were used to compute the mean and standard deviation of the vocal fatigue scores, and inferential statistics were used to test for significance.

Results & Discussion:

The Shapiro-Wilk test of normality revealed no statistical significance at p>0.05, suggesting the data were normally distributed. Mean VFI scores of non-singers (4.36±3.86) were lowest on factor 1, while beatboxers (12.54±8.04) scored lower than untrained singers (15.22±8.87). Similar results were observed for factors 2 and 3. In factor 2, non-singers, beatboxers, and untrained singers scored 1.58 (±1.74), 4.29 (±3.11), and 7 (±4.44) respectively. Similarly, non-singers, beatboxers, and untrained singers scored 2.1 (±1.72), 7.37 (±2.97), and 7.67 (±2.54) respectively for factor 3. Overall, the results suggest that beatboxers experience vocal fatigue compared to non-singers, but the severity of their vocal tiredness is not on par with that of untrained singers.

Multivariate analysis of variance (MANOVA) was performed to test between-group significance. The results of the MANOVA indicated a statistically significant difference (p<0.05) across all three factors among beatboxers, untrained singers and non-singers. Bonferroni post-hoc analysis also indicated statistically significant pairwise differences (p<0.05).
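
A between-group multivariate comparison of this kind could be run along the lines of the sketch below using statsmodels; the simulated data frame, column names and group sizes are illustrative assumptions rather than the study's data set.

```python
# Illustrative sketch of a one-way MANOVA on three VFI factor scores across
# three groups, using statsmodels. The data are randomly generated stand-ins.
import numpy as np
import pandas as pd
from statsmodels.multivariate.manova import MANOVA

rng = np.random.default_rng(0)
groups = ["beatboxer"] * 37 + ["untrained_singer"] * 40 + ["non_singer"] * 40
df = pd.DataFrame({
    "group": groups,
    "vfi_factor1": rng.normal(10, 5, len(groups)),
    "vfi_factor2": rng.normal(4, 2, len(groups)),
    "vfi_factor3": rng.normal(5, 2, len(groups)),
})

manova = MANOVA.from_formula("vfi_factor1 + vfi_factor2 + vfi_factor3 ~ group",
                             data=df)
print(manova.mv_test())   # Wilks' lambda, Pillai's trace, etc. for the group effect
```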

The present study had beatboxers, untrained singers and non-singers complete the standardized VFI. The results suggested significantly lower VFI scores in beatboxers than in untrained singers, while non-singers reported the lowest scores of vocal tiredness. These findings can be attributed to differences in the biomechanics of sound production in beatboxers and untrained singers. In beatboxing, the performer amalgamates vocal tract resonance with voice source characteristics in a way similar to a singer (Echternach et al., 2010). What sets beatboxers apart from other professional voice users is the number of vibratory sources: singers use the true vocal folds to control pitch, loudness and voice quality, whereas in beatboxers the vibratory sources are several, i.e., the vocal folds, pharynx, velum, tongue, lips, and cheeks (Buescher & Sims, 2011). Therefore, in beatboxers the vocal loading is shared by multiple oro-pharyngeal structures, which in turn reduces the load on the laryngeal apparatus. This unloading of the vocal folds is achieved by manoeuvring the vocal tract and is a common strategy used by beatboxers and other professional voice users to protect the vocal folds from voice strain. Other factors that help beatboxers reduce vocal strain are the amplification of low bass sounds by close-mic or lips-against-mic techniques and maintaining a voiceless glottal configuration for percussive sounds (Sapthavee et al., 2013).

Summary & Conclusion:

The present study quantified Vocal Fatigue Inventory measures in beatboxers and compared them with those of untrained singers and non-singers.

The evidence suggests that beatboxers experience statistically significant vocal tiredness across all three factors of the Vocal Fatigue Inventory. However, their scores were lower than those of untrained singers, implying that the multiple vibratory sources used in beatboxing distinguish them from singers. The findings of the study have practical application in monitoring the long-term effects of beatboxing on voice.


  Abstract – SO174: Effect of Age, Bulb Position and Visual Feedback on Lip Strength and Endurance in Typically Developing Indian Children Top


Swapna N1 & Talla Santhoshi2

1nsn112002@yahoo.com &2santhoshithalla94@gmail.com

1All India Institute of Speech and Hearing, Mysuru - 570006

Introduction:

Evaluating lip function is important in the rehabilitation of speech and swallowing disorders. The integrity of the lip musculature can be examined in terms of strength and endurance measures. Strength refers to the amount of force a muscle can produce with a single maximal effort, and endurance refers to the ability of the muscles to exert force against resistance over a sustained period of time. Traditionally, lip strength has been assessed subjectively in clinical practice during oromotor assessment. However, using the Iowa Oral Performance Instrument (IOPI), these measures can be obtained objectively. The IOPI measures the strength and endurance of the lips and tongue, which aids professionals in objectively documenting the deficits involved in swallowing and speech disorders.

Need for Study:

A review of the existing literature revealed that studies measuring lip strength and endurance in healthy as well as disordered paediatric populations are scanty. Most studies have been carried out in healthy adults and the elderly and have mainly examined the strength and endurance of the tongue rather than the lips [Crow & Ship, 1996]. Therefore, there is a clear need to assess lip strength and endurance, particularly in the paediatric population. Besides, the few studies on children that have been conducted involve western populations; these data cannot be used with children from other ethnic/linguistic backgrounds, as developmental patterns and physiology could vary. Further, very few studies have examined the effect of bulb position on the lips or the effect of visual feedback using the IOPI in a paediatric population. Keeping this in view, the present study was planned to investigate lip strength and endurance in typically developing Indian children in the age range of 6-8 years using the IOPI.

Aim & Objectives:

The present study aimed to investigate the changes in strength and endurance of the lips, if any, that occur across different age groups and across various positions on the lips, and also to assess the effect of visual feedback. The objectives of the study were: 1) to investigate the changes in strength and endurance of the lips, if any, that occur across different age groups; 2) to assess variations in lip strength and endurance, if any, at various positions on the lips (right, left and middle) across different age groups; and 3) to assess the effect of visual feedback on lip strength and endurance measures across age groups.

Method:

A total of 30 typically developing children in the age group of 6-8 years were selected for the study. These participants were further divided into four age groups at 6-month intervals (6-6.6 years, 6.6-7 years, 7-7.6 years and 7.6-8 years), with equal numbers of males and females. The typically developing children were selected from regular schools in Mysuru. The participants were included in the study after obtaining signed consent from their parents and school teachers before testing. All the participants were screened to rule out the presence of any disorders. IOPI model 2.2 was used for the study. Lip strength was measured by placing the bulb between the upper and lower lips at the middle (tubercle of the upper lip and groove of the lower lip). Lip strength on the right side was measured by placing the bulb towards the right corner between the upper and lower lips, and on the left side by placing the bulb towards the left corner. The participants had to press the bulb between the lips as hard as possible for about 2-3 seconds. Three trials were conducted without visual feedback across the three bulb positions. Following this, three trials were performed with visual feedback across the three bulb positions, in which the participants were shown their values displayed on the device.

The endurance of the lips was measured by asking the participant to hold the bulb between the upper and lower lips at the three bulb positions, i.e., the middle, right and left side of the lips. Each participant performed the task with and without visual feedback across the three positions. For this, the instrument was set to 50% of the participant's peak pressure obtained without visual feedback (for the corresponding bulb position) using the set-max arrows on the device. The participants had to press the bulb as hard as possible and squeeze for as long as possible, and the value was measured in terms of duration (seconds). Test-retest reliability was assessed on 10% of the total participants. The strength and endurance measures of the lips obtained from all the participants were compiled and subjected to statistical analysis using SPSS version 20.

Results & Discussion:

Cronbach's alpha varied between 0.92 and 0.98, indicating high test-retest reliability of the obtained data. The results indicated no significant difference across age groups in the strength and endurance measures of the lips. However, the overall mean values of lip strength and endurance were higher for the older age groups, suggesting developmental changes in children between 6 and 8 years. This points to a fine refinement in development occurring at the lip level, which is in consonance with studies that investigated the developmental aspects of the lip. Studies on orofacial development concluded that the basic development of oromotor movements, especially of the jaw and lower lip, takes place up to 4 years of age, and that in later stages the oromotor system undergoes a process of fine refinement [Sharkey & Folkins, 1985].

The mean value of lip strength and endurance was highest at the middle position for all age groups. Further, the comparison of lip strength and endurance across bulb positions revealed no significant differences. The results of the present study are in agreement with other studies [Nakatsuka, Adachi, Kato, Oishi, Murakami, Okada, & Masuda, 2011], indicating that the lip is a sphincter muscle surrounding the oral opening in which the entire muscle works as a single unit, so that the right and left sides are unable to act independently of one another.

A two-way repeated measures ANOVA revealed a highly significant difference between the conditions with and without visual feedback on the strength and endurance measures of the lips across the age groups and bulb positions [F(1, 26)=71.63, p<0.01]. Thus, visual feedback maximized task performance in the children, suggesting that feedback enhances performance, which is further supported by a few studies [Yeates, 2013].
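
A repeated-measures analysis of this kind could be sketched as follows with statsmodels' AnovaRM. Note that this is a simplification on assumed, simulated data: it keeps only the two within-subject factors (feedback condition and bulb position), since AnovaRM does not model the between-subject age-group factor used in the study.

```python
# Simplified sketch of a two-way repeated-measures ANOVA on lip strength with the
# within-subject factors described above (feedback condition, bulb position).
# Long-format data are simulated; the between-subject age-group factor is omitted.
import numpy as np
import pandas as pd
from statsmodels.stats.anova import AnovaRM

rng = np.random.default_rng(1)
rows = []
for subject in range(1, 31):                       # 30 children
    for feedback in ("no_feedback", "feedback"):
        for position in ("left", "middle", "right"):
            boost = 3.0 if feedback == "feedback" else 0.0
            rows.append({"subject": subject,
                         "feedback": feedback,
                         "position": position,
                         "strength_kpa": rng.normal(20 + boost, 4)})

df = pd.DataFrame(rows)
result = AnovaRM(df, depvar="strength_kpa", subject="subject",
                 within=["feedback", "position"]).fit()
print(result)   # F and p values for feedback, position and their interaction
```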

Summary & Conclusion:

It can be concluded that lip strength and endurance exhibit small refinements with age. Though visual feedback aids performance, the bulb position does not influence lip strength and endurance values. The present study could be considered a first of its kind in the Indian context, comparing lip strength and endurance across age groups and bulb positions and examining the effect of visual feedback in typically developing Kannada-speaking children aged 6-8 years. The study adds to the body of literature on the use of the IOPI, particularly since studies quantifying lip strength and endurance in the paediatric population are scanty.


  Abstract – SO175: Comparison of Cepstral and Spectral Measures of Infant Cry among Normal and High Risk Neonates using Praat: A Study in Indian Context Top


Shruti Kabra1, Vijaya Sinha2, Himadri Bhagat3, Tannu Priya4 & Indranil Chatterjee5

1shrutikabra416@gmail.com,2piyush.sinha12345@gmail.com,3anjushahlw@gmail.com,4Shudhanshusuman007@gmail.com, &5inchat75@gmail.com

1Ali Yavar Jung National Institute of Speech and Hearing Disabilities (Divyangjan), ERC, Kolkata - 700090

Introduction:

The primary means of communication of an infant with its surroundings is crying. The newborn's cry is automatic and reflexive and comprises a rhythmic alternation of cry sounds (utterances) and inspiration (Hirschberg, Szenda, Koltai, & Illenyi, 2009). The infant cry serves as a biological indicator that alerts the caregiver to attend to the infant's needs. Crying is the highest degree of arousal produced by nervous system excitation triggered by some form of biological threat that may involve basic physiological processes like hunger, pain, sickness or insult (Lester, 2012). The acoustic characteristics of cry are directly influenced by the infant's physical and psychological state or by various external stimuli (Neustein, 2010). Several researchers have supported the view that high-risk infants have distinctly more aversive cries than healthy infants (Vuorenski, Lind, Wasz-Hockert, & Partenen, 1971; Frodi et al., 1978; Zeskind & Lester, 1978).

The sound spectrograph produces a permanent visual record showing the distribution of energy in both frequency and time, and the spectrogram has been a useful tool for advancing infant cry analysis. The term cepstrum refers to the power spectrum of the logarithm of the power spectrum: a Fourier transformation of the acoustic signal is first performed to create a spectrum, which shows the intensity of each frequency within the signal, and the cepstrum is then computed from its logarithm. Several researchers have supported the use of spectral and cepstral methods to analyse infant cry (Alku et al., 2013; Naithani, 2015; Tenold et al., 2005).
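
As a minimal numpy sketch of the cepstral computation described above (not the PRAAT routine used in the study), the example below derives a real cepstrum from the log power spectrum of a synthetic harmonic frame and locates the cepstral peak within an assumed infant F0 range of 200-800 Hz.

```python
# Minimal cepstrum sketch: cepstrum computed from the log power spectrum of a
# synthetic harmonic "cry" frame; the F0 search range (200-800 Hz) is assumed.
import numpy as np

fs = 44100                                    # sampling rate used in the study
t = np.arange(int(0.04 * fs)) / fs            # one 40-ms analysis frame
f0 = 450.0                                    # synthetic fundamental frequency
frame = sum(np.sin(2 * np.pi * f0 * h * t) / h for h in range(1, 11))
frame += 0.05 * np.random.default_rng(0).standard_normal(t.size)

# Real cepstrum: inverse FFT of the log power spectrum
spectrum = np.fft.fft(frame * np.hanning(frame.size))
log_power = np.log(np.abs(spectrum) ** 2 + 1e-12)
cepstrum = np.fft.ifft(log_power).real

# Quefrency index n corresponds to n / fs seconds, so the F0-related peak is
# searched between 1/800 s and 1/200 s.
lo, hi = int(fs / 800), int(fs / 200)
peak_idx = lo + int(np.argmax(cepstrum[lo:hi]))
print(f"Cepstral peak corresponds to about {fs / peak_idx:.0f} Hz")
```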

Need for Study:

Few studies have been documented in the literature that identify factors which can help predict communication disorders in high-risk neonates in neonatal intensive care units and well-baby nurseries by comparing cepstral and spectral measures with those of healthy infants. This study is a step in that direction.

Aim & Objectives:

The aim of the study is to compare Cepstral and Spectral parameters of infant cry in normal and high risk neonates using PRAAT software.

Objectives:

  1. To compare Cepstral and Spectral parameters of infant cry in normal and high risk neonates using PRAAT software.
  2. To analyse Cepstral and spectral features of infant cry samples using PRAAT software.
  3. To compare the Cepstral and spectral features of infant cry of high risk neonates and normal neonates.
  4. To describe the various distinctive Cepstral and spectral features of cry of high risk neonates and normal neonates.


Method:

Research design: Descriptive research design

Participants:

A total of 50 normal neonates and 50 high risk neonates within the age range of 0-3 months were selected for the study. The high risk neonates were recruited from neonatal intensive care units and well baby nurseries in pediatric wards of various hospitals in and around Kolkata.

Inclusion criteria: Should have been diagnosed as a high risk neonate or a normal neonate by a qualified neonatologist.

Procedure:

The study was completed in three phases:

Phase 1: Participant selection

A total of 50 high risk neonates and 50 normal neonates within the age range of 0-3 months were selected.

Phase 2: Elicitation and recording of infant cry

Cry was recorded for all participants during a standardized pain stimulus, such as administration of injections or heel lancing (Nandyal, 1982), by a qualified nurse in the neonatal intensive care units and well-baby nurseries.

A head-mounted external microphone was kept at a distance of 6-10 cm from the neonate's mouth (Titze, 1994) during recording, and a duration of 15 to 40 seconds was recorded for each cry sample (Patil, 2010). The recorded data were sampled at a frequency of 44.1 kHz (Kheddache and Tadj, 2013) and stored using the PRAAT software on a laptop PC. The recommendations of the National Centre for Voice and Speech (Titze, 1994) were followed during the recording of the neonates' cry samples.

Phase 3: Cepstral and Spectral analysis of infant cry sample:

Cepstral and spectral analyses were done using PRAAT software version 5.20 (Boersma and Weenink, 2010). Cepstral parameters studied are:

  1. CPPS: Cepstral Peak Prominence Smoothed
  2. CP: Cepstral Peak
  3. CPP: Cepstral Peak Prominence


Spectrographic Parameters studied are:

Trailing (glottal roll), Flat, Falling, Double harmonic break, Dysphonation, Rising, Hyperphonation, Inhalation, Vibration, Weak vibration

Results & Discussion:

Cepstral and spectral parameters of infant cry were analysed in normal and high-risk neonates using PRAAT, and the results of the independent-samples t-test indicated significant group differences for CPPS, CP and CPP with p-values of 0.007 (p<0.05), 0.000 (p<0.05) and 0.000 (p<0.05) respectively. Significant differences were also found in dysphonation (p=0.013; p<0.05) and hyperphonation (p=0.046; p<0.05) between normal and high-risk neonates, with no significant differences in the other spectrographic parameters.

Several studies support the use of cepstral parameters for assessing voice. Maryn and Weenink (2015) found that PRAAT's implementation of CPPS is a reliable measure of dysphonia severity. In 2012, Molaeezadeh, Salarian and Moradi pointed out that many studies on the auditory analysis of newborn cries have utilised cepstral coefficients to diagnose different conditions such as hearing disorders, asphyxia and CNS damage. The present study also examined features that reflect prosodic aspects of cry sounds; work on cry prosody focuses on aspects such as the contour of cry pitch (Wermke, Mende, Manfredi, & Bruscaglioni, 2002; Mampe, Friederici, Christophe, & Wermke, 2009). The occurrence of shift, biphonation, diplophonia, hyperphonation, shatter, glide, vibrato, glottal roll, furcation, noise, and breaks was observed from the narrow-band spectrogram. A previous study reported that the minimum fundamental frequency values of very-low-birth-weight infants (VLBWI) were significantly higher than those of controls (p=0.035), which did not align with our findings. Chen (2014) did not find any significant difference in cry phonation, percentage phonation, and first spectral peaks between the two groups; there was also no significant difference in mean spectral energy and spectral tilt across groups, which contradicted the findings of previous studies but is consistent with the present study.

Summary & Conclusion:

The study provides valuable information regarding cry patterns in both a quantitative and a qualitative manner through spectral and cepstral analysis. The parameters of the spectral and cepstral analysis can be used to predict the speech and language development of an infant, which would enable early identification and subsequent early intervention for speech and language impairments. Longitudinal study of cepstral and spectral features of infant cry can help identify specific cry patterns that are sensitive to specific disease conditions.

Considering different age ranges within the same group can help identify the significant cry patterns, and combinations of patterns, occurring in specific types of high-risk conditions. Other types of cries, such as pleasure and discomfort cries, can also be studied using these cepstral and spectral parameters.


  Abstract – SO176: Delayed Development of Categorical Perception in Children with Pre-lingual Deafness Top


Sharon Mary Oommen1, Jisma Rose George2 & Ardra Krishna3

1sharon.oommen@gmail.com,2jismaroseg@gmail.com, &3toardraskrishna@gmail.com

1National Institute of Speech & Hearing, Thiruvananthapuram - 695017

Introduction:

Categorical perception refers to the ability of an individual to discriminate between-category but not within-category differences along an auditory or verbal continuum. The functional role of categorical perception in speech perception is to screen out irrelevant information for the recognition of lexical units. Hearing impairment affects categorical perception to a degree that depends on the duration of the deficit and on the age at which it develops during language development.

Need for Study:

Assessment of categorical perception can be a simplified method to analyze the auditory identification and discrimination skills of children with pre-lingual deafness. Analyzing these skills is necessary to track the development and progress of the child and their acquisition of phonological skills and, later, literacy skills. Hence, the current study was carried out.

Aim & Objectives:

The study aimed to evaluate categorical perception in children, with the major objectives (1) to compare the development of categorical perception in children with hearing impairment with that of age-matched peers, and (2) to assess whether categorical perception can be an effective method to assess the identification and discrimination skills of children with hearing impairment.

Method:

The study consisted of 90 participants, who were equally divided into two groups. The first group (Group I) consisted of children with normal hearing, and the second group (Group II) consisted of children diagnosed with severe to profound hearing loss, with no associated problems, who had undergone cochlear implantation surgery before 2 years of age. The implant age was 4 to 5 years. All the participants used the same type of speech processor, and all the participants of Group II were bimodal users. The stimuli for the study were taken from a section of the Malayalam Linguistic Profile test. The task used a two-alternative forced-choice procedure: the stimuli were presented in pairs, comprising either two different stimuli or the same stimulus presented twice, and the participants had to indicate whether both words in the pair were the same or different. There were 24 pairs of stimuli.

Results & Discussion:

The scores were statistically analyzed across groups. Participants from Group I were able to discriminate the minimal pairs with up to 90% accuracy, whereas for Group II correct discrimination varied from 60-75%. The results were evidence of a persistent delay in categorical perception even though auditory rehabilitation was carried out early in life. The results also revealed that children with better scores had better auditory discrimination and identification compared to those with reduced scores.

Summary & Conclusion:

The current study verifies that perception of minimal pairs can be an effective method to assess the identification and discrimination skills of children with hearing impairment post cochlear implantation. Assessment of the perception of minimal pairs can be a simplified methodology for analyzing categorical perception in children. Categorical perception can also be considered an important factor for the acquisition of phonological skills and later literacy skills; hence it should be considered and monitored thoroughly during intervention for pre-lingually deaf implantees. The study also highlights the importance of considering critical periods of language development while planning rehabilitation for children with cochlear implants.


  Abstract – SP738: Adaptation and Validation of Communication Attitude Test (BigCAT) for Kannada Speaking Adults Who Stutter Top


Rakesh C V1, Jyotsna K2 & Santosh Maruthy3

1cvrakesh.sphg@gmail.com,2jyotsna.k7@gmail.com, &3santoshm79@gmail.com

1All India Institute of Speech and Hearing, Mysuru - 570006

Introduction:

Stuttering is a fluency disorder that is widely described in terms of its overtly presenting behaviours (e.g., oral and silent prolongations, syllable/sound and monosyllabic word repetitions, and blocks). These overt behaviours are readily distinguished by listeners.

In addition to these core behaviours, covert symptoms such as affective reactions, variations in speech-associated negative attitude, coping behaviours (escape and avoidance behaviours) and increased anxiety are also reported in the literature. Further, the existing literature suggests the importance of incorporating attitude-based approaches in adults who stutter (AWS) to move rehabilitation in the right direction, emphasising the importance of attitude-related studies in AWS.

In order to capture these subtle aspects that form a facet of stuttering, there have been prior attempts in the literature, beginning with the Iowa Scale of Attitude Toward Stuttering and continuing with Erickson's S-scale and its subsequent brief version, the S-24, to name a few. Though the tools mentioned above were successful in predicting therapeutic outcomes, they proved unsuccessful in adequately measuring the speech-associated negative attitudes of AWS. These tools are presently employed only as a guide to aid appropriate clinical interviewing, due to their poor internal validity [8]. Thus, there was a pressing need for a valid, clinically employable tool to measure speech-associated attitude in AWS. Since the 1980s, Brutten and Vanryckeghem have strived to achieve this through their multifaceted tool, the Behaviour Assessment Battery (BAB) [9], which comprises the Behaviour Checklist, the Speech Situation Checklist (measuring speech-related anxiety), and the Communication Attitude Test (measuring communication attitude).

The current topic of interest is derived from the Communication Attitude Test for Adults (BigCAT). This 35-item tool is known to be one of the most outstanding works that have helped capture communication attitudes in AWS. The tool has been reported to have high reliability and internal validity and is thus inferred to be more adept at distinguishing AWS from adults who do not stutter (AWNS), compared to the other components of the BAB. The tool has been made available in many languages, such as Persian and Dutch [16]. Further, the authors have consistently revised the test every decade to ensure its applicability: the most recent clinically available version of BigCAT [Vanryckeghem & Brutten, 2017, unpublished manuscript] consists of 34 items.

Need for Study:

The Indian context has a dearth of clinically applicable tools that explore the affective, behavioural, and cognitive components in AWS. This may be due to the lack of cultural and language-specific normative data, as India is a multilingual country. The established normative data of the BigCAT are for the western population, and the tool is widely used. Language- and culture-based differences make it unsuitable to apply the same tool to assess speech-associated attitude without adapting it and generating normative data for the population of interest. As the current study was based in Karnataka, the Kannada language was deemed the appropriate choice.

Aim & Objectives:

The current study aimed to adapt the recent version of the Communication Attitude Test for Adults (BigCAT, 2017) to the Kannada language and to validate the same.

Method:

The English version of BigCAT was forward-translated to the Kannada language by the first author and backward-translated by two competent Kannada-English bilingual speech-language pathologists [17]. After making the necessary changes to account for cultural differences, the first author discussed the back-translation with the test author via Skype. The suggestions and corrections were incorporated before finalising the Kannada version of BigCAT.

Subsequently, the BigCAT-K was administered to 259 adults in the district of Mysuru. The participants were divided into two groups: Group 1 consisted of 229 AWNS (124 males and 105 females), and Group 2 consisted of 30 AWS (29 males and one female). The participants were above 18 years of age with Kannada as their native language, and were chosen based on the inclusion and exclusion criteria. Before the administration of the test, informed written consent was obtained from the participants. The Stuttering Severity Instrument-4 was administered to the participants of Group 2 to confirm the presence of stuttering. BigCAT-K consists of 34 questions assessing the speech-associated attitude of the individual. The participants were instructed to respond by circling 'True' or 'False'; before the test administration, both AWS and AWNS were provided with practice test items. To assess test-retest reliability, the BigCAT-K was re-administered to 10% of the original sample from both groups after a gap of 7-10 days.

Results & Discussion:

The mean BigCAT-K score for Group 1 (AWNS, N=229) was 5.22 with a standard deviation (SD) of 3.20 and a median of 4; the minimum score was 1 and the maximum score was 14. For Group 2 (AWS, N=30), the mean score was 22.13 with an SD of 6.80 and a median of 21; the scores ranged from 10 to 33. To check whether the obtained mean scores were significantly different across the two groups, an independent-samples t-test was performed. The results revealed a significant difference between the Group 1 and Group 2 mean values [t(257) = 22.907, p < 0.001]. To check the validity of the test, discriminant analysis was performed, and the results revealed that the BigCAT-K correctly differentiated 99.6% of AWNS and 90% of AWS. Together, 98.5% of the original grouped cases were correctly identified, and cross-validation indicated a 96.9% correct classification.
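
The discriminant-analysis and cross-validation steps reported here could be reproduced along the lines of the sketch below using scikit-learn; the simulated scores merely mimic the reported group means and SDs and are not the study's data.

```python
# Sketch of discriminant analysis with cross-validation on simulated BigCAT-K
# total scores (a single predictor). Scores are random stand-ins, not study data.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(42)
awns_scores = rng.normal(5.22, 3.20, 229).clip(0, 34)
aws_scores = rng.normal(22.13, 6.80, 30).clip(0, 34)

X = np.concatenate([awns_scores, aws_scores]).reshape(-1, 1)
y = np.array(["AWNS"] * 229 + ["AWS"] * 30)

lda = LinearDiscriminantAnalysis().fit(X, y)
print("Resubstitution accuracy:", round(lda.score(X, y), 3))

# Stratified 10-fold check, roughly analogous to the reported cross-validation
cv_accuracy = cross_val_score(lda, X, y, cv=10).mean()
print("Cross-validated accuracy:", round(cv_accuracy, 3))
```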

To check test-retest reliability, the intraclass correlation coefficient was computed, and the results revealed excellent reliability for the AWNS (r = 0.90) and AWS (r = 0.98) groups. The scores revealed that the test could clearly distinguish AWS from AWNS, as the mean scores of the two groups were three standard deviations apart, indicating that the test has high validity. Thus, it may be inferred that the Kannada version of BigCAT follows the same trend as the original version in terms of its reliability and validity.

Summary & Conclusion:

The study was initiated in an attempt to provide a clinical tool to probe the attitude of AWS in the Kannada language. The BigCAT [Vanryckeghem & Brutten, 2017, unpublished manuscript] was translated and back-translated to seek the test author's approval. The test was administered to 229 AWNS and 30 AWS, and the scores for each group were calculated and compared using an independent-samples t-test. The results revealed a highly significant difference between the two groups, with mean scores about three standard deviations apart. Thus, the BigCAT-K could prove to be a highly efficient tool for capturing information regarding the speech-associated attitudes of AWS in the Indian context, in equivalence with its original version.


  Abstract – SP739: Cross-cultural Adaptation and Validation of the Evaluation of the Ability to Sing Easily (EASE) Scale for Kannada- Speaking Carnatic Classical Singers Top


Devika Vinod1, Usha Devadas2 & Santosh Maruthy3

1devikavinod1624@gmail.com,2usha.d@manipal.edu, &3santoshm79@gmail.com

1All India Institute of Speech and Hearing, Mysuru - 570006

Introduction:

Singing is a vocal art performed with modulations and accurate breath control. The singing voice is the product of a delicate balance of physiologic control, artistry, and technique (Teachey, Kahane, & Beckford, 1991). Meticulous training methods and practice sessions place a great demand on the singer's vocal apparatus, contributing to an exceptional incidence of voice problems. Hence, for an active professional singer, voice health demands greater attention. Singers need to perceive their vocal status after each performance and assess whether they are facing any vocal loading effects; this awareness of the vocal apparatus may prevent them from developing voice disorders. Considering the need to heighten singers' awareness of their voice, Phyland et al. (2013) developed a concise, easy-to-use tool for singers that permits self-evaluation of vocal status. The questionnaire addresses two major issues: Vocal Fatigue (VF) and Pathologic Risk Indicators (PRI). Both sections comprise 10 questions each, with four alternatives for each question: not at all, mildly, moderately, and extremely. The alternatives are given scores of 1, 2, 3, and 4, respectively, with three questions under the VF section reverse-scored. The protocol gives an overview of one's voice use and thus serves as an instrument for the prevention and early detection of voice problems, if any, in singers.

Need for Study:

The EASE scale, available in the English language, has been shown to be sensitive in detecting subtle vocal loading effects in singers. A scale for identifying early symptoms of vocal loading effects in singers is not available in Indian languages. Thus, the need was felt to translate and cross-culturally adapt the EASE scale to the Kannada language. Once adapted, such a scale can be used effectively to identify early symptoms of vocal loading in Kannada-speaking Carnatic classical singers. Hence, the present study aimed to develop the Kannada version of the EASE self-assessment scale by performing socio-cultural and linguistic adaptation.

Aim & Objectives:

The objectives of the present study were (a) to cross-culturally adapt the Evaluation of the Ability to Sing Easily in the Kannada Language (EASE-K), (b) to examine the psychometric properties of the EASE-K, and (c) to compare the Kannada EASE-K scores with the original EASE scores.

Method:

Following standard guidelines for the cross-cultural adaptation of self-report measures (Beaton, Bombardier, Guillemin, & Ferraz, 2000), the original English version was translated to develop a pre-final EASE-K version. As Carnatic classical singing is dominant in the southern part of India, Carnatic classical singers between the ages of 18 and 75 years were considered for the cultural and linguistic validation of EASE-K. After forward and backward translation, the EASE-K was distributed to 12 Carnatic singers to assess its linguistic and cultural appropriateness. Incorporating their suggestions, the final version of EASE-K was developed.

Further, the EASE-K was administered to 104 (32 male & 72 female) Carnatic singers to assess its psychometric properties. The internal consistency of the subscales was determined using Cronbach's alpha. The tool was randomly redistributed to 12 of the 104 participants to verify the test-retest reliability of the EASE-K subscales, which was also assessed using Cronbach's alpha. Scores of the original English version of EASE (Phyland et al., 2013; Phyland et al., 2014) were compared with EASE-K scores, with the permission of the original author, to understand the cultural and linguistic validity of EASE-K. Spearman's correlation coefficient was used to assess the correlation between the scores of the two subscales (Vocal Fatigue & Pathologic Risk Indicator). Non-parametric tests were carried out to study the effect of age and gender on the EASE subscales.
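
The internal-consistency and subscale-correlation analyses described above can be sketched as follows; Cronbach's alpha is computed directly from its definition, and the random item responses are placeholders rather than the study's data (real responses would be inter-correlated, so the alpha printed here will be low).

```python
# Sketch of Cronbach's alpha per subscale and a Spearman correlation between
# the two subscale totals. Item responses are random placeholders, not data.
import numpy as np
from scipy.stats import spearmanr

def cronbach_alpha(items: np.ndarray) -> float:
    """items: 2-D array, rows = participants, columns = items (scored 1-4)."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars / total_var)

rng = np.random.default_rng(7)
vf_items = rng.integers(1, 5, size=(104, 10))    # 10 Vocal Fatigue items
pri_items = rng.integers(1, 5, size=(104, 10))   # 10 Pathologic Risk items

print("Alpha (VF): ", round(cronbach_alpha(vf_items), 2))
print("Alpha (PRI):", round(cronbach_alpha(pri_items), 2))

rho, p = spearmanr(vf_items.sum(axis=1), pri_items.sum(axis=1))
print(f"Spearman rho between subscales: {rho:.2f} (p = {p:.3f})")
```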

Results & Discussion:

The results of the reliability analysis indicated good (VF; r = 0.75) and excellent (PRI; r = 0.90) reliability for the EASE-K subscales. The current results are consistent with the original EASE version (alpha values of 0.91 and 0.89 for Vocal Fatigue and Pathologic Risk Indicators, respectively). An inter-item correlation was carried out, and the results revealed that all items of VF and PRI correlated significantly with their respective total scores. Cronbach's alpha for the inter-item analysis revealed a good correlation for both the VF and PRI subscales, with alpha values of 0.752 and 0.755 respectively.

Further, the Spearman correlation coefficient was performed to understand the correlation between the scores of two subscales. The results indicated a statistically significant positive correlation (p<0.001) between the two subscales (r= 0.745). The results signify that singers with higher scores in vocal fatigue also had significantly higher scores in pathologic risk indicator. Current results suggest that singers with pathologic risk indicators like vocal edema might experience a greater extent of vocal fatigue than those who do not report it.

The Mann-Whitney U test was used to assess differences in subscale scores across gender groups. There was no significant difference between males and females for either subscale (VF, p = 0.40; PRI, p = 0.27). Further, the singers were divided into three age groups (18-29, 30-39 and >40 years) to study age-based differences in subscale scores. The scores were compared across the three age groups using the Kruskal-Wallis test. The results revealed no statistically significant difference in scores for the VF (p = 0.34) and PRI (p = 0.55) subscales across the age groups.

The means and standard deviations of the subscale scores (VF and PRI) of the original EASE were compared with the EASE-K scores using an online t-test. The results revealed a significant difference (p = 0.02 and p ≤ 0.001 for VF and PRI respectively) between the English and Kannada versions of EASE. For the VF subscale, the mean value was significantly lower for EASE-K (mean = 16.04) than for the original version (mean = 17.42). For the PRI subscale, the mean value was higher for EASE-K (mean = 15.54) than for the original version (mean = 13.04). This difference could be due to participant variability, the sample size, and the difference in singing styles practiced by the participants. However, clinically the values are almost similar.

Similarly, EASE-K did not find any difference in subscale scores across age groups. However, in the original study PRI scores were significantly higher for the younger age group of singers compared to older age groups. This difference in findings might be due to the difference in the singing styles of the participants. The original EASE version reported the results based on the responses of Musical Theatre (MT) singers and EASE-K version on the Carnatic classical singers. Carnatic classical singers might have a different amount of vocal loading when compared to theater singers.

Summary & Conclusion:

To our knowledge, there is no self-assessment tool validated in Indian languages to predict possible vocal impairments in healthy singers. The present study was hence an effort to cross-culturally adapt and validate the English version of EASE in the Kannada language and test its applicability in Carnatic classical singers. The results showed good internal consistency and test-retest reliability of EASE-K. Based on these findings, the Kannada version of EASE is a reliable and valid tool that can be used with Carnatic classical singers to assess vocal loading effects following singing performances. Such a tool offers a great benefit in addressing healthy voices, unlike most self-assessment tools, which target specific conditions.


  Abstract – SP741: A Comparative Study on Frequency of Occurrence of Voiced and Unvoiced Fillers and its Duration in Kannada and English Bilingual Adults Top


Mohammed Asif Basha1, Shivani Ambekar2 & Harjeet Singh3

1ashasif555@gmail.com,2sanshi1907@gmail.com, &3asifbeast555@gmail.com

1Shravana Institute of Speech and Hearing, Ballari - 583104

Introduction:

Human communication comprises speech and non-speech transmission of thoughts, ideas and emotions. Speech is a complex task involving the exchange of signals via acoustic wave transmission (Fant, 1960), and speech activities involving individuals and groups play a prime role in social interaction. The speech task involves the synthesis of thoughts and the production of meaningful utterances, beginning with the selection of words and syllables from the lexicon, followed by the syntactic arrangement of these words to form phrases and sentences. Spontaneous speech involves dynamic retrieval of words from the lexicon, during which speech errors and fillers commonly occur.

Spontaneous speech includes syllables, words, phrases, repetition, prolongations and fillers. According to Dalton and Hardcastle, 1977, the Fillers can be of three types- 1.Fillers associated with articulatory closure of stop consonants (50-250 msec), 2.Fillers associated with the breath and 3.Fillers associated with voiced and unvoiced fillers that can appear before or after the entire speech task, sentence, word or phrase. The voiced fillers can be in bisyllabic and monosyllabic form like /a/, /em/, /er/, /um/, /and/. Multiple occasions can be responsible for the occurrence of voiced and unvoiced fillers where anxiety and nervousness would result to increased interference in spontaneous speech.

Need for Study:

India is a country of diverse languages (about 1,652 recognized languages), with multiple variations in spontaneous speech such as rate of speech, intonation and rhythm. While these aspects are studied intensively all over the world, one aspect of spontaneous speech that usually goes unnoticed is the non-speech filler. Although studies in the western context have previously analysed fillers in spontaneous English speech, there are very few studies on this aspect in the Indian context. Some studies revealed a positive correlation between frequency of occurrence and complexity of task (e.g., Levin et al., 1967; Taylor, 1969; Siegman and Pope, 1965), whereas other studies reported no significant correlation between task complexity and frequency of occurrence (e.g., Goldman-Eisler, 1961). India, being a multilingual country, therefore requires detailed studies on task complexity and the frequency of occurrence of voiced and unvoiced fillers in the first and second languages. The present study analysed the duration and frequency of occurrence of voiced and unvoiced fillers in speakers of Kannada (L1) and English (L2).

Aim & Objectives:

The main objective of this study was to assess the frequency of occurrence and duration of voiced and unvoiced fillers in the Kannada and English languages.

Method:

Twenty participants (10 males and 10 females) in the age range of 18-25 years (mean: 21.65, SD: 2.21) were selected for the study, with significant proficiency in the Kannada and English languages, normal hearing, no dysfluencies in speech and no psychological or cognitive disabilities. Written consent was obtained from the participants. The participants were seated comfortably in a sound-treated room for data collection. Firstly, the LEAP-Q (Language Experience and Proficiency Questionnaire; Marian et al., 2007) was administered. Secondly, a comprehensive picture ('a populated road scene') depicting multiple activities was given to the participants, who were asked to describe the picture for three minutes in Kannada; the speech was recorded using the PRAAT software (version 6.1.03). After a gap of two minutes, the next task, a general conversation (describing their childhood memories), was recorded for three minutes in Kannada. The same procedure was then followed to record the speech samples in English. Finally, the four task recordings from each of the twenty individuals were assembled and analysed using the Sonic Visualizer software (version 3.0.3) to obtain speech spectrograms for duration analysis and to count the number of fillers. The data obtained were tabulated and statistically analysed using IBM SPSS (version 26). Paired t-tests were performed to test for statistically significant differences between each pair of variables, and Pearson's correlation test was performed to analyse the correlation of both filler types between the picture description and general conversation tasks.
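
As an illustration of the statistical comparison described above (not the authors' SPSS analysis), the sketch below runs a paired t-test on hypothetical per-participant filler counts for the two languages and a Pearson correlation between the two tasks.

```python
# Sketch of the statistical comparison described above: a paired t-test on filler
# counts across languages and a Pearson correlation between tasks. The counts are
# hypothetical illustrations, not the study's data.
from scipy.stats import ttest_rel, pearsonr

# Voiced-filler counts per participant (picture description task)
kannada_counts = [4, 6, 3, 8, 5, 7, 2, 6, 5, 4, 9, 3, 6, 5, 7, 4, 6, 8, 5, 6]
english_counts = [7, 9, 5, 12, 8, 10, 4, 9, 8, 6, 13, 5, 9, 8, 10, 7, 9, 11, 8, 9]

t_stat, p_value = ttest_rel(kannada_counts, english_counts)
print(f"Paired t-test: t(19) = {t_stat:.3f}, p = {p_value:.4f}")

# Correlation of voiced-filler counts between the two tasks (English samples)
general_conversation = [6, 8, 5, 11, 7, 9, 5, 8, 7, 6, 12, 4, 8, 7, 9, 6, 8, 10, 7, 8]
r, p_corr = pearsonr(english_counts, general_conversation)
print(f"Pearson correlation between tasks: r = {r:.2f}, p = {p_corr:.4f}")
```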

Results & Discussion:

LEAP-Q analysis showed a mean exposure of 58.50% for Kannada with a mean proficiency rating of 9.45, whereas English showed a mean exposure of 41.50% with a mean proficiency rating of 8.80 on a scale of 10, for both males and females. All participants reported Kannada as their first dominant language (L1) and English as their second dominant language (L2), with twelve years of formal education in both languages. Paired t-test results for frequency of occurrence revealed a significant difference between the two languages in the picture description task for voiced fillers (M=5.950, SD=0.242) [t(19)=-2.598, p<0.05] and for unvoiced fillers (M=4.050, SD=6.886) [t(19)=-2.630, p<0.05]. A significant difference between the two languages was observed in the general conversation task for voiced fillers (M=5.050, SD=8.739) [t(19)=-2.584, p<0.05], but not for unvoiced fillers (M=1.450, SD=8.805) [t(19)=-0.736, p>0.05]. However, no significant difference was observed when the task was switched from picture description to general conversation in Kannada [t(19)=0.295, p>0.05] or in English [t(19)=1.180, p>0.05]. Paired t-test results for duration revealed no significant difference between the two languages in the picture description task for voiced fillers (M=0.248, SD=0.169) [t(19)=0.655, p>0.05] or for unvoiced fillers (M=0.007, SD=0.182) [t(19)=0.183, p>0.05]. Significant differences between the two languages were also not observed in the general conversation task for voiced fillers (M=0.025, SD=0.144) [t(19)=-0.775, p>0.05]. Further, no significant difference in duration was observed when the task was switched from picture description to general conversation in Kannada [t(19)=-0.810, p>0.05] or in English [t(19)=0.937, p>0.05]. Pearson's correlation results for frequency of occurrence showed a strong positive correlation (0.7<r<1.0) for voiced fillers and a moderate positive correlation (0.3<r<0.7) for unvoiced fillers between the picture description and general conversation tasks in both Kannada and English. The results for filler duration, however, revealed a weak negative correlation (0.0<r<0.3) for voiced fillers and a moderate positive correlation (0.3<r<0.7) for unvoiced fillers between the two tasks in both languages.

The mean duration of voiced fillers in both tasks was above the 200 ms filler range reported by Dalton and Hardcastle (1977). The analysis also showed a negative correlation between the rate of speech and the frequency of occurrence of fillers. Overall, the frequency of occurrence of voiced and unvoiced fillers in the present study differed significantly between L1 (Kannada) and L2 (English). Unvoiced fillers in spontaneous speech were followed by voiced fillers, similar to the findings of Belz and Klapi (2013), while voiced fillers have been considered a product of anxiety (Lallgee & Cook, 1969).

Summary & Conclusion:

The analysis of frequency of occurrence showed a significant difference in voiced and unvoiced fillers between Kannada and English, which may be due to hesitation and anxiety. No significant difference was observed in the duration of fillers between Kannada and English, nor was any significant difference observed for the change of task from picture description to general conversation. An increase in the number of fillers in spontaneous speech may affect communication by interrupting the spontaneity of thought, leading to communication hindrance, so the study of fillers in spontaneous speech remains important. As this study was conducted on a small sample, future studies could be carried out on larger groups, on speakers with dysfluencies, and with a wider variety of speech tasks. Voiced and unvoiced fillers could also be analyzed across different Indian languages.


  Abstract – SP742: Systemic Vocal Fold Hydration - An Outcome in Acoustic and Perceptual Voice Characteristics among Future Voice Professionals Top


V Swati1, Amritha M.L.2, Bhuvaneswari K3 & Sundaresan R4

1swatibaslp2020@gmail.com,2amritha.ml@gmail.com,3kbhuvana87@gmail.com, &4undaresanslp@yahoo.com

1Holy Cross College, Trichy - 620002

Introduction:

Increasing vocal fold hydration is one of the simplest recommendations used to prevent voice disturbances. Systemic hydration refers to fluid within the body and the vocal folds, whereas superficial hydration is the fluid lining the vocal fold surface and laryngeal lumen. To increase superficial hydration, speakers are advised to humidify inhaled air with nebulizers, humidifiers and steam inhalation. For systemic hydration, patients are counselled to drink at least 64 fl. oz of water each day, typically 6 to 8 glasses (approximately 1.9 litres), and to limit intake of caffeinated and alcoholic beverages (Sivasankar & Leydon, 2010). Increased water intake is recommended because a growing body of evidence suggests that the likelihood of vocal fold swelling and of vocal fold lesions, such as vocal fold nodules, is reduced with increased hydration (Marston & Titze, 1994). The effects of increased water intake are systemic, meaning that the whole body benefits. Titze (1998) proposed that the energy needed for small-amplitude vocal fold vibration depends on tissue biomechanical properties including viscosity, tissue thickness and elasticity. This energy requirement is referred to as the phonation threshold pressure (PTP), the minimum subglottal pressure required to initiate vocal fold oscillation. PTP has been found to increase under dehydration challenges, including reduced water intake (Verdolini-Marston, Sandage, & Titze, 1994).
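As a rough aside (the exact constant is part of Titze's derivation and is not given in the abstract), the small-amplitude relation usually attributed to Titze makes this dependence explicit: threshold pressure rises with tissue damping B (which increases with viscosity, i.e. with dehydration) and with the prephonatory glottal half-width \xi_0, and falls with vocal fold thickness T, where c is the mucosal wave velocity:

    P_{th} \propto \frac{B \, c \, \xi_0}{T}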

Voice professionals are those who depend on their voice for their profession. Voice disturbances, including vocal fold dryness and dysphonia resulting from excessive voice use, are common among professional voice users and can have an impact on their careers.

These disturbances are due to various factors, most importantly dehydration of the vocal folds and atmospheric changes, along with medications, allergies, ageing and upper respiratory tract infections. To maintain organic and functional laryngeal health, it is necessary to keep the body hydrated, which alters the viscoelastic properties of the vocal folds and thereby improves voice quality. The present study is based on the hypothesis that individual voice parameters improve after systemic vocal fold hydration, resulting in an improvement in overall voice quality.

Need for Study:

Voice professionals are prone to voice disturbances owing to excessive vocal fold use and dehydration. The present study was therefore carried out to analyse voice quality before and after hydration, with a view to preventing voice disorders in future voice professionals through hydration.

Aim & Objectives:

To analyse the effects of systemic vocal fold hydration on the acoustic and perceptual characteristics of voice among future voice professionals.

Method:

The study used a cross-sectional design comprising 45 female future voice professionals (undergraduate students pursuing an Audiology and Speech-Language Pathology course) aged 18 to 20 years (mean age 19 years). Participants who reported using their voice for about 6 to 8 hours daily were included. Participants with excessive caffeine intake, common cold or other upper respiratory tract infections, or a complaint of hypo- or hyperthyroidism were excluded. General demographic data and background history, including information on voice usage and litres of water intake per day, were collected. Based on this information, the 45 participants were divided into three groups of 15: participants who reported a water intake of less than 1 litre per day formed group A, 1.5 to 2 litres per day group B, and more than 2 litres group C. The experimental design included both subjective and objective measures of voice: Maximum Phonation Duration (MPD), s/z ratio, acoustic assessment using the Dr. Speech software, the perceptual GRBAS scale, and a self-perceptual questionnaire. The self-perceptual questionnaire, developed as part of this study, assesses five domains (sore throat, dryness, strain, weakness, throat-clearing behaviour), with participants rating each symptom on a 5-point scale. The study was carried out in four phases. In phase 1, a baseline assessment of acoustic, perceptual and self-perceptual voice measures was carried out for all three groups. Phases 2, 3 and 4 were carried out at the end of each of the next three days, after hydration under examiner monitoring: participants were asked to take 2 litres of water at regular intervals throughout the day, and intake was monitored by the examiners. The ambient temperature was also noted each day to rule out effects of environmental humidity on voice quality. The acoustic characteristics analysed were jitter, shimmer and HNR, and the voice quality estimate from Dr. Speech was also recorded.
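The study itself used Dr. Speech for the acoustic measures; purely as a hedged illustration, the sketch below shows how comparable jitter, shimmer and HNR values, plus an s/z ratio, could be obtained with the open-source praat-parselmouth wrapper around Praat. File names and analysis thresholds are assumptions, not the study's settings.

    # Hedged illustration: the study used Dr. Speech, not this code. praat-parselmouth
    # is shown only as an open-source way to obtain comparable measures.
    import parselmouth
    from parselmouth.praat import call

    def acoustic_measures(wav_path, f0_min=75, f0_max=500):
        snd = parselmouth.Sound(wav_path)
        points = call(snd, "To PointProcess (periodic, cc)", f0_min, f0_max)
        jitter = call(points, "Get jitter (local)", 0, 0, 0.0001, 0.02, 1.3)
        shimmer = call([snd, points], "Get shimmer (local)",
                       0, 0, 0.0001, 0.02, 1.3, 1.6)
        harmonicity = call(snd, "To Harmonicity (cc)", 0.01, f0_min, 0.1, 1.0)
        hnr_db = call(harmonicity, "Get mean", 0, 0)
        return jitter * 100, shimmer * 100, hnr_db   # jitter and shimmer as percentages

    def sz_ratio(s_seconds, z_seconds):
        # Maximum sustained /s/ duration divided by maximum sustained /z/ duration;
        # values near 1.0 are typical, and ratios above about 1.4 are often flagged.
        return s_seconds / z_seconds

    jitter_pct, shimmer_pct, hnr = acoustic_measures("phonation_a_phase1.wav")  # hypothetical file
    print(f"jitter = {jitter_pct:.2f}%  shimmer = {shimmer_pct:.2f}%  HNR = {hnr:.1f} dB")
    print(f"s/z ratio = {sz_ratio(18.2, 16.5):.2f}")  # hypothetical durations in seconds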

Results & Discussion:

Pre- and post-hydration comparisons were carried out using paired t-tests. The results show significant differences in Maximum Phonation Duration, shimmer and HNR before and after hydration. Maximum Phonation Duration increased significantly across phases. Shimmer values decreased after hydration, in line with the results of Maria & Kenneth (2009). Harmonics-to-noise ratio (HNR) increased across phases. HNR quantifies the relative amount of additive noise (Awen & Frankel, 1994), and a high HNR indicates a low level of hoarseness; there is a strong relationship between perceived voice quality and the harmonics-to-noise ratio (Ferrand, 2007). This result was corroborated by the voice quality estimate analysis in Dr. Speech, which showed significant positive changes in hoarse, harsh and breathy quality in all three groups. Although no significant difference was noted in jitter and F0, jitter values decreased after hydration. No difference was noted in perceptual analysis using the GRBAS scale or in EGG parameters such as CQ, CI and OR. Participants reported differences in the self-perception of dryness and weakness, the common complaints over the three days, with the majority reporting reduced dryness and weakness. Temperature varied little across the four days, ranging from 33 to 34 degrees Celsius with a mean of 33 degrees Celsius.
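For reference, HNR is conventionally expressed in decibels as the log ratio of periodic (harmonic) energy to aperiodic (noise) energy, which is why a higher HNR corresponds to a cleaner, less hoarse voice signal:

    \mathrm{HNR} = 10 \log_{10} \left( \frac{E_{\mathrm{harmonic}}}{E_{\mathrm{noise}}} \right) \ \mathrm{dB}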

Summary & Conclusion:

The present study shows a significant improvement in voice quality after hydration. It is therefore concluded that hydration improves voice characteristics in future voice professionals and can be used as a preventive measure for voice disorders. Hydration may also serve as an effective treatment for certain voice disturbances, such as throat dryness and weakness, and can be considered in treatment planning.


  Abstract – SP743: Perceptual and Acoustic Similarities in Voice and Articulatory Characteristics of Monozygotic and Dizygotic Twins Top


U Jumana Haseen1, Amritha M.L.2 & Bhuvaneswari K3

1saboormohi1996@gmail.com,2amritha.ml@gmail.com, &3kbhuvana87@gmail.com

1Holy Cross College, Trichy – 620002

Introduction:

Monozygotic (identical) twins are derived from one fertilized egg (zygote), and their genetic makeup is therefore expected to be identical. Monozygotic twins are nevertheless not completely "identical": they often show subtle differences in birth weight, or congenital defects present in only one twin, owing to unequal allocation of blastomeres. Dizygotic (fraternal) twins derive from two separately fertilized eggs and would therefore be expected to differ on the basis of both genetic and environmental factors. Some studies show that the voices of identical twins are sufficiently different to carry unique voice characteristics. According to Sataloff, the physical characteristics of the laryngeal mechanism, such as vocal fold length and structure, the size and shape of the supraglottic vocal tract, and phenotypic similarities elsewhere in the vocal mechanism, are genetically determined. Debruyne and Vercammen (2002) reported that monozygotic twins have almost identical vocal fundamental frequency (F0), variability in F0 and speaking fundamental frequency. Flash et al. (2002) reported that vocal range in semitones was very similar in twins, especially monozygotic twins. Similarity in speaking fundamental frequency (SFF) was also seen in monozygotic twins compared with unrelated persons (Decoster et al., 2002). Structural similarity on a genetic basis has been reported not only for major anatomic features, such as the volume and morphology of the larynx and vocal tract, but also for microscopic features, such as the ultrastructure of the basal membrane of the vocal fold epithelium (Gray et al., 1997; Vercammen, 2002). Similarities in vocal tract size and shape also result in rather similar acoustic spectra, especially with respect to the shape of the spectrum above the first formant (Clement, 2005). Greater similarity in voice between monozygotic (MZ) than between dizygotic (DZ) twins would have to be the result of genetic influences (McGuffin, 2001). Quantitative measures such as fundamental frequency in phonation (Przbyla, 1992), speaking fundamental frequency (Debruyne, 2002), formants (Forrai et al., 1983), the Dysphonia Severity Index (Van Lierde et al., 2005) and glottal parameters show similarity in monozygotic twins. Although many studies have been reported on monozygotic twins, very few report on dizygotic twins, and the existing studies on dizygotic twins show differences in vocal parameters.

Need for Study:

As reported in the literature, the similarity in genetic makeup between twins results in similar vocal tract shape and structure, which in turn can result in similar voice characteristics. Only a few studies have investigated acoustic and perceptual similarity in the voices of twins. Hence the present study focuses on the similarity in formants and in perceptual, acoustic and aerodynamic voice characteristics among monozygotic and dizygotic twins.

Aim & Objectives:

To analyze the formant, perceptual, acoustic and aerodynamic characteristics of voice among monozygotic and dizygotic twins.

Method:

Ten pairs of monozygotic twins and ten pairs of dizygotic twins within the age range of 20-30 years were recruited for the study. Individuals with a history of voice problems, upper respiratory tract infection, laryngeal problems or articulation problems were excluded. Aerodynamic analysis was carried out by assessing Maximum Phonation Duration (MPD). Acoustic parameters such as fundamental frequency (F0), jitter (%), shimmer (%), harmonics-to-noise ratio (HNR) and formants (first formant (F1), second formant (F2), third formant (F3)) were analyzed using PRAAT software version 5.1.04. For perceptual analysis, the phonation samples of both monozygotic and dizygotic twin pairs were recorded and shuffled. To investigate perceived speaker similarity, one sample from each pair of twins was played to three naive listeners, who were asked to match it with the other set of samples. To check intra-rater reliability, the samples were presented to the same raters again after five days.
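As a minimal sketch (not the study's exact PRAAT procedure, which is not reproduced in the abstract), mean F0 and the first three formants of a recorded vowel could be read with the praat-parselmouth Python wrapper as follows; the file name, the 5500 Hz formant ceiling and the midpoint sampling are illustrative assumptions.

    # Minimal sketch under assumed settings: mean F0 and F1-F3 of a sustained vowel.
    import parselmouth
    from parselmouth.praat import call

    snd = parselmouth.Sound("twin_pair1_a_vowel.wav")    # hypothetical recording
    pitch = snd.to_pitch()
    mean_f0 = call(pitch, "Get mean", 0, 0, "Hertz")     # mean F0 over the whole sample

    formant = snd.to_formant_burg(maximum_formant=5500)  # Burg formant analysis
    midpoint = 0.5 * (snd.xmin + snd.xmax)               # sample formants at the vowel midpoint
    f1, f2, f3 = (formant.get_value_at_time(n, midpoint) for n in (1, 2, 3))

    print(f"F0 = {mean_f0:.1f} Hz, F1 = {f1:.0f} Hz, F2 = {f2:.0f} Hz, F3 = {f3:.0f} Hz")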

Results & Discussion:

The results show a significant difference between dizygotic twins in perceptual, acoustic and aerodynamic measures, whereas no difference was found between monozygotic twins. Significant differences were found across all the acoustic parameters, namely mean pitch, jitter, shimmer and HNR, in dizygotic twins. Mean pitch variation was greater in dizygotic than in monozygotic twins. Monozygotic (identical) twins have similar vocal fundamental frequency (F0) and variability in F0, in accordance with the findings of Clement (2005). Variation in harmonics-to-noise ratio (HNR) was smaller in monozygotic than in dizygotic twins. Maximum Phonation Duration (MPD) values were similar for monozygotic twins and significant var