ORIGINAL ARTICLE
Year: 2018 | Volume: 32 | Issue: 1 | Page: 34-38

Identification of NOTE-50 with stimuli variation in individuals with and without musical training


N Devi, U A Kumar
Department of Audiology, All India Institute of Speech and Hearing, Mysore, Karnataka, India

Date of Web Publication: 14-Jun-2018

Correspondence Address:
N Devi
All India Institute of Speech and Hearing, Manasagangothri, Mysore - 570 006, Karnataka
India

Source of Support: None, Conflict of Interest: None


DOI: 10.4103/jisha.JISHA_32_17

Abstract

Background: Music perception is a multidimensional concept. The perception of music and the identification of a ra:ga depend on many parameters, such as tempo variation, ra:ga variation, stimulus (vocal/instrumental) variation, and singer variation. Of these, the factors most relevant to the perception of a ra:ga are the stimulus and the singer. However, the identification of a ra:ga also depends on an individual's music perception abilities. This study compared NOTE-50 (the minimum number of notes required to identify a ra:ga with 50% accuracy) for two different ra:gas under vocal and instrumental renderings in individuals with and without musical training. Methods: Thirty participants were divided into two groups, with and without musical training, based on the scores of the “Questionnaire on music perception ability” and the “Music (Indian music) Perception Test Battery.” Two basic ra:gas of Carnatic music, Kalya:ni and Ma:ya:ma:ļavagavļa, were taken as test stimuli. The two ra:gas were recorded in vocal (male and female singer) and instrumental (violin) renderings, each in octave scale. These recordings were edited and sliced into each note and combination of notes; in total, 16 stimuli were prepared, each presented randomly 10 times in the identification task. Results and Conclusion: The results revealed differences between participants with and without musical training in the perception of all stimulus variations. The male vocal rendering yielded better NOTE-50 identification scores than the other stimuli. Participants with musical training required fewer notes to identify a ra:ga correctly, which could be attributed to their training and better music perception ability. Hence, it is concluded that identifying, perceiving, understanding, and enjoying music require superior music perceptual ability, which can be achieved through musical training.

Keywords: Identification, questionnaire, ra:ga, randomization


How to cite this article:
Devi N, Kumar U A. Identification of NOTE-50 with stimuli variation in individuals with and without musical training. J Indian Speech Language Hearing Assoc 2018;32:34-8



Introduction


Music is an art, and Indian music is broadly classified into South Indian Carnatic music and North Indian Hindustani music.[1] Carnatic music can be either vocal or instrumental, and it is typically based on ra:ga and ta:la, which are comparable to melody and rhythm in Western music. A ra:ga is more complex in melodic variation and degree of rhythmic complexity than scales in Western music.[2] The sequential arrangement of notes (swaras in Carnatic music) in a ra:ga is capable of invoking the emotion of a song. The distinguishing characteristics of ra:gas are the swaras used, the order of the swaras, their manner of intonation and ornamentation, and their relative strength, duration, and frequency of occurrence.[3] Each ra:ga has notes that are sung in a particular melody using prosody. Prosodic modifications include increasing or decreasing the duration of notes, employing gamakas, and modulating the energy.[4]

Music perception is a complex, cognitively demanding task that taps into a variety of brain functions. For music information retrieval, a ra:ga identification task can be used.[5] However, perception differs with the singer and the instrument used; an individual's perception of a stimulus may vary with the type of music being played and with whether the rendering is vocal or instrumental.[6] Ra:ga identification consists of methods that identify the different notes in a piece of music and classify it into the appropriate ra:ga:[7] listening to a portion of music, parsing it into a series of notes, and analyzing the sequence of notes. The same principle was followed in the present study and, to make it more systematic, the NOTE-50 concept was used, in which the chance factor in identifying a ra:ga was well controlled. Correct identification of a particular ra:ga, however, requires perceptual skill for music. The main motivation behind ra:ga identification is that it is a good tool for music information retrieval.[1]

Individuals who have learned music over a period of time may be able to identify ra:gas better than those who have not. A multitude of data suggests that musical training over a period of years benefits not only sensory processing but also cognitive processing.[8],[9] Any music involves fine modulations of amplitude, frequency, and temporal aspects, and through extensive training musicians learn to recognize these fine variations. Hence, well-trained musicians have rich auditory experience and are considered auditory experts with better auditory skills than nonmusicians.[10] Musicians outperform nonmusicians not only on music-specific skills but also on other general auditory skills.[11] However, there is a dearth of literature on ra:ga identification in Carnatic music. Hence, the aim of the study was to determine NOTE-50 (the minimum number of notes required to identify a ra:ga with 50% accuracy) under different variables, namely ra:ga variation and stimulus (vocal/instrumental) variation, in individuals who had undergone musical training and those who had not.


Methods


The participants comprised two groups in the age range of 18–40 years. Group I consisted of 15 individuals (mean age 25.27 years, standard deviation [SD] = 3.88) with musical training, and Group II consisted of 15 individuals (mean age 29.93 years, SD = 5.39) without musical training. The music perception abilities of the participants were assessed with the “Questionnaire on music perception ability,”[12] which contains questions on parameters of music such as pitch awareness, pitch discrimination and identification, timbre identification, melody recognition, and rhythm perception, and with the “Music (Indian music) Perception Test Battery,”[13] which assesses pitch discrimination, pitch ranking, rhythm discrimination, melody recognition, and instrument identification. Individuals scoring ≥61.1 on the test battery and more than 15 on the questionnaire were assigned to Group I (with musical training); those scoring below 61.1 on the test battery and <15 on the questionnaire were assigned to Group II. These cutoffs follow the published normative scores.[12],[13] None of the participants had any history of otological or neurological problems, and their hearing sensitivity was within normal limits (i.e., air conduction thresholds of ≤15 dB HL from 250 Hz to 8 kHz in both ears and an air-bone gap of <10 dB HL at any frequency).
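The group-assignment rule can be summarized in a short sketch. This is a minimal illustration with hypothetical participant records; only the two cutoffs (≥61.1 on the test battery, >15 on the questionnaire) come from the text.

```python
# Minimal sketch of the group-assignment rule described above; the
# participant records are hypothetical, only the cutoffs are from the text.

def assign_group(battery_score: float, questionnaire_score: float) -> str:
    """Assign Group I (with musical training) or Group II (without)."""
    if battery_score >= 61.1 and questionnaire_score > 15:
        return "Group I"
    return "Group II"

participants = [
    {"id": "P01", "battery": 72.4, "questionnaire": 18},
    {"id": "P02", "battery": 55.0, "questionnaire": 12},
]
for p in participants:
    print(p["id"], assign_group(p["battery"], p["questionnaire"]))
```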

Stimuli and procedure

Two basic ra:gas, Kalya:ni (S R2 G3 M2 P D2 N3 S) ra:ga (KR – Ra:ga 1) and Ma:ya:ma:ļavagavļa (S R1 G3 M1 P D1 N3 S) ra:ga (MMR – Ra:ga 2), from Carnatic music were taken as stimuli. The two ra:gas were sung in male (M) and female (F) renderings and also played on the violin (I), each in octave scale. Three professional musicians (a male vocalist, a female vocalist, and a violinist) were seated comfortably in a sound-treated room, in separate recording sessions, and were asked to sing or play the ra:gas. The recordings were made using the CSL 4500 model (KayPENTAX, New Jersey, USA) at a sampling frequency of 48,000 Hz and saved to a computer. In each condition, the musician sang or played in octave scale, such that the difference between the first note sa and the last note sa was one octave. The stimuli were normalized for peak amplitude using Adobe Audition version 3 (Adobe Systems Incorporated, California, USA).

A goodness test was performed by playing the stimuli to ten musicians, who identified the ra:ga and rated the quality and naturalness of the stimuli on a three-point scale (good, fair, and bad). The recording that received the highest rating was taken as the test stimulus. This stimulus was sliced into one note (S), two notes (S R1), three notes (S R1 G3), and so on up to the entire sequence of eight notes (S R1 G3 M1 P D1 N3 S) for both ra:gas.

Testing was carried out in two phases: familiarization and identification. During the familiarization phase, participants listened to the violin notes played in octave scale for the Kalya:ni ra:ga (KR) and were instructed that notes heard in that particular pattern were to be identified as KR. A similar exercise was done for the Ma:ya:ma:ļavagavļa ra:ga (MMR). The familiarization phase lasted 15 min. In the identification phase, participants had to identify the ra:ga after listening to the notes by pressing the appropriate key on the keyboard, in order to obtain NOTE-50. Presentation of the stimuli and compilation of the responses were done using DMDX software. On each trial, participants were presented with a varying number of notes of a ra:ga (either Kalya:ni or Ma:ya:ma:ļavagavļa) along with the words Kalya:ni and Ma:ya:ma:ļavagavļa on the laptop screen. The participants' task was to identify the stimulus by pressing key 1 or 2 on the keyboard, where 1 and 2 represented Kalya:ni and Ma:ya:ma:ļavagavļa, respectively. A constant interstimulus interval of 7 s was given after each stimulus for responding; until then, the options 1 and 2 remained on the screen. Each stimulus (one note [S], two notes [S R1], three notes [S R1 G3], and the other sequences) was presented 10 times in random order to reduce chance responding, resulting in a total of 80 stimuli for each ra:ga in each condition. All conditions (male, female, and instrumental rendering) were presented to the participants in random order. The minimum number of notes required to identify the ra:ga with 50% accuracy was calculated from the obtained data using linear regression; henceforth, this measure is referred to as NOTE-50. Stimuli were presented at 68–70 dB sound pressure level through headphones.
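The NOTE-50 computation can be illustrated with a brief sketch. The study specifies only that linear regression was used, so the fitting call and the example accuracies below are assumptions: fit a straight line to the proportion of correct identifications as a function of note count and solve for the count at which accuracy crosses 0.5.

```python
# Minimal sketch of the NOTE-50 computation: fit a line to proportion
# correct versus note count, then solve for the 0.5 crossing. The
# accuracy values are hypothetical (proportion correct out of 10
# presentations per note count).
import numpy as np

note_counts = np.arange(1, 9)  # stimuli sliced into 1..8 notes
accuracy = np.array([0.2, 0.3, 0.4, 0.6, 0.7, 0.9, 1.0, 1.0])

slope, intercept = np.polyfit(note_counts, accuracy, deg=1)  # linear fit
note_50 = (0.5 - intercept) / slope  # note count at 50% accuracy
print(f"NOTE-50 = {note_50:.2f} notes")
```

A lower NOTE-50 thus indicates that fewer notes were needed to identify the ra:ga reliably.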


Results and Discussion


The NOTE-50 scores of each participant were subjected to analysis. First, descriptive statistics (mean and SD) are reported for all measurements. The Shapiro–Wilk test of normality was then administered; as the data were normally distributed (P > 0.05), parametric tests were used for further analysis. Whenever main effects or interactions were significant, post hoc pairwise comparisons were performed with Duncan's/Bonferroni's correction applied for multiple comparisons. The mean number of notes required to identify a particular ra:ga at the 50% level was determined across all stimulus variables for both groups of participants. [Figure 1]a and [Figure 1]b depict the mean minimum number of notes required to identify KR and MMR, respectively, across the two groups.
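As an illustration of the normality screening step just described, the following sketch applies the Shapiro–Wilk test to decide between parametric and nonparametric follow-up tests; the scores and condition labels are hypothetical.

```python
# Minimal sketch of the Shapiro-Wilk screening described above,
# applied to hypothetical NOTE-50 scores per condition.
import numpy as np
from scipy import stats

note50_scores = {
    "Group I - male KR": np.array([2.1, 2.4, 1.9, 2.2, 2.0]),
    "Group II - male KR": np.array([3.8, 4.1, 3.5, 4.0, 3.7]),
}
for condition, scores in note50_scores.items():
    w, p = stats.shapiro(scores)
    choice = "parametric" if p > 0.05 else "nonparametric"
    print(f"{condition}: W = {w:.3f}, p = {p:.3f} -> {choice} tests")
```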
Figure 1: (a) Identification of the Kalya:ni ra:ga with different notes across the different stimuli (male, female, and instrumental rendering) for participants with and without musical training. (b) Identification of the Ma:ya:ma:ļavagavļa ra:ga with different notes across the different stimuli (male, female, and instrumental rendering) for participants with and without musical training. Note – FKR: Female Kalya:ni ra:ga; MKR: Male Kalya:ni ra:ga; IKR: Instrument Kalya:ni ra:ga; FMMR: Female Ma:ya:ma:ļavagavļa ra:ga; MMMR: Male Ma:ya:ma:ļavagavļa ra:ga; IMMR: Instrument Ma:ya:ma:ļavagavļa ra:ga; I: participants with musical training; II: participants without musical training. The black line at 0.5 indicates NOTE-50, the point at which a ra:ga is identified 50% of the time as a function of the number of notes



From [Figure 1], it can be inferred that identification scores were better for all three stimulus variations (female vocal, male vocal, and instrumental rendering) in participants who had undergone musical training than in participants without musical training. For participants with musical training, the highest identification score was reached with fewer notes for both ra:gas. Further, through linear regression curves, the minimum number of notes required to identify each ra:ga with 50% accuracy was determined. [Figure 2] shows the mean and standard error of the minimum number of notes required to identify each ra:ga with 50% accuracy (NOTE-50).
Figure 2: Mean and standard error of the minimum number of notes required to identify a ra:ga with 50% accuracy (NOTE-50) for both groups of participants. Note – FKR: Female rendering of Kalya:ni ra:ga; FMMR: Female rendering of Ma:ya:ma:ļavagavļa ra:ga; MKR: Male rendering of Kalya:ni ra:ga; MMMR: Male rendering of Ma:ya:ma:ļavagavļa ra:ga; IKR: Instrumental rendering of Kalya:ni ra:ga; IMMR: Instrumental rendering of Ma:ya:ma:ļavagavļa ra:ga



From [Figure 2], it can be inferred that individuals with musical training had better (lower) NOTE-50 values than individuals without musical training. Analysis of variance (ANOVA) showed significant main effects of ra:ga, F(1, 28) = 22.843 (P < 0.05), MMR requiring fewer notes for identification than KR; of mode of stimuli, F(2, 56) = 46.648 (P < 0.05), male rendering requiring the fewest notes, followed by female and instrumental renderings; and of group, F(1, 28) = 180.0 (P < 0.01), Group I requiring fewer notes to identify a ra:ga. There was a significant interaction between group and ra:ga, F(1, 28) = 4.632 (P < 0.05), as well as between mode of stimuli, ra:ga, and group, F(2, 56) = 9.298 (P < 0.05). There was no significant interaction between group and mode of stimuli, F(2, 56) = 0.933 (P > 0.05), or between mode of stimuli and ra:ga, F(2, 56) = 0.694 (P > 0.05).

Since there were significant interactions, one-way repeated-measures ANOVA was carried out to compare the modes of stimuli for each ra:ga separately. Within the group with musical training, there was a significant difference among the modes of stimuli for KR, F(2, 28) = 11.182 (P < 0.05). Pairwise comparison with Bonferroni correction across the renderings of KR in this group revealed significant differences between female and male rendering (P < 0.05) and between male and instrumental rendering (P < 0.05), but no significant difference between female and instrumental rendering (P > 0.05). Similarly, within the group without musical training, there was a significant difference among the modes of stimuli for KR, F(2, 28) = 13.357 (P < 0.05). Pairwise comparison across the renderings of KR in this group revealed significant differences between female and instrumental rendering (P < 0.05) and between male and instrumental rendering (P < 0.05), but no significant difference between female and male rendering (P > 0.05).

One-way repeated-measures ANOVA was also carried out for MMR separately for both groups. Within the group with musical training, there was a significant difference among the modes of stimuli for MMR, F(2, 28) = 40.146 (P < 0.05). Pairwise comparison across the renderings of MMR in this group revealed a significant difference only between female and instrumental rendering (P < 0.05); there was no significant difference between male and instrumental rendering (P > 0.05) or between female and male rendering (P > 0.05). Similarly, within the group without musical training, there was a significant difference among the modes of stimuli for MMR, F(2, 28) = 15.503 (P < 0.05). Pairwise comparison across the renderings of MMR in this group revealed significant differences between female and male rendering (P < 0.05) and between male and instrumental rendering (P < 0.05), but no significant difference between female and instrumental rendering (P > 0.05).
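A one-way repeated-measures ANOVA of this kind can be sketched as follows, assuming a long-format table of NOTE-50 scores; the column names and values are hypothetical, and statsmodels' AnovaRM is one plausible implementation, not necessarily the software used in the study.

```python
# Minimal sketch of a one-way repeated-measures ANOVA comparing modes of
# stimuli within one group and one ra:ga; data are hypothetical.
import pandas as pd
from statsmodels.stats.anova import AnovaRM

df = pd.DataFrame({
    "subject": [1, 1, 1, 2, 2, 2, 3, 3, 3],
    "mode": ["male", "female", "instrument"] * 3,
    "note50": [2.0, 2.6, 2.9, 1.8, 2.4, 3.1, 2.2, 2.7, 2.8],
})
result = AnovaRM(df, depvar="note50", subject="subject", within=["mode"]).fit()
print(result)  # F statistic and P value for the mode-of-stimuli effect
```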

Further, paired t-tests were carried out within each group across the ra:gas. Among those with musical training, there was a significant difference between the ra:gas for the female rendering, t(14) = 4.208, P = 0.001, and the male rendering, t(14) = 4.508, P < 0.001; among those without musical training, there was a significant difference only for the male rendering, t(14) = 3.401, P = 0.004. Pearson's correlation coefficient was computed to examine the relationship of the “Questionnaire on music perception abilities” and the “Music (Indian music) Perception Test Battery” with the NOTE-50 scores. [Table 1] summarizes the results of Pearson's correlation coefficient.
Table 1: Relationship of the “Questionnaire on music perception abilities” and the “Music (Indian music) Perception Test Battery” with NOTE-50



It can be inferred from [Table 1] that there was a significant negative correlation between NOTE-50 and both musical ability measures, the “Questionnaire on music perception abilities” and the “Music (Indian music) Perception Test Battery.” This shows that individuals with higher scores on the musical ability measures were able to identify the ra:gas with 50% accuracy from fewer notes. Therefore, NOTE-50 can also be used as a tool to measure an individual's musical abilities in Indian classical music.
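The correlation analysis can be reproduced with a short sketch; the paired score vectors below are hypothetical and merely illustrate the expected negative relationship.

```python
# Minimal sketch of the Pearson correlation between a musical-ability
# measure and NOTE-50; the paired scores are hypothetical.
from scipy import stats

questionnaire = [18, 20, 16, 22, 19, 14, 12, 11, 13, 10]
note50 = [2.1, 1.8, 2.4, 1.6, 2.0, 3.5, 3.9, 4.2, 3.7, 4.4]

r, p = stats.pearsonr(questionnaire, note50)
print(f"r = {r:.2f}, p = {p:.3f}")  # negative r: higher ability, fewer notes
```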

The results indicate that, among the different renderings of a ra:ga, the male and female vocal renderings were easier to identify with fewer notes than the instrumental rendering. Identification from musical instrument renderings has also been reported to be difficult.[14] One explanation is the F0 variation across renderings: the F0 of the male rendering is lower than that of the female rendering, which in turn is lower than that of the instrumental rendering.[15] At the level of the auditory system, a lower F0 is more easily segregated and better perceived.[16] Identifying music played on an instrument is difficult and is a significant problem in both scientific and practical applications. Detailed analysis of spectral and temporal features can support identification of a ra:ga from an instrument; however, perceptual listening to an instrument to identify a ra:ga is very difficult, and it is even more complicated for those with poor knowledge of music.

Moreover, comparing the vocal stimuli, the male rendering was easier to identify than the female rendering. A speaker's sex can be identified easily from the audio signal alone.[17] The difference in the perception of spoken signals between the sexes has been attributed to adult male voices being “marked” by the sexually selected features of lowered F0 and formant frequencies.[18] In estimating the gender of speakers, listeners may rely on the resonances of the vocal tract to judge the stimuli.[19],[20],[21] Presumably, sex identification from such stimuli is possible because of the strong correlation of formant frequency with vocal-tract length,[22] which in turn correlates with body size,[23] which correlates with sex. The association between sex and supralaryngeal vocal-tract length (or, more indirectly, sex and skull size) emerges at puberty, when the course of maturation diverges: boys' vocal tracts lengthen more than girls', accompanied by a change in the relative sizes of the oral and pharyngeal cavities.[24] A larger larynx can produce lower pitches than a smaller larynx, such as that found in females; male hormones cause the larynx to grow larger and the vocal folds to lengthen and thicken.[25] Hence, the perception and identification of a ra:ga may also depend on the F0 of the rendering: the male rendering, with its lower F0, is easier to identify than the female rendering or instrumental music. However, the study was limited to two ra:gas of Carnatic music with a few variations; generalizing these results to other ra:gas and other renderings requires more controlled research.

Comparing the two ra:gas used in the present study, MMR was more easily identified. There is, however, a dearth of literature to support this finding of a difference in perception between ra:gas. A possible explanation is familiarity, as MMR is usually the first ra:ga taught in Carnatic music training. Further research is required with more ra:gas evaluated for identification, perception, and retrieval of musical abilities. The present study revealed that participants with musical training outperformed those without musical training in identifying a ra:ga, which suggests that music information retrieval depends on an individual's musicality and training.


Conclusion


To estimate an individual's musical abilities, researchers often use self-reported questionnaires of musicianship. However, being a nonmusician does not denote an absence of musical ability; the ability to perceive music may simply be undiscovered. Hence, in the present study, along with a self-reported questionnaire and a perceptual test of musical ability, another perceptual measure, NOTE-50, was used, and it correlated well with the estimates of musicality in nonmusicians. NOTE-50 can therefore be used as an additional perceptual tool. This study also indicates that individuals with musical training may have a greater ability to enjoy, understand, and perceive music. However, parameters such as singer or stimulus variation and ra:ga variation might interfere with the identification of an individual's musicality. Identifying a ra:ga is neither easy nor simple for an individual who has not undergone formal musical training; one has to consider the parameters involved in music before judging an individual's music perception abilities.

Financial support and sponsorship

Nil.

Conflicts of interest

There are no conflicts of interest.

 
References

1. Sridhar R, Geetha TV. Raga identification of Carnatic music for music information. IJRTER 2009;1:571-4.
2. Trisiladevi CN, Nagappa UB. Overview of automatic Indian music information recognition, classification and retrieval systems. In: Proceedings of the IEEE International Conference on Recent Trends in Information Systems; 2011.
3. Belle S, Joshi R, Rao P. Raga identification by using swara intonation. J ITC Sangeet Res Acad 2009;23.
4. Ishwar V, Bellur A, Murthy HA. Motivic analysis and its relevance to raga identification in Carnatic music. In: Proceedings of the 2nd CompMusic Workshop; 2012.
5. Sudha R, Kathirvel A, Sundaram RM. A system of tool for identifying ragas using MIDI. In: Proceedings of the Second International Conference on Computer and Electrical Engineering, IEEE; 2009. p. 644-7.
6. Furnham A, Bradley A. Music while you work: The differential distraction of background music on the cognitive test performance of introverts and extraverts. Appl Cogn Psychol 1999;11:445-55.
7. Manisha K, Bhalke DG. Raga identification of Indian classical music: An overview. IOSR J Electron Commun Eng 2015:100-5.
8. Tervaniemi M, Kruck S, De Baene W, Schröger E, Alter K, Friederici AD, et al. Top-down modulation of auditory processing: Effects of sound context, musical expertise and attentional focus. Eur J Neurosci 2009;30:1636-42.
9. Zatorre RJ, Belin P, Penhune VB. Structure and function of auditory cortex: Music and speech. Trends Cogn Sci 2002;6:37-46.
10. Kraus N, Chandrasekaran B. Music training for the development of auditory skills. Nat Rev Neurosci 2010;11:599-605.
11. Banai K, Fisher S, Ganot R. The effects of context and musical training on auditory temporal-interval discrimination. Hear Res 2012;284:59-66.
12. Devi N, Kumar AU, Arpitha V, Khyathi G. Development and standardization of 'Questionnaire on music perception ability'. J ITC Sangeet Res Acad 2017;6:3-13.
13. Archana D, Manjula P. Music (Indian Music) Perception Test Battery for individuals using hearing devices. Student Research at AIISH, Mysore (articles based on dissertations done at AIISH), Volume VIII, Part A – Audiology; 2010. p. 6-17.
14. Jun W, Emmanuel V, Stanislaw R, Takuya N, Nobutaka O, Shigeki S. Musical instrument identification based on new boosting algorithm with probabilistic decisions. In: International Symposium on Computer Music Modeling and Retrieval (CMMR), Bhubaneswar, India; 2011.
15. Kitahara MT, Goto H, Okuno G. Musical instrument identification based on F0-dependent multivariate normal distribution. In: Proceedings of the IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP); 2003. p. 409-12.
16. Middlebrooks JC, Simon JZ, Popper AN, Fay RR. The Auditory System at the Cocktail Party. Springer Handb Aud Res 2017;60. doi: 10.1007/978-3-319-51662-2.
17. Lass NJ, Hughes KR, Bowyer MD, Waters LT, Bourne VT. Speaker sex identification from voiced, whispered, and filtered isolated vowels. J Acoust Soc Am 1976;59:675-8.
18. Owren MJ, Berkowitz M, Bachorowski JA. Listeners judge talker sex more efficiently from male than from female vowels. Percept Psychophys 2007;69:930-41.
19. Schwartz MF. Identification of speaker sex from isolated, voiceless fricatives. J Acoust Soc Am 1968;43:1178-9.
20. Ingemann F. Identification of the speaker's sex from voiceless fricatives. J Acoust Soc Am 1968;44:1142-4.
21. Schwartz MF, Rine HE. Identification of speaker sex from isolated, whispered vowels. J Acoust Soc Am 1968;44:1736-7.
22. Fant G. Acoustic Theory of Speech Production. The Hague, the Netherlands: Mouton; 1960. p. 242.
23. Smith DR, Patterson RD. The interaction of glottal-pulse rate and vocal-tract length in judgements of speaker size, sex, and age. J Acoust Soc Am 2005;118:3177-86.
24. Fitch WT, Giedd J. Morphology and development of the human vocal tract: A study using magnetic resonance imaging. J Acoust Soc Am 1999;106:1511-22.
25. Lee B. Are Male and Female Voices Really That Different? 2012. Available from: http://www.vocalability.com/voice-science/are-male-and-female-voices-really-that-different/. [Last accessed on 2014 Mar 09].

