Features  |   July 01, 2010
Neuroimaging and the Listening Brain
Author Notes
  • Patrick C. M. Wong, PhD, CCC-SLP, is associate professor of communication sciences and disorders, otolaryngology-head and neck surgery, and neuroscience at Northwestern University. He directs the Communication Neural Systems Research Group, supported by the National Science Foundation and the National Institutes of Health. Wong’s clinical interests include rehabilitative audiology and neurogenic communication disorders. Contact him at pwong@northwestern.edu.
The ASHA Leader, July 2010, Vol. 15, 14-17. doi:10.1044/leader.FTR2.15082010.14
Most of us live in an auditory world. We use spoken language to communicate, we tune in to environmental sounds, and we listen to music. Formal and informal experiences with these acoustically and functionally complex sounds interact with our neural systems, from the peripheral to the central auditory systems, and from neurotransmitters to neural networks. Not surprisingly, deficits of complex auditory processing are associated with an array of conditions that limit activity and participation, including not only hearing loss, but also conditions such as autism and learning problems that are complicated by concomitant cognitive factors. Because of its broad impact on our communicative functions and quality of life, complex auditory processing is an area that requires expertise from professions of our discipline (audiology and speech-language pathology) as well as from cognate disciplines such as cognitive psychology, neuroscience, and education.
Auditory processing is a complex phenomenon that involves sensory, perceptual, and cognitive resources and functions (Wingfield & Tun, 2007; Pichora-Fuller, 2009). To understand the neural underpinnings of auditory processing, it is essential to be able to observe neurophysiological responses from brainstem nuclei to the cognitive regions connected to the auditory cortex. Such observations can be made with high-spatial resolution neuroimaging methods such as functional magnetic resonance imaging (fMRI) and positron emission tomography (PET). In addition, high-temporal resolution methods—such as electroencephalography (EEG) and magnetoencephalography (MEG)—provide crucial and complementary information (Kraus et al., 2009; Wong et al., 2007). Although high-spatial resolution methods can help identify activity in small brain regions (e.g., on a one-millimeter scale), high-temporal resolution methods provide information about how responses change over a very small time scale (e.g., milliseconds).
Basic Science
Neuroimaging has been shown to be reliable in measuring responses of the human auditory pathway, including auditory brainstem and thalamic nuclei (Schoenwiesner et al., 2007), and in examining the effects of sound intensity levels, attention, and task modulation (Rinne et al., 2008; von Kriegstein et al., 2008). In terms of more complex auditory stimuli, speech perception consistently has been shown to activate the auditory cortex and connected regions associated with articulation and word semantics (Hickok & Poeppel, 2007; Rauschecker & Scott, 2009). Interestingly, listening to music can recruit the same neural areas as speech perception, but only in trained musicians. In one study, musicians showed higher activation in the left auditory cortex, Broca’s area, and motor cortex than non-musicians, but only when listening to music played on their own instrument—for example, a violinist listening to violin music (Margulis et al., 2009).
Speech presented in background noise is a frequently used experimental paradigm, as it yields ecologically valid and clinically relevant information about the way the auditory and attention networks function in challenging listening contexts. Several high-spatial resolution neuroimaging studies provide growing evidence that both auditory and cognitive brain regions are recruited in speech perception, especially in noisy situations (e.g., Salvi et al., 2002; Zekveld et al., 2006). One methodological caveat is that neural responses to the noise produced by the scanner must be isolated in order to identify confidently the activation patterns associated solely with processing of the experimental stimuli. To address this problem, researchers have adopted a “sparse sampling” technique, in which stimuli are presented in between the image scans (e.g., Griffiths et al., 2001). Because the hemodynamic response associated with regional brain activation peaks several seconds after the onset of the auditory stimulus, researchers using sparse sampling can observe brain activity related mostly to auditory processing of the experimental stimuli without interference from scanning noise.
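The timing logic behind sparse sampling can be sketched in a few lines. This is an illustrative sketch, not code from any of the cited studies; the 10-second repetition time, 2-second acquisition, and 5-second hemodynamic peak delay are assumed example values that an experimenter would choose for a particular design.

```python
def sparse_onsets(n_trials, tr=10.0, scan_dur=2.0, hrf_peak=5.0):
    """Return (stimulus_onset, scan_onset) pairs in seconds.

    Each stimulus is presented in the silent gap between volume
    acquisitions, timed so that its hemodynamic response peaks
    roughly when the next scan is acquired. All parameter values
    here are hypothetical examples, not the studies' actual settings.
    """
    pairs = []
    for trial in range(n_trials):
        scan_onset = (trial + 1) * tr       # next acquisition starts here
        stim_onset = scan_onset - hrf_peak  # present stimulus several seconds earlier
        # Sanity check: the stimulus must start after the previous scan's noise ends.
        assert stim_onset >= trial * tr + scan_dur, "stimulus overlaps scanner noise"
        pairs.append((stim_onset, scan_onset))
    return pairs

print(sparse_onsets(3))  # [(5.0, 10.0), (15.0, 20.0), (25.0, 30.0)]
```

The key design choice is simply that the silent gap between acquisitions is long enough to present the stimulus and let the hemodynamic response rise before the scanner fires again.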
A recent fMRI study examined cortical contributions to speech perception in noise in younger adults with normal peripheral hearing functions (Wong et al., 2008a). Using a picture-word matching task, the subjects identified words embedded in quiet and in multi-talker babble noise at two signal-to-noise ratios (SNRs): +20 dB and −5 dB. Although subjects were equally accurate in the quiet and in the SNR+20 (less noisy) conditions, their performance was predictably impaired by the louder noise in the SNR−5 condition. The reaction time data revealed the same pattern of results as the accuracy data (i.e., faster responses in the quiet and SNR+20 conditions as compared to the SNR−5 condition).
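What it means to embed a word at a target SNR can be illustrated with a short sketch. This is not the study's actual stimulus-preparation code; the arrays below are random placeholders standing in for a recorded word and a babble track, and the function simply applies the definition SNR = 10·log10(P_signal / P_noise), with P the mean signal power.

```python
import numpy as np

def mix_at_snr(speech, noise, snr_db):
    """Scale `noise` so that `speech` is embedded at `snr_db` dB SNR, then mix."""
    p_speech = np.mean(speech ** 2)
    p_noise = np.mean(noise ** 2)
    # Choose scale so 10*log10(p_speech / (scale**2 * p_noise)) == snr_db.
    scale = np.sqrt(p_speech / (p_noise * 10 ** (snr_db / 10)))
    return speech + scale * noise

rng = np.random.default_rng(0)
speech = rng.standard_normal(48000)  # placeholder for a spoken word (1 s at 48 kHz)
babble = rng.standard_normal(48000)  # placeholder for multi-talker babble
for snr in (20.0, -5.0):             # the two noise conditions in the study
    mixed = mix_at_snr(speech, babble, snr)
```

At +20 dB the speech power is 100 times the noise power; at −5 dB the noise power exceeds the speech power by a factor of about 3.2, which is why performance drops in that condition.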
In the left panel of Figure 1[PDF], statistical contrasts of the fMRI activation results from the SNR+20 vs. the quiet condition are displayed (orange/red pattern indicates greater activation in the SNR+20 condition compared to the quiet condition). This contrast shows higher levels of activation in the auditory cortex, particularly in the bilateral superior temporal gyri (STG) for the SNR+20 condition. In the middle panel of Figure 1, statistical contrasts of the activation results from the SNR-5 vs. the quiet condition are shown. Compared to the left panel (SNR+20 vs. quiet), the SNR-5 condition yielded an increased left lateralization in the STG. Moreover, the middle panel also shows left posterior STG (pSTG), rather than middle STG (mSTG) activation; pSTG and mSTG are anatomically and functionally distinct. In the right panel, a more direct comparison of the two noise conditions (SNR-5 vs. SNR+20) shows greater activation in the left pSTG in the SNR-5 condition (the louder noise condition) than in the SNR+20 condition.
These three images comparing (or “contrasting”) different listening conditions indicate that the pSTG becomes important during speech perception as the level of noise increases. In addition to the STG, additional activation was observed in the inferior parietal lobule (IPL) area and the nearby precuneus, as well as in the various aspects of the prefrontal cortex (PFC) (shown in Figure 2 [PDF]) during the noise conditions compared to the quiet condition.
This pattern of activation follows the posterior auditory pathway, as proposed by Rauschecker (1998), indicating that perhaps listening to speech in noise requires auditory-motor integration (Hickok & Poeppel, 2007) and recruitment of the phonological working memory network (Baddeley, 2003) in the pSTG/IPL-PFC, in addition to acoustic analyses performed in the mSTG. The poor behavioral performance in the noisier SNR-5 condition compared with the SNR+20 condition points to the task’s difficulty and the likely need for support from phonological working memory—both storage and rehearsal—as well as attention networks.
Clinical Populations
High-spatial resolution neuroimaging methods also have been used in individuals with auditory deficits, although to date just a few studies have been published. These include anatomical studies for pre-surgical planning and for detection of neuroanatomical anomalies in sudden hearing loss (e.g., Yoshida et al., 2008) as well as a few functional neuroimaging studies. In terms of peripheral sensorineural hearing loss, Propst et al. (2010) found less activation in the auditory cortex and attentional networks in children with hearing loss as compared to controls during narrowband noise and speech-in-noise tasks. In adults, subjects with hearing loss showed a greater increase in brain activation than control subjects as the intensity level of frequency-modulated tone stimuli increased (Langers et al., 2007). Although this approach is not yet clinically informative, high-spatial resolution imaging allows us to begin to examine the impact of peripheral hearing loss on cortical activation.
Neuroimaging is particularly useful for exploring complex auditory processing and central nervous system deficits because it allows researchers to observe interactions among different neural systems. For example, older listeners often complain that they cannot hear well in challenging listening environments such as noisy restaurants. Multiple studies have used fMRI to probe the interactions of cortical systems in older adults listening in noisy environments. Harris et al. (2009) asked younger (average age around 29 years old) and older listeners (around 70 years old) to perform a word-recognition task in which the intelligibility of the stimuli was manipulated using low-pass filtering. No age differences were found in the auditory cortex but differences were found in the anterior cingulate cortex (presumed to be part of the attention network). Structurally, the volume of left auditory cortex was found to be larger in the younger listeners, and interestingly, these volumetric differences were correlated with their word-recognition accuracy scores.
In another study using fMRI to study speech perception in noise, Wong et al. (2009) compared cortical responses of younger and older subjects as they identified single words in quiet and in two multi-talker babble noise conditions (SNR+20 and −5 dB). Behaviorally, older and younger subjects showed no significant differences in the first two conditions, but older adults performed less accurately in the SNR −5 condition. In older subjects, the fMRI results showed reduced activation in the auditory cortex but an increase in working memory and attention-related cortical areas (prefrontal and precuneus regions), as well as in word semantic-related areas (middle temporal gyrus, MTG), especially in the noisy conditions, yielding a significant group x SNR interaction. Figure 3 [PDF] shows some of these results.
Most interestingly, increased cortical activity in these cognitive regions correlated positively with behavioral performance in older listeners (Figure 4 [PDF]), indicating that these regions may support compensatory strategies. Neuroanatomical analysis (Wong et al., in press) further demonstrates a positive correlation between the size and thickness of the dorsal and ventral aspects of the prefrontal cortex and older listeners’ ability to perceive speech in noise, as measured by the QuickSIN test, a clinical instrument for measuring speech recognition in noise. It is worth noting that areas of activation typically linked to word semantic processing in the middle temporal gyri also were positively correlated with performance. Taken together, these studies provide corroborative evidence for the engagement of both auditory and other cognitive brain regions during speech perception in noisy environments. These results are consistent with behavioral findings that point to the contribution of cognitive resources (e.g., working memory and attention) to complex auditory perception (e.g., Humes, 2007).
Several studies have used neuroimaging methods to examine neural activation in individuals who are cochlear implant (CI) candidates or users. Because of safety concerns, these studies most often use PET rather than fMRI, especially when imaging is performed after implantation. A post-implantation PET study with six adult CI users showed increased occipital cortex activation as compared to control subjects (Giraud et al., 2001), in addition to greater activation in other cognitive brain regions such as the precuneus and parahippocampus. In a recent PET study exploring music-related brain activation, Limb et al. (2010) found greater auditory cortex activation in adult CI users than in control subjects. One recent study also examined pre-implantation differences in adult CI candidates using fMRI (Lazard et al., 2010). Eight CI candidates performed a rhythm judgment task. Post-implant word recognition score was positively correlated with activation in left frontal, parietal, and posterior temporal regions but was negatively correlated with anterior temporal regions and the supramarginal gyrus. These results suggest not only that fMRI measures could be used clinically to guide decision-making, but also that the contribution of phonological processing regions in the posterior superior temporal regions may be key to post-implant speech perception outcomes. The same concept of using pre-surgical imaging to predict post-surgical outcomes was applied in pediatric CI candidates; in a PET study, Lee et al. (2005) found that more successful CI users (defined by speech perception performance two years after surgery) had high metabolic activity in the fronto-parietal area while less successful CI users had higher activity in the occipital cortex pre-surgically.
Rehabilitation Potentials
Because complex auditory processing depends on multiple processes, rehabilitative strategies that focus on one factor (e.g., audibility) may not be the most effective approach for all patients. Auditory training that encompasses perceptual and cognitive processing could be more beneficial. Although the numerous auditory and speech training studies that have been conducted have generally yielded positive results, the neural underpinnings of complex auditory processing, in both typical and disordered populations, need to be more extensively investigated before direct clinical applications can be developed and tested.
Although not routinely administered in clinical populations, neuroimaging techniques hold great promise for future use. For example, recent speech training studies have included examinations of neural mechanisms. These studies typically investigated non-native speech learning in adulthood and explored both neural changes and pre-training neural predictors of success. Golestani and Zatorre (2004) found that when English-speaking subjects learned to identify the Hindi dental-retroflex contrast (a contrast English does not have), their brain activation became more similar to that associated with identifying native English consonants; these neural changes included greater activation of the left superior temporal gyrus, insula-frontal operculum, and inferior frontal gyrus. Golestani et al. (2007) and Wong et al. (2008b) both found increased Heschl’s gyrus volume in adults who were more successful in speech learning, supporting the notion that neuroanatomical data may be useful in predicting second-language acquisition ability.
Future Use
Because complex auditory processing involves sensory, perceptual, and cognitive processes, high-spatial resolution neuroimaging provides a valuable method for examining how these processes interact. The neuroanatomical precision of these methods complements the high-temporal resolution methods more often used in our discipline to study the human auditory system. Both PET and fMRI have been used to examine auditory processes such as speech perception in noise and music processing in younger adults with normal peripheral hearing functions. They also have been used to investigate auditory processing in clinical settings with individuals with hearing loss and those who use CIs.
In the future, these neuroimaging methods (both high-spatial and high-temporal resolution) likely will be used for pre-treatment planning and also may be used to probe the impact of rehabilitation strategies on auditory and cognitive brain functions.
References
Baddeley, A. (2003). Working memory: Looking back and looking forward. Nature Reviews Neuroscience, 4(10), 829–839. [Article] [PubMed]
Giraud, A. L., Price, C. J., Graham, J. M., & Frackowiak, R. S. J. (2001). Functional plasticity of language-related brain areas after cochlear implantation. Brain, 124, 1307–1316. [Article] [PubMed]
Griffiths, T. D., Uppenkamp, S., Johnsrude, I., Josephs, O., & Patterson, R. D. (2001). Encoding of the temporal regularity of sound in the human brainstem. Nature Neuroscience, 4, 633–637. [Article] [PubMed]
Golestani, N., Molko, N., Dehaene, S., LeBihan, D., & Pallier, C. (2007). Brain structure predicts the learning of foreign speech sounds. Cerebral Cortex, 17, 575–582. [PubMed]
Golestani, N., & Zatorre, R. J. (2004). Learning new sounds of speech: Reallocation of neural substrates. Neuroimage, 21, 494–506. [Article] [PubMed]
Harris, K. C., Dubno, J. R., Keren, N. I., Ahlstrom, J. B., & Eckert, M. A. (2009). Speech recognition in younger and older adults: A dependency on low-level auditory cortex. The Journal of Neuroscience, 29, 6078–6087. [Article] [PubMed]
Hickok, G., & Poeppel, D. (2007). Opinion—The cortical organization of speech processing. Nature Reviews Neuroscience, 8(5), 393–402. [Article] [PubMed]
Humes, L. E. (2007). The contributions of audibility and cognitive factors to the benefit provided by amplified speech to older adults. Journal of the American Academy of Audiology, 18, 590–603. [Article] [PubMed]
Kraus, N., Skoe, E., Parbery-Clark, A., & Ashley, R. (2009). Experience-induced malleability in neural encoding of pitch, timbre and timing: Implications for language and music. Annals of the New York Academy of Sciences: Neurosciences and Music III, 1169, 543–557. [Article]
von Kriegstein, K., Patterson, R. D., & Griffiths, T. D. (2008). Task-dependent modulation of medial geniculate body is behaviorally relevant for speech recognition. Current Biology, 18, 1855–1859. [Article] [PubMed]
Langers, D. R. M., van Dijk, P., Schoenmaker, E. S., & Backes, W. H. (2007). fMRI activation in relation to sound intensity and loudness. Neuroimage, 35, 709–718. [Article] [PubMed]
Lazard, D. S., Lee, H. J., Gaebler, M., Kell, C. A., Truy, E., & Giraud, A. L. (2010). Phonological processing in post-lingual deafness and cochlear implant outcome. Neuroimage, 49, 3443–3451. [Article] [PubMed]
Lee, H. J., Kang, E., Oh, S., Kang, H., Lee, D. S., Lee, M. C., & Kim, C. (2005). Preoperative differences of cerebral metabolism relate to the outcome of cochlear implants in congenitally deaf children. Hearing Research, 203, 2–9.
Limb, C. J., Molloy, A. T., Jiradejvong, P., & Braun, A. R. (2010). Auditory cortical activity during cochlear implant-mediated perception of spoken language, melody, and rhythm. Journal of the Association for Research in Otolaryngology, 11, 133–143.
Margulis, E. H., Misna, L. M., Uppunda, A. K., Parrish, T. B., & Wong, P. C. M. (2009). Selective neurophysiologic responses to music in instrumentalists with different listening biographies. Human Brain Mapping, 30, 267–275. [Article] [PubMed]
Pichora-Fuller, M. K. (2009). How cognition might influence hearing aid design, fitting, and outcomes. The Hearing Journal, 62, 32–38.
Propst, E. J., Greinwald, J. H., & Schmisthorst, V. (2010). Neuroanatomic differences in children with unilateral sensorineural hearing loss detected using functional magnetic resonance imaging. Archives of Otolaryngology—Head & Neck Surgery, 136, 22–26. [Article] [PubMed]
Rauschecker, J. P. (1998). Cortical processing of complex sounds. Current Opinion in Neurobiology, 8(4), 516–521. [Article] [PubMed]
Rauschecker, J. P., & Scott, S. K. (2009). Maps and streams in the auditory cortex: Nonhuman primates illuminate human speech processing. Nature Neuroscience, 12, 718–724. [Article] [PubMed]
Rinne, T., Balk, M. H., Koistinen, S., Autti, T., Alho, K., & Sams, M. (2008). Auditory selective attention modulates activation of human inferior colliculus. Journal of Neurophysiology, 100, 3323.
Salvi, R. J., Lockwood, A. H., Frisina, R. D., Coad, M. L., Wack, D. S., & Frisina, D. R. (2002). PET imaging of the normal human auditory system: Responses to speech in quiet and in background noise. Hearing Research, 170, 96–106. [Article] [PubMed]
Schoenwiesner, M., Krumbholz, K., Rubsamen, R., Fink, G. R., & von Cramon, D. Y. (2007). Hemispheric asymmetry for auditory processing in the human auditory brain stem, thalamus, and cortex. Cerebral Cortex, 17, 492.
Scott, S. K., Rosen, S., Wickham, L., & Wise, R. J. (2004). A positron emission tomography study of the neural basis of informational and energetic masking effects in speech perception. The Journal of the Acoustical Society of America, 115, 813–821. [Article] [PubMed]
Tremblay, K. L., Billings, C. J., Friesen, L. M., & Souza, P. E. (2006). Neural representation of amplified speech sounds. Ear & Hearing, 27, 93–103. [Article]
Wingfield, A., & Tun, P. A. (2007). Cognitive supports and cognitive constraints on comprehension of spoken language. Journal of the American Academy of Audiology, 18(7), 548–558. [Article] [PubMed]
Wong, P. C. M., Skoe, E., Russo, N. M., Dees, T., & Kraus, N. (2007). Musical experience shapes human brainstem encoding of linguistic pitch patterns. Nature Neuroscience, 10, 420–422. [PubMed]
Wong, P. C. M., Uppunda, A. K., Parrish, T. B., & Dhar, S. (2008a). Cortical mechanisms of speech perception in noise. Journal of Speech, Language, and Hearing Research, 51, 1026–1041. [Article]
Wong, P. C. M., Warrier, C. M., Penhune, V. B., Roy, A. K., Sadehh, A., Parrish, T. B., & Zatorre, R. J. (2008b). Volume of left Heschl’s gyrus and linguistic pitch learning. Cerebral Cortex, 18, 828–836. [Article]
Wong, P. C. M., Jin, J. X., Gunasekara, G. M., Abel, R., Lee, E. R., & Dhar, S. (2009). Aging and cortical mechanisms of speech perception in noise. Neuropsychologia, 47, 693–703. [Article] [PubMed]
Yoshida, T., Sugiura, M., Naganawa, S., Teranishi, M., Nakata, S., & Nakashima, T. (2008). Three-dimensional fluid-attenuated inversion recovery magnetic resonance imaging findings and prognosis in sudden sensorineural hearing loss. The Laryngoscope, 118, 1433–1437. [Article] [PubMed]
Zekveld, A. A., Heslenfeld, D. J., Festen, J. M., & Schoonhoven, R. (2006). Top-down and bottom-up processes in speech comprehension. Neuroimage, 32, 1826–1836. [Article] [PubMed]