
I am interested in understanding how the human brain uses multisensory information, such as when watching a friend's mouth movements while listening to her speech, to enhance comprehension of spoken language. Multisensory integration is especially useful in noisy environments and for individuals with hearing loss. To investigate the underlying brain mechanisms, I use behavioral measures, electroencephalography (EEG), and functional magnetic resonance imaging (fMRI) to record brain activity while individuals process sounds and images. My current research program examines how multisensory neural networks develop, adapt in noisy situations, and reorganize with hearing loss and after hearing restoration through cochlear implantation.