A Short Introduction
Broadly, my research is dedicated to understanding how humans comprehend speech. I ask questions like: How does the auditory system transform an acoustic signal into discrete representations like /p/ and /b/? How are words encoded in the brain: as morphemes (e.g. [dis], [appear], [s]) or as whole words? What neural mechanisms allow the processing system to "repair" interpretations of previously heard speech?
To address these questions, I primarily use magnetoencephalography (MEG) and electroencephalography (EEG), capitalising on their excellent temporal resolution to track neural processes as speech unfolds over time. To these data I apply a range of analysis techniques — computational modelling, machine learning, neural networks — whichever is best suited to the question at hand.
From a talk I gave at Columbia University; amazing how the "with" makes me sound like a '90s TV host.