Language is central to human life. The speed and accuracy with which typically developing individuals acquire, produce, and understand language is nothing short of remarkable, vastly outperforming even the most advanced artificial intelligence systems. The goal of my lab is to provide an algorithmically precise account of how the human brain achieves this feat. Such insight has the power to inform both neuroscience (understanding the human brain) and engineering (building intelligent machines). My research is organised around two overarching questions: (i) what representations does the brain derive from auditory input? (ii) what computations does the brain apply to those representations? To address these questions, we combine insights from neuroscience, machine learning, and linguistics, and work with neural measurements at different spatial scales: magnetoencephalography (MEG), electrocorticography (ECoG), and single-unit recordings.
The Gwilliams Lab of Auditory Neuroscience at New York University is currently accepting applications from PhD students and postdoctoral researchers, for a September 2023 start date.