
Current Projects


01

Music and Language

Music and language are two very important forms of communication. How do we learn the rules that are unique to music and language as we grow up? Is it possible that infants perceive all of our infant-directed singing and speaking as part of one large group of sounds and don't differentiate between music and language? Finally, how do we learn to flexibly apply our knowledge of music and language differently depending on what we're listening to? By digging into these big questions, we hope to understand which features of sounds alert us to the fact that someone is talking or singing to us, and to develop training techniques to help individuals who struggle with language, such as children with dyslexia. We use EEG and behavioural paradigms to understand how people of all ages listen to music and language, particularly speech and song.

02

Listening for the
Human Voice

Our past work has shown that infants, children, and adults bias their attention toward the human voice in complex auditory scenes. This means that when you're in a busy place, like a supermarket, you'll be better at noticing someone talking than at noticing the sound of an employee restocking the bulk foods section. Our current work aims to understand which features of sounds (acoustic characteristics) predict our ability to notice changes in the sounds around us, including speaking and singing. We also want to know how children who struggle with language detect changes to the human voice compared with other environmental sounds (e.g., speaking vs. a car starting or a dog barking).


03

Neural Tracking of Environmental Sounds

