Neural Tracking of
Environmental Sounds
01
Neural Tracking of Speech-Song Comprehension
Song can be more effective than speech for language learning, perhaps because music is more rhythmically predictable and engaging. Indeed, our lab's research shows that, compared to speech, songs are processed ('tracked') more readily by the brain (Vanden Bosch der Nederlanden et al., 2020; 2022). My research explores not only how the brain might track song more strongly than speech, but also how such neural mechanisms relate to behavioural outcomes like engagement and learning. Although our testing has mostly taken place in a typical lab setting so far, we are about to start an exciting mobile brain imaging study in a classroom setting to understand learning in more realistic group interactions.
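As an illustration of what "tracking" means here, a toy sketch: correlate a stimulus amplitude envelope with a neural signal across a range of lags and take the peak as a tracking index. This is a simplified stand-in for the actual analyses used in the field (e.g. temporal response functions or cerebro-acoustic coherence); the simulated data, lag range, and function name are illustrative assumptions, not the lab's method.

```python
import numpy as np

def neural_tracking_score(envelope, eeg, max_lag=20):
    """Toy tracking index: peak Pearson correlation between a
    stimulus envelope and a neural signal over candidate lags.
    (A simplified stand-in for TRF/coherence analyses.)"""
    best = 0.0
    for lag in range(max_lag + 1):
        a = envelope[: len(envelope) - lag] if lag else envelope
        b = eeg[lag:]
        n = min(len(a), len(b))
        best = max(best, np.corrcoef(a[:n], b[:n])[0, 1])
    return best

# Simulated data: 'neural' signal = delayed, noisy copy of the envelope
rng = np.random.default_rng(0)
env = np.abs(rng.standard_normal(1000))
eeg = np.roll(env, 5) + 0.5 * rng.standard_normal(1000)
print(round(neural_tracking_score(env, eeg), 2))
```

A signal that follows the envelope (here, a delayed noisy copy) yields a high score, while an unrelated signal yields a score near zero; the lab's claim is that song envelopes produce stronger tracking than speech envelopes.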
02
Attentional Speech Bias
The ASB4S study aims to determine whether neural tracking can be used to index attention, and in particular an attentional bias toward speech, in complex auditory scenes in adults and infants. The study uses a modified auditory change detection paradigm optimized for EEG, in which participants are presented with an auditory scene consisting of four different sounds for either 4 (adults) or 10 (infants) seconds. Adults are asked whether they detected a volume change in any of the sounds, which assesses their attention to individual sounds within the scene; infants' attention to individual sounds is assessed through neural activity and looking time. Characterizing the attentional speech bias in complex auditory scenes is important for understanding how children acquire language in the real world. This study will therefore establish the perceptual, neural, and attentional components pivotal for successful communication in high-volume scenes.
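To make the trial structure concrete, a minimal sketch of a change-detection stimulus: four concurrent streams are mixed into one scene, and on change trials one stream's volume steps up partway through. The specific frequencies, the 6 dB step, and the midpoint change time are illustrative assumptions, not the ASB4S study's actual stimuli.

```python
import numpy as np

def make_scene(freqs=(300, 600, 1200, 2400), dur=4.0, sr=8000,
               change_stream=None, change_db=6.0):
    """Build a toy four-sound auditory scene; on change trials,
    boost one stream's volume at the scene's midpoint.
    (Illustrative parameters, not the study's real stimuli.)"""
    t = np.arange(int(dur * sr)) / sr
    streams = [np.sin(2 * np.pi * f * t) for f in freqs]
    if change_stream is not None:
        gain = 10 ** (change_db / 20)          # dB -> linear amplitude
        half = len(t) // 2
        streams[change_stream][half:] *= gain  # volume change at midpoint
    scene = sum(streams)
    return scene / np.max(np.abs(scene))       # normalize the mix

adult_trial = make_scene(dur=4.0, change_stream=2)   # 4 s, change in one stream
infant_trial = make_scene(dur=10.0)                  # 10 s, no change
```

Adults would report whether they heard the change; for infants, the same scenes would be paired with EEG and looking-time measures rather than a button press.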