Listening for the Human Voice

01

Attentional Speech Bias

The ASB4S study aims to determine whether neural tracking can be used to index attention and attentional speech bias in complex auditory scenes in adults and infants. The study uses a modified auditory change detection paradigm optimized for EEG, in which participants are presented with an auditory scene consisting of four different sounds for either 4 seconds (adults) or 10 seconds (infants). Adults are asked whether they detect a volume change in any of the sounds, assessing their attention to individual sounds within a scene; infants' attention to individual sounds is assessed through neural activity and looking time. Characterizing the attentional speech bias in complex auditory scenes is important for understanding how children acquire language in the real world. This study will therefore establish the perceptual, neural, and attentional components pivotal for successful communication in high-volume scenes.

02

Bilingual Speech Bias

Every day, we make sense of the many sounds around us, but not all sounds are attended to equally. Current evidence shows that we are biased to attend to speech over other real-world sounds in complex auditory scenes. But what if that speech were in a foreign language? This study examines whether linguistic experience plays a role in this attentional speech bias. Adult listeners from English monolingual, English-Mandarin bilingual, and English-Other Language bilingual backgrounds completed a change detection task with auditory scenes composed of speech (in English and Mandarin), musical instruments, animal calls, and environmental sounds. This research allows us to better understand how our unique and diverse experiences with language draw our attention to speech in a busy world. Stay tuned for our findings!

03

Role of Rhythmic Regularity in Biasing Attention

Adults, children, and infants alike possess an attentional speech bias (ASB): they are better able to attend to human speech than to other sound types in acoustic scenes that contain multiple, simultaneously presented real-world sounds (Vanden Bosch der Nederlanden et al., 2016). This project examines the ASB in greater depth by determining whether the rhythmic regularity of speech modulates attention in complex scenes. It uses a change detection paradigm featuring regular, highly regular, and irregular speech rhythms. We predict that regularity will act as a form of attentional capture, with greater sensitivity for detecting changes in regular or highly regular speech than in irregular speech within complex acoustic scenes. The results of this project have major implications for improving change detection in critical settings, such as within the Armed Forces, and for how best to speak to maximize children's ability to learn language.
