Music and Language
01
Spontaneous Imitation of Pitch
Have you noticed your little one singing along to their favorite song? Learning to match pitch—the highs and lows of sounds—helps children communicate in both music and language. In songs, keeping pitch intervals (the distance between notes) is key to maintaining a recognizable melody. In speech, pitch also plays a role, like when a voice rises for a question or falls at the end of a statement. However, we can usually understand speech even if the pitch is a little off, unlike a song, where wrong notes can make it difficult to recognize a melody. This project explores how 4- and 8-year-old children repeat back spoken and sung phrases, with and without explicit instruction to match pitch. By exploring how children develop the ability to focus on key sound features, we can further understand how they make sense of the auditory world around them.
02
Infant Discrimination of Language and Song
The Infant Discrimination of Language and Song (IDoLS) project aims to discern infants’ ability to discriminate between speech and song using a modified Stimulus Alternating Preference Procedure (SAPP) (Houston et al., 2007). In line with our predictions, our preliminary results show that younger infants do not differentiate between speech and song, while older infants do, suggesting the emergence of domain-specific processing before or around the first birthday. This work is important for furthering our understanding of the developmental trajectory of the cognitive and perceptual processes behind music and language perception in infants!
03
Learning Math Online
Using short sung, spoken, and alternating spoken-sung lessons, we explore whether adults can learn about speed and velocity through song. We also examine whether sung lessons are more effective than spoken ones, and whether the resulting knowledge is limited to word-for-word recollection or can also be applied to more complex tasks, such as math word problems.
04
Speech-Song Pitch Expansion
The Speech-Song Pitch Expansion (SSPE) study explores how well participants can detect subtle pitch changes in both song and speech. This experiment brings a fun twist by comparing the abilities of two groups: children aged 4-6 and adults. The challenge unfolds through an interactive computer game called The Kettletop Imposters Game, where participants must determine which dragon is the imposter based on pitch clues. Who will be better at catching the imposter dragon—kids or grown-ups? Time to find out!
05
Neural Tracking of Speech-Song Comprehension
Song can be more effective than speech in language learning, perhaps because music can be more rhythmically predictable and engaging. Indeed, our lab’s research shows that compared to speech, songs are processed (‘tracked’) more readily by the brain (Vanden Bosch der Nederlanden et al., 2020; 2022). My research explores not only how the brain might track song more strongly than speech, but also how such neural mechanisms relate to behavioural outcomes like engagement and learning. Although our testing has mostly been in a typical lab setting so far, we are about to start an exciting mobile brain imaging study in a classroom setting to understand learning in more realistic group interactions.
06
Spontaneous Synchronization and Rhythm Study
In this study, we examine whether participants spontaneously synchronize their speech output to a rhythmic stimulus and whether spontaneous synchronization relates to sensorimotor synchronization skills when tapping to the beat of popular music. This work furthers our understanding of the cognitive and perceptual processes underlying spontaneous and intentional synchronization to external stimuli, which are central to music, movement, and communication.
07
German Word Learning
What is the best way for children and adults to learn a new language? We explore how musical elements in song could enhance the L2 acquisition process for children relative to child- and adult-directed speech (CDS and ADS). Previous work illustrated that CDS and song both promoted language learning better than adult-directed speech did for children and adults (Ma et al., 2020; 2024), but those studies did not examine whether learning was affected by to-be-learned words falling on or off the beat in song. Preliminary data show no significant differences in response rates between modalities (CDS, ADS, and song with on- or off-beat target words) or between words presented on-beat and off-beat for either adults (n = 20) or children (n = 20). This suggests that rhythmic characteristics in song may not play a crucial role in word learning for either group in our current sample. However, as data collection continues, our findings will contribute to our understanding of how young children acquire languages and how songs might offer an easy way for parents and teachers to promote language learning.
08
Conveying Emotions in Speech Versus Song
The Conveying Emotions in Speech vs. Song (CESVS) project explored how well everyday people can express and understand emotions through speech compared to song. Previous studies mainly focused on professional actors and musicians, so this study aimed to see whether the same results held for the general population. Using a naturalistic method, undergraduate students improvised five emotions (happy, peaceful, neutral, sad, and afraid) through both speaking and singing. These improvisations were recorded, and another group of students rated the emotions and modality (speech or song) in either a "semantic" (with meaningful words) or "non-semantic" (without recognizable words) condition. The findings from this research help us understand how different ways of expressing emotions—through speech or song—serve different roles in communication, offering insights into how music may have evolved.