News

Communicating emotions

In music, words and across cultures
Published: 24 February 2015

Facial expression more important for conveying emotion in music than in speech

Video used in an experiment designed to assess the role played by facial expressions in conveying emotion in song.

Regular concert-goers are used to seeing singers adopt expressive and often very dramatic facial expressions. Indeed, music and speech are alike in that they use both facial and acoustic cues to engage listeners in an emotional experience. McGill researchers wondered what roles these different cues play in conveying emotions. To find out, they ran an experiment in which performers spoke or sang short, emotionally neutral phrases, each seven syllables long (such as “children tapping to the beat” or “people talking by the door”), with a variety of intended emotions. Participants were then presented with these recordings in three different formats: audio alone, video alone (with no sound), or full audio-video, and were asked to identify the emotions the performers intended to convey.

The researchers found that for song, participants had a hard time recognizing the intended emotion from the audio recording alone, but once visual cues were added, their accuracy improved dramatically. In contrast, participants recognized emotion in speech equally well whether they listened to audio alone, watched a video without sound, or saw and heard both at once. The researchers therefore conclude that visual cues play a much more important role in understanding the emotions conveyed by music than they do in understanding speech.

To contact the lead researcher directly: steven.livingstone [at] ryerson.ca

To read the full paper in The Quarterly Journal of Experimental Psychology: http://www.tandfonline.com/doi/full/10.1080/17470218.2014.971034#abstract

Mandarin-speaking Chinese more likely to read emotions in voices of others; 

English-speaking North Americans rely more on facial expressions

If you are a Mandarin speaker from China and want to understand how someone else is feeling, you are likely to concentrate on their voice rather than on their face. The opposite is true for English speakers in North America, who tend to “read” the emotions of others in their facial expressions rather than in their tone of voice. These cultural and linguistic differences run so deep that they appear not only in behaviour but even at the level of brain activity, according to a study recently published by McGill researchers in Neuropsychologia.

The researchers arrived at this conclusion by using an electroencephalogram (EEG) to measure brain activity as they asked the participants (20 Mandarin speakers and 19 English speakers, all based in Montreal) to identify the emotions being expressed in a series of vocal and visual cues. The researchers believe that the Mandarin speakers’ greater reliance on tone of voice rather than facial cues to understand emotion, compared with English speakers, may be a result of the limited eye contact and more restrained facial expressions common in East Asian cultures.

To contact the lead researcher directly: pan.liu [at] mail.mcgill.ca

To read the full paper in Neuropsychologia: http://www.sciencedirect.com/science/article/pii/S0028393214004540
