
2.6 Measuring musical expression

2.6.2 Measuring perceptions of specific emotions in music

When studying aspects of the music performer's expressive abilities, it could be useful to determine the emotional content of the music itself as defined by structural features such as harmony, rhythm and melodic shape. One reason for this is that the structural features of the music may affect the listener's ratings of expressivity, so it might be desirable to aim for a balance of different emotions portrayed in the musical stimuli. This would enable the observation of how experimental effects differ depending on the emotional content of the music. However, deciding on the emotional content of a musical excerpt can be difficult due to the subjective nature of emotion in music (Yang, Liu, & Chen, 2006). In addition, it should be considered whether the researcher desires to measure perceived or induced emotion (Kim et al., 2010), and the listener should be instructed accordingly. The current study focuses on measurements of perceived emotion.

The two main approaches to measuring perceived emotion in music are the discrete approach and the dimensional approach, each with a different theoretical basis. The discrete model is based upon the assumption that there are unique cognitive mechanisms for understanding each emotion, while the dimensional model assumes that emotions consist of underlying bipolar dimensions (Lundqvist, Carlsson, Hilmersson, & Juslin, 2009). Studies adopting the discrete approach ask listeners to rate the emotional content of distinct emotions (Ekman, 1992). For example, the Geneva Emotional Music Scale (GEMS) model (Zentner, Grandjean, & Scherer, 2008) is a discrete emotion rating scale, and aims to identify emotions induced specifically by music. Studies adopting the dimensional approach ask listeners to rate two or three emotion-related dimensions: valence (negative to positive), arousal (low to high energy) and tension (low to high). Whether the third dimension is necessary to the construct of emotional experience is unclear: while Schimmack and Grob (2000) concluded that it was necessary, Eerola and Vuoskoski (2011) concluded that the dimensions could be collapsed to valence and arousal alone without impairing the fit of the statistical model to the data.

The discrete and dimensional approaches have been compared for both music-induced emotions (Vuoskoski & Eerola, 2011) and for perceived emotions in music (Eerola & Vuoskoski, 2011). In both studies the dimensional model was found to outperform the discrete model in discriminating ambiguous emotions, although for perceived emotions the difference between the two models was small. However, one limitation of the dimensional approach is the underlying assumption that the dimensions are bipolar (Schubert, 1999). For example, positive and negative emotions can be perceived or experienced at the same time, so it might make more sense to consider positive and negative valence as two independent unipolar scales, as in the Positive and Negative Affect Schedule (PANAS).

This limitation may be the reason why the dimensional model of emotion in music has been shown to place music which has been rated as sad or fearful on the positive end of the valence dimension (Eerola & Vuoskoski, 2011; Zentner et al., 2008).

It is unclear which emotion model is best for measuring musical emotion. While the discrete model lacks evidence to back up its theoretical basis of separate neural mechanisms for separate emotions (Eerola & Vuoskoski, 2011), the rating of discrete emotions in music has been found to be reliable across many participants and across cultures (Balkwill & Thompson, 1999; Eerola & Vuoskoski, 2013; Kim et al., 2010). However, the discrete model seems to be less effective for identifying mixed or ambiguous emotions. The decision of which model to use most likely depends on study design and research questions, and some studies have even adopted an approach that conceptualises musical emotions as both discrete and dimensional (Christie & Friedman, 2004; Eerola & Vuoskoski, 2013; Nyklíček, Thayer & Doornen, 1997).

3 THE CURRENT STUDY

The literature discussed in this thesis has provided information on the acoustic devices that musicians employ to achieve expressive playing, how performer gesture visually conveys expressivity, and some indication of the effects of the suppression of a musician's natural body movement on their performance. However, very little research on expressive playing has directly addressed the relationship between a performer's approach to, or amount of, body movement and the expressivity of the sound of their performance. In other words, if a performer moves more expressively when they play, will they produce more expressive-sounding music? Thus, the current study proposed the following research question:

Will instructing a musician to either inhibit or freely express their natural body movement during performance affect listener ratings of the audible expressivity of their performance?

With the aim of answering this question, the current study asked performers to play under two movement conditions, and explored whether listener ratings would be affected by those movement conditions.

While there is some theoretical and empirical evidence to suggest that musicians' body movements are important to the creation of expressive sound (Juslin, 2003; Leman, 2010; Sloboda, 1996; Vines et al., 2004; Wanderley et al., 2005), there is also evidence that ancillary gesture can be suppressed without disrupting expressive intentions (Thompson & Luck, 2012). In addition, the complex nature of violin technique might suggest that too much body movement while performing can be detrimental to sound production (Galamian, 2013), and it may be the case that violinists learn to suppress their expressive gestures without compromising their expressive intention. Therefore, although an effect of movement condition on expressivity ratings was predicted, the direction of the effect was not predicted.

The term 'expressive gesture' was used here to mean any movement which the performer felt was visually expressive and could be altered without compromising sound quality. This could include both movements with a physical sound-supporting role and movements with an expressive intention. The precise roles of individual gestures were not speculated upon here; rather, the overall effect of the absence or presence of these gestures on perceptions of the sound was the matter of interest.

The independent variable was the amount of expressive gesture, which was manipulated via instructions to the performer. The dependent variable was listener ratings of perceived expressivity. The performances consisted of short melodies, each chosen to reflect one of the emotions happy, sad, tender, and scary. In addition, the emotional content of the melodies was rated by listeners, with the aim of validating the emotion labels given to the melodies and exploring how the emotional content of the melodies mediated the experimental effect. The experiment was a between-subjects design.

The experimental manipulation was made by instructing performers to play under two conditions: one that asked them to move as little as possible while still playing expressively, and another that asked them to focus on being visually expressive while still taking care of the expressive sound. It was predicted that the former condition would result in less non-essential movement, and the latter in more. The amount of movement was measured using motion capture technology.

The hypotheses were:

H1: Each melody will yield significantly higher emotion ratings for its intended emotion than for the other three emotion ratings.

H2: There will be an effect of movement condition on listener ratings of audible expressivity. The direction of the effect is not predicted.

H3: The visually expressive condition will result in a greater amount of performer bodily movement than the immobile condition.

4 METHOD