Past, Present, and On-Going Research in the LIVELab
A double-blind, placebo-controlled study on the effects of a probiotic on autonomic and psychological stress responses in volunteers
Trainor, Schmidt, Bosnyak, Karichian, et al
Funded by Lallemand
There is evidence from animal models that the bacteria that live in our gut may influence our emotional state and general well-being. This is thought to occur through connections between the brain and gut. Probiotics that are found in foods such as yogurt are thought to influence brain function.
This study plans to test whether taking a commercially available probiotic changes stress responses by measuring brain activity, heart rate (HR), muscle activity (electromyography, EMG), levels of the stress hormone cortisol, and subjective experiences of emotion or stress in healthy participants, before and after taking a probiotic or a placebo. The study will be conducted at the McMaster LIVELab, which is capable of collecting both physiological and behavioural measures from groups of up to 100 participants at a time.
Behavioural and Physiological Responses to Neighbourhoods
The Expressing Vibrancy Project conducted at the McMaster LIVELab aimed to use quantitative measures to help define ‘vibrancy’ and its implications in future cultural plans. This study included ratings on tablets as well as the measurement of physiological indices in a subset of the participants. We collected the physiological data while participants watched a series of eight video tours of Hamilton areas. In order to measure what our participants thought about the videos, we used a measure of brain activity called electroencephalography (EEG).
Communication between neurons (the cells that make up the gray matter of the brain) underlies all human thought. Neurons communicate through tiny electrical currents. Each of these currents is very small, but when groups of neurons communicate together, they create electrical potentials that are large enough to be measured on the surface of the scalp, a measurement called electroencephalography or EEG. In each of our test sessions, we placed an EEG cap containing seven sensors on the head of each of ten participants. The sensors were connected with long wires to EEG amplifiers, which recorded the signals.
In addition to EEG, we also used a number of other measures to infer how people felt while watching the video clips. These measures included Heart Rate (the number of beats the heart makes per minute), Breathing Rate (the number of breaths a person takes per minute), and a measurement derived from the variability of the heartbeat, called LF:HF (the ratio of sympathetic to parasympathetic activation of the heart, detailed later).
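The LF:HF ratio is derived from the frequency spectrum of the beat-to-beat (RR) interval series. The summaries do not specify the analysis software used, so the following is only an illustrative sketch of one standard way to compute it, assuming RR intervals in seconds and the conventional LF (0.04–0.15 Hz) and HF (0.15–0.4 Hz) bands:

```python
import numpy as np

def lf_hf_ratio(rr_intervals, fs=4.0):
    """Estimate the LF:HF ratio from a series of inter-beat (RR) intervals.

    rr_intervals : RR intervals in seconds, one value per heartbeat.
    fs           : rate (Hz) at which the RR series is resampled.
    """
    rr = np.asarray(rr_intervals, dtype=float)
    beat_times = np.cumsum(rr)                        # time of each beat (s)
    t = np.arange(beat_times[0], beat_times[-1], 1.0 / fs)
    tachogram = np.interp(t, beat_times, rr)          # evenly sampled RR series
    tachogram -= tachogram.mean()                     # remove the DC component
    psd = np.abs(np.fft.rfft(tachogram)) ** 2         # simple periodogram
    freqs = np.fft.rfftfreq(len(tachogram), d=1.0 / fs)
    lf = psd[(freqs >= 0.04) & (freqs < 0.15)].sum()  # low-frequency power
    hf = psd[(freqs >= 0.15) & (freqs < 0.40)].sum()  # high-frequency power
    return lf / hf

# Synthetic example: ~0.8 s beats whose length oscillates at ~0.25 Hz
# (a respiratory-band rhythm), so HF power should dominate (ratio < 1).
beats = 0.8 + 0.05 * np.sin(2 * np.pi * 0.25 * np.arange(300) * 0.8)
print(lf_hf_ratio(beats) < 1.0)  # True
```

In practice the RR series would come from R-peak detection on the ECG, and dedicated HRV toolboxes apply more careful detrending and windowing than this sketch does.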
Combining all these measures, we hoped to provide greater insight into the definition of vibrancy from real citizens that extended beyond the typical survey data that has historically been available.
Phase Analysis in Steve Reich’s Drumming
by Russell Hartenberger
The purpose of the project is to obtain a computer analysis of the process of phasing in Steve Reich’s composition, Drumming. It involves two musicians playing on four pairs of bongo drums fitted with contact microphones connected to triggers. The musicians will phase against each other and the computer will record the process. The data will be published in a book project being written on performance practice in the music of Steve Reich.
Audio engineer for the project will be Ray Dillard. Computer analyst will be Michael Schutz from McMaster University. Musicians will be Bob Becker and Russell Hartenberger.
Leader-follower group dynamics
by Andrew Chang, Dan Bosnyak, and Laurel Trainor
Funded by LIVELab grant
People coordinate and adapt with each other to perform many kinds of tasks in daily life, such as moving a heavy desk together or taking turns in conversation. In these activities, each member of the group might play a different social role, such as leader or follower. We are interested in investigating how ensemble members dynamically coordinate and adapt to each other during a cooperative task, and how social roles, such as being a leader or a follower, affect each member’s strategies for coordinating with others.
We investigated this research question with professional string quartets. Music ensemble performance is an excellent model for studying social coordination because it requires precise coordination in tempo, harmonic relationships, and real-time mutual adaptations during performance of a musical piece. We are interested in understanding whether the leader-follower group dynamics could be reflected by body motion, brain activities (EEG) and/or heart rate during music performance, and whether the coordination level determines the performance quality. This study should extend our understanding of how group dynamics are incorporated in brain activities and body movements.
Musical expressions and the emotional responses of audiences
by Andrew Chang, Dan Bosnyak, and Laurel Trainor
Funded by LIVELab grant
Being touched by music might not exclusively rely on hearing the musical sounds, but might also be affected by seeing musicians’ expressive movements during performances. We are interested in investigating how the expressiveness of musical sounds and body movements interactively affect the emotional feelings of audiences.
A professional string quartet performed short musical pieces either with or without musical expression, and with either exaggerated or minimized body movements, while eight audience members attended the experimental concert. We are interested in understanding how these manipulations affect EEG, head movement, and heart rate responses, as well as the subjective emotional reactions of the audience members. This study can extend our understanding of how the emotional responses of audiences are affected by different ways of performing.
Performance Anxiety in Skilled Pianists
by Sarah Lade and Laurel Trainor
Funded by LIVELab grant
Musical Performance Anxiety (MPA) is a form of Social Anxiety (SA) in which musical performers show declines in performance quality due to the physiological and cognitive influences of their anxiety. Up to 70% of professional musicians experience such extreme MPA that their professional or personal levels of functionality are significantly impacted (van Kemenade, van Son, & van Heesch, 1995; James, 1998). Such musicians are often forced to consider alternate career paths (Appel, 1976). The field of MPA research is still in its infancy, with research lacking consensus regarding the etiology of MPA. It is unknown how manifestations of MPA interfere with performance quality and which treatment options (e.g. behavioural, cognitive, pharmacological) are optimal.
The present study will explore neurological (EEG), physiological (EMG, ECG, temperature, respiration, cortisol), cognitive (standardized anxiety questionnaires, working memory measures) and behavioural (performance quality) measures of MPA. It will also examine the relative contribution of each to the overall experience of MPA. Through an understanding of how MPA manifests at these different levels, we can better optimize interventions.
The experiment will require use of: the auditorium, in which the audience will sit and the performances will take place; the greenroom, in which participants will wait their turn to perform; the prep room, in which participants will be fitted with physiological equipment; and the small experiment room, in which cognitive tasks will be run.
The experiment is a within-subjects design with two musical performance conditions: an audience-present condition and an audience-absent condition. Both conditions will be run in the LIVELab auditorium. The audience-absent condition will be a musical performance in an empty auditorium, with only a technician present; the audience-present condition will be a musical performance in front of an audience. In both conditions, cognitive, physiological and neurological measures will be taken at two pre-performance time points (5 min and 3 min prior) and two post-performance time points (3 min and 5 min post). The order in which participants take part in these two conditions will be counterbalanced.
Regularly performing pianists, with a minimum level of Grade 10 RCM, will be recruited from local university music programs and the Royal Conservatory of Music – Toronto. They will be screened with a standardized questionnaire (K-MPAI). Informed consent will be obtained prior to the experiment. Audience participants will be recruited through the LIVELab subject pool.
Equipment required includes: EMG equipment (8 receiver boxes and associated electrodes, amplifier, computer software and monitor), EEG equipment (16-channel caps, amplifier, computer), biomonitors (to record HR, respiration, and temperature), salivettes (to obtain cortisol measures), and a Disklavier piano to record performances for later qualitative evaluation. Cognitive measures will require the pilot room and a computer to run cognitive tasks. Tablets will be required to obtain data from audience members.
The audience-absent condition will be a single session with only an experimenter present. Prior to arriving at the LIVELab, participants will have filled out and returned 3 documents: a demographics questionnaire, the Kenny Music Performance Anxiety Inventory (K-MPAI) and the Brief Fear of Negative Evaluation scale (BFNE). On the day of the audience-absent condition, the participant will begin by filling out 1 additional questionnaire (the State-Trait Anxiety Inventory) while being fitted with physiological measurement equipment. The experimenter will then take the participant to the pilot room, take 5 min of resting-state measures along with a baseline salivary cortisol measure, and then lead them onstage. Once onstage, the participant will provide an instantaneous indication of their level of anxiety (on a 1-100 scale). Another 3 min of resting-state measures will then be taken while the participant is seated at the piano (the expected height of anxiety). The participant will perform for 15 min. After performing, another 3 min of resting-state measures will be taken while the participant is still seated. The experimenter will then lead the participant offstage to the pilot room, take another cortisol measure to gauge the reaction to the stress, and record another 5 min of post-performance resting measures. Finally, the participant will complete a 30 min cognitive task in the pilot room. Afterward, the experimenter will remove the measurement equipment, help the participant clean up, and thank them for coming. The audience-absent condition is expected to require 3 days of LIVELab testing.
Each audience-present test session will have approximately 40 audience members and 4 performing participants. To test the 12 participants, the audience-present condition will require 3 separate days in the LIVELab. The 4 performers will arrive 1.5 hrs prior to the performance start. In the time prior to the performance, participants will be briefed on the format of that day’s performance. The performance will require 3 technicians, in addition to multiple volunteers. One technician will fit performers with physiological equipment, one will manage recording physiological measures on stage, and one will handle recording physiological measures off-stage (in the pilot room). A volunteer will be permanently stationed in the green room to monitor the time course of the performance and to ensure performers are aware of their performance start time. Two other volunteers will be present in the auditorium at all times.
Audience members will arrive 20 min prior to the performance. The auditorium volunteers will welcome audience members and hand out programs. Once the audience members are seated, the volunteers will distribute tablets, on which information sheets, consent forms, questionnaires (demographic questionnaire, STAI, BFNE) and cognitive tasks will be administered. The performance will be identical to the audience-absent condition, except for the presence of an audience and the participants performing one after the other. Every 30 minutes, another of the 4 performers will begin the set of steps outlined in the audience-absent condition.
At the end of the four performances, a debriefing form will appear on the tablets for the audience members. They will be thanked for their participation and a volunteer will show them to the exit. The performers will be debriefed at this same time.
Most studies focus on individual components of MPA (cognitive, neurological, physiological or somatic) without considering their interaction in a holistic context. The field of MPA research is in need of a study in which all components of MPA are explored both in isolation and in relation to one another. Up to 70% of musicians report suffering debilitating performance anxiety (James, 1998). Many musicians either self-medicate with alcohol or drugs (Steptoe & Fidler, 1987), or use beta-blockers, which they often obtain through a friend, not a doctor (Fishbein et al., 1988). Beta-blockers effectively manage physical symptoms of anxiety, but do not help with cognitive symptoms (Brantigan, Brantigan & Joseph, 1979). The lack of regulation and discussion in the musical community around this issue of MPA is extremely problematic. Correct dosages will differ between individuals, and beta-blockers may interact with other medication. The pervasiveness of this issue among professional musicians suggests a lack of viable non-pharmacological treatment options for musicians suffering from MPA, the impact of which may mean the difference between a career in music or not. This study will serve as an empirically sound base for future studies to build on, in addition to being an important contribution to the literature on SA and MPA.
Creation of new performance works using technology systems in the LIVELab
by Ranil Sonnadara
Funded by ARB & Yamaha
Advances in entertainment technology now allow individuals to experience, in their home, reproductions of artistic performances that include video and sound whose quality may rival that of a live performance. However, many people still prefer to attend live performances by their favorite artists, and many artists prefer to perform for live audiences. During live performances, artists modify their presentation depending on audience reactions, the natural acoustics of the venue, the lighting, and so on, while audience members feel a connection with the artists and an opportunity to influence the performance. In live performances, there is a real-time complex interaction between performers and audience members, where each feels they are directly influencing the other. This complex interaction between artist, audience, and venue may add to the enjoyment of the artists and audience in part because the perceived level of interaction between them is enhanced. With new technology in the LIVELab, we are in the exciting position of being able to control and manipulate variables such as the acoustic environment and lighting in order to augment experiences. We can also incorporate into the performances real-time responses from audience members, such as EEG, heart rate, and reaction data collected from handheld electronic tablets, that can enhance interactions between performers and audiences. In the present proposal, researchers will work with high-level musicians and composers to create exciting and unique new works that incorporate high levels of interaction between musicians and between musicians and audiences, and to evaluate the experiences of those participating in these performances.
Principal investigator Ranil Sonnadara has a lifelong interest in the use of technology in the creation of artistic works. Prior to attending graduate school, Ranil had a career as a lighting and sound designer for theatre and film, as well as a composer for theatre, and his design practice was built around the creative use of technology to enhance performance events. This work drove him to pursue graduate studies to formally study the theoretical foundation that supported his design work. Over the course of his graduate career, Ranil focused his work on examining performance and the development of expertise, in particular examining how performance changed as a function of practice and experience. Initially, Ranil focused on musicians, but expanded this work to athletes and surgeons, focusing on performance in high-stakes environments.
The proposed new artistic works will take place in the new CFI-funded LIVE (Large Interactive Virtual Environment) Lab, a fully functioning 100-seat 2200 sq. ft. concert hall commissioned as a research tool for studying human interaction. It is an ideal venue as it allows exploration of aspects of performance that are not possible to recreate with a home multimedia centre or indeed at other performance venues. The lab has been constructed with a number of integrated technological systems that, taken together, are not available in any other space, and that can be made to interact in ways that can only be achieved here. The proposed studies and the technology and ideas they will develop could provide local and international artists with a truly unique opportunity to create new works, which allow them to interact with both the performance venue and the audience in ways that have not been possible until now. This work will enable us to explore key factors that affect performer and audience relationships and engagement, and how to enhance these experiences.
The Effects of Groove and Familiarity on Physiological Responses to Music
by Laurel Trainor, Dana Swarbrick, and Dan Bosnyak
Groove is the aspect of music that makes you want to move (Janata et al., 2011). Four high groove and four low groove songs were played to participants while their EEG and heart rate were measured. Four of these songs, two in each condition, were more familiar to participants. Groove had no significant effect on heart rate, heart rate variability or EEG power. However, whether or not the songs were familiar did affect these measures.
There was a trend for heart rate to be higher in the high familiarity compared to low familiarity songs (p=0.063). Frequency analysis of heart rate variability can reveal the activity of the parasympathetic and sympathetic nervous systems. The mean LF:HF ratio (reflecting the balance of sympathetic to parasympathetic activity) was significantly greater for the high compared to low familiarity songs (p=0.032). Finally, EEG power was extracted in the delta (1-4 Hz), theta (4-8 Hz), alpha (8-14 Hz), beta (14-30 Hz), and gamma (30-50 Hz) bands as participants listened to each song. There was a significant interaction between EEG band and familiarity (p=0.042). There was a trend for higher power in the delta band for high familiarity compared to low familiarity songs (p=0.065). Given that the delta band contains the frequency of the beat tempo, this suggests a stronger physiological response to the beat of familiar songs. Together these initial analyses of physiological responses suggest that groove has little effect on physiology, but that people show heightened responses when listening to familiar music.
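The band-power extraction described above can be sketched in a few lines. This is an illustrative reconstruction rather than the lab's actual pipeline, assuming a single-channel EEG signal and a simple FFT periodogram:

```python
import numpy as np

# Frequency bands as used in the analysis above (Hz)
BANDS = {"delta": (1, 4), "theta": (4, 8), "alpha": (8, 14),
         "beta": (14, 30), "gamma": (30, 50)}

def band_powers(eeg, fs):
    """Return the mean spectral power in each EEG band for one channel."""
    x = np.asarray(eeg, dtype=float)
    x = x - x.mean()                              # remove DC offset
    psd = np.abs(np.fft.rfft(x)) ** 2 / len(x)    # simple periodogram
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
    return {name: psd[(freqs >= lo) & (freqs < hi)].mean()
            for name, (lo, hi) in BANDS.items()}

# Sanity check: a pure 10 Hz sinusoid should land in the alpha band.
fs = 250.0
t = np.arange(0, 10, 1 / fs)
powers = band_powers(np.sin(2 * np.pi * 10 * t), fs)
print(max(powers, key=powers.get))  # alpha
```

A real EEG analysis would additionally use artifact rejection, epoching per song, and a windowed estimator such as Welch's method, but the band definitions and the averaging step are as shown.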
Experience-Related Differences in Conducting Gestures
by Kendra Oudyk and Rachel Rensink-Hoff
Funded by ARB
This study used motion capture technology to examine the movements of professional and student conductors. Previous research has indicated that musical experience is related to the way people move while engaging with and creating music. The present study adds to this small body of research by using a more ecologically valid setting and a greater number of conductors than have been used in past studies involving conducting and motion capture.
Five professional and six student conductors each conducted a live choir as they performed four contrasting musical excerpts in McMaster University’s LIVELab. Independent variables included conductor personality, measured with the Big Five Inventory, and musical experience, measured with the Ollen Musical Sophistication Index. Dependent variables included the singers’ ratings of the conductors, as well as the following kinematic variables extracted from the motion capture data: variation in gesture size and speed, hand independence, and periodicity of gestures.
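The summary does not state how the periodicity of gestures was quantified. One common approach, sketched here purely as an assumption, is to take the first peak of the autocorrelation of a single motion-capture coordinate:

```python
import numpy as np

def gesture_period(position, fs):
    """Estimate the dominant period (in s) of a repetitive gesture from a
    single motion-capture coordinate, via the autocorrelation's first peak."""
    x = np.asarray(position, dtype=float)
    x = x - x.mean()
    ac = np.correlate(x, x, mode="full")[len(x) - 1:]  # keep lags >= 0
    ac = ac / ac[0]                                    # normalise to lag 0
    # search for the peak after the autocorrelation first goes negative
    dips = np.where(ac < 0)[0]
    start = dips[0] if dips.size else 1
    lag = start + np.argmax(ac[start:])
    return lag / fs

# Sanity check: a 2 Hz beating motion sampled at 100 Hz -> 0.5 s period.
fs = 100.0
t = np.arange(0, 5, 1 / fs)
period = gesture_period(np.sin(2 * np.pi * 2 * t), fs)
print(round(period, 2))  # 0.5
```

The height of that autocorrelation peak also gives a natural index of how *regular* the gesture is, which is one way a "periodicity of gestures" variable could be scored per conductor.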
Results of multilevel regression analyses indicate that experience-related differences in the kinematics of conducting gestures are most visible in the variation of the size of left-hand gestures. However, it was also found that the speed of gestures in both hands as well as the size of gestures in the right hand do not vary with respect to conductor experience.
Future study could build on this research by comparing kinematic variables of conducting motions with expert assessments of the performance audio, to see whether experience-related differences in gestures such as those found in the present study influence musical outcomes of the ensemble.
Creating Technology-Based Dance Activities for People with Parkinson’s: Collaboration Between McMaster and Hamilton City Ballet
by Matthew Woolhouse
This interdisciplinary project has four primary goals: (1) create technology-based dance systems for people with Parkinson’s Disease (PD) in collaboration with Hamilton City Ballet’s (HCB) Dance for Parkinson’s (D4P) program; (2) assess how multimedia systems mediate PD sufferers’ experiences; (3) investigate benefits and risks of integrating digital technologies into arts-based activities with aging or health-impaired populations; (4) use the systems as tools with which to carry out longitudinal studies examining the efficacy of music and dance for people with PD. Ethics approval has been granted for the project (HiREB Project #: 14-419-S), which is enabling us to work with people with PD enrolled in HCB’s D4P program.
Acting in Action: Gestural Analysis of Character Portrayal in Actors
by Matthew Berry & Steven Brown
Using the technology available in the LIVELab we are performing the first experimental study of acting. During dramatic role-playing, actors undergo a process of pretending to be someone who they are not. There are various methods by which actors are able to transition into a role, and they are typically dichotomized as being either mentalistic (i.e., internalizing the inner thoughts and feelings of the character) or gestural (i.e., emphasizing the overt physical and expressive behaviours of the character). The quality of character portrayal may vary depending on the actor’s choice of method. The current project focuses on the gestural correlates of acting, exploring the manners of vocalizing and gesturing that comprise a compelling portrayal of a character. In the basic experiment, we had actors perform a semantically-neutral text either as themselves or as each of eight different stock characters (e.g., bully, king, lover, babysitter). The dependent variables included vocal prosody (e.g., pitch, loudness, timbre), facial expression, and body gesture, where the latter two were measured using motion capture. For each modality of expression, we used principal components analysis to create a two-dimensional map of the characters based on similarities in the patterns of vocal prosody, facial expression, and body gesture across all characters.
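The principal components analysis step can be illustrated with a small sketch. The feature names and data below are hypothetical; the sketch only shows how a characters-by-features matrix is reduced to a two-dimensional map:

```python
import numpy as np

def pca_2d(features):
    """Project a characters-by-features matrix onto its first two principal
    components, yielding a two-dimensional 'map' of the characters."""
    X = np.asarray(features, dtype=float)
    X = X - X.mean(axis=0)                 # centre each feature column
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return X @ Vt[:2].T                    # scores on PC1 and PC2

# Hypothetical data: 8 stock characters x 5 prosody features
# (e.g. mean pitch, pitch range, loudness, speech rate, jitter).
rng = np.random.default_rng(0)
scores = pca_2d(rng.normal(size=(8, 5)))
print(scores.shape)  # (8, 2)
```

Plotting each character's two scores then gives the kind of similarity map described above, with one such map per expressive modality (voice, face, body).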
Cognitive Audiology – Conversational Background Noise Stimuli
by Karin Humphreys, Scott Watter, and Jeff Crukley
One of the major challenges in spoken word recognition is understanding speech in the presence of background noise, especially conversational noise. This is true for individuals with normal hearing, but especially so for individuals with hearing impairment, and it also presents a major challenge in the development of hearing aid technology. Drs. Humphreys and Watter are collaborating with Dr. Jeff Crukley in the newly emerging field of Cognitive Audiology, to look at how cognitive factors (e.g. divided attention, distraction, attentional load, and how these decline with aging) interact with audiology, especially as applied to hearing aid research. Dr. Crukley is a research scientist based in the Mississauga office of Starkey Hearing Technologies, which is the largest manufacturer of hearing aids in the United States.
As part of the work we intend to do, it is important that we are able to use high-end recordings of surrounding conversational noise to test participants’ abilities to distinguish words in background noise. We are proposing to spend one day in the LIVELab recording background conversation from a varying number of speakers (3, 6, and 10), in both English and in another language (language to be determined, but given what we know of the McMaster linguistic community, Cantonese, Urdu and Arabic are all excellent candidates). While “babble tracks” do already exist for use in audiology research, two things make this proposal unique. First, a well-matched sample of a non-English language does not exist, to our knowledge. This will serve as an important control for partialling out the effects of simple overlapping speech frequencies versus the role of semantically interpretable noise. Second, the way in which the LIVELab is able to record will be far superior to other existing tracks, to our knowledge. Our goal is to record in two ways: surround multi-track recordings, which can be edited down to a number of different channels for use in different situations (such as a regular lab, using headphones) and can also be replayed within the LIVELab for the most accurate playback; and recordings made with the KEMAR acoustic mannequin placed in the midst of the surrounding conversations.
These recorded stimuli could then be able to be manipulated off-line to mimic different echoic properties of different spaces. Our plan would be to use these stimuli in ongoing work with Starkey, but to also make sure they are available for any other users of the LIVELab for other research. We would anticipate publishing these in a journal such as Behavior Research Methods. Our initial thought would be to make these stimuli freely available to academic researchers, but perhaps to charge a fee to commercial users to support ongoing research and stimuli creation. We believe this would create a highly valuable resource both for our own research in this field, and for the LIVELab to be able to use more generally.
Examining the relationship between neural activity and motor trajectories in a synchronization task
by Fiona Manning and Brandon Paul
The purpose of this project is to explore the relationship between synchronized motor trajectories and electrophysiological responses to rhythms. We are simultaneously recording motion capture and EEG data while participants listen and tap along with auditory rhythms, to examine associations between these measures. Because movement and listening to rhythms are related, we predict that the consistency of movement trajectories will correspond with the brain activity that is responsible for our ability to hear rhythms. By uniquely capturing these measures simultaneously, we hope our results will strengthen our understanding of how we interact with rhythm, music and dance.
Exploring the Influence of Gesture on the Lecture
by David I. Shore, Irina Ghilic, and Amy Pachai
Gesturing is ubiquitous in verbal communication–everybody moves when speaking. Hand, arm, and torso movements augment the spoken word. Constraining gestures leads to slower speech, especially when discussing spatial content, and restricting speech (e.g., not using words that contain a certain letter) produces more gestures (Alibali, Kita, & Young, 2000). Gesturing also appears to enhance the thinking process of the speaker—blind participants use gestures at a similar rate to sighted participants, even if they are told their audience is also blind (Iverson & Goldin-Meadow, 1998). Thus, gestures both convey information to an audience and facilitate the thinking process of a speaker. The current proposal examines the impact of gestures on the effectiveness of lecturing in an educational context.
Beat gestures heavily influence spoken communication (Biau & Soto-Faraco, 2013). These rapid hand flicks achieve a high velocity and contain abrupt starts and stops. The gestures can be restricted to pointing a single finger, or as expressive as moving the entire torso (Biau & Soto-Faraco, 2013; Leonard & Cummins, 2011). Beat gestures may contain little semantic content, but can be used to disambiguate the auditory signal (Biau & Soto-Faraco, 2013; Leonard & Cummins, 2011). The speed and timing of gestures are both critical factors. When produced in synchrony with the spoken narrative, beat gestures modulate auditory-related brain activity in listeners (Biau & Soto-Faraco, 2013; Leonard & Cummins, 2011).
The present pilot project aims to evaluate the role of beat gestures, and speaker movement in general, on the academic lecture. Specific research questions include (1) measuring the variability in gestures and movement across speakers, (2) examining the relation between gesture timing and modulation of vocal intonation, and (3) evaluating the impact of gesture extent and quantity on audience evaluation of the presentation. In addition, we will use the captured videos and measurements for future experiments where we will manipulate the video or audio content of the presentations. Additionally, this project will provide vital experience in motion capture and the LIVELab systems for the two trainees.
There will be two groups of participants used in this study.
1) A group of students from PSYCH 3TT3 (Educational Psychology) will provide the stimuli for the experiment. Each student must prepare a TED-Ed presentation–a 5-minute educational talk targeted to a general audience–as a mandatory component of the course. Students in this course will be given the opportunity to give a “practice version” of this talk in the LIVELab, while wearing motion capture sensors. These volunteers are experienced presenters, as they also work as teaching assistants for the Introductory Psychology course at McMaster University. In this role, they teach 2-3 tutorials of 25 students each week. We will collect motion capture data of the upper torso, arms, hands, and head from these presenters in addition to video and audio recording.
2) We will also require a group of naïve observers to rate the presentations on a number of subjective measures, including effectiveness of the presenter and clarity of communication. Each presenter will give their 5 minute talk, while the audience members pay attention to the presentation. Once each presentation is complete, audience members will provide tablet-based responses to standard questions on effectiveness and clarity of the presentation.
Gestures are an integral part of spoken communication. In a teaching environment, most knowledge transfer between teachers and students occurs through spoken lectures. Accompanying the speech are an ample variety of gestures. We want to explore the extent to which gestures influence listeners’ attention and comprehension. These pilot data will help us map the array of gestures that facilitate an optimal learning environment. This study will spark novel research in the field of pedagogy and multisensory perception, and promote future studies on the influence of gestures in teaching.
Vocal Effort and Room Acoustics
by Laurel Trainor and Dan Bosnyak
One of the challenges facing musicians during live performance is adjusting to the different acoustic environments in which they must perform. The amount of reverberation greatly affects people’s ability to hear and entrain to fellow performers. Previous research has shown that singers will adjust features of their performance, such as increasing their vocal intensity in the presence of higher levels of background noise (the Lombard effect), decreasing their vocal intensity when feedback of their own sound is increased (the sidetone-amplification effect), and playing at a slower tempo and with more staccato articulation when reverberation is higher.
In Experiment 1, we investigate three questions: (1) What is the nature of the adjustments made by vocalists in response to different reverberation conditions? (2) Do the magnitudes of these adjustments depend on the years of vocal training received? (3) Does the familiarity of the repertoire influence the degree of adjustment in vocal effort?
Twenty professional or student singers will be recruited through McMaster University, Western University, and the University of Toronto, and by word of mouth. Each singer will choose 4 songs, of 3 to 5 minutes each, from a list given to them: 2 familiar songs that they have sung previously and 2 unfamiliar songs. On the testing day, they will complete background questionnaires. They will then be fitted with a wireless headset, which will record the sound pressure of their voice near the point of production (the Source Sound Pressure). A second microphone will be situated a fixed distance from the performer and will record the Distal Sound Pressure. Each subject will be fitted with a respiration belt, which will record changes in thoracic circumference during respiration (the Depth of Breathing).
The respiration belt will also allow the Rate of Breathing to be recorded. A wireless Noraxon Biosensor module will be used to measure Skin Temperature and raw ECG data during the performance. Two additional wireless EMG sensors will measure general muscle tension over the upper trapezius. Each singer will also be equipped with motion sensors to measure posture and movement during the performances. Audio and video recordings of the performances will be made.
Each singer will sing each of their 4 songs under 2 acoustic conditions (very low reverberation; very high reverberation), for a total of 8 performances. A KEMAR acoustic mannequin will be placed in the centre of the second-last row, and the singers will be instructed to sing to the mannequin.
We predict that tempo will decrease and the degree of staccato articulation will increase with increasing reverberation. We also predict that vocal effort (as measured by Source Sound Pressure, respiration, heart rate, and skin temperature) will decrease with increasing reverberation, as will EMG-measured muscle tension. We predict that more experienced musicians will show smaller effects of room acoustics on these measures of vocal effort, and that their Distal Sound Pressure will be more consistent than that of less experienced musicians. This study will give us a scientific understanding of the adjustments that musicians need to make in order to perform live music in different venues, and has implications for how these skills might be taught to young performers.
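The two sound-pressure measures lend themselves to a simple summary statistic. As an illustrative sketch only (not the study’s actual analysis pipeline), Source and Distal Sound Pressure could each be summarized as an RMS level in decibels and then compared; the signals and tone below are hypothetical stand-ins for calibrated microphone recordings:

```python
import numpy as np

def spl_db(signal, p_ref=20e-6):
    """Root-mean-square sound pressure level in dB re 20 µPa.

    `signal` is assumed to be a calibrated pressure waveform in pascals.
    """
    rms = np.sqrt(np.mean(np.square(signal)))
    return 20.0 * np.log10(rms / p_ref)

# Synthetic example: a distal signal attenuated to half the source
# amplitude should measure about 6 dB lower than the source.
t = np.linspace(0, 1.0, 44100, endpoint=False)
source = 0.1 * np.sin(2 * np.pi * 220 * t)   # 220 Hz tone, 0.1 Pa peak
distal = 0.5 * source

print(round(spl_db(source) - spl_db(distal), 2))  # prints 6.02
```

A level difference computed this way is one plausible index of how consistently a singer projects to the back of the room across reverberation conditions.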
Experiment 2: Exploring effects of room acoustic imperfections in vocal performance
Good singers rely on echoes from the room to monitor how their sound is reaching the audience. The LIVELab is close to a “perfect” acoustic in that it produces an uncorrelated reverberant field. However, real rooms contain walls and objects (e.g., pillars) that reflect sounds in specific ways, and good singers may make use of this feedback in monitoring their vocal production. We can artificially create such reflective objects in the reverberant field using the active acoustics in the LIVELab. In this study we propose to test 10 professional singers. They will sing songs of their choice under different reverberant conditions, with or without artificial reflective objects. We will give the singers questionnaires to describe their subjective impressions of the quality of the acoustics for singing. In addition, we will measure vocal effort and record their singing to obtain objective measures of the importance of such reflective objects in the environment for singers.
Ειδω: To See – Vision to Visualization
by Thomas Doyle and Joseph Kim
Hobbes argued that knowledge is gained through the senses and that what is left behind is the memory of the sensation; manipulation of that memory is performed using mental imagery. While Bloom’s taxonomy of learning is best known for its cognitive hierarchy, the model goes further than cognitive requirements to encompass the affective and psychomotor domains. Thus we can say that knowledge may be created more efficiently when sensory information is tied more strongly to memory through, for example, physical interaction with the information. The question is whether such knowledge can be applied more effectively, or rather, whether we can “see” better.
At its core, engineering design is the process of taking a Concept to Creation. To achieve a successful technical design, one must combine the vision of a solution (function) with a visualization of the part or assembly (form). First-year engineering students often believe these skills are innate, and as a result they select streams of engineering that are perceived to minimize such skills, or in extreme cases leave the program. These skills may be improved with practice; however, the underlying cognitive mechanisms are not well understood, nor have alternative methods of teaching vision+visualization for engineering design been well studied.
The research questions we wish to address are:
1. What are the common cognitive/electrophysiological functions among engineering students who are good vs. poor “see-ers”?
2. Can structured access to a three-dimensional printer:
a. effectively replace or improve traditional visualization teaching (form),
b. enhance the vision (function) design outcomes, and
c. decrease the skill learning time.
3. Do individual differences in Working Memory Capacity (WMC) predict vision+visualization performance?
The data collected will be from first-year engineering students enrolled in Engineering Design 1C03 in September 2015. The course, described below, consists of weekly lectures, labs, and tutorials that run for 12 weeks. The intervention proposed is an alternative method of teaching visualization to first-year engineering students. Currently, the first 6 weeks of tutorial are devoted to traditional hand-sketching exercises with the pedagogical objective of improving visualization skills. Results from a prior visualization study (Doyle and Booth) suggest that access to rapid prototyping benefits students’ visualization of form. We propose replacing traditional visualization instruction with structured access to rapid prototyping tools, to investigate whether exposure to and use of these machines would benefit student visualization of form and function and be at least as effective at teaching these skills.
The three mechanisms of study are visualization, vision, and memory.
Data will be collected at 3 points in the course: weeks 1, 6, and 13.
The evidence to be collected:
1. Performance scores from standardized visualization tests (e.g. 3 dimensional shape rotations).
2. Working memory performance tests (e.g. cognitive load vs. capacity).
3. Quantitative electrophysiology during visualization:
(a) electroencephalogram (EEG) for activation of areas correlated with vision and working memory (11 channels: Fp1, Fp2, F3, F4, Cz, T3, T4, P3, P4, O1, and O2),
(b) electro-oculogram (EOG) for physical motion of the eye in scanning visual stimuli (2 channels), and
(c) galvanic skin response (GSR) for stress and engagement (1 channel).
4. For identified exemplars, magnetic resonance brain imaging for further localization and data collection related to 3(a).
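As one illustration of how the electrophysiological evidence in item 3 might be quantified, band-limited power can be computed for each of the named channels. The sketch below is an assumption-laden toy example, not the study’s analysis: the 256 Hz sampling rate, the 10 s synthetic recording, and the choice of the alpha band are all hypothetical, chosen purely for illustration.

```python
import numpy as np

FS = 256          # assumed sampling rate (Hz), for illustration only
CHANNELS = ["Fp1", "Fp2", "F3", "F4", "Cz", "T3",
            "T4", "P3", "P4", "O1", "O2"]

def band_power(x, fs, lo, hi):
    """Mean periodogram power of `x` within the [lo, hi] Hz band."""
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
    psd = np.abs(np.fft.rfft(x)) ** 2 / len(x)
    mask = (freqs >= lo) & (freqs <= hi)
    return psd[mask].mean()

# Synthetic 10 s recording: white noise on every channel, plus a
# 10 Hz (alpha-band) component injected on the occipital channels.
rng = np.random.default_rng(0)
t = np.arange(10 * FS) / FS
eeg = {ch: rng.standard_normal(len(t)) for ch in CHANNELS}
for ch in ("O1", "O2"):
    eeg[ch] += 3.0 * np.sin(2 * np.pi * 10 * t)

# Alpha (8-12 Hz) power per channel; the occipital channels carry the
# injected rhythm, so their band power dominates the frontal channels.
alpha = {ch: band_power(x, FS, 8, 12) for ch, x in eeg.items()}
print(max(alpha, key=alpha.get))
```

In practice a channel-by-band power profile like this is one of the simpler quantities that could be compared between good and poor “see-ers” at the three measurement points.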
Data will also be collected at 2 points in the course: weeks 9 and 12.
The evidence to be collected:
1. During week 9 students are individually examined on system design.
2. During week 12 students are individually interviewed on their group project design.
Evaluators will be provided with rubrics to assess the formative work and the summative result of the designs. Evaluators will have no knowledge of who is participating in the study. Formative and summative performance will then be compared.
In weeks 1, 6, and 13, the automated Operation Span task (OSPAN; Unsworth et al., 2005) will measure participants’ working memory capacity (WMC): their ability to simultaneously process and store information.
Engineering Design is both art and science. The single term “visualization” does not by itself describe, nor distinguish, the ability “to see” complexity in function. By studying these questions we expect to gain insights that will enhance the teaching and learning of vision+visualization for engineering design.
The role of social context in intersubject synchronization between audience members during musical performance
by Jessica Grahn, Daniel Cameron, and Molly J. Henry
Brain waves across the entire cortex synchronize with dynamic stimuli such as movies. A measure referred to as intersubject synchronization indicates that different brains synchronize the same way with the same stimulus, even when the viewers are separated from each other and view the movie at different times. To our knowledge, intersubject synchronization has rarely been examined in the context of listening to music. Moreover, although intersubject synchronization is positively related to audience preferences (as measured by fMRI), it is an open question whether intersubject synchronization is modulated by the presence of fellow audience members or live performers. Anecdotally, an important contributor to the enjoyment of a concert is forming a bond with the others who are attending to and enjoying the same experience. Thus, the proposed study will examine intersubject synchronization in three different social contexts to answer the following research questions: 1) How does social context change intersubject synchronization of brain responses to music? 2) What role does groove (how much the music inspires movement) play in intersubject synchronization, and how is this modulated by social context? 3) How do individual differences in beat perception abilities affect intersubject synchronization and enjoyment of a concert in these varying social contexts?
Over 3 days, 3 groups of 20 participants each (and 4 performers) will undergo electroencephalography (EEG) while listening to (and performing) music in one of 3 social contexts: 1) 20 audience participants observing a live musical performance (day 1), 2) 20 audience participants observing a recorded musical performance (the same performance as in 1, day 2), or 3) 20 isolated participants observing the same recorded musical performance (day 3). On day 1, the performance will occur live in the LIVELab, and performers will also undergo EEG. On day 2, the recorded performance (from day 1) will be presented on the Video Wall, and Active Acoustics will recreate the audio of a live performance; the performers will be absent. On day 3, the performance will occur in the LIVELab; audiovisual stimulation will be identical to day 2, but critically, we will isolate participants by separating them with a physical partition and/or presenting visual stimulation via tablets and headphones (see Part 6.D. Other).
The musical performance will consist of 10 pieces of music, alternating between high (n=5) and low (n=5) groove; high and low groove pieces will be selected based on ratings previously collected by our group from young, normal-hearing participants. Participants will rate how enjoyable and groovy the music was at the end of each song. We have opted not to take these measurements online during the performance so as not to interrupt the immersive experience we hope to create in the social contexts; online groove measurements will be collected from an independent sample in our lab to be correlated with intersubject synchronization. Each participant (and performer) will also complete a brief version of the perceptual Beat Alignment Test (BAT), presented via tablet, which will provide a measure of individual beat perception ability.
Intersubject synchronization (see Part 5.A. Data Analysis) will be compared across social contexts. We expect that intersubject synchronization will be enhanced by the presence of other audience members (contexts 1 & 2 vs. 3) and by the presence of live performers (context 1 vs. 2 & 3). We also anticipate that synchronization between audience members and performers will be highest during the live performance. We expect that intersubject synchronization will be higher for high than for low groove pieces, and lower for individuals who are poor beat perceivers as indexed by the BAT. Finally, we will characterize the impact that individual variations in beat perception ability might (or might not) have on enjoyment of a live performance, and whether enjoyment is rooted in successful intersubject synchronization.
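In spirit, intersubject synchronization can be sketched as the average pairwise correlation of different subjects’ responses to the same stimulus. The following is a minimal toy illustration with synthetic data, not the study’s actual analysis (which is described in Part 5.A); the group sizes and the shared stimulus-driven component are hypothetical:

```python
import numpy as np
from itertools import combinations

def intersubject_correlation(data):
    """Mean pairwise Pearson correlation across subjects.

    `data` is a (subjects x timepoints) array holding one channel's
    response to the same stimulus; higher values mean the listeners'
    brain responses track the stimulus more similarly.
    """
    rs = [np.corrcoef(data[i], data[j])[0, 1]
          for i, j in combinations(range(len(data)), 2)]
    return float(np.mean(rs))

# Synthetic example: 20 listeners sharing a common stimulus-driven
# component plus individual noise, vs. 20 uncorrelated listeners.
rng = np.random.default_rng(1)
stimulus = rng.standard_normal(1000)
synced = np.array([stimulus + 0.5 * rng.standard_normal(1000)
                   for _ in range(20)])
unsynced = rng.standard_normal((20, 1000))

print(intersubject_correlation(synced)
      > intersubject_correlation(unsynced))  # prints True
```

Comparing this statistic between the live, group-recorded, and isolated contexts is the kind of contrast the predictions above describe.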
The proposed project exceeds the current state of the art in the field in several ways. First, although several fMRI studies have examined intersubject synchronization during music listening, we are aware of only one unpublished study (presented in the Proceedings of the International Conference on Music Perception and Cognition) making use of the temporally more precise EEG technique. This is important because EEG (but not fMRI) is capable of indexing fast-time-scale fluctuations in intersubject synchronization that might be coupled to variations in musical acoustics. Second, intersubject synchronization in response to music has only been examined in individual participants listening to music in isolation, despite the importance of the presence of performers and fellow audience members for the immersive experience of a live performance. Thus, the proposed project will use EEG to examine moment-to-moment fluctuations in intersubject synchronization in varying social contexts. The results may contribute to questions regarding the role of intersubject synchronization in social cohesion and the evolution of musical faculties. Moreover, by collecting data from participants viewing the performance in isolation, our comparison to group recordings made in the LIVELab may provide convincing evidence for the importance of a setup capable of many simultaneous recordings.
Dynamics of eye movements made in response to dance
by Matthew Woolhouse
In March 2012 I was a co-PI with Dr. Steven Brown on a successful CFI-LOF infrastructure bid for $520,978 to support dance research at the McMaster Institute for Music and the Mind (MIMM). Subsequently, this enabled research to be undertaken in the field of eye-tracking and dance, which led to an exploration of the influence of music-dance synchrony on the dynamics of eye movements, perception, and attention. This research was published in December 2014 in Frontiers in Human Neuroscience, a leading peer-reviewed journal in the field. A second follow-up study was completed in the LIVELab in summer 2015, and is in preparation for journal submission in late 2015.
The Science of “Secrets”: collaboration with Anthem SRO
by Matthew Woolhouse
In collaboration with the McMaster Digital Music Lab and the McMaster Institute for Music and the Mind (MIMM), SRO/Anthem present “The Science of Secrets”: a blended experience of music and science for the release of Ian Fletcher Thornley’s new album “Secrets”. Ian and his band will perform a concert at MIMM’s LIVE (Large Interactive Virtual Environment) Lab for a group of fans (who are well versed in Ian’s music) and “newbies” (who are unfamiliar with his music). Throughout the performance, the two groups of audience members will be monitored for various physiological responses, including galvanic skin response, heart rate, and body movement. Before the concert, an afternoon listening session of Ian’s new album will take place in the LIVE Lab; its audience will consist of a different group of fans and newbies who will be monitored for the same three physiological measures. In sum, this unique event will be the first ever to combine a major album launch with one of the world’s finest research facilities dedicated to exploring human responses to music.
Galvanic Skin Response (GSR): Measures changes in arousal/excitement based on sweat levels. Subjects are fitted with two finger electrodes, which simply slip onto the index and middle fingers of one hand.
Heart Rate (HR): Measures changes in arousal/excitement based on heart rate. Multiple electrodes (usually two) are placed on the subject’s chest using electrode gel. The electrodes are wired to a monitor placed beside the subject during the performance.
Motion Capture: Measures the degree of head movement throughout a performance. Subjects are fitted with a lightweight cap equipped with tiny reflective markers (Styrofoam balls). Motion capture cameras placed around the auditorium continuously emit infrared light, which reflects off the markers and back into the cameras, giving precise coordinates of a subject’s head over time.
Applying Principles of Cognitive Psychology to the Classroom: Using Interpolated Tests to Improve Attention and Learning in a Live Lecture
Schacter, Heisz, Pachai, Benoit
Funded by: MIETL, VP Research
Classroom response systems and retrieval-enhanced learning: Examining underlying physiological activity supporting learning
Teeter & Thomson
Funded by: MIETL, VP Research
Effects of expectation and expressive timing on musical encoding
Nakata, Trainor et al.
Funded by Japan Society for the Promotion of Science (JSPS)
Synchronization of body movement to a beat
Brown, Chauvigne, Galea, Lyons, Richardson
Funded by NSERC
Biofeedback in performance anxiety
Lade, Trainor, et al.
Funded by CIHR
Moving together: Choreographic mappings of children with diverse dis/abilities and their neurological responses to a dance-play event (Phase 3)
Gibson, McLaren, Missiuna, Edwards, Chau, & Bennett
Funded by CIHR
Division of orthopaedics education study
Funded by McMaster University Division of Orthopaedics