

Study 1

Participants

We collected data from 61 participants (42 female, 19 male, mean age = 22.0 years, SD = 3.4) without vision or hearing deficits and with Western cultural backgrounds at the University of Graz, Austria. Two additional participants were excluded because all of their ratings of the dependent variable inclusion of other in the self were zero. The Goldsmiths Musical Sophistication Index48 indicated that the musical training of the participants was heterogeneous, varying between the 1st and 93rd percentile with a mean at the 36th percentile. Participants provided written informed consent and the study was approved by the ethics committee at the University of Graz. All three studies conform to the code of ethics of the World Medical Association (Declaration of Helsinki).

Video stimuli

The 20-second videos can be found in the supplementary material section.

Independent variable synchrony

Virtual self and other were either walking in phase with the music (synchronous) or the virtual self was walking in phase and the virtual other out of phase with the music (asynchronous). Each stride consisted of 21 frames. In the synchronous videos, the strides of both figures occurred on the same frames. In the asynchronous videos, the steps of the virtual other were delayed by eight frames.
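For orientation, the temporal offset implied by the eight-frame delay can be derived from the stride duration of 636 ms reported in the next subsection; the following R lines are only a back-of-the-envelope sketch and not part of the original stimulus generation:

# Approximate offset of the virtual other in the asynchronous videos
stride_ms    <- 636                                  # stride duration (see next subsection)
frames       <- 21                                   # frames per stride
delay_frames <- 8                                    # delay of the virtual other
delay_ms     <- delay_frames / frames * stride_ms    # ~242 ms
delay_deg    <- delay_frames / frames * 360          # ~137 degrees of the stride cycle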

Independent variable musical pattern

The videos were accompanied by real music with patterns and instrumentations typical for popular North American/Western or Indian music, and by an isochronous metronome. For the North American and Indian musical patterns, three instrumental pieces with clear beats were selected (Indian: “Kedara in Vilambit & Drut” by A. A. Khan & T. N. Krishan, “Awakening” by Ken Zuckerman, and “Chaap Tilak” by Shujaat Khan; Western/North American: “What I Got” by Sublime, “Thinking” by The Meters, and “My Father’s Eyes” by Eric Clapton). The tempo of all instrumental pieces (between 92 and 96 bpm in the original versions) was aligned to the stride duration of 636 ms (94.3 bpm) using the time warp option in Ableton Live 8 (Ableton, Berlin, Germany). The metronome had an inter-onset interval of 636 ms.

Procedure and ratings

Data were collected in groups of 3 to 4 participants sitting at individual desks with room dividers and wearing closed over-ear headphones. Participants were instructed to watch the stick figure videos and to imagine that they were one of the figures and that the other figure represented an unknown person. They were told that the videos would have different auditory accompaniments and that they should pay attention to how the figures moved in time with each other and in time with the auditory accompaniments. Four practice trials with an isochronous metronome as auditory accompaniment were presented at the beginning of the experiment. Afterwards, two blocks of 18 randomized trials each were presented; the number of trials followed from the combination of 9 auditory stimuli (3 Indian, 3 North American, and 3 metronome) and 2 synchrony conditions (synchronous and asynchronous movements).

Participants rated the interpersonal closeness with the virtual other on an adapted Inclusion of Other in the Self scale32 (IOS; Fig. 1C) and the likeability of the other. The IOS scale is a validated pictorial measure of closeness between self and other, which is not particularly susceptible to social desirability32. Both scales were continuous sliders ranging from 0 on the left to 100 on the right. At the end of the experiment, participants rated how much they enjoyed each piece of music and how familiar the music was on a continuous scale from 0 to 100. The experiment lasted approximately 20 minutes.

Music ratings

The mean ratings of familiarity with the music and enjoyment of the music for the 3 Indian and the 3 North American musical stimuli were compared in paired samples t-tests. As expected, familiarity with the music was higher for North American compared to Indian music stimuli (t(60) = 4.85, p < 0.001, d = 0.62). Similarly, enjoyment of the music was higher for North American compared to Indian music stimuli (t(60) = 7.08, p < 0.001, d = 0.91). A repeated measures correlation (rmcorr package in R) revealed a positive correlation between familiarity and enjoyment (r(314) = 0.34, p < 0.001).
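These analyses map onto standard R calls. The sketch below assumes hypothetical column names (fam_na, fam_in for the per-participant mean familiarity ratings, and a long-format table dat_long for the repeated measures correlation); it is not the original analysis script:

# Paired-samples t-test and Cohen's d for familiarity (analogous for enjoyment)
t_fam <- t.test(dat$fam_na, dat$fam_in, paired = TRUE)
d_fam <- mean(dat$fam_na - dat$fam_in) / sd(dat$fam_na - dat$fam_in)

# Repeated measures correlation between familiarity and enjoyment across stimuli
library(rmcorr)
rm_fit <- rmcorr(participant = id, measure1 = familiarity, measure2 = enjoyment,
                 dataset = dat_long)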

Data analysis

For the statistical analyses, we used each participant’s mean ratings of inclusion of other in the self (IOS) and likeability of the other (LIKE), averaged across the three individual stimuli of each musical pattern (3 Indian, 3 North American, and 3 metronome stimuli) and across both blocks.

We fitted linear mixed effects models to a dataset including only responses to videos with music to explain the dependent variables inclusion of other in the self (IOS) and likeability of the virtual other (LIKE), using the lmer function of the lme4 package50 in R49 (Table 1). The fixed effects of the full models were synchrony (synchronous and asynchronous movement), musical pattern (familiar/North American and unfamiliar/Indian), enjoyment of the music, and the interaction between synchrony and enjoyment of the music. Based on previous research with a similar design demonstrating the strength of the effect of movement synchrony on affiliation14, synchrony was tested as the first fixed effect. The random effect, noted as (1 | participant), accounted for individual differences by allowing a random intercept per participant. The null model only included the random effect. The emmeans package51 in R was used for pairwise comparisons with Bonferroni corrections. Visual inspection indicated that the residuals of all models were normally distributed. We compared the fit of the nested models using likelihood ratio tests.
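A minimal sketch of this modelling approach in R is shown below. The data frame and column names (dat_music, IOS, synchrony, pattern, enjoyment) are hypothetical placeholders, and the exact model-building sequence beyond entering synchrony first is assumed rather than taken from the original script:

library(lme4)
library(emmeans)

# dat_music: one row per participant x musical pattern x synchrony condition,
# with IOS averaged over the three stimuli of each pattern and over both blocks
m0 <- lmer(IOS ~ 1 + (1 | participant), data = dat_music, REML = FALSE)   # null model
m1 <- lmer(IOS ~ synchrony + (1 | participant), data = dat_music, REML = FALSE)
m2 <- lmer(IOS ~ synchrony + pattern + (1 | participant), data = dat_music, REML = FALSE)
m3 <- lmer(IOS ~ synchrony + pattern + enjoyment + (1 | participant),
           data = dat_music, REML = FALSE)
m4 <- lmer(IOS ~ synchrony + pattern + enjoyment + synchrony:enjoyment +
           (1 | participant), data = dat_music, REML = FALSE)

anova(m0, m1, m2, m3, m4)                                   # likelihood ratio tests of nested models
emmeans(m4, pairwise ~ synchrony, adjust = "bonferroni")    # Bonferroni-corrected contrasts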

Study 2

Participants

Participants between the ages of 18 and 60 without vision or hearing deficits were recruited over mailing lists and social media. Out of 271 participants who started the survey, 204 completed every question and were included in the analysis (112 female, 92 male, mean age = 36.0 years, SD = 10.9). The participants were born in Europe (114), Asia (41), North America (21), Latin America (14), Africa (8), and Oceania (6). According to the Goldsmiths Musical Sophistication Index48, the musical training of the participants was heterogeneous, varying between the 1st and 99th percentile with a mean at the 54th percentile. 141 participants used a laptop or computer (92 with headphones, 29 with external loudspeakers, and 20 with integrated loudspeakers) and 63 used a smartphone or tablet (31 with headphones, 8 with external loudspeakers, and 24 with integrated loudspeakers). Written informed consent was provided and the study was approved by the institutional review board at the Danish Neuroscience Centre.

Video stimuli

The 14-second videos can be found in the supplementary material section.

Independent variable synchrony

Virtual self and other were either walking in synchrony with each other and the music (beat interval: 700 ms/85.7 bpm) or the virtual other was walking asynchronously. In the synchronous movement videos, the step interval of the virtual other was shortened by 1% to 693 ms and the phase was slightly shifted. As a result, the steps of the two figures were approximately 60 ms apart at the beginning of the video, perfectly synchronized in the middle of the video, and approximately 60 ms apart at the end of the video, introducing barely noticeable, “human-like” imperfections. In the asynchronous movement videos, the virtual other was not only walking out of phase with the beat but also with a different step interval, i.e., 800 ms instead of 700 ms.
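As a rough check of these timings (assuming a 14 s video and constant step intervals; a sketch only, not part of the original stimulus generation):

# Drift between virtual self and other in the synchronous condition
self_step  <- 700                                    # step interval of the virtual self (ms)
other_step <- 693                                    # step interval of the virtual other (1% shorter)
n_steps    <- 14000 %/% self_step                    # ~20 steps of the virtual self in 14 s
drift_ms   <- (self_step - other_step) * n_steps     # ~140 ms total drift over the video
# Centred on perfect alignment mid-video, this corresponds to offsets of roughly
# 60-70 ms at the beginning and at the end of the video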

Independent variable musical pattern

Based on Pre-studies 2A and 2B (see supplementary material), the following three musical stimuli were selected for the main experiment: “Bonde” by Ali Farka Toure and Ry Cooder from the album “Talking Timbuktu” (region: West Africa, 82 bpm), “Cumbia del Leon” by The Lions from the album “Jungle Struttin” (region: Latin America, 84 bpm), and “Nomads” by Zakir Hussain from the album “Music of the Deserts” (region: South Asia, 85 bpm). The outcomes of Pre-studies 2A and 2B for these stimuli are presented in Table 4.

Table 4 Descriptive statistics of the three music stimuli selected after Pre-studies 2A and 2B: results of the finger tapping task (mean and standard deviation of inter-tap intervals; beat interval: 700 ms) and of the synchrony rating task, in which participants had to decide whether a stick figure was walking in time or out of time with the beat of the music (percentage of correct answers and mean decision time).

Survey and ratings

The survey was carried out online on soscisurvey.de (SoSci Survey GmbH, Munich, Germany). A one-minute instruction video explained the task and the rating scales. After the instructions, six videos resulting from the combination of the independent variables (2 synchrony × 3 musical patterns) were presented. Participants rated the social closeness between the virtual self and other on an adapted Inclusion of Other in the Self scale32 (IOS) with a continuous slider ranging from zero on the left end to 100 on the right end (Fig. 1C). In contrast to Study 1, we did not include ratings of the likeability of the virtual other for two reasons. First, IOS ratings seem to better reflect the relevant social processes and evaluations in the current paradigm, while ratings of the likeability of the other might have been confounded with the liking of the music. Second, we reduced the duration of the online experiment to reach more participants.

The videos were presented in synchronous/asynchronous pairs per musical pattern. The order of musical patterns and the order of the movement conditions within a musical pattern were randomized. After completing the video ratings, participants rated the music without any visual stimulus on the following continuous scales from 0 (“not at all”) to 100 (“very”): “How familiar are you with this general type of music?”, “How much did you like this specific piece of music?”, and “How clear was the beat of this specific piece of music?”. Finally, participants filled out the musical training subscale of the Goldsmiths Musical Sophistication Index48. The whole survey took approximately 10 minutes.

Music ratings

We analysed the familiarity with the music, the enjoyment of the music, and the perceived beat clarity of the music in three separate one-way repeated measures ANOVAs in the software JASP with the factor musical pattern (West Africa, South Asia, and Latin America). Greenhouse-Geisser corrections were applied when the assumption of sphericity was violated. Post-hoc comparisons were Bonferroni corrected. Familiarity with the music significantly differed between the three musical patterns (F(2,406) = 28.66, p < 0.001, η² = 0.12), with the Latin American stimulus rated as more familiar than the West African (mean difference = 12.74, SE = 2.10, p < 0.001, d = 0.43) and South Asian stimuli (mean difference = 14.53, SE = 2.23, p < 0.001, d = 0.46), and no difference between the latter two. Enjoyment of the music did not significantly differ between the three musical patterns (F(2,406) = 2.35, p = 0.097, η² = 0.01). Perceived beat clarity of the music significantly differed between the three musical patterns (F(1.88,381.64) = 38.66, p < 0.001, η² = 0.16). The beat of the Latin American stimulus was perceived as clearer than the beat of the South Asian (mean difference = 3.41, SE = 1.41, p = 0.050, d = 0.17) and West African stimuli (mean difference = 13.78, SE = 1.72, p < 0.001, d = 0.56). Additionally, the beat of the South Asian stimulus was perceived as clearer than the beat of the West African stimulus (mean difference = 10.37, SE = 1.75, p < 0.001, d = 0.42). Within each musical pattern, familiarity with the music, enjoyment of the music, and perceived beat clarity were positively correlated with each other (all r(202) > 0.21, all p < 0.002). As shown in Supplementary Table S6, familiarity, enjoyment, and beat clarity ratings for the selected music stimuli were relatively homogenous between participants born in the following regions: West Africa, Latin America, South Asia, and others.
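The ANOVAs were run in JASP; an equivalent scripted analysis is sketched below in R with the afex package, assuming hypothetical long-format data (dat2_long with columns participant, pattern, and one column per rating) rather than the original analysis files:

library(afex)
library(emmeans)

# One-way repeated measures ANOVA on familiarity (analogous for enjoyment and beat clarity);
# afex applies the Greenhouse-Geisser correction to within-subject effects by default
fit_fam <- aov_ez(id = "participant", dv = "familiarity", within = "pattern",
                  data = dat2_long)
fit_fam
emmeans(fit_fam, pairwise ~ pattern, adjust = "bonferroni")   # Bonferroni post-hoc comparisons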

Data analysis

Using the lmer function of the lme4 package50 in R49, we modelled IOS as a function of the following effects in the full models: synchrony, musical pattern, music rating (either familiarity with the music, enjoyment of the music, or perceived beat clarity of the music), and the interaction between synchrony and music rating (Table 2). Additionally, the random effect (1 | participant) accounted for individual differences by allowing a random intercept per participant. The null model only included the random effect. The emmeans package51 in R was used for pairwise comparisons with Bonferroni corrections. Visual inspection indicated that the residuals of the models were normally distributed. We compared the fit of the nested models using likelihood ratio tests.
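A sketch of the three full models in R is given below; the data frame and column names (dat2, IOS, synchrony, pattern, familiarity, enjoyment, beat_clarity) are hypothetical placeholders:

library(lme4)
library(emmeans)

# dat2: one row per participant x musical pattern x synchrony condition
null_model <- lmer(IOS ~ 1 + (1 | participant), data = dat2, REML = FALSE)
full_fam   <- lmer(IOS ~ synchrony + pattern + familiarity + synchrony:familiarity +
                     (1 | participant), data = dat2, REML = FALSE)
full_enjoy <- lmer(IOS ~ synchrony + pattern + enjoyment + synchrony:enjoyment +
                     (1 | participant), data = dat2, REML = FALSE)
full_beat  <- lmer(IOS ~ synchrony + pattern + beat_clarity + synchrony:beat_clarity +
                     (1 | participant), data = dat2, REML = FALSE)

anova(null_model, full_fam)                                  # likelihood ratio test vs. the null model
emmeans(full_fam, pairwise ~ synchrony, adjust = "bonferroni")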

Study 3

Participants

Forty-eight students at Aarhus University enrolled in a variety of study programs took part in the study (35 female, 13 male, mean age = 23.4 years, SD = 3.5). Data were collected in group tests with 26 and 22 participants at the beginning of two separate lectures. Participation was voluntary and informed consent was provided by optionally returning the paper-and-pencil questionnaires, which did not contain identifying information. The study was approved by the institutional review board at the Danish Neuroscience Centre.

Video stimuli

The 12-second videos can be found in the supplementary material section.

Independent variable synchrony

Virtual self and other were either walking in synchrony with each other and the music or the virtual other was walking asynchronously. The step frequency of both figures in the synchronous movement condition was 636 ms (94.4 bpm). The step frequency of the virtual other in the asynchronous movement condition was 700 ms.

Independent variable syncopation level

The three auditory stimuli were taken from a larger set of stimuli used in Matthews, Witek, Heggli, Penhune, and Vuust41. They consisted of repetitions of five identical piano chords in D major and a soft hi-hat sound marking the eighth notes. The stimuli had three syncopation levels: low, moderate, and high (Fig. 3A), corresponding to high, moderate, and low beat clarity, respectively. The sequences were slowed down to 94.4 bpm.

Procedure and ratings

Data were collected in two group sessions at the beginning of two different lectures, which were part of different lecture series at Aarhus University. Participants received a printed questionnaire with instructions on the first page. The instructions were additionally read aloud by the experimenter. The videos were presented on a screen via a projector. Sound was played via active loudspeakers. The experimental stimuli consisted of six individual videos, resulting from the combination of the independent variables synchrony (2) × syncopation level (3). Each video was presented twice in two separate blocks, resulting in 12 trials. The order of the six videos per block was randomized. After each video, participants had a few seconds to provide an answer on an adapted Inclusion of Other in the Self scale32 (IOS; Fig. 1C), which was printed on the questionnaire as a 100-mm visual-analogue scale corresponding to IOS values from 0 to 100, similar to Studies 1 and 2. The experiment lasted approximately 10 minutes.

Data analysis

As a visual data inspection and Shapiro-Wilk normality tests indicated that most of the IOS distributions were not normal, we used Wilcoxon signed-rank tests for the analysis and computed the effect size as r = Z/sqrt(N × 2) with N = 48. The Bonferroni-corrected critical p-value for the resulting six comparisons is 0.05/6 = 0.0083. Additionally, we computed and compared IOS difference values (synchronous movement – asynchronous movement) for every syncopation level.
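For illustration, the following R sketch shows how one such comparison and its effect size could be computed; the column names (e.g., ios_sync_low, ios_async_low in a hypothetical data frame dat3) are placeholders, and the Z value is obtained via the normal approximation rather than from the original analysis software:

# Wilcoxon signed-rank test for one syncopation level (analogous for the other comparisons)
w <- wilcox.test(dat3$ios_sync_low, dat3$ios_async_low,
                 paired = TRUE, exact = FALSE, correct = FALSE)
z <- abs(qnorm(w$p.value / 2))        # Z from the two-sided p-value (normal approximation)
r <- z / sqrt(48 * 2)                 # effect size r = Z / sqrt(N x 2), with N = 48
p_crit <- 0.05 / 6                    # Bonferroni-corrected critical p-value (~0.0083)

# IOS difference values (synchronous - asynchronous) per syncopation level
diff_low <- dat3$ios_sync_low - dat3$ios_async_low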


