Being the victim of virtual abuse changes default mode network responses to emotional expressions
- aSystems Neuroscience, Institut d’Investigacions Biomèdiques August Pi i Sunyer, Barcelona, Spain
- bEvent Lab, Department of Clinical Psychology and Psychobiology, Faculty of Psychology, University of Barcelona, Barcelona, Spain
- cDepartment of Cognitive Neuroscience, Faculty of Psychology and Neuroscience, Maastricht University, Maastricht, the Netherlands
- dDepartment of Computer Science, University College London, London, UK
- eInstitute of Neurosciences of the University of Barcelona, Barcelona, Spain
- fDepartment of Cognition, Development and Educational Psychology, Faculty of Psychology, University of Barcelona, Barcelona, Spain
Received 13 May 2020, Revised 11 September 2020, Accepted 16 November 2020, Available online 7 December 2020, Version of Record 5 January 2021.
Reviewed 27 July 2020; Action editor Stefan Schweinberger
Abstract
Recent behavioural studies have provided evidence that virtual reality (VR) experiences have an impact on socio-affective processes, and a number of findings now underscore the potential of VR for therapeutic interventions. An interesting recent result is that when male offenders experience a violent situation as a female victim of domestic violence in VR, their sensitivity in recognising fearful facial expressions improves. A timely question concerns the brain mechanisms underlying these behavioural effects, which are still largely unknown. The current study used fMRI to measure the impact of a VR intervention in which participants experienced violent aggression from the specific vantage point of the victim. We compared brain processes related to facial and bodily emotion perception before and after the VR experience. Our results show that the virtual abuse experience led to an enhancement of Default Mode Network (DMN) activity, specifically associated with changes in the processing of ambiguous emotional stimuli. In contrast, DMN activity decreased when participants observed fully fearful expressions. Finally, we observed increased variability in brain activity for male versus female facial expressions. Taken together, these results suggest that the first-person perspective of a virtual violent situation impacts emotion recognition through modifications in DMN activity. Our study contributes to a better understanding of the brain mechanisms associated with the behavioural effects of VR interventions in the context of a violent confrontation, with the male participant embodied as a female victim. Furthermore, this research consolidates the use of VR embodied perspective-taking interventions for addressing socio-affective impairments.
1. Introduction
During social interactions we routinely process emotional signals, whether from facial (Tracy & Robins, 2008) and bodily (de Gelder, 2016; de Gelder et al., 2004) expressions or from voices (Lima et al., 2019). However, evidence suggests that certain populations such as violent offenders show a different pattern in response to perceiving emotional expressions. Violent behaviour has been associated with deficits in recognising negative emotions and differences in processing stimuli with ambiguous emotional expressions (Blair et al., 2001; Kret & de Gelder, 2013; Wegrzyn et al., 2017). This raises the question of whether deficits in emotion recognition might be addressed by targeted interventions (Baron-Cohen et al., 2009; Schönenberg et al., 2014). Expected outcomes of such interventions might include changes at the perceptual level; for instance, differences in how a stimulus attribute, such as fear in a facial expression, is perceived, as well as changes at the cognitive processing level related to emotion perception (i.e., perspective-taking and empathy).
Virtual reality (VR) offers a new type of targeted intervention that aims to achieve behavioural and subjective changes in the participant (Slater, 2009; Slater & Sanchez-Vives, 2016). Furthermore, unlike traditional purely verbal and cognitive interventions, VR-based therapeutic interventions may be significantly enhanced with embodied perspective-taking. Through embodiment of the VR character, participants are able to experience events from the first-person perspective (1PP) of a virtual character (Gonzalez-Liencres et al., 2020). Virtual embodiment is implemented using a head-mounted display and an end-effector tracking device, where the participants see a life-sized virtual body from a first-person perspective moving synchronously with their actual body movements. This typically gives rise to a strong illusion of ownership of the virtual body and contributes to a sense of agency (Banakou & Slater, 2014; Nierula et al., 2019; Slater et al., 2010). Current studies leveraging VR for therapeutic and social purposes have related these VR-based illusions to sensory perception modification as well as to cognitive effects (Maister et al., 2014). For example, the experience of being virtually embodied in a child avatar results in the subsequent overestimation of the size of objects compared to embodiment in an adult body of the same height as the child (Banakou et al., 2013). Similarly, being embodied in a virtual body of a different race is related to a reduction of implicit racial bias (Maister, Sebanz, et al., 2013; Peck et al., 2013), an effect that may last for at least a week (Banakou et al., 2016). Moreover, in a behavioural study directly related to the present fMRI research, domestic violent offenders were able to overcome their deficit in recognizing fearful female facial expressions after experiencing a female victim’s perspective in VR (Seinfeld et al., 2018).
At present the underlying brain mechanisms sustaining the behavioural and neuropsychological modifications of VR experiences remain largely unknown and unexplored (Blanke et al., 2015; Maister et al., 2014). In relation to the Seinfeld et al. (2018) behavioural study, where results indicated increased fear recognition after experiencing the perspective of a victim, it is plausible that changes in emotion perception are associated with enhanced processing of specific visual features of the emotional stimuli per se. A possible explanation may be that the VR intervention specifically influenced the perception of fearful facial expressions by enhancing the processing of emotional content in face- and emotion-coding areas including the fusiform face area and the amygdala (Méndez-Bértolo et al., 2016; Vuilleumier et al., 2001). At the same time such activity pattern changes might be linked to social cognition processes. The latter involve Default Mode Network (DMN) areas such as the temporo-parietal junction (TPJ), the posterior cingulate cortex (PCC), the anterior temporal lobe (aT), and the medial prefrontal cortex (mPFC) (Arzy et al., 2006; Bernhardt & Singer, 2012; Bzdok et al., 2012; Saxe & Kanwisher, 2003; Tusche et al., 2016; Zaki & Ochsner, 2012). These areas take part in multiple internally directed processes and autobiographical memories, as well as in focused attention and information encoding (Andrews-Hanna, 2012; Buckner et al., 2008; Spreng et al., 2009).
To examine the brain mechanisms underlying these VR effects (Seinfeld et al., 2018), we designed a study in which male participants passively viewed facial and bodily emotion expressions (i.e., 9-step morph continua between extremely fearful and happy expressions) during two fMRI scanning sessions. Between the two fMRI scanning sessions, participants experienced an immersive VR domestic violence scene from the perspective of the female victim. In the VR scene, participants entered a virtual environment where their bodies were substituted with a life-sized virtual female body moving synchronously with their own real movements. Having embodied the virtual female, participants saw a virtual male approaching them and directing abusive speech and gestures at them. Based on a previous study that reported an impact of this same virtual experience on the processing of facial expressions (Seinfeld et al., 2018), we hypothesized that the VR intervention would trigger changes in activity in the processing of the emotional morph stimuli as well as in social cognition network areas. To test this hypothesis, we used whole-brain and region-of-interest (ROI) analyses with repeated-measures ANOVAs, comparing the fMRI scanning sessions before and after the VR intervention. Specifically, we ran a contrast analysis comparing the fMRI scanning sessions before (pre-VR fMRI) and after (post-VR fMRI) the VR session; we also ran an ANOVA including as factors the fMRI scanning Session (i.e., pre and post VR), the Gender of the emotional morph stimuli (i.e., male and female), and the Morph step (i.e., more or less ambiguous). The pre- and post-VR scanning sessions each consisted of two runs, allowing us to assess within-run habituation effects (Plichta et al., 2014; Wright et al., 2001) due to stimulus repetition, and also to evaluate possible short-term effects of the VR experience in the post-VR session itself. We therefore ran an additional ANOVA including Run, Gender, and Morph step as independent variables to examine these effects. The analysis particularly focused on brain areas previously linked to emotional stimuli perception (i.e., amygdala and fusiform cortex) and social information processing (i.e., DMN areas), examining the interactions between the Run and Session factors and either the Gender or Morph factors. Furthermore, we examined changes in multivariate patterns in these ROIs across the runs of the two sessions (i.e., ANOVA by Run for pattern similarity).
2. Methods and materials
2.1. Participants
Fourteen male participants (mean age 20.9 years, SD = 2.23) took part in the fMRI study. In the following, we report how we determined our sample size, all data exclusions (if any), all inclusion/exclusion criteria, whether inclusion/exclusion criteria were established prior to data analysis, all manipulations, and all measures in the study. The participants were selected from a total pool of 114 Dutch males, previously screened for their fear recognition sensitivity using a behavioural test based on the perception of emotional morphs from happy to fearful facial expressions. We selected participants who identified less than 50% of female facial expressions as expressing fear (Fig. S1A). There were three reasons behind the selection criteria. First, we aimed to approximate the emotion recognition profile of the violent offenders found in Seinfeld et al. (2018) in a non-offender population, since the VR intervention used in that study evoked stronger perceptual changes in participants with a lower sensitivity in recognizing fear in female faces. Second, there is evidence that poor recognition of fear might predict difficulties in other relevant social domains and be accompanied by abnormal brain responses to relevant facial stimuli (Corden et al., 2006). Third, we wanted to avoid a ceiling effect arising from participants with a very high sensitivity in recognizing fear. A detailed explanation of the behavioural sampling and screening test can be found in Supplementary Materials.
Participants did not have criminal records and did not report a history of physical or emotional abuse towards themselves or others. All participants provided written consent and received a monetary reward (€50). The experiment was approved by the ethical committee of Maastricht University and all procedures conformed to the Declaration of Helsinki. Participants were sent a follow-up email two weeks after the experiment asking them about their thoughts and emotions in relation to the VR experience. No negative side-effects of the VR experience were reported by any of the participants. No part of the study procedures or analyses was pre-registered prior to the research being conducted.
2.2. Experimental design
We used a within-subject design and assessed the impact of the VR intervention by comparing pre- to post-VR measures. For the behavioural responses, we compared pre-VR to post-VR sensitivity to fear in emotional morphs (see sections 2.3 Emotional stimuli and 2.4 Behavioural tests and questionnaires for details). At the brain level, we used a similar methodology comparing pre- to post-VR brain processing (see sections 2.3 Emotional stimuli and 2.5 fMRI design for details). Each fMRI scanning session consisted of two runs with the same set of stimuli. Given the evidence that brain activity for emotional stimuli undergoes habituation (i.e., decreased activity with repeated stimulus presentation) (Fischer et al., 2003), especially in the amygdala (Plichta et al., 2014; Wright et al., 2001), we also analysed the data by run to account for habituation effects and to evaluate possible shorter-term changes within the session after the VR intervention. Fig. 1 summarizes the different stages of our experimental design.
2.3. Emotional stimuli
For the behavioural tests and fMRI scans, we created 9-step morph continua between 100% fearful and 100% happy emotional expressions. Male and female facial expressions were selected from the Karolinska Directed Emotional Faces inventory (Goeleven et al., 2008; Lundqvist et al., 1998). Bodily stimuli were avatars expressing happy and fearful body postures based on the Bodily Expressive Action Stimulus Test (de Gelder & Van den Stock, 2011). All steps of the morph continua were used in the behavioural tests. For the fMRI experiment, we specifically used the second, fourth, and sixth steps of the continua, in order to capture possible changes in fearful-expression processing, as previously shown in Seinfeld et al. (2018). Fig. 2 shows examples of the stimuli used. See Supplementary Materials for details of stimulus construction.
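For illustration, the Python sketch below shows the basic idea of an expression continuum as a pixel-wise cross-fade between two aligned photographs. This is only a schematic stand-in: the actual stimuli were produced with dedicated morphing software that also warps facial geometry (see Supplementary Materials), and all function and variable names here are hypothetical.

```python
import numpy as np
from PIL import Image

def expression_continuum(fear_img, happy_img, n_steps=9):
    """Schematic 9-step continuum between two aligned expression photos.

    A pure pixel-wise cross-fade; real morphing additionally warps facial
    landmarks, so treat this only as an illustration of the 9-step logic.
    """
    a = np.asarray(fear_img, dtype=float)
    b = np.asarray(happy_img, dtype=float)
    steps = []
    for w in np.linspace(1.0, 0.0, n_steps):  # step 1 = 100% fear ... step 9 = 100% happy
        blend = w * a + (1.0 - w) * b
        steps.append(Image.fromarray(blend.astype(np.uint8)))
    return steps

# The fMRI experiment would then use steps[1], steps[3], steps[5]
# (i.e., morph steps 2, 4, and 6 of the continuum).
```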
2.4. Behavioural tests and questionnaires
We aimed to investigate the brain basis of the behavioural effect found in Seinfeld et al. (2018), namely enhanced fear recognition in female faces after the VR experience. With this purpose in mind, we designed a two-alternative forced-choice categorisation task including morphed faces and bodies (Fig. 2), which served as a pre-test (i.e., screening). This pre-test was administered three to four weeks before the fMRI experiment. Participants also completed a behavioural post-test immediately after the fMRI experiment finished. The pre-test stimuli contained faces of four identities (two male and two female); the post-test contained eight identities in total (four male and four female): the same four pre-test identities plus four novel ones. This allowed us to assess whether VR could impact the recognition of fear for the same faces used in the screening phase (before VR), and also whether it could affect the perception of novel faces, where no familiarity effect was present.
After the post-VR fMRI session, we also administered the Interpersonal Reactivity Index (De Corte et al., 2007) and the Buss-Perry Aggressiveness Questionnaire (Buss & Perry, 1992) (both Dutch versions), and a VR questionnaire to assess participants’ opinion about different aspects of the virtual experience. See Supplementary Materials for results.
To avoid false positives and inflated effect sizes in between-subject analyses (Marek et al., 2020; Yarkoni, 2009), we did not correlate the behavioural or questionnaire results with the fMRI data. We based the complete statistical analysis on within-subject comparisons, which are more robust, less prone to false positives, and have higher power.
2.5. fMRI design
Participants underwent pre- and post-VR fMRI scanning sessions (maximally 30 min apart) using the same stimuli (two runs within each scanning session, 13 min per run). The pre-VR fMRI Session provided a baseline measure of stimulus processing prior to the VR intervention, and the post-VR fMRI Session assessed changes in stimulus processing linked to the VR embodied experience. The fMRI experiment used a 2 × 2 × 3 factorial design, with factors: Type of Stimulus (faces/bodies), Gender (male/female stimuli), and Morph (steps 2/4/6 of the morph continua). Images of six different identities belonging to one gender and morph step (Fig. 2) were presented in mini-blocks of 3 sec, separated by jittered fixation-cross presentations (mean = 8 sec). Participants were instructed to fixate on and passively view the stimuli, and to press a button when detecting a red dot (catch trials). Two additional seconds were added to the fixation period following the catch-trial blocks. The stimuli were projected (Panasonic PT-EZ570, 60 Hz) onto a screen situated at the end of the bore, which participants viewed through a mirror placed on the head coil (viewing distance 75 cm).
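As an illustration of this trial structure, the following sketch builds one run of shuffled mini-blocks separated by jittered fixation periods. The paper reports only the mean fixation duration (8 sec); the uniform jitter range, the random seed, and the function names are assumptions made for illustration.

```python
import numpy as np

rng = np.random.default_rng(2020)

def build_run(condition_labels, block_dur=3.0, mean_fix=8.0, jitter=3.0):
    """One run: shuffled 3-sec mini-blocks with jittered fixation in between.

    The jitter distribution (uniform around the 8-sec mean) is assumed;
    only the mean inter-block fixation duration is reported in the paper.
    """
    order = rng.permutation(condition_labels)
    fixations = rng.uniform(mean_fix - jitter, mean_fix + jitter, size=len(order))
    onset, schedule = 0.0, []
    for cond, fix in zip(order, fixations):
        onset += fix                       # fixation precedes each mini-block
        schedule.append((cond, onset))     # (condition label, onset in sec)
        onset += block_dur
    return schedule
```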
We acquired a 5-min resting-state scan after both the pre- and post-VR fMRI scanning sessions. During this phase, participants were instructed to attend to the fixation cross and not to think about anything specific. The resting-state data recorded after the pre-VR fMRI Session were used to create ROIs for individual participants. At the end of the post-VR fMRI Session, we also acquired anatomical images and performed a separate functional localizer run (15 min, data not used in the current study). For the functional localizer, participants passively viewed blocks of stimuli, including faces, bodies, houses, tools, and words. Both the block length and the inter-block interval were 12 sec. The face stimuli were front-view neutral faces from the Karolinska Directed Emotional Faces set (Lundqvist et al., 1998), with 24 different identities (12 male and 12 female). Six of these identities (5 male) were also used in the main experiment, but since the functional localizer was presented after the main experiment, this could not interfere with face processing during the main experiment. The body stimuli were neutral, still front-view bodies (20 identities, 10 male) with the facial information removed (de Gelder & Van den Stock, 2011).
2.6. VR intervention
Between the pre- and post-VR fMRI Sessions, participants went to a room adjacent to the scanner and experienced the same VR scenario as in Seinfeld et al. (2018), with Dutch dubbing. Through a head-mounted display (Oculus Rift DK2 HMD, California, United States), participants saw a life-size virtual female body, both when they looked down towards their own body and reflected in a mirror, moving in accordance with their real-time body movements, producing visuomotor synchrony (Fig. 3A; Kinect V2 for Xbox One, Microsoft, Washington, United States). After this embodiment phase, a virtual male entered the scene, approached, and verbally abused the participant embodied as the female victim (Fig. 3B and C). Towards the end of the scene, the male avatar stood face to face with the participant (i.e., the female avatar), effectively intruding on their personal space (de Borst et al., 2018) (Fig. 3D). The technical implementation and equipment used for the VR experience are described in Supplementary Materials.
2.7. MRI data acquisition and analysis
The data were acquired at Scannexus, Maastricht Brain Imaging Center, Maastricht University, the Netherlands, using a full-body 3T scanner (Prisma fit, Siemens, Erlangen, Germany) with a 64-element head-neck receiver coil. We acquired whole-brain 2D EPI fMRI data at 2 × 2 × 2 mm3 (64 slices without gaps, TR = 1330 ms, TE = 30 ms, flip angle = 67°, multi-band acceleration factor = 3, iPAT = 2, FOV = 200 × 200 mm, matrix size = 100 × 100, phase-encoding direction: anterior to posterior) (Setsompop et al., 2012), and MPRAGE anatomical data at 1 × 1 × 1 mm3 (TR = 2300 ms, TE = 2.98 ms). To correct for EPI distortion, an extra run of five volumes with posterior-to-anterior phase-encoding direction was acquired before each functional run. The fMRI data underwent in-plane distortion correction and standard pre-processing steps, were spatially normalised into Talairach space, and were spatially smoothed up to 6 mm full-width-at-half-maximum (FWHM). We performed general linear model (GLM) analyses with 12 conditions of interest (face/body, male/female, morph steps 2/4/6), the catch-trial condition, and the z-transformed motion-correction parameters (confound predictors). The predictors' beta (% signal change) and t values were estimated after percentage-normalising the time-course data.
The fMRI data were analysed with BrainVoyager 20.2 (Brain Innovation, Maastricht, the Netherlands), NeuroElf toolbox v1.0 (http://neuroelf.net/) implemented in Matlab R2016a, and SPSS. For functional runs, in-plane top-up distortion correction was performed with the COPE plugin (version .5). The distortion-corrected runs then underwent slice scan time correction (with sinc interpolation), rigid-body 3D motion correction (trilinear/sinc interpolation for estimation/transformation), and temporal filtering (high-pass GLM-Fourier filter up to 2 sines/cosine cycles per run) to remove slow baseline drifts. The functional runs were then aligned to the anatomical runs and transformed into Talairach space. Spatial smoothing with Gaussian filters was then applied to the functional runs, 6 mm FWHM for the main experimental runs and 4 mm FWHM for the resting state runs to obtain less extensive ROIs.
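The sketch below illustrates the massively univariate GLM estimation described above: a design matrix with HRF-convolved condition regressors plus confounds, yielding per-voxel beta and t values. The actual analysis was run in BrainVoyager; this Python stand-in omits HRF convolution and serial-correlation handling, and all names are illustrative.

```python
import numpy as np

def fit_glm(bold, design):
    """Ordinary least-squares GLM per voxel (schematic).

    bold:   (T, V) percentage-normalised voxel time courses.
    design: (T, P) matrix assumed to contain 12 HRF-convolved condition
            regressors, a catch-trial regressor, z-scored motion parameters,
            and a constant term.
    Returns betas (P, V) and a function giving the t map of regressor j.
    """
    betas, *_ = np.linalg.lstsq(design, bold, rcond=None)
    resid = bold - design @ betas
    dof = design.shape[0] - np.linalg.matrix_rank(design)
    sigma2 = (resid ** 2).sum(axis=0) / dof          # residual variance per voxel
    xtx_inv = np.linalg.pinv(design.T @ design)

    def t_map(j):
        # t value of regressor j for every voxel
        return betas[j] / np.sqrt(sigma2 * xtx_inv[j, j])

    return betas, t_map
```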
2.7.1. Whole-brain analysis
Separately for faces and bodies, we performed (1) an ANOVA of Session × Gender × Morph, including the contrast of post-VR fMRI Session > pre-VR fMRI Session, and (2) an ANOVA of Run × Gender × Morph. A VR influence would be reflected either by the contrast of post-VR fMRI Session > pre-VR fMRI Session, by a main effect of Session or Run, or by an interaction between Session/Run and Gender or Morph.
For the whole-brain analyses, multiple-comparisons corrections were performed by Monte-Carlo simulation separately for each of the resulting maps (cluster-level statistical threshold estimator plugin in BrainVoyager, https://support.brainvoyager.com/brainvoyager/functional-analysis-statistics/46-tresholding-multiple-comparisons-problem/226-plugin-help-cluster-thresholding), excluding voxels outside the brain, with the initial p value set at .001, and number of simulations = 5000. As post-hoc analyses, the directions of the main effects/interactions were examined in SPSS, with beta values extracted from the clusters. Clusters outside the brain/inside the white matter were treated as false positives and were not reported.
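The sketch below illustrates the logic of such a Monte-Carlo cluster-extent estimate: simulate smoothed noise within the brain mask, threshold at the voxel-wise p value, and take a high percentile of the maximum cluster sizes. It is a simplified stand-in for the BrainVoyager plugin, which estimates spatial smoothness from the data themselves; the smoothness input and function names here are assumptions.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, label

def min_cluster_size(mask, fwhm_vox, n_sim=5000, alpha=0.05):
    """Monte-Carlo estimate of a cluster-extent threshold (schematic).

    mask:     boolean 3-D brain mask (clusters outside it are ignored).
    fwhm_vox: assumed spatial smoothness of the noise, in voxels.
    Thresholds each simulated map at ~p = .001 (one-tailed z = 3.09) and
    returns the cluster size exceeded by chance in only alpha of simulations.
    """
    sigma = fwhm_vox / (2 * np.sqrt(2 * np.log(2)))  # FWHM -> Gaussian sigma
    z_thresh = 3.09
    max_sizes = np.zeros(n_sim)
    for i in range(n_sim):
        noise = gaussian_filter(np.random.randn(*mask.shape), sigma)
        noise /= noise[mask].std()                   # restore unit variance
        labels, n = label((noise > z_thresh) & mask)
        if n:
            max_sizes[i] = np.bincount(labels.ravel())[1:].max()
    return np.percentile(max_sizes, 100 * (1 - alpha))
```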
2.7.2. ROI analysis
We examined the influence of VR on emotional face and body perceptions by Run, in bilateral ROIs that were potentially implicated in emotion recognition (i.e., amygdala), social cognition (i.e., DMN areas, including the PCC, TPJ, aT, and mPFC), and stimuli processing-related areas (i.e., fusiform).
The unilateral amygdala ROIs were manually delineated according to the individual anatomy in Talairach space and merged into one bilateral ROI per participant (average volume = 1616.21 mm3, SD = 213.36 mm3). To define the DMN ROIs, first we defined a seed sphere ROI in the medial posterior cingulate cortex (PCC, radius = 4 mm, mean TAL coordinates x, y, z = −.28, −48.7, 32.29; SD x, y, z = .61, 2.67, 2.37). Then we performed functional connectivity analysis by correlating the averaged time course of this seed ROI to the whole-brain voxels in the resting-state run data from the pre-VR fMRI Session. This resulted in a set of areas highly correlated with the PCC ROI. A threshold of false discovery rate (FDR) = .0001 was computed by bootstrapping in the plugin (thresholds of correlation coefficients after FDR corrections were around .1). We manually selected contiguous voxels in the thresholded maps within a range of 40 × 40 × 40 mm3 (“Range” setting in BrainVoyager), resulting in one cluster per selection, and transformed them into ROIs in the PCC, TPJ, lateral aT, and mPFC (see Fig. S4 and Table S3 in Supplementary Materials). The beta values (% signal change) for all 12 conditions, for each of the 4 runs, were extracted from these ROIs. The ANOVAs of Face/Body × Gender × Morph × Run were analysed in SPSS. Multiple comparisons for post-hoc pairwise comparisons were adjusted with the Sidak method.
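A minimal sketch of the seed-based connectivity step is given below, assuming pre-processed resting-state data arranged as a time-by-voxel array with non-constant voxel time courses. The bootstrap-based FDR thresholding and the manual cluster selection performed in BrainVoyager are not reproduced here; all names are illustrative.

```python
import numpy as np

def seed_correlation_map(rest_data, seed_mask):
    """Voxel-wise Pearson correlation with the mean PCC-seed time course.

    rest_data: (T, V) pre-processed resting-state time courses.
    seed_mask: boolean (V,) marking the voxels of the 4-mm PCC sphere.
    """
    seed = rest_data[:, seed_mask].mean(axis=1)
    # z-score seed and voxel time courses, then average the products
    zx = (rest_data - rest_data.mean(0)) / rest_data.std(0)
    zs = (seed - seed.mean()) / seed.std()
    return zx.T @ zs / len(zs)   # one r value per voxel
```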
We created fusiform ROIs for each individual participant (unilateral clusters merged into one bilateral ROI), using the contrast of faces > bodies, with the data of run 1 and run 2 (the same GLMs separately estimated for each run, used in the ANOVA and ROI analyses above). To avoid circularity, we did not perform statistical inference between morph steps for run 1 or run 2 in these fusiform ROIs, but only for run 3. For the sake of completeness, we also presented the results of the ANOVA Run × Gender × Morph for faces, to check whether an interaction effect of run was present or absent.
The % signal change per condition for each of the four runs was extracted from these ROIs and analysed with the Face/Body × Gender × Morph × Run ANOVA. To examine whether the activity in the DMN ROIs was consistently above or below the baseline (fixation only) in each run, we performed a one-sample t-test against 0 for each condition (FDR correction at q < .05).
We then plotted the event-related time course for the mini-blocks, from −2 to 12 repetition times (TRs) relative to the stimuli mini-block onset. Specifically, for each epoch corresponding to a mini-block, the raw blood-oxygen-level-dependent (BOLD) signals from −2 to 0 TRs were averaged as the baseline, and the subsequent TRs were %-normalised by this baseline value. Time courses for all epochs of the same condition were averaged within each run. The results of individual participants were then averaged to produce the group-level time courses.
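The following sketch illustrates this epoch extraction and baseline normalisation for one condition within a run. Array names are illustrative, onsets are assumed to be expressed in TRs, and each onset is assumed to occur at least two TRs into the run.

```python
import numpy as np

def event_related_average(roi_tc, onsets, pre=2, post=12):
    """Baseline-normalised epoch average for one condition within a run.

    roi_tc: 1-D ROI-averaged raw BOLD signal (one value per TR).
    onsets: mini-block onsets in TRs (each assumed >= `pre`).
    TRs -2..0 of each epoch are averaged as the baseline, and the epoch is
    expressed as percent signal change relative to it, as in the paper.
    """
    epochs = []
    for t in onsets:
        seg = roi_tc[t - pre: t + post + 1].astype(float)
        base = seg[: pre + 1].mean()
        epochs.append(100.0 * (seg - base) / base)
    return np.mean(epochs, axis=0)   # average over all epochs of the condition
```

Per-participant averages of these time courses would then be averaged again to produce the group-level curves shown in Fig. 7B.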
2.7.3. RSA analysis
In these ROIs, we further examined the across-run changes of the multivariate patterns, separately for the six face-stimuli and six body-stimuli conditions, with representational similarity analysis (RSA) (Nili et al., 2014). The neural pattern dissimilarities between pairs of stimulus conditions were summarised into a neural matrix per run, and these matrices were compared with model dissimilarity matrices.
For the neural matrices, we extracted the t values estimated by the GLM per run without smoothing, for the voxels within each ROI in each participant, and computed their pattern dissimilarity (1 − Pearson's correlation r; smaller values indicate higher similarity), forming a 6 × 6 matrix. For the model matrices, we coded the six face-stimuli conditions as vectors of 0s and 1s corresponding to the model conditions. For example, the male morph step 4 face was coded as [1 0] for the gender model, and as [0 1 0] for the morph model. The gender and morph model matrices were each computed as Euclidean distances between condition pairs. Both models assume that the neural patterns are similar (short distance) within a condition (gender/morph) and dissimilar (long distance) between conditions.
In the neural matrices, values below the diagonal were then compared (Spearman’s rank correlation rho, excluding the diagonal) to that of the model matrices. By putting the resulting rho values (Fisher’s-Z transformed to satisfy the normality assumption) into ANOVAs by run, we examined whether the neural pattern became correlated to the gender or morph models. We also separately computed the averaged correlations for conditions within male, within female, and between male/female, and examined their change across runs with a Compartment × Run ANOVA. These three sets of correlation values correspond to the three compartments in the gender model matrix and reflect the across–morph correlations. Lastly, we computed the averaged within-gender correlation minus the between-gender correlation, and performed a by-run ANOVA.
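The sketch below illustrates this RSA pipeline: condensed neural RDMs from 1 − Pearson r, one-hot model RDMs compared by Euclidean distance, Spearman correlation over the off-diagonal values, and Fisher-Z transformation. The condition ordering is an assumption for illustration.

```python
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

# Assumed condition order: [male_2, male_4, male_6, female_2, female_4, female_6]
gender_codes = np.array([[1, 0]] * 3 + [[0, 1]] * 3)    # one-hot gender coding
morph_codes = np.eye(3)[np.tile([0, 1, 2], 2)]          # one-hot morph-step coding
gender_model = pdist(gender_codes, metric="euclidean")  # condensed 6x6 model RDM
morph_model = pdist(morph_codes, metric="euclidean")

def model_correlation(patterns, model_rdm):
    """Fisher-Z-transformed Spearman correlation of a neural RDM with a model.

    patterns: (6, n_voxels) unsmoothed GLM t values for one ROI, run, and
    participant. pdist with 'correlation' yields 1 - Pearson r for every
    condition pair, i.e., the off-diagonal values only, as in the paper.
    """
    neural_rdm = pdist(patterns, metric="correlation")
    rho, _ = spearmanr(neural_rdm, model_rdm)
    return np.arctanh(rho)   # Fisher Z, for entry into the by-run ANOVAs
```

The within-male, within-female, and between-gender compartment averages described above would be computed from the same condensed neural RDM by selecting the corresponding condition pairs.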
3. Results
3.1. Behavioural analysis
When comparing the averaged fear-rating proportion for pre-test stimuli (i.e., four identities) with those for all stimuli in post-test (i.e., four pre-test identities + four novel ones), we found a significant Session × Gender interaction effect (F (1,13) = 4.856, p = .046, ηp2 = .272). This interaction indicated that a higher proportion of female faces compared to male faces were rated as expressing fear in the post-test (Fig. 4A). However, this interaction effect disappeared when we compared exactly the same four identities between pre-test and post-test (F (1,13) = .151, p = .704, ηp2 = .011), excluding the four other novel stimuli in the post-test (Fig. 4B). This indicated that the female-specific effect could be driven by the novel stimuli identities (i.e., differences in individual face expressivity) in the post-test. We tested and validated this possibility in a behavioural validation study in a separate group without VR intervention (see Supplementary Materials). In this additional study we found that stimulus identity played a significant role in driving the perceived degree of fear in each face. Therefore, it is difficult to make conclusive inferences about the behavioural effects of the VR intervention, even though we found a positive effect of the VR intervention on female fear perception between the pre- and post-tests.
Since we observed a clear identity effect with the facial stimuli, and fewer body-stimulus identities were included in the behavioural body tests (two in the pre-test and four in the post-test), we decided not to analyse the behavioural data for body stimuli. Note that this was not an issue in the fMRI study, where more stimulus identities were included, with equal numbers for face and body stimuli.
3.2. Whole brain analysis
We report the results of contrasts and interaction effects, which would reflect the VR influence. See results of the main effects in Supplementary Materials.
3.2.1. Session × Gender × Morph ANOVA
We performed the contrast of post-VR fMRI Session > pre-VR fMRI Session and the ANOVA for face and body stimuli, respectively. For faces, the contrast analysis indicated higher activity in the post-VR fMRI Session compared to the pre-VR fMRI Session, in clusters including the ventral medial prefrontal cortex (vmPFC), PCC, anterior cingulate cortex (ACC), early visual cortex, thalamus, and supramarginal gyrus (Fig. 5A, Tables S4 and S5). No interaction between Session and the Gender/Morph factors was found in the ANOVA. For body stimuli, the contrast of post-VR fMRI Session > pre-VR fMRI Session only showed a cluster in the early visual cortex (Fig. 5B). In the ANOVA, a VR-related Session × Gender interaction was found in the right superior frontal gyrus, where activity in the post-VR fMRI Session was lower than in the pre-VR fMRI Session for male but not for female bodies.
3.2.2. Run × Gender × Morph ANOVA
We performed an ANOVA by Run to capture possible shorter-term influences of VR within the post-VR fMRI Session (effects in run 3 or run 4), which would be reflected by a main effect of Run or by a Run × Gender or Run × Morph interaction. For face stimuli, we found a significant Run × Morph interaction in the right posterior superior temporal sulcus (pSTS) and left vmPFC, showing that activity patterns for morph step 2 differed from those for the other morph steps across runs. In the left vmPFC cluster, activity for morph step 2 was significantly higher than for step 4 in run 2, but significantly lower in run 3 (Fig. 6B,D). No other significant interaction was present (see Table S6 for cluster details). For body stimuli, we found a significant Run × Morph interaction in the right posterior intraparietal sulcus (IPS) (Fig. 6C,E).
3.2.3. Habituation effect in both by-session and by-run ANOVAs
For both face and body stimuli, extensive decreases of activity in run 2, 3, and 4 were observed in several regions, likely due to stimulus repetition. This habituation-related effect was observed in bilateral visual areas including the fusiform face area (FFA) for face stimuli. For body stimuli, decreased activity in the post-VR fMRI Session was observed in the extrastriate body areas (EBA), fusiform body areas (FBA), IPS, and inferior frontal gyrus (IFG) (Fig. 5, Fig. 6A).
3.3. ROI analyses: effects in socio-affective related areas
3.3.1. Amygdala, DMN ROIs
We performed an ANOVA for Face/Body × Gender × Morph × Run in areas that have been consistently related to emotional and social-cognition processing, including the amygdala, PCC, TPJ, aT and mPFC. The possible VR influence could be captured by a significant interaction of Run, Gender, or Morph (Fig. 7A, Table 1).
Table 1. Statistical details of the significant main effects and interaction effects found through the ROIs ANOVAs.
| ROI | Main Effect or Interaction | Detailed Statistical Values |
| --- | --- | --- |
| Amygdala | Face/Body | F (1,13) = 17.520, p = .001, ηp2 = .574 |
| Amygdala | Morph | F (2,26) = 4.818, p = .017, ηp2 = .270 |
| PCC | Face/Body × Morph × Run | F (6,78) = 3.860, p = .002, ηp2 = .229 |
| TPJ | Face/Body × Morph × Run | F (6,78) = 2.939, p = .012, ηp2 = .184 |
| aT | Face/Body × Morph × Run | F (6,78) = 2.802, p = .016, ηp2 = .177 |
| PCC | Face/Body × Gender × Morph × Run | F (6,78) = 2.879, p = .014, ηp2 = .181 |
| PCC | Morph × Run (effect only significant for faces) | F (6,78) = 2.209, p = .051, ηp2 = .145 |
| TPJ | Morph × Run (effect only significant for faces) | F (6,78) = 2.611, p = .023, ηp2 = .167 |
| aT | Morph × Run (effect only significant for faces) | F (6,78) = 2.240, p = .048, ηp2 = .147 |
| Fusiform | Run | F (3,39) = 12.148, p = .000009 |
| Fusiform | Morph (no significant effect) | F (2,26) = 1.187, p = .321, ηp2 = .084 |
| Fusiform | Gender (no significant effect) | F (1,13) = .039, p = .846, ηp2 = .003 |
| Fusiform | Gender × Morph (no significant effect) | F (1.315,17.092) = 1.466, p = .249, ηp2 = .101 |
In the amygdala ROI, we did not find any such interaction effect, but found a significant main effect of Morph (p = .017), and of Face/Body (p = .001), with higher activity for faces than bodies (% signal change (SC) = .115), a result consistent with the literature (Kret et al., 2011).
In the DMN ROIs, however, we found a significant Face/Body × Morph × Run interaction in the PCC (p = .002), TPJ (p = .012), and aT (p = .016), and a significant Face/Body × Gender × Morph × Run interaction in the PCC (p = .014). Subsequent ANOVAs performed separately for faces and bodies showed significant Morph × Run interactions for faces in the TPJ (p = .023) and aT (p = .048), and a trend in the PCC (p = .051), but not for bodies (all p > .277). Post-hoc pairwise comparisons for faces further showed a specific effect for fearful faces in run 3 (i.e., after the VR intervention). In run 3, there was decreased activity for morph step 2 (i.e., fear) compared to morph steps 4 and 6 (i.e., happy) in the PCC (p = .053, d = −.722; p = .028, d = −.812), TPJ (p = .014, d = −.913; p = .043, d = −.753), and aT (p = .058, d = −.711; p = .060, d = −.705) (Fig. 7).
To clarify the directionality of DMN activity and to better understand whether the stimuli and task induced activations or deactivations, we performed a one-sample t-test against baseline (fixation only) for each condition in the DMN ROIs (FDR q < .05). We did not find consistent above- or below-baseline activity for most facial-expression conditions in each run. However, the female step 2 face (i.e., fear) in run 1 was above baseline in the PCC (% SC = .281; p = .026, d = .672; FDR q = .291), while in run 3 it was significantly below baseline in the aT (% SC = −.258; p = .030, d = −.652; FDR q = .0039). Assuming that the activity in DMN areas would have had the same directionality for the same task without the VR intervention (i.e., in the pre-VR fMRI Session), this above-baseline activity in run 1 indicates that the DMN was not consistently deactivated during passive viewing of externally oriented emotional stimuli in our study, and should be interpreted as task-induced activation instead.
The activity for face morph step 4 (i.e., ambiguous stimuli) was more consistently above baseline in DMN areas after the VR intervention, as is the case for the mPFC (% SC = .232, p = .046, d = .590, FDR q = .0039) in run 3 for male morph step 4 faces. This is also the case in the TPJ (% SC = .404; p = .006; d = .879, FDR q = .036), mPFC, and PCC (% SC = .397, .356; p = .021, .042; d = .702, .604; without FDR correction) for female morph step 4 faces in run 4. This above-baseline activity for face step 4 in DMN areas was consistent with the vmPFC and PCC clusters observed in the whole-brain post-VR fMRI Session > pre-VR fMRI contrast.
For body conditions, we did not observe consistent directionality for the activity, although male step 4 in run 2 in the PCC was significantly above baseline (% SC = .292, p = .003, d = .973, FDR q = .0259), and body female step 2 in Run 1 in aT was significantly below baseline (% SC = −.274, p = .004, d = −.950, FDR q = .039).
3.3.2. Face-processing ROI
To assess whether this fearful-face-specific DMN activity decrease in run 3 (i.e., run occurring immediately after the VR) also influenced the face-processing areas (i.e., bilateral fusiform ROI), we performed a Gender × Morph ANOVA for face stimuli in run 3, as a follow-up analysis. No significant effects related to the VR intervention were found. We found neither a significant main effect of Morph (p = .321), nor a difference between morph step 2 and 6 in Run 3 (mean difference = −.075, p = .446, d = −.380). We also performed a Run × Gender × Morph ANOVA across runs and found no significant interaction effects with the factor Run (all p > .302). Only a significant main effect of Run was observed (p = .000013), with higher activity for faces in Run 1 than Run 2 and 4 (p = .0000156, d = 2.109; p = .00128, d = 1.357), consistent with the habituation effect observed in the whole-brain analysis.
3.3.3. Averaged time-course in all ROIs
We further extracted the averaged time courses for each face-stimuli condition across runs, to see if there were within-epoch activity changes that may have been missed by the GLM analysis. The activity in DMN areas was more sluggish than in the fusiform ROIs (Fig. 7B), but the latency for the stimulus-locked signal drop in run 3 for fearful female faces (blue) was similar to that of the fusiform ROI (around 4–5 TRs after stimuli onset), while in run 1 this activity was stimulus-locked and above baseline. Also, there was a stimulus-locked elevation of signal for morph step 4 faces (green) mostly in run 4 (also in run 3, but peaked later). These patterns were consistent with the ROI activity versus baseline (with and without FDR correction), and the results of vmPFC and PCC clusters in the whole-brain contrast of the post-VR fMRI Session > pre-VR fMRI.
3.4. RSA analysis in ROIs
To further examine the fine-grained changes of multivariate patterns across runs for face stimuli, we separately correlated the neural representational dissimilarity matrices to the Gender and Morph model (Fig. 8A). Subsequently, we performed ANOVAs with the factor Run on the resulting correlation values.
The neural patterns did not change their similarity to the Morph model across runs (main effects, all p > .193; linear contrasts, all p > .267). However, for the Gender model, we found that the activity patterns in the TPJ and aT ROIs became increasingly similar to the model over consecutive runs (TPJ: main effect p = .043, linear contrast p = .040; aT: main effect p = .060, linear contrast p = .025, Fig. 8B). This linear increase could be driven by (a) an increased similarity within one of the genders (across morph steps), (b) a decreased similarity between genders, or (c) a net decrease of similarity between genders compared to within genders.
We examined these three possibilities by averaging the correlations of the gender matrix (i.e., using values below the diagonal) in three compartments: within-male, within-female, and between-genders. The Compartment × Run ANOVA showed a general linear decrease of correlation values over consecutive runs in the PCC and TPJ ROIs (linear contrast for the Run factor: p = .001 and p = 3.791 × 10−6, Fig. 8C), thus ruling out possibility (a) while supporting (b). Since these correlation values correspond to across-morph correlations within each compartment, this result indicates an increase in pattern variability across runs, probably reflecting more variable processing in these ROIs. Interestingly, the fusiform ROI also showed a significant linear decrease (linear contrast: p = .015), indicating that the processing of the facial stimuli per se became more variable across runs. In the mPFC there was a main effect of Compartment (p = .005), where the pattern for female faces was significantly less consistent than that for male faces across runs (mean Z difference = −.057, p = .016). In the aT there was a Compartment × Run interaction (p = .010) and a linear-to-linear contrast interaction (p = .00007), where the pattern became less consistent for female faces and between genders, but not for male faces. For the ANOVA of the net within/between-gender correlation difference, a significant linear increase across runs was again observed in the aT (p = .025), supporting possibility (c) (Fig. 8D). Thus, we found that across runs there were significant processing differences between stimuli belonging to different genders and smaller processing differences within each gender (Table 2).
Table 2. Statistical details of the significant main effects and interaction effects found in the RSA analyses (ANOVA by Run, ANOVA of Compartment × Run, ANOVA of the net within/between-gender correlation difference across Runs).
| ROI | Significant Main Effect or Interaction | Detailed Statistical Values |
| --- | --- | --- |
| TPJ | Run (ANOVA by Run) | Linear contrast: F (1,13) = 5.235, p = .040, ηp2 = .287; main effect: F (3,39) = 2.979, p = .043, ηp2 = .186 |
| aT | Run (ANOVA by Run) | Linear contrast: F (1,13) = 6.407, p = .025, ηp2 = .330; main effect: F (3,39) = 2.681, p = .060, ηp2 = .171 |
| PCC | Run (ANOVA of Compartment × Run) | Linear contrast: F (1,13) = 17.397, p = .001, ηp2 = .572; main effect: F (3,39) = 9.093, p = .0001, ηp2 = .412 |
| TPJ | Run (ANOVA of Compartment × Run) | Linear contrast: F (1,13) = 58.073, p = 3.791 × 10−6, ηp2 = .817; main effect: F (3,39) = 22.396, p = 1.348 × 10−8, ηp2 = .633 |
| TPJ | Compartment × Run linear-to-linear contrast interaction | F (1,13) = 6.234, p = .027, ηp2 = .324 |
| Fusiform | Run (ANOVA of Compartment × Run) | Linear contrast: F (1,13) = 7.794, p = .015, ηp2 = .375; main effect: F (3,39) = 6.331, p = .001, ηp2 = .328 |
| aT | Compartment × Run (ANOVA of Compartment × Run) | F (3.237,42.080) = 4.189, p = .010, ηp2 = .244; linear-to-linear contrast interaction: F (1,13) = 32.762, p = .00007, ηp2 = .716 |
| mPFC | Compartment (ANOVA of Compartment × Run) | F (2,26) = 6.693, p = .005, ηp2 = .340 |
| aT | Run (ANOVA of the net within/between-gender correlation difference across Runs) | Linear contrast: F (1,13) = 6.456, p = .025, ηp2 = .332; main effect: F (3,39) = 2.746, p = .056, ηp2 = .174 |
For body stimuli none of the six ROIs showed a change of pattern correlation to either the gender or the morph model (main effects, all p > .205; linear contrasts: all p > .132), indicating that the linear changes of the neural patterns were specific for faces.
4. Discussion
We measured differences in brain activity associated with viewing facial and whole-body expressions before and after male participants experienced domestic violence in VR from the perspective of a female victim. Our study has three major results. First, embodiment in the victim specifically changed the neural processing of facial expressions. Second, post-VR brain activity changes related to processing fearful or ambiguous facial expressions were observed in several DMN regions and mainly consisted of enhanced DMN activity for ambiguous expression morphs compared with the baseline measure. For clearly fearful faces there was decreased DMN activity in run 3, directly following the VR intervention. Third, concerning gender-specific changes, we observed that patterns for processing stimuli within each gender and between the two genders became more variable across runs in several DMN regions (i.e., TPJ, aT) and in the fusiform area; the mPFC showed a more varied pattern for female than male faces throughout all runs. Taken together, these results clearly indicate that virtually experiencing a domestic violence situation from the perspective of the victim has an impact on brain areas related to socio-cognitive processing and to the perception of emotional expressions. The present study represents a systematic effort to understand the brain mechanisms involved in VR interventions (Maister et al., 2014; Maister, Tsiakkas, & Tsakiris, 2013; van Loon et al., 2018), and specifically the brain mechanisms implicated in offenders' increased sensitivity in recognizing fear in female expressions after the VR experience (Seinfeld et al., 2018). We now discuss these findings in more detail.
An important finding of the present study is that the VR intervention resulted in increased activity in the DMN when participants were processing ambiguous facial expressions. No other local univariate activity changes were observed in specific face-processing areas or the amygdala. This VR-linked activity enhancement for the ambiguous faces may indicate that the participants were processing those emotional faces more intensely. Previous research has shown that the DMN contains crucial nodes supporting the recognition of discrete emotions through conceptualisation and abstraction (Satpute & Lindquist, 2019; Sreenivas et al., 2012). For this reason it has been proposed that the DMN is less involved when less abstraction is needed to create an experience of emotion (see the review by Satpute and Lindquist (2019)). Hence, it is reasonable to suggest that greater recruitment of DMN activity when cognitively processing ambiguous emotional stimuli might be related to difficulties in trying to recognize the emotions depicted by the faces. Likewise, since the discrete emotion was easier to recognize in 100% fearful faces, this may have resulted in less DMN activity.
Moreover, decreased activity in the DMN has been consistently reported during the execution of externally directed, cognitively demanding tasks, and increased deactivations have been related to a higher degree of cognitive engagement in the task (Mason et al., 2007; Weissman et al., 2006). In contrast, internally focused tasks that require participants to actively infer other people's emotions and thoughts are frequently associated with a positive modulation of the DMN (Bzdok et al., 2012; Decety & Lamm, 2006; Saxe et al., 2006; Schurz et al., 2014). The above-baseline DMN activity in run 1 (i.e., without VR intervention) for fearful female faces suggested that our passive-viewing task did not induce consistent deactivation. Hence, the observed impact of the VR experience on the DMN as a function of stimulus ambiguity is likely related to the cognitive role played by these brain regions in emotion perception (Buckner et al., 2008; DiNicola et al., 2020). Furthermore, the passive-viewing task used in this study did not require the assignment of a definite verbal label to the facial expression, thus it is likely that participants were in a personal experiential mode rather than in a cognitive mode. In line with this, it has been found that the DMN is consistently involved in autobiographical memory retrieval and introspection (Philippi et al., 2015; Whitfield-Gabrieli & Ford, 2012). Our finding of DMN deactivation for clear, unambiguous fearful faces may indicate the DMN's involvement in these processes. However, further research is required to better understand the differential DMN modulations in response to faces varying in emotional ambiguity after the VR intervention.
In the RSA analysis we found a lasting effect of the VR intervention on the DMN, as we observed that the voxel patterns became increasingly different between processing male and female faces across runs in the TPJ and aT. Moreover, for the different steps of the morph continua, activity patterns became more variable within each gender across runs in the PCC, TPJ and fusiform. This gender-based distinction was driven by the bigger decrease of correlation between genders, compared with the smaller decrease of correlation within genders. Moreover, the voxel patterns in the mPFC were more variable for female than male faces throughout the runs, indicating the involvement of the mPFC in gender-based perception. Based on past evidence, these observed patterns of increased variability might be potentially related to a more varied processing of the social-cognitive aspects for the face stimuli (Andrews-Hanna, 2012; Satpute & Lindquist, 2019), a topic which should be further investigated. Moreover, it is interesting to note that in this study male participants were embodied in a virtual female character. In Seinfeld et al.’s (2018) study this resulted in an increased sensitivity in detecting fear in female faces but not in male facial expressions, providing evidence for different behavioural correlates depending on the gender of the stimuli. The increased pattern variability might be related to other factors not specifically linked to the emotion or the gender of the stimuli (e.g., fatigue), but it is unlikely that these additional factors strongly drove the results found for the processing of facial expressions post-VR exposure (i.e., to the same immersive virtual scenario used in Seinfeld et al., 2018), since no significant changes in brain activity patterns across runs were observed for body stimuli. However, future studies might consider including additional control conditions based on the presentation of neutral stimuli such as object morphs.
VR-based clinical interventions, including body-ownership illusions, have considerable potential in psychological and behavioural treatments (e.g., training empathy, treating pain, motor rehabilitation, obesity). Past studies have explored body-ownership illusions using neuroimaging techniques (Blanke et al., 2015; Petkova et al., 2011). However, in this study we used a pre-post experimental design, paralleling as closely as possible the methodology used by Seinfeld et al. (2018), to understand the brain and behavioural impact of a VR intervention based on embodied perspective-taking. To the best of our knowledge, this study is among the first to measure the short-term impact on brain processes associated with affective stimulus processing after a VR intervention. Still, there are some limitations that will need to be addressed by future research. First, we used a within-subject design because of its methodological strengths and its control of inter-individual differences. Since this type of design required multiple repetitions of the stimuli, we observed habituation effects between the runs of the pre-VR scan, complicating the pre- and post-VR analysis. However, as explained in the description of our analyses, we controlled for these repetition-related habituation effects by including two runs in both the pre-VR and post-VR scanning sessions. Second, we were interested in including participants with a very specific emotion recognition profile (i.e., difficulties in recognizing fear in female faces). This reduced our sample size (n = 14) and may have lowered the power of the present analyses. Also, the DMN shows heterogeneity across individual participants, yet past studies with similar or smaller sample sizes have obtained robust individualized network parcellations when using large amounts of data per participant (Braga & Buckner, 2017; Gordon et al., 2017). Future studies should look into the individual variability of the DMN to understand whether similar results are obtained with bigger sample sizes and more data per participant. Third, since fewer body identities than face identities were included in the behavioural screening test, we were unable to perform statistical analysis on these behavioural data. In relation to this, it is important to note that the behavioural screening test had a stronger focus on faces (i.e., more identities included), since the primary goal of the screening was to match the emotion recognition profile (i.e., decreased recognition of fearful female faces) of Seinfeld et al. (2018) in a new sample of male participants who underwent fMRI scans before and after the VR intervention. However, future studies should consider including the same number of facial and bodily identities, to better understand the behavioural impact of the VR intervention. Fourth, in the present study we found a strong identity effect in the behavioural pre- and post-tests, indicating that the behavioural results should be interpreted with caution. Note that the limitations encountered for the behavioural data analysis do not apply to the analysis of the brain data recorded in the fMRI scans, since there special care was taken to include more identities for face and body stimuli and to compare exactly the same stimulus identities between the pre- and post-VR brain scans, as well as across runs.
Some possible explanations for the unexpected effect of stimulus identity are worth considering. First, the actors appearing in emotional stimuli are not equally good across the whole range of emotional expressions. For instance, some actors are much better at portraying expressions like fear and others at portraying emotions like anger. We know from our extensive experience in constructing stimulus sets (e.g., de Gelder et al., 2015; de Gelder & Van den Stock, 2011) and from the facial expression literature (Krumhuber et al., 2020) that some actors are more expressive than others. This type of individual face variability is well known from efforts to put together a set of expression images balanced for identity and expression. As a consequence, to keep identity stable, one sometimes has to settle for an identity with a less convincing expression. Second, although all the morphs were generated with the same methods and pixel-wise steps, the procedure may work out differently at the fine-grained level for different identities. This may be related in part to the fine morphology of the facial features. New methods may allow morphing facial expressions based on real-time motion capture of natural facial movements. However, based on the present experimental design, we are unable to discern whether these factors played a role in the present findings. Prior to observing this identity effect, we assumed, in line with the literature, that only a few identities were needed for the behavioural tests, because stimuli within the same emotion category are more or less equivalent tokens of that category. However, researchers have become increasingly aware of this issue in recent years, with some authors suggesting that stimuli be treated as random effects and advocating the use of statistical tools that incorporate mixed-effects analysis to cope with this problem (Westfall et al., 2017), as illustrated in the sketch below.
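As an illustration of that mixed-effects suggestion, the sketch below fits a model with participants as a grouping factor and face identity as a variance component, using synthetic placeholder data. All column names are hypothetical, the responses are random numbers, and fully crossed random effects would be expressed more naturally in lme4 (R); this statsmodels formulation is an approximation, not the analysis used in the paper.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)

# Synthetic stand-in data: 14 subjects x 8 face identities x 2 sessions,
# with a per-cell fear-rating proportion (all names are hypothetical).
cells = [(s, i, sess) for s in range(14) for i in range(8)
         for sess in ("pre", "post")]
df = pd.DataFrame(cells, columns=["subject", "identity", "session"])
df["stim_gender"] = np.where(df["identity"] < 4, "male", "female")
df["fear_resp"] = rng.uniform(0, 1, size=len(df))   # placeholder responses
df[["subject", "identity"]] = df[["subject", "identity"]].astype(str)

# Random intercept per participant plus a variance component for face
# identity, so idiosyncratically expressive actors are treated as a random
# factor rather than inflating the fixed effects of interest.
model = smf.mixedlm(
    "fear_resp ~ session * stim_gender",
    df,
    groups="subject",
    vc_formula={"identity": "0 + C(identity)"},
)
print(model.fit().summary())
```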
Another aspect that deserves further research is the role played by visual perspective in VR, which in the present study was a first-person perspective experience. The specific role of visual perspective could be investigated by including a third-person perspective condition (Galvan Debarba et al., 2017; Gonzalez-Liencres et al., 2020) or by using asynchronous multisensory stimulation (Kokkinara & Slater, 2014). Also, we did not examine the effect of arousal, which may be induced by the current VR scenario. This could be examined in behavioural control experiments with interventions elevating arousal without VR, or in control experiments with different VR scenarios lacking the domestic violence content. Future studies might also benefit from using ecologically valid VR scenarios to better understand different clinical disorders and to gain knowledge about cognitive processes when participants are immersed in realistic real-life simulations (e.g., see Reggente et al., 2018). Finally, further research is needed to better understand the impact of the gender or age of the virtual victim.
Authors contributions
Seinfeld, Sofia: Conceptualisation, Methodology, Validation, Formal analysis (behavioural), Investigation, Writing-Original Draft; Zhan, Minye: Conceptualization, Methodology, Validation, Formal analysis (fMRI), Investigation, Writing-Original Draft, Data Curation, Software; Poyo-Solanas, Marta: Investigation, Validation, Formal Analysis (behavioural), Writing – Review & Editing; Barsuola, Giulia: Investigation; Vaessen, Maarten: Writing – Review & Editing; Slater, Mel: Supervision, Writing – Review & Editing; Funding acquisition; Sanchez-Vives, Maria V.: Supervision, Writing – Review & Editing; Funding acquisition; de Gelder, Beatrice: Conceptualization, Methodology, Writing-Original Draft, Review & Editing, Project administration, Resources, Funding acquisition.
Data Files
We provide open access to the data files (i.e., behavioural, ROI, RSA), fMRI statistical maps, stimulus images, VR questionnaire, and fMRI experiment code used in the present study at the following link: https://osf.io/xmbn3/?view_only=cf6f708f17764f3e843594d36f024e2e.
The conditions of our ethics approval do not permit public archiving of the raw MRI data obtained in this study. Readers seeking access to the data should contact the first author Minye Zhan (zhanminye@gmail.com) or the lead author Beatrice de Gelder (b.degelder@maastrichtuniversity.nl). Access will be granted to named individuals in accordance with ethical procedures governing the reuse of sensitive data. Specifically, requestors must meet the following conditions to obtain the data: the data should be used solely for academic research and should not be publicly archived.
Legal copyright restrictions prevent public archiving of the Interpersonal Reactivity Index and the Buss-Perry Aggression Questionnaire, which can be obtained from the copyright holders via the stated references. The code of the virtual reality setup is currently owned by Virtual Bodyworks; researchers interested in obtaining access to the VR setup and specifications may contact the corresponding author of the present paper (b.degelder@maastrichtuniversity.nl).
Declaration of competing interest
The authors declare no conflict of interest.
Acknowledgments
This work was supported by the European Research Council (ERC) FP7-IDEAS-ERC (Grant agreement number 295673; Emobodies), by the Future and Emerging Technologies (FET) Proactive Programme H2020-EU.1.2.2 (Grant agreement 824160; EnTimeMent), by the Industrial Leadership Programme H2020-EU.1.2.2 (Grant agreement 825079; MindSpaces) and by the Virtual Embodiment and Robotic Re-Embodiment Integrated Project funded under the European Seventh Framework Programme, Future and Emerging Technologies (Grant agreement 257695). Moreover, this project was also supported by the European Union’s Rights, Equality and Citizenship Programme (2014–2020) under Grant agreement 881712 (VRperGenere) and by the Generalitat de Catalunya (AGAUR) (2017 SGR 1296) to MVSV. MS is supported by the European Research Council (ERC) Advanced Grant MoTIVE #742989. The authors report no other biomedical financial interests or potential conflicts of interest.
Appendix A. Supplementary data
The following is the Supplementary data to this article:
Multimedia component 1.
References
- Andrews-Hanna, 2012
J.R. Andrews-Hanna. The brain’s default network and its adaptive role in internal mentation. The Neuroscientist, 18 (3) (2012), pp. 251-270, 10.1177/1073858411403316
- Arzy et al., 2006
S. Arzy, G. Thut, C. Mohr, C.M. Michel, O. Blanke. Neural basis of embodiment: Distinct contributions of temporoparietal junction and extrastriate body area. Journal of Neuroscience, 26 (31) (2006), pp. 8074-8081, 10.1523/JNEUROSCI.0745-06.2006
- Banakou et al., 2013
D. Banakou, R. Groten, M. Slater. Illusory ownership of a virtual child body causes overestimation of object sizes and implicit attitude changes. Proceedings of the National Academy of Sciences, 110 (31) (2013), pp. 12846-12851, 10.1073/pnas.1306779110
- Banakou et al., 2016
D. Banakou, D.H. Parasuram, M. Slater. Virtual embodiment of white people in a black virtual body leads to a sustained reduction in their implicit racial bias. Frontiers in Human Neuroscience, 10 (2016), p. 601, 10.3389/fnhum.2016.00601
- Banakou and Slater, 2014
D. Banakou, M. Slater. Body ownership causes illusory self-attribution of speaking and influences subsequent real speaking. Proceedings of the National Academy of Sciences of the United States of America, 111 (49) (2014), pp. 17678-17683, 10.1073/pnas.1414936111
- Baron-Cohen et al., 2009
S. Baron-Cohen, O. Golan, E. Ashwin. Can emotion recognition be taught to children with autism spectrum conditions? Philosophical Transactions of the Royal Society of London. Series B, Biological Sciences, 364 (1535) (2009), pp. 3567-3574, 10.1098/rstb.2009.0191
- Bernhardt and Singer, 2012
B.C. Bernhardt, T. Singer. The neural basis of empathy. Annual Review of Neuroscience, 35 (1) (2012), pp. 1-23, 10.1146/annurev-neuro-062111-150536
- Blair et al., 2001
R.J.R. Blair, E. Colledge, L. Murray, D.G.V. Mitchell. A selective impairment in the processing of sad and fearful expressions in children with psychopathic tendencies. Journal of Abnormal Child Psychology, 29 (6) (2001), pp. 491-498, 10.1023/A:1012225108281
- Blanke et al., 2015
O. Blanke, M. Slater, A. Serino. Behavioral, neural, and computational principles of bodily self-consciousness. Neuron, 88 (1) (2015), pp. 145-166, 10.1016/j.neuron.2015.09.029
- de Borst et al., 2018
A.W. de Borst, M.V. Sanchez-Vives, M. Slater, B. de Gelder. First person experience of threat modulates cortical network encoding human peripersonal space. BioRxiv (2018), p. 314971, 10.1101/314971
- Braga and Buckner, 2017
R.M. Braga, R.L. Buckner. Parallel interdigitated distributed networks within the individual estimated by intrinsic functional connectivity. Neuron, 95 (2) (2017), pp. 457-471.e5, 10.1016/j.neuron.2017.06.038
- Buckner et al., 2008
R.L. Buckner, J.R. Andrews-Hanna, D.L. Schacter. The brain’s default network. Annals of the New York Academy of Sciences, 1124 (1) (2008), pp. 1-38, 10.1196/annals.1440.011
- Buss and Perry, 1992
A.H. Buss, M. Perry. The aggression questionnaire. Journal of Personality and Social Psychology, 63 (3) (1992), pp. 452-459, 10.1037/0022-3514.63.3.452
- Bzdok et al., 2012
D. Bzdok, L. Schilbach, K. Vogeley, K. Schneider, A.R. Laird, R. Langner, S.B. Eickhoff. Parsing the neural correlates of moral cognition: ALE meta-analysis on morality, theory of mind, and empathy. Brain Structure & Function, 217 (4) (2012), pp. 783-796, 10.1007/s00429-012-0380-y
- Corden et al., 2006
B. Corden, H.D. Critchley, D. Skuse, R.J. Dolan. Fear recognition ability predicts differences in social cognitive and neural functioning in men. Journal of Cognitive Neuroscience, 18 (6) (2006), pp. 889-897, 10.1162/jocn.2006.18.6.889
- De Corte et al., 2007
K. De Corte, A. Buysse, L.L. Verhofstadt, H. Roeyers, K. Ponnet, M.H. Davis. Measuring empathic tendencies: Reliability and validity of the Dutch version of the Interpersonal Reactivity Index. Psychologica Belgica, 47 (4) (2007), p. 235, 10.5334/pb-47-4-235
- Decety and Lamm, 2006
J. Decety, C. Lamm. Human empathy through the lens of social neuroscience. The Scientific World Journal, 6 (2006), pp. 1146-1163, 10.1100/tsw.2006.221
- de Gelder, 2016
B. de Gelder. Emotions and the body. Oxford University Press, Oxford (2016)
- de Gelder, Huis In’t Veld, & Van den Stock, 2015
B. de Gelder, E.M.J. Huis In’t Veld, J. Van den Stock. The Facial Expressive Action Stimulus Test. A test battery for the assessment of face memory, face and object perception, configuration processing, and facial expression recognition. Frontiers in Psychology, 6 (2015), p. 1609, 10.3389/fpsyg.2015.01609
- de Gelder et al., 2004
B. de Gelder, J. Snyder, D. Greve, G. Gerard, N. Hadjikhani. Fear fosters flight: A mechanism for fear contagion when perceiving emotion expressed by a whole body. Proceedings of the National Academy of Sciences, 101 (47) (2004), pp. 16701-16706, 10.1073/pnas.0407042101
- de Gelder and Van den Stock, 2011
B. de Gelder, J. Van den Stock. The Bodily Expressive Action Stimulus Test (BEAST). Construction and validation of a stimulus basis for measuring perception of whole body expression of emotions. Frontiers in Psychology, 2 (2011), p. 181, 10.3389/fpsyg.2011.00181
- DiNicola et al., 2020
L.M. DiNicola, R.M. Braga, R.L. Buckner. Parallel distributed networks dissociate episodic and social functions within the individual. Journal of Neurophysiology, 123 (3) (2020), pp. 1144-1179, 10.1152/jn.00529.2019
- Fischer et al., 2003
H. Fischer, C.I. Wright, P.J. Whalen, S.C. McInerney, L.M. Shin, S.L. Rauch. Brain habituation during repeated exposure to fearful and neutral faces: A functional MRI study. Brain Research Bulletin, 59 (5) (2003), pp. 387-392, 10.1016/S0361-9230(02)00940-1
- Galvan Debarba et al., 2017
H. Galvan Debarba, S. Bovet, R. Salomon, O. Blanke, B. Herbelin, R. Boulic. Characterizing first and third person viewpoints and their alternation for embodied interaction in virtual reality. Plos One, 12 (12) (2017), Article e0190109, 10.1371/journal.pone.0190109
- Goeleven et al., 2008
E. Goeleven, R. De Raedt, L. Leyman, B. Verschuere. The Karolinska Directed Emotional Faces: A validation study. Cognition & Emotion, 22 (6) (2008), pp. 1094-1118, 10.1080/02699930701626582
- Gonzalez-Liencres et al., 2020
C. Gonzalez-Liencres, L.E. Zapata, G. Iruretagoyena, S. Seinfeld, L. Perez-Mendez, J. Arroyo-Palacios, D. Borland, M. Slater, M.V. Sanchez-Vives. Being the victim of intimate partner violence in virtual reality: First- versus third-person perspective. Frontiers in Psychology, 11 (2020), 10.3389/fpsyg.2020.00820
- Gordon et al., 2017
E.M. Gordon, T.O. Laumann, A.W. Gilmore, D.J. Newbold, D.J. Greene, J.J. Berg, M. Ortega, C. Hoyt-Drazen, C. Gratton, H. Sun, J.M. Hampton, R.S. Coalson, A.L. Nguyen, K.B. McDermott, J.S. Shimony, A.Z. Snyder, B.L. Schlaggar, S.E. Petersen, S.M. Nelson, N.U.F. Dosenbach. Precision functional mapping of individual human brains. Neuron, 95 (4) (2017), pp. 791-807.e7, 10.1016/j.neuron.2017.07.011
- Kokkinara and Slater, 2014
E. Kokkinara, M. Slater. Measuring the effects through time of the influence of visuomotor and visuotactile synchronous stimulation on a virtual body ownership illusion. Perception, 43 (1) (2014), pp. 43-58, 10.1068/p7545
- Kret and de Gelder, 2013
M.E. Kret, B. de Gelder. When a smile becomes a fist: The perception of facial and bodily expressions of emotion in violent offenders. Experimental Brain Research, 228 (4) (2013), pp. 399-410, 10.1007/s00221-013-3557-6
- Kret et al., 2011
M.E. Kret, S. Pichon, J. Grèzes, B. de Gelder. Similarities and differences in perceiving threat from dynamic faces and bodies. An fMRI study. Neuroimage, 54 (2) (2011), pp. 1755-1762, 10.1016/j.neuroimage.2010.08.012
- Krumhuber et al., 2020
E.G. Krumhuber, D. Küster, S. Namba, L. Skora. Human and machine validation of 14 databases of dynamic facial expressions. Behavior Research Methods (2020), pp. 1-16, 10.3758/s13428-020-01443-y
- Lima et al., 2019
C.F. Lima, A. Anikin, A.C. Monteiro, S.K. Scott, S.L. Castro. Automaticity in the recognition of nonverbal emotional vocalizations. Emotion, 19 (2) (2019), pp. 219-233, 10.1037/emo0000429
- Lundqvist et al., 1998
D. Lundqvist, A. Flykt, A. Öhman. The Karolinska Directed Emotional Faces (KDEF). CD ROM from Department of Clinical Neuroscience, Psychology Section, Karolinska Institutet (1998)
- Maister et al., 2013a
L. Maister, N. Sebanz, G. Knoblich, M. Tsakiris. Experiencing ownership over a dark-skinned body reduces implicit racial bias. Cognition, 128 (2) (2013), pp. 170-178, 10.1016/j.cognition.2013.04.002
- Maister et al., 2014
L. Maister, M. Slater, M.V. Sanchez-Vives, M. Tsakiris. Changing bodies changes minds: Owning another body affects social cognition. Trends in Cognitive Sciences, 19 (1) (2014), pp. 6-12, 10.1016/j.tics.2014.11.001
- Maister et al., 2013b
L. Maister, E. Tsiakkas, M. Tsakiris. I feel your fear: Shared touch between faces facilitates recognition of fearful facial expressions. Emotion, 13 (1) (2013), pp. 7-13, 10.1037/a0030884
- Marek et al., 2020
S. Marek, B. Tervo-Clemmens, F.J. Calabro, D.F. Montez, B.P. Kay, A.S. Hatoum, M.R. Donohue, W. Foran, R.L. Miller, E. Feczko, O. Miranda-Dominguez, A.M. Graham, E.A. Earl, A.J. Perrone, M. Cordova, O. Doyle, L.A. Moore, G. Conan, …, N. Dosenbach. Towards reproducible brain-wide association studies. BioRxiv (2020), 10.1101/2020.08.21.257758
- Mason et al., 2007
M.F. Mason, M.I. Norton, J.D. Van Horn, D.M. Wegner, S.T. Grafton, C.N. Macrae. Wandering minds: The default network and stimulus-independent thought. Science, 315 (5810) (2007), pp. 393-395, 10.1126/science.1131295
- Méndez-Bértolo et al., 2016
C. Méndez-Bértolo, S. Moratti, R. Toledano, F. Lopez-Sosa, R. Martínez-Alvarez, Y.H. Mah, P. Vuilleumier, A. Gil-Nagel, B.A. Strange. A fast pathway for fear in human amygdala. Nature Neuroscience, 19 (8) (2016), pp. 1041-1049, 10.1038/nn.4324
- Nierula et al., 2019
B. Nierula, B. Spanlang, M. Martini, M. Borrell, V.V. Nikulin, M.V. Sanchez-Vives. Agency and responsibility over virtual movements controlled through different paradigms of brain-computer interface. The Journal of Physiology, 18 (2019), pp. 645-655, 10.1113/JP278167
- Nili et al., 2014
H. Nili, C. Wingfield, A. Walther, L. Su, W. Marslen-Wilson, N. Kriegeskorte. A toolbox for representational similarity analysis. Plos Computational Biology, 10 (4) (2014), Article e1003553, 10.1371/journal.pcbi.1003553
- Peck et al., 2013
T.C. Peck, S. Seinfeld, S.M. Aglioti, M. Slater. Putting yourself in the skin of a black avatar reduces implicit racial bias. Consciousness and Cognition, 22 (3) (2013), pp. 779-787, 10.1016/j.concog.2013.04.016
- Petkova et al., 2011
V.I. Petkova, M. Björnsdotter, G. Gentile, T. Jonsson, T.-Q. Li, H. Ehrsson. From part- to whole-body ownership in the multisensory brain. Current Biology, 21 (13) (2011), pp. 1118-1122, 10.1016/j.cub.2011.05.022
- Philippi et al., 2015
C.L. Philippi, D. Tranel, M. Duff, D. Rudrauf. Damage to the default mode network disrupts autobiographical memory retrieval. Social Cognitive and Affective Neuroscience, 10 (3) (2015), pp. 318-326, 10.1093/scan/nsu070
- Plichta et al., 2014
M.M. Plichta, O. Grimm, K. Morgen, D. Mier, C. Sauer, L. Haddad, H. Tost, C. Esslinger, P. Kirsch, A.J. Schwarz, A. Meyer-Lindenberg. Amygdala habituation: A reliable fMRI phenotype. Neuroimage, 103 (2014), pp. 383-390, 10.1016/j.neuroimage.2014.09.059
- Reggente et al., 2018
N. Reggente, J.K.Y. Essoe, Z.M. Aghajan, A.V. Tavakoli, J.F. McGuire, N.A. Suthana, J. Rissman. Enhancing the ecological validity of fMRI memory research using virtual reality. Frontiers in Neuroscience, 12 (2018), p. 408, 10.3389/fnins.2018.00408
- Satpute and Lindquist, 2019
A.B. Satpute, K.A. Lindquist. The default mode network’s role in discrete emotion. Trends in Cognitive Sciences, 23 (10) (2019), pp. 851-864, 10.1016/j.tics.2019.07.003
- Saxe and Kanwisher, 2003
R. Saxe, N. Kanwisher. People thinking about thinking people: The role of the temporo-parietal junction in “theory of mind”. Neuroimage, 19 (4) (2003), pp. 1835-1842, 10.1016/S1053-8119(03)00230-1
- Saxe et al., 2006
R. Saxe, J.M. Moran, J. Scholz, J. Gabrieli. Overlapping and non-overlapping brain regions for theory of mind and self reflection in individual subjects. Social Cognitive and Affective Neuroscience, 1 (3) (2006), pp. 229-234, 10.1093/scan/nsl034
- Schönenberg et al., 2014
M. Schönenberg, S. Christian, A.-K. Gaußer, S.V. Mayer, M. Hautzinger, A. Jusyte. Addressing perceptual insensitivity to facial affect in violent offenders: First evidence for the efficacy of a novel implicit training approach. Psychological Medicine, 44 (5) (2014), pp. 1043-1052, 10.1017/S0033291713001517
- Schurz et al., 2014
M. Schurz, J. Radua, M. Aichhorn, F. Richlan, J. Perner. Fractionating theory of mind: A meta-analysis of functional brain imaging studies. Neuroscience and Biobehavioral Reviews, 42 (2014), pp. 9-34, 10.1016/j.neubiorev.2014.01.009
- Seinfeld et al., 2018
S. Seinfeld, J. Arroyo-Palacios, G. Iruretagoyena, R. Hortensius, L.E. Zapata, D. Borland, B. de Gelder, M. Slater, M.V. Sanchez-Vives. Offenders become the victim in virtual reality: Impact of changing perspective in domestic violence. Scientific Reports, 8 (1) (2018), p. 2692, 10.1038/s41598-018-19987-7
- Setsompop et al., 2012
K. Setsompop, B.A. Gagoski, J.R. Polimeni, T. Witzel, V.J. Wedeen, L.L. Wald. Blipped-controlled aliasing in parallel imaging for simultaneous multislice echo planar imaging with reduced g-factor penalty. Magnetic Resonance in Medicine, 67 (5) (2012), pp. 1210-1224, 10.1002/mrm.23097
- Slater, 2009
M. Slater. Place illusion and plausibility can lead to realistic behaviour in immersive virtual environments. Philosophical Transactions of the Royal Society B: Biological Sciences, 364 (1535) (2009), pp. 3549-3557, 10.1098/rstb.2009.0138
- Slater and Sanchez-Vives, 2016
M. Slater, M.V. Sanchez-Vives. Enhancing our lives with immersive virtual reality. Frontiers in Robotics and AI, 3 (2016), p. 74, 10.3389/frobt.2016.00074
- Slater et al., 2010
M. Slater, B. Spanlang, M.V. Sanchez-Vives, O. Blanke. First person experience of body transfer in virtual reality. Plos One, 5 (5) (2010), Article e10564, 10.1371/journal.pone.0010564
- Spreng et al., 2009
R.N. Spreng, R.A. Mar, A.S.N. Kim. The common neural basis of autobiographical memory, prospection, navigation, theory of mind, and the default mode: A quantitative meta-analysis. Journal of Cognitive Neuroscience, 21 (3) (2009), pp. 489-510, 10.1162/jocn.2008.21029
- Sreenivas et al., 2012
S. Sreenivas, S.G. Boehm, D.E.J. Linden. Emotional faces and the default mode network. Neuroscience Letters, 506 (2) (2012), pp. 229-234, 10.1016/j.neulet.2011.11.012
- Tracy and Robins, 2008
J.L. Tracy, R.W. Robins. The automaticity of emotion recognition. Emotion, 8 (1) (2008), pp. 81-95, 10.1037/1528-3542.8.1.81
- Tusche et al., 2016
A. Tusche, A. Böckler, P. Kanske, F.M. Trautwein, T. Singer. Decoding the charitable brain: Empathy, perspective taking, and attention shifts differentially predict altruistic giving. Journal of Neuroscience, 36 (17) (2016), pp. 4719-4732, 10.1523/JNEUROSCI.3392-15.2016
- van Loon et al., 2018
A. van Loon, J. Bailenson, J. Zaki, J. Bostick, R. Willer. Virtual reality perspective-taking increases cognitive empathy for specific others. Plos One, 13 (8) (2018), Article e0202442, 10.1371/journal.pone.0202442
- Vuilleumier et al., 2001
P. Vuilleumier, J.L. Armony, J. Driver, R.J. Dolan. Effects of attention and emotion on face processing in the human brain: An event-related fMRI study. Neuron, 30 (3) (2001), pp. 829-841, 10.1016/S0896-6273(01)00328-2
- Wegrzyn et al., 2017
M. Wegrzyn, S. Westphal, J. Kissler. In your face: The biased judgement of fear-anger expressions in violent offenders. BMC Psychology, 5 (1) (2017), pp. 1-12, 10.1186/s40359-017-0186-z
- Weissman et al., 2006
D.H. Weissman, K.C. Roberts, K.M. Visscher, M.G. Woldorff. The neural bases of momentary lapses in attention. Nature Neuroscience, 9 (7) (2006), pp. 971-978, 10.1038/nn1727
- Westfall et al., 2017
J. Westfall, T.E. Nichols, T. Yarkoni. Fixing the stimulus-as-fixed-effect fallacy in task fMRI. Wellcome Open Research, 1 (2017), p. 23, 10.12688/wellcomeopenres.10298.2
- Whitfield-Gabrieli and Ford, 2012
S. Whitfield-Gabrieli, J.M. Ford. Default mode network activity and connectivity in psychopathology. Annual Review of Clinical Psychology, 8 (1) (2012), pp. 49-76, 10.1146/annurev-clinpsy-032511-143049
- Wright et al., 2001
C.I. Wright, H. Fischer, P.J. Whalen, S.C. McInerney, L.M. Shin, S.L. Rauch. Differential prefrontal cortex and amygdala habituation to repeatedly presented emotional stimuli. Neuroreport, 12 (2) (2001), pp. 379-383, 10.1097/00001756-200102120-00039
- Yarkoni, 2009
T. Yarkoni. Big correlations in little studies: Inflated fMRI correlations reflect low statistical power. Commentary on Vul et al. (2009). Perspectives on Psychological Science, 4 (3) (2009), pp. 294-298, 10.1111/j.1745-6924.2009.01127.x
- Zaki and Ochsner, 2012
J. Zaki, K.N. Ochsner. The neuroscience of empathy: Progress, pitfalls and promise. Nature Neuroscience, 15 (5) (2012), pp. 675-680, 10.1038/nn.3085