Rohini Kumar1, Kyla Brannigan1, Lina Teichmann1, Chris Baker1, Shruti Japee1; 1Laboratory of Brain and Cognition, NIMH, NIH
Recognition of facial identity and facial expression is critical for social communication. Influential models propose that invariant aspects of a face (such as identity) and changeable aspects (such as expression) are processed by distinct neural pathways (Bruce & Young, 1986; Haxby, Hoffman, & Gobbini, 2000). Evidence for this dissociation comes from functional neuroimaging studies implicating the fusiform gyrus in the processing of invariant aspects (Grill-Spector et al., 2004) and the superior temporal sulcus in the processing of changeable aspects (Pitcher et al., 2011). However, the timing of this dissociation has received less attention. The current study therefore used magnetoencephalography (MEG) and time-resolved classification methods to examine how facial identity and expression processing unfold in the human brain. Participants viewed videos of emotional faces that varied along two dimensions (six identities and six expressions) while performing an orthogonal target-detection task. Linear support vector machine classifiers were trained to predict which stimulus was presented from the pattern of MEG sensor activity at each time point in a trial; the resulting decoding performance reflects the discriminability of the brain activity patterns elicited by each identity and expression. Both identity and expression were decoded successfully, and Bayes factors identified the time points at which decoding accuracy exceeded chance. Identity decoding peaked rapidly, around 190 ms after stimulus onset, whereas expression decoding rose slowly and peaked around 900 ms. Temporal generalization analyses revealed greater similarity over time in the representation of expression than of identity. Further, representational similarity analyses revealed an early peak in MEG pattern dissimilarity between identities and a later peak in dissimilarity between expressions.
Collectively, these results demonstrate distinct neural timecourses for the invariant (identity) and changeable (expression) aspects of a face. Future source reconstruction analyses will identify the neural substrates underlying these effects.
Acknowledgements: NIMH Intramural Research Program