Masahide Yuasa, Keiichi Saito, and Naoki Mukawa. 2006. Emoticons convey emotions without cognition of faces: an fMRI study. Conference on Human Factors in Computing Systems (CHI 2006), April 22–27, 2006, Montréal, Québec, Canada.
Related to the previous post on a test of McCloud’s “Cartoon Identification Theory” using cartoony vs. realistic images in the brain, here’s a study using fMRI (brain scans) to look at emoticons, which at this point are perhaps the most simplified signs for faces that we use.
Faces have been the focus of a lot of debate in cognitive neuroscience, particularly regarding the “face area” of the brain. One side says it is an area strictly devoted to processing human faces; the other side says it is an “expertise” area that activates because humans are experts at recognizing faces. It is one of the most fiery debates in cognitive neuroscience, and one covered in nearly every intro class.
Amazingly, this study shows no activation of this “face area” when looking at emoticons. :-O
Using fMRI, the authors compared Asian-style emoticons (non-rotated) with averaged faces (photos of multiple faces blended together to be more “generic”) expressing the emotions of happiness and sadness. Emoticons appeared on their own in the first study and embedded within sentences in a second, while non-emoticon strings using the same characters served as fillers (e.g. “:O*-<”). They found that photos of faces activated both an area pertaining to emotional valence (the right inferior frontal gyrus) and one for facial recognition (the right fusiform gyrus), while emoticons activated only the emotional area, not the face area. That is, as the authors say, “Remarkably, emoticons convey emotions without cognition of faces.”
This finding has very interesting consequences for understanding how brains process varying degrees of complexity in images. The implication, at least, is that more simplified faces become tied more explicitly to a “symbolic” meaning, as opposed to the iconic meaning of resembling what they depict. That is, greater simplification strips an image down to its core meaning, disconnected from the iconic reference within which it is framed.
It would be interesting to see a graded approach to this — such as taking different degrees of representation from McCloud’s gradient of “cartoonification” (or, to use my term, “haplosis”). Are there different degrees of activation for different representations? Does fusiform gyrus activation suddenly drop off at a certain level of simplification? How does this affect the debate over whether the fusiform gyrus is an “expertise” area rather than a face area?