Emoji and visual languages

I’m excited that my recent article on the BBC website about emoji has gotten such a good response. So, I figured I’d write an addendum here on my blog to expand on things I didn’t get a chance to cover in the article. I of course had a lot to say, and it was inevitable that not everything could be included.

The overall question I was addressing was, “are emoji a visual language?” or “could emoji become a visual language?” My answer to both of these is “no.”

Here’s a quick breakdown of why, as I say in the article:

1. Emoji have a limited vocabulary set that is made of whole-unit pieces, and that vocabulary has no internal structure (i.e., you can’t adjust the mouth of the faces while keeping other parts constant, or change the heads on bodies, or change the position of arms)

2. Emoji force these stripped-down units into unit-unit sequences, which just isn’t how drawings work to communicate. (More on this below)

3. Emoji use a limited grammatical system, mostly using the “agent before act” heuristic found across impoverished communication systems.

All of these things prevent emoji from communicating like actual languages. They also prevent emoji from communicating like actual drawings that are not mediated by a technological interface.

There are two addendums I’d like to offer here.

First, these limitations are not constrained to emoji. They are limitations of every so-called “pictogram language,” which is usually created to be “universal” across spoken languages. Here, the biggest problem is the belief that graphic information works the way that writing does: putting individual units, each of which has a “single meaning,” into a unit-unit sequence.

However, drawings don’t communicate this way. There are certainly ways to put images in sequence, such as in the visual language of comics, and the nature of this sequencing has been my primary topic of research for about 15 years. When images are put into sequence, they have characteristics unlike any of those used in these “writing imitative” pictogram sequences.

For example, actual visual language grammars typically depict events across the image sequence. This requires repeating the same information from one image to the next, only slightly modified to show a change in state. Consider this emoji sequence:

[image: a sequence of seven monkey emoji]

This can either be seven different monkeys, or it can be one monkey at seven different points in time (and recognizing this difference requires at least some cultural learning). Visual language grammars allow for both options. Note, though, that the sequence doesn’t parcel out the monkey as separate from the actions. It does not read “monkey, cover eyes” and then “monkey, cover mouth” etc., where the non-action monkey just gives object information and the subsequent one just gives action information. Rather, both object and event information are contained in the same unit.

So, what I’m saying is that the natural inclination for grammars in the visual form is not like the grammars that operate in verbal or written forms. They just don’t work the same way, and pushing graphics to operate like writing will never succeed, because it goes against the way our brains have been built to deal with graphic information.

Again: No system that strips down graphics into isolated meanings and puts them in a sequence will ever communicate on par with actual languages. Nor will it actually communicate the way that actual visual languages do…

And this is my second point: there are already visual languages in the world that operate as natural languages and don’t have the limitations of emoji.

As I describe in my book, The Visual Language of Comics, the structure of drawing is naturally built like other linguistic systems. A drawing system becomes a “full” visual language when it is shared across individuals (not just a single person’s style) and has 1) a large visual vocabulary that can create new and unique forms, and 2) vocabulary items that can be put into sequences with an underlying hierarchic structure.

This structure often becomes most complex and robust in the visual languages used in comics, but we find complex visual languages in other places too. For example, in my book I devote a whole chapter to the sand drawings of Australian Aboriginals, which form a full visual language far outside the context of comics (and one used in real-time, interactive communicative exchanges). But whether a drawing system becomes a full visual language or not, the basis for those parts is similar to other linguistic systems that are spoken or signed.

The point here is this: emoji are not a visual language, and can never be one because of the intrinsic limitations on the way that they are built. Drawings don’t work like writing, and they never will.

However, the counterpoint is this: we already have visual languages out in the world—we just haven’t been using them in ways that “feel” like language.

… yet.
