Natural human communication combines many ways of expressing ourselves. We gesture when we talk, and we combine written language with pictures. However, many theories of language have focused only on spoken language. Our research explores the full range of human expressiveness, especially the ways we combine modalities.
We have posited a multimodal model of language and communication comprehension, which is summarized in our book, A Multimodal Language Faculty.
This model is an expansion of the Parallel Architecture, a model of language and cognition proposed by the linguist Ray Jackendoff in books like Foundations of Language and The Texture of the Lexicon.
This model proposes three primary structures that characterize all expressive behaviors (speech, sign, gesture, drawing, etc.): a Modality, a Grammar, and a Meaning. Each of these components uses its own independent structures, but they all link together in a “parallel” way, meaning that none is more primary than another.
In our model, “language” isn’t an amodal phenomenon that happens to “flow out” into different modalities. Rather, all modalities persist within one system, and different behaviors (speech, gesture, drawing, etc.) involve emergent activation of different substructures. This architecture can characterize both expressions in single modalities and complex multimodal relationships.
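For computationally minded readers, the toy sketch below is one loose way to picture this idea as a data structure: modality, grammar, and meaning are independent structures joined by symmetric links, with no component dominating another. The class names, attributes, and example values are invented stand-ins for illustration only, not part of the theory’s formal machinery.

```python
from dataclasses import dataclass, field

# Hypothetical toy sketch of the parallel, non-hierarchical linking idea.
# All names here are illustrative inventions, not the theory's notation.

@dataclass
class Structure:
    """One component structure: a modality, a grammar, or a meaning."""
    kind: str      # "modality", "grammar", or "meaning"
    features: dict

@dataclass
class Expression:
    """An expressive behavior links structures across components in parallel."""
    structures: list[Structure] = field(default_factory=list)
    links: list[tuple[int, int]] = field(default_factory=list)  # index pairs, no hierarchy

    def add(self, structure: Structure) -> int:
        self.structures.append(structure)
        return len(self.structures) - 1

    def link(self, i: int, j: int) -> None:
        # Links are symmetric: neither linked structure is more primary.
        self.links.append((i, j))

# A spoken utterance accompanied by a gesture: two modality structures,
# each linked in parallel to the same grammar and meaning structures.
utterance = Expression()
speech = utterance.add(Structure("modality", {"channel": "vocal"}))
gesture = utterance.add(Structure("modality", {"channel": "manual"}))
grammar = utterance.add(Structure("grammar", {"category": "clause"}))
meaning = utterance.add(Structure("meaning", {"concept": "greeting"}))

for m in (speech, gesture):
    utterance.link(m, grammar)
    utterance.link(m, meaning)
utterance.link(grammar, meaning)

print(len(utterance.structures), "structures,", len(utterance.links), "parallel links")
```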
Nearly all of the research within the Visual Language Lab explores different aspects of this larger architecture. In addition to our book, recent primary publications exploring this approach include:
You can also watch a lecture on the multimodal Parallel Architecture here: