Comics Generation Thesis

Jason Alderman has completed his (downloadable) Master’s Thesis on “Generating Comics Narrative to Summarize Wearable Computer Data.” Here’s the abstract:

As people record their entire lives to disk, they need ways of summarizing and making sense of all of this data. Comics (and visual language) are a largely untapped medium for summarization, as they are already subtractive and abstract by nature (the brain fills in the blanks and the details), and they provide a way to present a series of everyday events as a memorable narrative that is easily skimmed. This research builds upon the work of Microsoft, FX Palo Alto Labs, ATR Labs, and others to further ground the procedural generation in the comics theory of Scott McCloud, et al.

The paper poses some very intriguing ideas, and he does a great job summarizing and comparing a lot of the work that’s been done in comics theory, including my own. Alderman’s paper also has a good discussion of various comics-creation computer programs and a very interesting discussion of adapting issues from comics theory into programming code. The appendices hold a wealth of summarized theory and analysis as well. Particularly interesting was the taxonomy of panel types and gutter spacing. Go check it out!
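
For a taste of what adapting comics theory into code can look like, here is a minimal sketch (my own illustration, not code from the thesis): McCloud’s six panel-to-panel transition types written as a data type, plus a toy heuristic that picks a transition between two consecutive logged events. The Event schema, its field names, and the time threshold are all invented for the example; only the transition taxonomy itself comes from McCloud.

    from dataclasses import dataclass
    from enum import Enum, auto

    class Transition(Enum):
        """Scott McCloud's six panel-to-panel transition types."""
        MOMENT_TO_MOMENT = auto()
        ACTION_TO_ACTION = auto()
        SUBJECT_TO_SUBJECT = auto()
        SCENE_TO_SCENE = auto()
        ASPECT_TO_ASPECT = auto()
        NON_SEQUITUR = auto()

    @dataclass
    class Event:
        """One logged moment from a wearable data stream (hypothetical schema)."""
        timestamp: float   # seconds since start of recording
        location: str      # e.g. "office", "kitchen"
        subject: str       # who or what the event centers on
        description: str

    def classify_transition(a: Event, b: Event) -> Transition:
        """Choose a transition type for two consecutive events using
        simple heuristics on place, subject, and elapsed time."""
        if a.location != b.location:
            return Transition.SCENE_TO_SCENE
        if a.subject != b.subject:
            return Transition.SUBJECT_TO_SUBJECT
        if b.timestamp - a.timestamp < 5.0:  # nearly simultaneous moments
            return Transition.MOMENT_TO_MOMENT
        return Transition.ACTION_TO_ACTION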

He has some good criticisms concerning my old model of visual language grammar, though those should be assuaged by my newer work (which I’ll present at my ComicCon talk next month… hint hint).

And since it’s been mistaken before, I should point out that I don’t consider my notion of “visual language” to be comparable to, or a subset of, the “visual language” proposed by Robert Horn (though his work informed my early stuff, and I subsequently developed a personal relationship with him when I was in college; he’s a very nice man and enthusiastic about all things related to visual communication). Horn is talking about a broader type of visual communication, largely diagrammatic, but mainly arising from the union of text and image. To me, visual language is only the visuals (and only in specific conditions), which then unite with the verbal to create a multimodal whole (see my paper “Interactions and Interfaces” for more).

Comments

  • That’s an interesting idea. I toyed around with IC a couple of times and found it pretty fun. I might even use it when administering psychology experiments, because it would give me a digital interface that subjects could interact with, potentially showing a panel or a whole strip at a time.

    I’ve actually had a “VL” program in my head (and a paper model) for about five years now that I’ve never been able to get off the ground (I don’t code). Someday, I suppose…
