One of the interesting findings throughout many of my experiments is that the comprehension of sequential images seems to be modulated by participants’ “comic reading expertise.” These effects are predicted by my theory of “visual language”…
If drawings and sequential images are indeed structured like language, then we should expect varying degrees of “fluency” across individuals based on their experience reading and drawing comics. Previous studies in Japan have supported this, finding that various aspects of comic understanding correlate with age and with frequency of reading comics. Not only does this support my idea of “visual language,” but it also flies in the face of the assumption that all (sequential) images are universally understood by everyone equally.
In order to study this type of “fluency,” I created a metric that yields a score that can then be correlated with experimental results. In the first use of this metric, I found that brainwaves and reaction times correlated with people’s fluency, and several studies since then have found similar correlations. That study was preceded in time (though not in publication date) by my study of page layouts, which also found differences based on people’s backgrounds and which prompted changes in how I gather this type of information.
I’ve now decided to name this metric the “Visual Language Fluency Index” (VLFI) and to make resources available to anyone who might want to use it in their own experiments. Hopefully this will be helpful to anyone doing, or planning to do, research on sequential image comprehension.
You can now download a zip folder (direct link) from the Resources page of this site, which contains a questionnaire for participants to fill out and an Excel spreadsheet to enter the data into, which will also calculate the VLFI scores. There is also a “read me” file providing documentation about the metric.
I’ll make a final note as well that, although the VLFI score as it currently stands is very useful and has proven to be a reliable predictor of comprehension in several studies, I’m not satisfied to leave it alone. Studies are already underway looking into how to improve the measurements and scale, which will hopefully make the metric even more reliable. Should anything change, I’ll post about it here and update the files on the Resources page.
I've noticed that the only creative variable taken into account in the questionnaire is 'drawing' comics. I know your opinion about the creative fragmentation in the production of comics, but I think 'scripting' comics, however more abstract in nature, is also an instance of VL use.
Using my experience as an example, I'd say I developed my VL fluency by reading and drawing; nowadays, however, I seldom draw, but I write scripts constantly. I remember that when I started writing scripts I'd usually turn to drawing in order to better visualize what I was writing and to check how it worked narratively. But the more scripts I wrote, the less I relied on drawing, to the point that nowadays everything goes through my mind. I'd say that is using VL as well, only in a mental, abstracted form, just as thinking is usually an instance of aural language use.
I feel that evaluating 'drawing' as the only way in which VL can be applied and exercised is missing an important part of it.
Thanks for the comment. I can understand the desire for more measures related to production. Initially I had very little about production at all, since most participants in my studies are not active comic creators (artists or writers). I think that's probably true of most people. My actual formula only gives a bonus for production, so that comprehension is weighted more heavily (many more people read comics than draw them).
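As a purely hypothetical illustration of the weighting idea described above (the real VLFI formula is documented in the download; the item names, scale, and weights below are all invented for this sketch), a score in which comprehension experience dominates and production contributes only a small bonus might look like:

```python
# Hypothetical sketch only -- NOT the actual VLFI formula.
# Comprehension-related self-ratings carry the main weight, and
# production (drawing or writing) adds only a small bonus on top,
# so non-creators are not unduly penalized.

def fluency_score(reading_frequency, reading_expertise,
                  drawing_frequency=0, writing_frequency=0):
    """All inputs are self-ratings on a 1-7 scale (0 = not applicable)."""
    # Comprehension experience forms the base of the score.
    comprehension = (reading_frequency + reading_expertise) / 2
    # Production only adds a modest bonus, whichever mode is stronger.
    production_bonus = 0.25 * max(drawing_frequency, writing_frequency)
    return comprehension + production_bonus

# A frequent reader who never draws still gets a substantial score:
print(fluency_score(reading_frequency=6, reading_expertise=5))  # 5.5
# A reader who also writes scripts gets a modest bonus on top:
print(fluency_score(reading_frequency=6, reading_expertise=5,
                    writing_frequency=4))                       # 6.5
```

Note how taking the maximum over drawing and writing (rather than their sum) would let "writing narratives" count as production without double-rewarding creators who do both; again, this is just one possible design, not the published metric.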
I'd be open to adding more measures related to production though. I'm currently testing some additional measures, so I could easily add a line about "writing narratives" too. Thanks for the suggestion!