TINTIN Project

Are there cross-cultural patterns in the visual languages used in comics of the world? Do those patterns connect to the spoken languages of the comic creators? Do people’s languages or comic reading experience influence how they comprehend comics?

We are addressing these questions in the TINTIN Project, officially known as “Visual narratives as a window into language and cognition.” The TINTIN Project is funded by a €1.5 Million Starting Grant from the European Research Council.

We have created the Multimodal Annotation Software Tool (MAST) to enable the analysis of visual and multimodal documents. With it, we have created the TINTIN Corpus consisting of 1,030 annotated comics from 144 countries and territories.

The TINTIN Corpus includes data about panels, characters, layout, framing, backgrounds, continuity, compositional structure, emotion, motion events, perspective taking, gender, conventionalization of panels, color, and various other features of the visual languages used in comics.

We are currently in the later stages of data collection. Both MAST and the TINTIN Corpus will be made openly available to researchers.

The TINTIN Project is a follow-up to the Visual Language Research Corpus, which analyzed cross-cultural variation in comics from Asia, Europe, and the United States and is examined in the book The Patterns of Comics.

Want to read more about the TINTIN Project? Check out our TINTIN Project related blog posts with periodic updates and insights.

Team Members

Our current research team consists of several core staff members and various collaborators around the world who help find and analyze comics for our corpus and conduct experiments. We welcome additional collaborations, so if you are interested in working with us on this project, please inquire with Neil Cohn for details.

At Tilburg University, we collaborate with faculty members Joost Schilperoord and Myrthe Faber.

Bruno Cardoso was a postdoctoral fellow who designed and programmed the Multimodal Annotation Software Tool (MAST). 

Ana Krajinović is a postdoctoral fellow analyzing the TINTIN Corpus for its typological properties.

Bien Klomberg and Irmak Hacımusaoğlu are PhD students analyzing cross-cultural visual language typology and conducting experiments.

Sharitha van der Gouw is a research associate assisting in annotation and research.

Fernando Casanova (University of Murcia, Spain) is a visiting PhD student who studies interjections in cross-cultural comics.

Maki Miyamoto (Japan Advanced Institute of Science and Technology) is a visiting PhD student who studies ideophones in cross-cultural comics.

Student contributors

Additional assistance has come from Fred Atilla, Anneliek Bastiaanssen, Puck van Bavel, Nikki Born, Freek van den Broek, Iris Degen, Klava Fadeeva, Marleen Gerritsen, Tim Hankart, Kylian van Herwaarden, Kea Kimmel, Matea Mikelin, Daphne Mathijsen, Hester Muller, Lisa Prévost, Annelou Schleckens, Aleksandra Siedlecka, Abe Simons, Yasmilla Stolvoort, Janessa Vleghert, Celine Wetzler, and others.

External Collaborators and Contributors

Nanne van Noord (University of Amsterdam) is an Assistant Professor of Visual Culture and Multimedia and is contributing computer vision analyses to the TINTIN Project.

Various scholars have also helped with gathering the comics for the TINTIN Corpus.


Our multicultural research corpus has benefited from contributions and donations from several creators and companies. If you would like your comics to be analyzed within our corpus, please contact us!

TINTIN Project Publications


  • Hacımusaoğlu, Irmak and Neil Cohn. 2023. The meaning of motion lines?: A review of theoretical and empirical research on static depiction of motion. Cognitive Science 47(11): e13377. (Read online)
  • Klomberg, Bien, Irmak Hacımusaoğlu, Lenneke Lichtenberg, Joost Schilperoord, and Neil Cohn. 2023. Continuity, Co-reference, and Inference in Visual Sequencing. Glossa: a journal of general linguistics 8(1). (Read online)
  • Klomberg, Bien, Irmak Hacımusaoğlu, Cas Coopmans, and Neil Cohn. 2021. Sequential meaning-making in language and visual narratives. In Proceedings of the Annual Meeting of the Cognitive Science Society, vol. 43. (Read online)


  • Atilla, Fred, Bien Klomberg, Bruno Cardoso, Neil Cohn. 2023. Background Check: Cross-Cultural Differences in the Spatial Context of Comic Scenes. Multimodal Communication. (Read Online)
  • Cardoso, Bruno and Neil Cohn. 2022. The Multimodal Annotation Software Tool (MAST). In Proceedings of the 13th Language Resources and Evaluation Conference, 6822-6828. Marseille, France: European Language Resources Association.

Visual Language Research Corpus (VLRC)

  • Cohn, Neil, Bruno Cardoso, Bien Klomberg, and Irmak Hacımusaoğlu. 2023. The Visual Language Research Corpus (VLRC): An annotated corpus of comics from Asia, Europe, and the United States. Language Resources and Evaluation. (Read online)
  • Hacımusaoğlu, Irmak, Bien Klomberg, and Neil Cohn. 2023. Navigating Meaning in the Spatial Layouts of Comics: A cross-cultural corpus analysis. Visual Cognition. (Read online)
  • Cohn, Neil, Irmak Hacımusaoğlu, and Bien Klomberg. 2023. The framing of subjectivity: Point-of-view in a cross-cultural analysis of comics. Journal of Graphic Novels and Comics 14(3): 336-350. (Read online)
  • Hacımusaoğlu, Irmak and Neil Cohn. 2022. Linguistic Typology of Motion Events in Visual Narratives. Cognitive Semiotics. 1-26. (Read online)
  • Klomberg, Bien, Irmak Hacımusaoğlu, and Neil Cohn. 2022. Running through the Who, Where, and When: A cross-cultural analysis of situational changes in comics. Discourse Processes. (Read online)

Popular writing

  • Klomberg, Bien. 2023. Beeldtaal. VakTaal: Tijdschrift van de Landelijke Vereniging van Neerlandici. 36(2/3), 32-33.
  • Hacımusaoğlu, Irmak. 2023. Visual language? What even is that? Visual Language Theory and motion in comics. The Cognizer. (Read online)
  • Hacımusaoğlu, Irmak. 2023. Görsel Dil mi, O da Ne?: Görsel Dil Teorisi ve Çizgi Romanlarda Hareket. Medium. (Read online)

This project has received funding from the European Research Council (ERC) under the European Union’s Horizon 2020 research and innovation programme (grant agreement No 850975).