Problems with Closure, Part 4

In my last post, I pointed out the assumption that pictures are not connected to any mental apparatus. I now continue by showing how that assumption affects the analysis of sequential images…

Assumption #3: Absence of Mind

By minimizing the contribution of the mind, a simple theory like closure can easily emerge. The images’ meanings are “out there in the world,” so all the mind needs to contribute is possible ways to pull those meanings together. Since no mind is found in the actual images, it’s placed instead between the images. Transitions then become just a surface grafted onto this encompassing unifying process, where the “mind” “fills in the gaps.”

But what is the mind “filling in the gaps” with? It must carry some information in order to do this.

Of course, the non-mental explanation says that we understand closure because we’ve had experiences in life that allow us to combine events in images. True enough. This is an appeal to the things being referenced. However, it still can’t escape the mental work of receiving those experiences and drawing upon them to understand images (i.e., doesn’t the mind then have to do something in order to make those experiences understood?).

This view casts the mind as a “magic box.” Stuff goes in, a conscious understanding comes out, but how does it do it? Cognition! Ok, yes, that’s true, but now tell me what that cognition is and how it works. You can’t just say “the mind does it” – you need to say what the mind does to be able to say that “it” does anything. Otherwise you’re just making an empty statement.

Closure doesn’t really say anything about the content of the panels, locating meaning instead in the space between them. It cedes only a vague non-role to the “mind,” thereby passing the buck of meaning-making to the ether. This makes closure essentially a faux cognitive process. And this is also why it can be extended to apply to just about anything at all.

Instead of a non-principle like “closure,” we can lay out mental schemas for events (and more) that allow us to understand sequential panels. Rather than a generalized magic performed by a “mystical mind,” this approach actually identifies the contribution of the mind.

My first model had three of these:

1) Environmental Phrase: unified various environmental elements at the same state
2) View Phrase: combined views of the same element at the same state
3) Temporal Phrase: unified elements across changes of state

These “phrase structures” could then embed into each other, forming a hierarchy that shows exactly what the mind brings to the table. While the panels are linear, the structures of understanding are not. Note also that formulating these rules inherently places constraints on which sequences can come out.
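To make the idea of embedding concrete, here is a minimal sketch in Python of how such phrase structures might nest. Only the three phrase names come from the list above; the Panel type, the kettle example, and the flattening function are hypothetical illustrations of the general idea, not the formalism itself.

```python
from dataclasses import dataclass, field
from typing import List, Union

@dataclass
class Panel:
    description: str

@dataclass
class Phrase:
    kind: str  # "Environmental", "View", or "Temporal"
    parts: List[Union["Phrase", Panel]] = field(default_factory=list)

# Three linear panels...
p1 = Panel("wide shot of a kitchen")
p2 = Panel("close-up of the kettle on the stove")
p3 = Panel("the kettle boiling over")

# ...understood through a non-linear hierarchy: an Environmental Phrase
# unifies two elements of the scene at one state, and that whole phrase
# embeds inside a Temporal Phrase marking the change of state.
sequence = Phrase("Temporal", [
    Phrase("Environmental", [p1, p2]),
    p3,
])

def linear_panels(node) -> List[Panel]:
    """Flatten the hierarchy back into the linear order a reader sees."""
    if isinstance(node, Panel):
        return [node]
    return [p for part in node.parts for p in linear_panels(part)]

assert [p.description for p in linear_panels(sequence)] == [
    "wide shot of a kitchen",
    "close-up of the kettle on the stove",
    "the kettle boiling over",
]
```

The point of the tree is simply that the same linear surface can carry hierarchic structure, and that structure is exactly where the constraints on possible sequences come from.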

My newer approach builds on this further to stipulate actual grammatical roles, while rejecting the schemas above (because they don’t entirely work). You can see a glimpse of this new approach in the essay “Initial Refiner Projection”, though that’s only a small part of it.

In all of these approaches, a contribution of the mind is identified. It is not magically glossed over, and the power of meaning-making lies with the images themselves in concert with given mental rules.

Once you come to this conclusion, though, it raises some other important questions:

Where do these mental schemas come from? (learning or genetics?)
How many are there, and how do they work?
Do these structures connect to other mental domains?

All of these are very important questions, and just the sort of thing that will hopefully occupy a good deal of time and effort in cognitive science in the years to come.

Problems with Closure: Part 1, Part 2, Part 3
