Inhering in words

I’ve been reading ‘The Organisation of Mind’ by Shallice and Cooper. It’s not the book I thought it would be, but that is possibly a good thing. It’s not an easy read, at least not for me, and completing each chapter has felt like a minor intellectual victory. The opening chapters are a densely and meticulously argued framework for how we should do cognitive neuropsychology. The authors use that framework to gradually develop an understanding of thought and consciousness. Given that it is so meticulous, and given that it is clearly based on the authors’ long experience and deep understanding of the topic, writing blog-notes about it seems like a travesty… ah well… I’ve got to gather my thoughts somehow.

One of the central arguments in the book is that the field of cognitive neuropsychology rests on a number of foundational approaches:

  • behavioural experiments on normal, healthy humans;
  • behavioural experiments and close investigation of the brains of (groups of) patients with specific damage to parts of the brain;
  • computational modelling of cognitive processes;
  • functional imaging of the brains of individuals (healthy and damaged) doing certain activities.

They point out that each of these approaches is ‘slightly flaky’ (to borrow their phrase) – each grounded in certain methodological assumptions (which may or may not hold) and each with its own strengths and weaknesses. No one approach is sufficient to help us understand how the brain works and how thought emerges. Instead, we need all the approaches, carefully integrated, and when the lines of evidence from these approaches converge, we can be confident that the model being proposed for some aspect of cognition is robust.

What struck me was the importance of having an adequate understanding of how our brains handle a given cognitive task. We need to understand how that task is broken down into inter-related sub-processes because, without that, we can’t be sure that the behavioural experiments researchers carry out, or the colourful images from fMRI scans, mean the things we ascribe to them.

Chapter 6 is ‘On the semantic elements of thought’. This topic is crucial to my project on object handling because, after our experience of an object, we give voice to that experience through words, and those words link to ideas which exist in a network of other ideas. So this chapter draws on the foundational approaches noted above to sketch out how words are handled in our brains.

They argue that each word must have a representation, and that representation needs to be given physical form somewhere in the brain. They conclude that semantic representations have different levels. There needs to be a “basic network of subsystems that each has different computational functions and different anatomical bases. The subsystems include ones concerned with the categorisation of visual form, and of other visual qualities, similar ones for auditory and tactile sensory systems, and ones for object manipulations, and for spatial representations and transformations among others. Many semantic representations are based on neuronal assemblies that cross this network of subsystems.” (p241)

Earlier in the chapter (p218) they present an image of the computational architecture of semantic cognition proposed by Jefferies and Lambon Ralph (2006).

This is a connectionist model (what I previously knew as a ‘neural network model’) with separate units processing each modality and those units having bidirectional links to the central semantic network. What is encouraging (for me) is that this approach fits well with the ideas I had previously drawn from the experimental work of Struiksma and Postma on the spatial language of blind and sighted people. In this model the semantic representation is amodal, whilst Struiksma & Postma argue that it is supramodal. I’d need to check this carefully because I’m sure that Struiksma and Postma’s conception of supramodality included some form of two-way relationship between each representation and the underpinning modalities, and I don’t understand the difference between that and the bidirectional links between units in the model.
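To fix the idea in my own head, here is a minimal sketch (in Python/NumPy, entirely my own illustration rather than anything from the book) of a hub-and-spoke arrangement like the one described: modality-specific ‘spoke’ units exchange activation with a shared semantic ‘hub’ through bidirectional connections, so presenting an input in one modality lets the network fill in the others. The layer sizes, modality names and update rule are all assumptions made for the sake of illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Modality-specific "spoke" layers feeding a shared semantic "hub".
# Sizes and modalities are illustrative assumptions, not values from the book.
SPOKES = {"visual": 20, "auditory": 12, "tactile": 10, "action": 8}
HUB_SIZE = 30

# Bidirectional (here: symmetric) weights between each spoke and the hub.
weights = {m: rng.normal(0, 0.1, size=(n, HUB_SIZE)) for m, n in SPOKES.items()}

def settle(inputs, steps=20):
    """Pass activation back and forth between spokes and hub until it settles."""
    hub = np.zeros(HUB_SIZE)
    spokes = {m: inputs.get(m, np.zeros(n)) for m, n in SPOKES.items()}
    for _ in range(steps):
        # Hub activity is driven by all modalities at once...
        hub = np.tanh(sum(spokes[m] @ weights[m] for m in SPOKES))
        # ...and feeds back to each modality through the same connections.
        for m in SPOKES:
            if m in inputs:
                spokes[m] = inputs[m]                     # presented inputs stay clamped
            else:
                spokes[m] = np.tanh(hub @ weights[m].T)   # hub fills in missing modalities
    return hub, spokes

# E.g. present only a visual input and let the network infer the other modalities.
hub, spokes = settle({"visual": rng.normal(size=SPOKES["visual"])})
print(hub.shape, {m: v.shape for m, v in spokes.items()})
```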

The key thing here is that our idea of a thing contains this range of modalities within it; they are not lost in the representation.

Studies have shown that semantic tasks (as distinct from phonological tasks) elicit a swathe of activity from the posterior temporal cortex through to the parietal cortex, primarily in the left hemisphere (though these findings are subject to the methodological problems associated with fMRI studies) (p218). This is important when beginning to think about the symptoms of people with conditions like semantic dementia; indeed, much of the work done to isolate the neural bases of semantic representations involved studying people with semantic dementia.

These neuronal assemblies have a (Hopfield) attractor structure!!! How to explain that… Sometimes, in trying to explain gravity, people use the illustration of balls resting on a stretched but elastic sheet; the balls sink down, creating little wells that other smaller objects on the sheet will roll into if they come too close.

[Image: a demonstration of gravity wells using a stretched sheet, taken from a video]

The attractor structure is a computational modelling approach which does something similar, only now the sheet represents a semantic space, each ball represents a particular semantic representation, and each well becomes “the sinks of each basin of attraction corresponding to each learned item” (p214). What this attractor structure allows you to do (I think) is model how we move from a set of inputs/prompts to evoking a particular idea or object that they are in some way similar to.
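Again purely as my own illustration (not the book’s model), a tiny Hopfield-style network shows the basin-of-attraction idea: a few binary patterns are stored as attractors, and a degraded cue is repeatedly updated until it rolls down into the nearest basin, i.e. settles on the learned item it most resembles. The pattern size, number of items and update rule are assumptions for the sake of the sketch.

```python
import numpy as np

rng = np.random.default_rng(1)

# Three "learned items", each a pattern of +1/-1 features. Purely illustrative.
patterns = rng.choice([-1, 1], size=(3, 50))

# Hebbian storage: each stored pattern digs its own basin of attraction.
W = sum(np.outer(p, p) for p in patterns).astype(float)
np.fill_diagonal(W, 0)

def recall(cue, steps=10):
    """Update a degraded cue until it settles into the nearest attractor."""
    state = cue.copy()
    for _ in range(steps):
        state = np.sign(W @ state)
        state[state == 0] = 1
    return state

# Degrade a stored pattern by flipping some features, then let the network settle.
cue = patterns[0].copy()
flip = rng.choice(len(cue), size=10, replace=False)
cue[flip] *= -1

recovered = recall(cue)
print("matches stored item 0:", np.array_equal(recovered, patterns[0]))
```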

“The acquisition of individual attractors appears to be critically based on one or more of the subsystems discussed above – core subsystems for that concept – so its meaning representation becomes an onion-like structure as far as the importance of the different parts of the assembly are concerned with the inner layers of the onion being based in these core subsystems.” Critical amongst the core subsystems is the one that is derived from the need to (visually) distinguish between objects – it has to represent complex and apparently arbitrary conjunctions of attributes and corresponds to the hub unit in the model proposed above (I think).

However, the inner structure of the representation is critical for realising its meaning. “Here the organising principle must be based on the internal logic of the concept.” Shallice and Cooper argue that abstract concepts should be viewed from the perspective of this organising principle. They further argue that, for such concepts, the symbolic split between concepts and operations becomes artificial – who am I to argue!

 
