Imagine an object handling session: the workshop leader has set out the objects, there on the table or in the middle of the circle of people seated on the floor. What’s going on at that point? As other authors have pointed out, the participants are (probably) looking at the object and imagining or anticipating picking it up. But there’s a problem with the language here: whenever I use the word ‘imagine’ I think of something visual, i.e. imagine = I picture myself in my “mind’s eye” picking up the object. But this, it seems, is misleading – at least according to the paper by Fagioli et al.
The act of planning an action is really an act of rehearsal which involves the same processes as the act itself. “[An] action could be economically represented by one or several perceptual events. However, most actions will be constructed of several subgoals and hence a sequence of perceptual events rather than a single perceptual event. The selection and linkage of precompiled motor subroutines stored in lateral premotor cortex is suggested to be realized by the medial premotor cortex or supplementary motor area. This area contains highly abstract sequence neurons which code the temporal order of action components independent of the movement type, i.e., engaged muscles or action goal. In that way, lateral and medial premotor areas together contribute to action representation and selection.”
What’s more interesting is that this planning is neither visual nor simply a series of motor commands.
“Based on the assumptions that (a) premotor action representations are not necessarily ‘‘motor’’ but may also be ‘‘sensory’’, ‘‘sensorimotor’’, or ‘‘supramodal’’, and (b) can be triggered either internally or externally, it could be hypothesized that sequential representations may be where premotor cortex comes into play, no matter whether they are biological or nonbiological (i.e., abstract).”
“Taken together, the empirical data and the functional–anatomical properties of the premotor cortex point to a bi-directional link between motor and perceptual representations; [thus] supporting the view that object perception and action planning are mediated by integrated sensorimotor structures. According to this logic, perceiving an event and planning an action are functionally equivalent to a large degree: both perceiving and action planning consists in activating perceptually derived codes that are associated with action programs. If so, it is plausible to assume that to-be-perceived events (‘‘perceptions’’) and to-be-generated events (‘‘actions’’) are coded and stored together in a common representational domain.”
Furthermore, what the workshop leader (in the imaginary handling session) says about what the participants will do will shape their planning (perhaps unsurprisingly). “Even though the task [in the experiment] was purely visual, premotor areas were strongly activated: a fronto-parietal prehension network when subjects monitored for shape deviants, areas involved in manual reaching when they monitored for location deviants, and a network associated with tapping and uttering speech when monitoring for temporal deviants. Hence, activation was highest in areas that are known to be involved in actions that would profit most from information defined on the respective stimulus-feature dimension. This might point to an important integrative role of the human premotor cortex in the anticipation of perceptual events and the control of actions related to these events.”
Moreover, the anticipation of the act shapes the way that we look at the object – or rather, leads us to pick out certain features above others depending on the nature of the action. “As predicted from our extension of the pragmatic body map approach of Schubotz and von Cramon, we were able to show that preparing for grasping and reaching as such is sufficient to prime size and location information, respectively. This suggests that an intention to act sets up and configures visual attention in such a way that the processing of information about the most action-specific and action-relevant stimulus features is facilitated. Apparently, planning an action is accompanied by pre-tuning action-related feature maps, so that the access of information coded in these maps to action control is sped up.”
“With respect to perception, the processing of information has been shown to be biased towards task-relevant dimensions, such as color or shape. As a consequence, the features of an object that are defined on task-relevant (and therefore biased) dimensions will be more heavily weighted and, thus, more strongly represented (i.e., activate their representational codes more strongly) than other, task-irrelevant features. If it is true that action plans and perceptual events are cognitively coded in the same fashion and even share a representational domain, it follows that dimensional weighting should not only apply to perceived events but to to-be-produced events (i.e., actions) as well. More concretely, making a particular dimension relevant for action should prime the corresponding dimension in perception, so that action-related perceptual objects should be weighted accordingly.”
So, whether the object is in a glass case or whether it’s there for us to hold will shape the way we look at it.
What emerges is a picture of perception, planning/rehearsal and action as one integrated bundle, designed to make the act as smooth and efficient as possible.
What has begun to bother me, though, is that our common language is insufficiently nuanced. Words such as ‘thought’, ‘imagination’ or ‘memory’ no longer seem adequate for the task of capturing the subtlety and complexity of object cognition. But too many academics have gone wandering up dark linguistic back-alleys of subject specialism and have never quite managed to find their way out again…
Fagioli, S., Hommel, B. & Schubotz, R. I. (2007) ‘Intentional control of attention: action planning primes action-related stimulus dimensions’, Psychological Research, 71, 22–29.