The perception-memory continuum

One of the wonderful things about reading through this material about perception is that I come across phrases like “the perception-memory continuum”, which (frankly) sounds a bit too Star Trek to be true. Another phrase which I have recently relished (and shall come back to in due course) is “autonoetic consciousness”. The two are, I think, not unrelated but for the moment I want to stick with the former.

Having noted in the previous entry Ward’s comments about an alternative model for memory based on a “perception-memory continuum along the ventral visual stream”, I was delighted to come across a paper arguing the case for just this model and extending the idea even further. The paper, by Cowell et al. (2010), clearly advocates a position that the authors place in tension with accepted models of neural functioning. So the paper needs to be seen as just that: one moment in a scientific discussion. Nonetheless, it provides much food for thought and some really interesting insights. So…

At the heart of this is the notion of the ventral visual stream (VVS)… Conscious seeing occurs when visual information enters through the eyes and is converted into nerve signals, which pass down the optic nerves and ultimately end up at the visual cortex in the occipital lobe, right at the back of the brain. (Rather bizarrely, there is a separate pathway for visual perception which we are not consciously aware of but to which our brains can respond!) Anyway… the Two Streams model concerns what happens to the information once it has reached the visual cortex. Proponents of this model suggest that the information is dealt with in two streams: a dorsal stream, which goes up towards the parietal lobe, and a ventral stream, which goes down to the medial temporal lobe – the VVS.

(The link is to the Wikipedia entry which contains a nice illustration of this. The article also highlights that this is a contentious topic. Nonetheless, Cowell et al. support their argument with reference to experimental data, including fMRI data, so I’m going to go with them for the time being.)

Another key point about the ‘standard’ VVS model is that it argues for “the dissociation of recognition memory into two cognitive processes: familiarity-based recognition, and recollection/recall. Furthermore, these processes are proposed to be mediated by distinct and dissociable anatomical regions, usually the perirhinal cortex and hippocampus, respectively.” (Cowell et al., 2010) (The perirhinal cortex and the hippocampus are structures deep within the brain, at the end of the VVS.)

In their argument, Cowell et al. make a number of fascinating points:

Object recognition/discrimination does not occur at just one place on the VVS. Rather, the VVS is a hierarchical structure: within this structure objects are not considered immediately as a whole; instead, features of the object are represented, with simple features represented in more posterior regions of the hierarchy (i.e. towards the back of the brain) and progressively more complex conjunctions of these features represented in later, anterior stations, reaching the level of a “whole object” in the perirhinal cortex. So if objects can be recognized/discriminated on the basis of simple features alone, then the experience of recognition emerges from the posterior regions of the VVS. However, if two very similar objects are presented, it may be that they can only be discriminated at the level of the whole object, and so the experience of recognition emerges from activity in the perirhinal cortex. The authors call this ‘representational complexity’.
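The logic of discrimination succeeding at different stations along the hierarchy can be sketched as a toy model. This is my own illustration in Python, not the authors’ computational model; the feature sets and level names are invented for the example:

```python
# Toy sketch of a feature-conjunction hierarchy: simple features posteriorly,
# whole-object conjunctions anteriorly. (My own illustration, not the
# authors' actual model.)

def discriminate(obj_a, obj_b):
    """Return the earliest hierarchical level at which two objects differ.

    Each object is a list of feature sets, ordered from simple features
    (posterior regions) to full conjunctions (anterior, 'perirhinal' level).
    """
    for level, (feats_a, feats_b) in enumerate(zip(obj_a, obj_b)):
        if feats_a != feats_b:
            return level  # discrimination emerges at this station
    return None  # indistinguishable even at the whole-object level

# Two dissimilar objects already differ in simple features (level 0)...
apple = [{"red"}, {"red+round"}, {"red+round+stem"}]
banana = [{"yellow"}, {"yellow+curved"}, {"yellow+curved+peel"}]
print(discriminate(apple, banana))   # 0 (posterior regions suffice)

# ...whereas two similar objects are only told apart by the full conjunction.
apple2 = [{"red"}, {"red+round"}, {"red+round+leaf"}]
print(discriminate(apple, apple2))   # 2 ('whole object' / perirhinal level)
```

The point of the sketch is simply that the same comparison operation runs at every level, and the experience of discrimination “emerges” wherever the representations first come apart.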

The authors extend the notion of the VVS to include both the perirhinal cortex and the hippocampus, arguing that what the hippocampus does is bring in additional representational complexity, which includes “the unique conjunction of an object with time, space, and context”.

This is really interesting as it provides a framework for explaining some effects of two different forms of dementia. As the authors note: “An additional strand of evidence for the role of hippocampus in perceptual discrimination comes from a series of human studies on scene perception by Lee et al. (2005a,b) which found that patients with selective damage to hippocampus were impaired at the discrimination of spatial scenes. In a further study, Lee et al. (2007) tested patients with Alzheimer’s disease, who are known to have greater damage in hippocampus than in other MTL areas, and patients with Semantic Dementia, who typically have greater pathology in perirhinal cortex than in other MTL regions. The Alzheimer’s patients were impaired on scene, but not face discriminations, relative to healthy subjects, whereas the Semantic Dementia patients performed worse than controls on face, but not scene discriminations. This double dissociation reinforces the suggestion that both perirhinal cortex and hippocampus are involved in perceptual discrimination, but that their differential involvement is determined by the level in the stimulus representation hierarchy at which they operate.”

So far this has focused on object discrimination, but the authors draw on empirical work done by Tyler et al. (2004), which “presented color pictures of objects and asked subjects to name the object at either a specific level (e.g., rhinoceros, hammer) or a domain level (e.g., living or man-made).” The researchers then observed neural activation during these two tasks. Their observations showed that “domain level naming activated posterior regions of occipital cortex and fusiform gyrus bilaterally, but the activation did not extend as far in the anterior direction as in specific level naming.” From this Cowell et al. link object labeling with ‘memory’ to propose that memory and perception are integrated and move together through this hierarchy of object representation from simple (posterior) to complex (anterior).

This integration of visual perception and memory (if it holds) is helpful when thinking about how seeing and handling objects can stimulate memories, although the empirical experience of object handling would seem to require a level of integration between visual and haptic perceptions of an object. In certain circumstances, those experiences would also require the memory of the object’s context, mediated (perhaps) by the hippocampus, to be integrated with the memory of learnt processes involving the object.

It is worth noting again that this model proposes that the memory for an object is not stored in one location; rather, the act of remembering can emerge from anywhere along the visual–perirhinal–hippocampal stream, depending on how great a level of representational complexity is required to achieve the act of remembering.

This, in turn, opens up another notion that is worth considering – familiarity. I’ve wondered about this for a while. What does it mean to say that someone (or something) looks like someone (or something) else? [Facial recognition appears to be a separate process from object recognition (see Kriegeskorte et al. (2007), cited in Cowell et al., 2010), so I’m loath to go too far down that path, but I am intrigued by the experience of seeing someone who reminds me of someone else, given the huge number of variables that must define a face. The experience of similarity would suggest some tolerances or variances amongst these variables which can be detected.]

Cowell et al. present the standard model of recollection–familiarity as comprising two distinct processes. In this model, recollection is assumed to be a high-threshold, ‘‘all-or-none’’ process: an item either evokes recollection, giving rise to a high-confidence judgment of recognition, or it evokes no recollection at all. When recollection fails, the subject is assumed to resort to familiarity, which can be modeled by a completely separate signal-detection process.
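The standard dual-process account can be made concrete with a small simulation. This is a minimal sketch of my own, under the usual textbook assumptions (all-or-none recollection for old items; Gaussian familiarity strength compared against a criterion); the parameter values are invented for illustration:

```python
import random

# Sketch of the standard dual-process model: recollection is high-threshold
# and all-or-none; familiarity is a continuous signal-detection process.
# (My own illustration; parameter values are arbitrary.)

def recognize(item_is_old, p_recollect=0.4, d_prime=1.0, criterion=0.5):
    """Simulate one recognition judgment under the dual-process model."""
    # 1. High-threshold recollection: fires only for old items, all-or-none.
    if item_is_old and random.random() < p_recollect:
        return ("old", "high confidence")  # recollection succeeded
    # 2. Fall back on familiarity: Gaussian signal detection, where old
    #    items have mean strength d_prime and new items mean 0.
    strength = random.gauss(d_prime if item_is_old else 0.0, 1.0)
    verdict = "old" if strength > criterion else "new"
    return (verdict, "familiarity-based")
```

The crucial structural feature, which Cowell et al. go on to question, is that the two stages are entirely separate mechanisms: the second is consulted only when the first yields nothing at all.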

Following their earlier argument, it comes as no surprise that they argue for an integrated process of recognition/familiarity, where the distinction lies not in the process but in the degree of representational complexity needed to match objects with prior knowledge or experience. Think, for instance, of a time when you saw someone and knew that you knew them but couldn’t think where from. Eventually you meet them again in a more usual setting and recognize them properly. The act of recognition in the second case occurs when “the hippocampus resolves the object-level ambiguity, by virtue of its higher-level spatio-temporal conjunctive representations.” However, Cowell et al. insist that recognition is still possible through the conjunction of lower-level representations.

Again, this last point can inform thinking on object handling. The use of period-style reminiscence rooms works by creating these “higher-level spatio-temporal” conjunctions. They are helpful to the process of recollection but clearly not necessary, as recollection can occur without them at the point of the rich, object-focussed conjunctions mediated by the perirhinal cortex.

Finally, Cowell et al. propose neural mechanisms for object recognition: “In the model of object recognition (Cowell et al., 2006), the mechanism for encoding an object in perirhinal cortex involves the sharpening of its representation. In the brain this would correspond to many neurons dropping out of the representation, but a few becoming more active in response to the encoded stimulus. That is, similar to other models of recognition memory (e.g., Norman & O’Reilly, 2003), the model suggests that enhancement of some neural responses as well as reduction in other neural responses is part of the code for familiarity, and indeed enhancement in a subset of neocortical neurons is found, empirically (see paper for further references).” I’m not sure if this helps, but knowing it makes me feel better!
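The sharpening idea can be pictured with a toy population of units. This is my own illustration of the description quoted above, not the connectionist model of Cowell et al. (2006) itself; the response values and the winner-take-most rule are invented for the example:

```python
# Toy illustration of 'sharpening': after encoding, most units drop out of
# the representation while a few become more active, and this sharpened
# profile serves as the code for familiarity. (My own sketch, not the
# Cowell et al., 2006 connectionist model.)

def sharpen(responses, keep=2, boost=1.5, suppress=0.2):
    """Boost the `keep` strongest units and damp the rest."""
    ranked = sorted(range(len(responses)),
                    key=lambda i: responses[i], reverse=True)
    winners = set(ranked[:keep])
    return [r * (boost if i in winners else suppress)
            for i, r in enumerate(responses)]

novel = [0.5, 0.9, 0.4, 0.8, 0.3]   # broad, undifferentiated response
familiar = sharpen(novel)            # after encoding
print(familiar)  # the two strongest units grow; the others drop away
```

The sketch captures both halves of the quoted claim: enhancement in a subset of units and reduction everywhere else are jointly part of the familiarity code.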


Cowell R.A., Bussey T.J., Saksida L.M. (2006) ‘Why does brain damage impair memory? A Connectionist model of object recognition memory in perirhinal cortex’, Journal of Neuroscience, 26, 12186–12197

Cowell, R.A., Bussey, T.J. & Saksida, L.M. (2010) ‘Components of Recognition Memory: Dissociable Cognitive Processes or Just Differences in Representational Complexity?’, Hippocampus, 20, 1245–1262

Kriegeskorte N., Formisano, E., Sorger, B., Goebel, R. (2007) ‘Individual faces elicit distinct response patterns in human anterior temporal cortex’, Proceedings of the National Academy of Sciences USA, 104, 20600–20605

Lee A.C.H. et al. (2005a) ‘Specialization in the medial temporal lobe for processing of objects and scenes’, Hippocampus, 15, 782–797

Lee A.C.H. et al. (2005b) ‘Perceptual deficits in amnesia: Challenging the medial temporal lobe ‘mnemonic’ view’, Neuropsychologia, 43, 1–11

Lee A.C.H. et al. (2007) ‘Differing profiles of face and scene discrimination deficits in semantic dementia and Alzheimer’s disease’, Neuropsychologia, 45, 2135–2146

Norman K.A. & O’Reilly R.C. (2003) ‘Modeling hippocampal and neocortical contributions to recognition memory: A complementary-learning-systems approach’, Psychological Review, 110, 611–646

Tyler L.K. et al. (2004) ‘Processing objects at different levels of specificity’, Journal of Cognitive Neuroscience, 16, 351–362

About Bruce Davenport

Research associate at Newcastle University. Previously a museum educator and researcher.