Is This How Memory Works?

By Neuroskeptic | January 27, 2013 9:46 am

We know quite a bit about how long-term memory is formed in the brain – it’s all about strengthening of synaptic connections between neurons. But what about remembering something over the course of just a few seconds? Like how you (hopefully) still recall what that last sentence was about?

Short-term memory is formed and lost far too quickly for it to be explained by any (known) kind of synaptic plasticity. So how does it work? British mathematicians Samuel Johnson and colleagues say they have the answer: Robust Short-Term Memory without Synaptic Learning.

They write:

The mechanism, which we call Cluster Reverberation (CR), is very simple. If neurons in a group are more densely connected to each other than to the rest of the network, either because they form a module or because the network is significantly clustered, they will tend to retain the activity of the group: when they are all initially firing, they each continue to receive many action potentials and so go on firing.

The idea is that a neural network will naturally exhibit short-term memory – i.e. a pattern of electrical activity will tend to be maintained over time – so long as neurons are wired up in the form of clusters of cells mostly connected to their neighbours:

The cells within a cluster (or module) are all connected to each other, so once a module becomes active, it will stay active as the cells stimulate each other.
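The reverberation mechanism is simple enough to sketch in a toy simulation. The following is my own illustration with invented parameters, not the authors' code: binary threshold neurons, dense connections within modules and sparse connections between them. Stimulate one module and it keeps itself firing:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy network: 4 modules of 25 binary neurons each (parameters invented
# for illustration; the paper's model differs in detail).
n_modules, module_size = 4, 25
n = n_modules * module_size
module = np.repeat(np.arange(n_modules), module_size)

# Dense connectivity within a module, sparse between modules.
p_in, p_out = 0.7, 0.02
same = module[:, None] == module[None, :]
W = (rng.random((n, n)) < np.where(same, p_in, p_out)).astype(float)
np.fill_diagonal(W, 0.0)

# Stimulate module 0, then let the network run: a neuron fires
# whenever its summed input exceeds a fixed threshold.
state = (module == 0).astype(float)
threshold = 6.0  # well below the ~17 inputs a within-module neuron receives
for _ in range(50):
    state = (W @ state > threshold).astype(float)

# The stimulated module goes on firing; the rest stay silent.
print(state[module == 0].mean(), state[module != 0].mean())
```

Because each module-0 neuron receives many more action potentials from its own cluster than the threshold requires, the activity reverberates indefinitely, while neurons outside the cluster never get enough input to join in.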

Why, you might ask, are the clusters necessary? Couldn’t each individual cell have a memory – a tendency for its activity level to be ‘sticky’ over time, so that it kept firing even after it had stopped receiving input?

The authors say that even ‘sticky’ cells couldn’t store memory effectively, because we know that the firing pattern of any individual cell is subject to a lot of random variation. If all of the cells were interconnected, this noise would quickly erase the signal. Clustering overcomes this problem.
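The noise argument can be illustrated the same way (again, my own sketch with made-up numbers): independent 'sticky' cells drift toward a random state under noise, whereas a densely interconnected cluster whose cells follow the majority corrects isolated flips at every step:

```python
import numpy as np

rng = np.random.default_rng(1)

# 50 neurons try to hold the pattern "all on" for 200 time steps,
# while noise flips each neuron with probability 0.05 per step.
n, p_noise, steps = 50, 0.05, 200

# (a) Independent 'sticky' cells: each one just keeps its own last state.
sticky = np.ones(n)
for _ in range(steps):
    flips = rng.random(n) < p_noise
    sticky = np.where(flips, 1 - sticky, sticky)

# (b) A densely interconnected cluster: each cell follows the majority
# of the others, so isolated flips are corrected on the next step.
cluster = np.ones(n)
for _ in range(steps):
    flips = rng.random(n) < p_noise
    cluster = np.where(flips, 1 - cluster, cluster)
    cluster[:] = float(cluster.sum() > n / 2)  # majority-rule correction

# The sticky cells end up near 50/50; the cluster still holds the pattern.
print(sticky.mean(), cluster.mean())
```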

But how could a neural clustering system develop in the first place? And how would the brain ensure that the clusters were ‘useful’ groups, rather than just being a bunch of different neurons doing entirely different things? Here’s the clever bit:

If an initially homogeneous (i.e., neither modular nor clustered) area of brain tissue were repeatedly stimulated with different patterns… then synaptic plasticity mechanisms might be expected to alter the network structure in such a way that synapses within each of the imposed modules would all tend to become strengthened.

In other words, even if the brain started out life with a random pattern of connections, everyday experience (e.g. sensory input) could create a modular structure of just the right kind to allow short-term memory. Incidentally, such a ‘modular’ network would also be one of those famous small-world networks.
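This bootstrapping step also has a simple toy version (my own sketch, assuming a plain Hebbian rule, which is just one of the plasticity mechanisms the quote gestures at): start from homogeneous random weights, repeatedly impose block-shaped activity patterns, and the within-block synapses outgrow the rest:

```python
import numpy as np

rng = np.random.default_rng(2)

# Homogeneous starting point: 30 neurons with weak uniform random weights.
n_modules, module_size, lr = 3, 10, 0.1
n = n_modules * module_size
module = np.repeat(np.arange(n_modules), module_size)
W = rng.random((n, n)) * 0.1
np.fill_diagonal(W, 0.0)

# Repeatedly stimulate the tissue with patterns that each activate
# one block of neurons, strengthening synapses between co-active cells.
for _ in range(100):
    m = rng.integers(n_modules)          # which pattern arrives this time
    x = (module == m).astype(float)
    W += lr * np.outer(x, x)             # Hebbian: fire together, wire together
np.fill_diagonal(W, 0.0)

# The imposed blocks are now modules: within-block weights dominate.
same = module[:, None] == module[None, :]
print(W[same].mean(), W[~same].mean())
```

After training, the weight matrix has exactly the clustered structure the Cluster Reverberation mechanism needs, even though it started out unstructured.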

It strikes me as a very elegant model. But it is just a model, and neuroscience has a lot of those; as always, it awaits experimental proof.

One possible implication of this idea, it seems to me, is that short-term memory ought to be pretty conservative, in the sense that it could only store reactivations of existing neural circuits, rather than entirely new patterns of activity. Might it be possible to test that…?

Johnson S, Marro J, & Torres JJ (2013). Robust Short-Term Memory without Synaptic Learning. PLoS ONE, 8(1). PMID: 23349664

  • campaigner

    I prefer to think of short-term memory as a mathematical model, where we have a memory of a memory and a memory of a memory of a memory… I could expand on this theory if anyone out there is interested or indeed can remember this.


    Make a model of it in a neural network simulator. Something for Blue Brain?

  • Torbjörn Larsson, OM

    I don't think I would be surprised if short-term memory works like that. The ability to group near-neighbor connections means it is a symbol-like process, and the first neural nets that mimicked how the brain learns worked like that. That would predict why brains can't be overtrained, while such algorithms are susceptible to it.

    As for testing, I would test the symbol-like spatial clustering (which, I assume, the near-neighbor connections tend to map to). Testing for reactivations would be one way, but perhaps one could also test whether associated memories tend to be spatially close too.

  • neuromusic

    Some of the details of the model are unique, but the basic theory that short-term memory could be maintained in network states isn't novel. In fact, there is an entire set of models (of varying levels of biological plausibility), known variously as liquid state machines or reservoir computing models, that exhibit this type of memory. As for testing, the difficulties are twofold… 1. Anatomically, does this connectivity exist? 2. Physiologically, does this activity exist?

  • Trevor Bekolay

    It's not clear to me how this type of short-term memory would actually remember anything more than a few binary values. These clusters effectively have two states: on and off; this isn't a whole lot of information to work with, and wouldn't scale well in practice.

    There are lots of models of short-term memory (working memory) that are implemented in spiking neurons and can actually remember the types of representations that would be required for actual cognition. For example, in the Spaun model, we use simple recurrent connections to enable this type of memory (similar to the liquid state machines others have mentioned in this thread). A paper on the working memory model in Spaun can be found here.

  • Anonymous

    I read this article on the arXiv back in 2010. It's a very elegant & 'simple' model. A somewhat similar model was proposed by Harvard scientists (2004). It is worth noting that there is a reliable way to test (or simulate) these two models, so far only in vitro.
    Yet there are some points that these models don't account for. Namely, the same neuron has multiple synapses that are separately involved in supporting short-term and long-term memory, so synapses engaged in STM should be separate from those of LTM. It's not clear whether this model can explain (or predict) in which way proteins required for LTM influence the synapses that were altered to produce STM. It is also not clear whether there should be distinct types of networks for the perceptual & STM functions in order to maintain memory during perception of stimuli with varying behavioral and informative significance.
    Despite all of this, the proposed model is quite original, mathematically beautiful & correct, which significantly distinguishes it from other computational neuroscience models.

  • GamesWithWords

    “One possible implication of this idea, it seems to me, is that short-term memory ought to be pretty conservative, in the sense that it could only store reactivations of existing neural circuits, rather than entirely new patterns of activity. Might it be possible to test that…?”

    You mean could you store some novel stimulus in short-term memory? That is, I'm fairly certain I don't have a pre-existing representation of an animal with the head of a goat, the body of a salamander, and wings. But I'm pretty sure I could remember that image over a short delay…which you are probably doing as you read this! There are probably around 100,000 published papers with relevant data.

    But wait — we might not have pre-existing networks for this mythological monster, but could you just have three different networks firing, one for the head of a goat, one for the body of a salamander, and one for wings? Sure, but presumably those are parts of networks (I doubt I've ever thought of the body of a salamander sans head before), so activating them should activate the rest, and you'll end up with a goat, a salamander, and some kind of bird, which is not what we wanted.

    Even if you don't buy that last argument, you've got a problem. Your claim would be that as long as we can assemble the new image out of pieces of things we've seen before (for which we have memories and thus neural networks), it doesn't count as a “new” network. Thing is, I can go reductio on you. So far as we know, your brain represents all images as combinations of Gabor patches. So anything new you see can always be represented in terms of Gabor patches you've seen before (Gabor patches are very simple). So now you're stuck arguing that there is no such thing as a new stimulus, which isn't a very useful position to hold.

  • Neuroskeptic

    The more I think about this model, the more I wonder, is it really a model of memory?

    This seems like an elegant mechanism by which neural activation could be 'sticky' over time – but is that short term memory?

    For example this morning I walked past a black dustbin. A few seconds after I saw it, I could recall seeing “a black dustbin” but I couldn't see it as a persistent visual image, or even remember much about it other than it was a black dustbin.

    Is that persistent neural activation? I suppose it could be, but it would have to be pretty selective – it would have to be persistent activity of the 'concept' of a black bin rather than the sensory impressions.

    But even if it was persistent activity in the black bin modules, wouldn't that mean that after seeing a black bin I'd be thinking about one? But I wasn't – I was remembering one, at a specific time.

  • trrll

    You may not have a pre-existing representation of a goat-headed chimera, but you likely have pre-existing neural representations of a goat, the concept of a head in the most general sense, the concept of a chimera, etc. So activate all of these together in a kind of hierarchical linkage, and you could have the representation of a novel chimeric animal, but comprised of familiar elements

  • Neuroskeptic

    trrll: Right. The way the authors put it is that each module/cluster encodes one 'bit of information'.

  • GamesWithWords

    @trrll: I agree that's very intuitive. As I said in my first comment, the problem with this turtles-all-the-way-down story is its lack of explanatory value. You end up concluding that there are pre-existing neural networks for every possible concept or percept, so the notion of pre-existing neural network loses any meaning.

  • Kapitano

    Which short term memory are we talking about?

    From what I read, there are at least two – a sensory buffer of 4-5 seconds, and a kind of 'working area' that lasts 5-6 minutes. And possibly a 'things I did today' area of 6-7 hours.

    The buffer lets you remember the beginning of a sentence when you're at the end – and if you're paying attention, it gets 'copied' into the working area, where you can juggle ideas around for a few minutes without having to keep them all at the front of your consciousness simultaneously.

    I'm guessing this sympathetic resonance model would only apply to a few seconds' worth of experiences?

  • Elise G

    This is very interesting. I think we can’t remember short-term memories as well because we do not cluster them the way we do in long-term memory. In long-term memory, clustering lets the brain group memories in a pattern that makes them easier to recall; this is why, when you are playing a memory game, relating certain words together helps you remember them. Plus, we might not remember short-term memories as well simply because short-term memory has a limited capacity.




About Neuroskeptic

Neuroskeptic is a British neuroscientist who takes a skeptical look at his own field, and beyond. His blog offers a look at the latest developments in neuroscience, psychiatry and psychology through a critical lens.

