The Synapse Memory Doctrine Threatened?

By Neuroskeptic | December 27, 2014 5:46 am

In a provocative new paper, a group of UCLA biologists say that the leading theory for how memory is stored in the brain needs a rethink. But is it really time to throw out the textbooks?

In their study, published in eLife, authors Shanping Chen, Diancai Cai, and colleagues examined the formation of synapses, the connections between neurons. They used neurons from Aplysia, a sea slug whose rather simple nervous system is popular among learning and memory neuroscientists.

Chen, Cai et al. took two neurons from an Aplysia, one sensory and one motor, and put them together in a dish. When placed together, these two neurons spontaneously grow synaptic connections.

Repeatedly adding 5HT (serotonin) to the dish caused these connections to strengthen – a primitive form of ‘learning’ called long-term facilitation (LTF). The number of synaptic connections (called varicosities) between the sensory and the motor neuron increased rapidly after 5HT ‘training’. This is textbook stuff.

Things got rather interesting, however, when Chen, Cai et al. studied the individual varicosities. They found that 48 hours after a 5HT ‘training’ session, many of the new varicosities that had been formed during training had disappeared. Even some of the original varicosities, the ones that had existed before the training, had also vanished. These ‘lost’ connections were however balanced out by lots of new varicosities, so the total number of connections stayed the same 48 hours after training.

This is hot because it suggests that a ‘memory trace’ was not stored in the form of synapses. Rather, it suggests that the sensory neuron itself has a memory of how many synapses it ought to be forming – with the actual synapses being merely an expression of this memory. This is pretty radical: it amounts to saying that the location of memory is not in the synapses, but (probably) in the cell nuclei of presynaptic neurons.

Chen, Cai et al. ran several other experiments that they say provide evidence against synapse-based memory in Aplysia. For example, they trained Aplysia to be sensitive to repeated mild electric shocks. It’s known that this memory (long term sensitization, LTS) can be ‘erased’ with the help of a drug that blocks synapse formation (a protein synthesis inhibitor). This eliminates the extra synapses that are thought to constitute the memory.

However, strikingly, Chen, Cai et al. show that the memory isn’t actually erased. They say that it can be reactivated by means of a “reminder” stimulus (the green line on the graph below), which implies that the memory wasn’t really lost when the synapses were destroyed. Rather, they say, the non-synapse memory must have persisted inside the neurons themselves.


They conclude that

Long-term memory (LTM) is believed to be stored in the brain as changes in synaptic connections… Here, we show that LTM storage and synaptic change can be dissociated… These results challenge the idea that stable synapses store long-term memories.

But there’s a big problem here. Sure, it’s biologically plausible that a neuron could ‘remember’ the total number of synapses that it forms e.g. via epigenetic regulation of some gene(s) that promotes synapse formation. That would be very cool and Chen, Cai et al.’s results are very cool.

The problem, however, is that a given neuron may form synapses with thousands of others, and it matters which of these connections are strong and which are weak. The pattern, not the total, is important. Yet there’s no known mechanism by which a neuron could store a molecular ‘map’ of its own connections and their differing strengths.

To put it another way, while it’s easy to see how a neuron could ‘store’ a scalar variable using epigenetics, it’s much harder to imagine that it could store a vector of values.
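To make the point concrete, here is a toy sketch (my illustration, not anything from the paper): many different connection patterns share the same total strength, so a cell that 'remembers' only the total cannot reconstruct the pattern.

```python
# Toy illustration of the 'vector problem': if a neuron stores only
# the TOTAL of its synaptic strengths (a scalar), the pattern of
# individual connections (the vector) is unrecoverable.

def total_strength(weights):
    # The single scalar a cell could plausibly store, e.g. via
    # epigenetic regulation of a synapse-promoting gene.
    return sum(weights)

# Two very different connection patterns across four partners
# (arbitrary strength units)...
pattern_a = [9, 1, 9, 1]   # strong onto partners 0 and 2
pattern_b = [1, 9, 1, 9]   # strong onto partners 1 and 3

# ...are indistinguishable from the total alone.
assert pattern_a != pattern_b
assert total_strength(pattern_a) == total_strength(pattern_b) == 20

# Regrowing synapses from the scalar can restore the right NUMBER
# of connections, but cannot decide WHICH partners get the strong ones.
```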

In the case of Aplysia, this ‘vector problem’ doesn’t really arise, but that’s because Aplysia has a minimalistic nervous system. It’s hard to see how Chen, Cai et al.’s non-synaptic memory could work for mice or monkeys, let alone humans. The authors do acknowledge this point:

Could a nonsynaptic storage mechanism based on nuclear changes mediate the maintenance of associative memories, particularly those induced in complex neural circuits in the mammalian brain, where a given neuron may have 1,000s or 10,000s of synaptic partners? An obvious difficulty confronting any hypothetical nuclear storage mechanism in the mammalian brain is how the appropriate number of connections can be maintained in a synapse-specific manner after learning has occurred.

However, they have no answers for this crucial question. Here’s all they say about how it could be resolved:

Possibly, there are nonsynaptic ways for neurons to communicate that ensure specificity of associative synaptic plasticity in the face of the significant lability of synaptic structure documented here.

This is entirely speculative, and in my opinion, there’s no need to posit an unknown (and on the face of it, implausible) non-synaptic signalling mechanism between cells. Rather, it would be more parsimonious to assume that Chen, Cai et al. have discovered a system for dynamically regulating overall synapse density at the level of single cells.

This one-cell mechanism might suffice for memory storage in simple nervous systems like Aplysia. In humans, the Chen-Cai process might be involved not in memory per se but in synaptic scaling, an interesting phenomenon that the authors don't discuss but that their discovery might help illuminate.

Sadly, the media coverage of this paper has glossed over these concerns, helped along by a rather optimistic press release. According to the media, these results might well be relevant to humans and could lead to new hope for dementia patients. Here's HuffPo's take, for instance (where the word "Aplysia" doesn't appear until paragraph #5):

New research offers a glimmer of hope for patients in the early stages of Alzheimer’s… There may be a way for lost memories to be restored in the brain… “As long as the neurons are still alive, the memory will still be there, which means you may be able to recover some of the lost memories in the early stages of Alzheimer’s”, [senior author] Glanzman said.

Frankly, this is a gigantic leap that goes far beyond what Chen, Cai et al. actually found. Until the ‘vector problem’ is resolved I see no reason to throw out the textbooks just yet.

Chen S, Cai D, Pearce K, Sun PY, Roberts AC, & Glanzman DL (2014). Reinstatement of long-term memory following erasure of its behavioral and synaptic expression in Aplysia. eLife, 3. PMID: 25402831

  • edhazer

    Why don’t science writers properly acknowledge the lead author’s name? This is a study from Glanzman’s group carried out by Chen and Cai et al.

    The critique however is spot on.

    • Neuroskeptic

      I wasn’t sure whether to say “Chen and Cai, et al.” or “Chen, Cai et al.” so I decided to go with the latter because it’s shorter (!)

  • Awithonelison

    *sigh* My friends will certainly be sharing the HuffPo link on facebook. I actually think the findings are cooler without the speculation.

    • Neuroskeptic

      The findings are extremely cool. But the speculation risks giving false hope in the face of Alzheimer’s. Then again, maybe a little hope is what we need…

      • Jespersen

        "A future cure for Alzheimer's" is mentioned in literally every single pop neuroscience article ever. I think most sufferers have become immunized to hope by now.

        • Guest

          They’ve forgotten all about it anyway (I’m so sorry but I couldn’t resist)

      • Awithonelison

        Hope, yes. False hope, no. I feel the same way about the recent in-vitro HIV findings. My Mom has Alzheimer’s, and I’m trying to be pragmatic about it. My risk may be increased, may not be. I might be helped by new findings, she definitely won’t. I may be cheered by a new finding, but I’m not going to celebrate until there’s something concrete.

        • Anechidna

          Genetic tendency is not that strongly linked to Alzheimer's, and the trigger for this disorder is not fully known; there is a lot of speculation and hypothesis. Like you, there has been a line of it within my family, but it isn't conclusive enough to cause me to worry.

          • Awithonelison

            I know that the genetic risk is not statistically absolute, but my Dad also has dementia (not sure what type, but not Alzheimer’s) and I’m on medications that also are being studied for correlations with later neurodegenerative diseases. Having tried to get help for my parents for years now and finding out how difficult it is, I’d rather hope for the best and plan for the worst.

          • Anechidna

            In the beginning aluminium was thought to be responsible for Alzheimer's, but now I see it being placed in the category of an autoimmune disorder, like MS. There are several things that can cause Alzheimer's-like states; one of them, they believe, accounts for around 25% of those diagnosed and is fixed by a simple surgical procedure, and hey presto, a normal person. Others are caused by lack of fluids, and I do wonder what role is played by the chemicals in our foods, our laundry products, and the personal care products we never had until marketing decided we needed them.

            Likewise, I was diagnosed with early onset, but that hasn't happened; my health has gone in the opposite direction due to a serendipitous event. A close friend's autoimmune system has decided all hydrocarbons are a negative, and to support their health I, and quite a few others in the family, have given up all deodorant, laundry products, cleaners, and anything with a fragrance. My capacity to stay focused has improved dramatically, to the point where the medics are now saying I don't have that disorder or diagnosis. Surprisingly, I lead a clean and odour-free existence using bicarbonate of soda and vinegar. My sample size is about 25 people and growing at the rate of about 3 per year.

  • feloniousgrammar

    Without a doubt, every cell in our bodies, human and not, has a life of its own and is always at work, and working with others. Unfortunately, as far as memory is concerned, most of us don't have a very good librarian in our brain, so we eventually forget almost everything about most of our lives; and we often can't call up the memories we do have stored when we need them, or think we do.

    The human brain will always be an enormous set of puzzles, which makes it endlessly fascinating; but the game of hyping cures for diseases is a cruel one.

    • Clark Schulze

      True memory is not like a recorded video. It changes and re-forms continuously over time. When we think about a memory we add new wet clay to the memory statue, giving it new shapes. Memories cannot be stored like a simple recording and looked up by a librarian; each time a memory is accessed it is changed a little. Memories of pumpkin pie at Christmas dinner use fact memories to replay the pie memory: the often-reinforced memory of how pumpkin pie tastes, and reinforced memories of family Christmas gatherings, themselves reinforced by new gatherings, photos, old movies, or other people's stories. No librarians.

  • Jespersen

    I have an annoying tendency to mentally translate what I read about biological neural networks into *computational* neural networks (which adopt an equivalent doctrine: all learning is established by adjusting connection weights) and your critique is spot-on: unless the neurons can magically store vectors, nothing of this sort could possibly work for the kind of learning you would expect of the human neocortex.
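Jespersen's parallel can be made concrete with a minimal sketch (my toy example, not from the paper): in an artificial network the learned 'memory' just is the weight vector, so zeroing the weights erases the learned behaviour, with no hidden copy to restore it from.

```python
# Minimal perceptron learning the AND function: in an artificial
# network, the learned 'memory' lives entirely in the weights.

def predict(w, b, x):
    return 1 if w[0]*x[0] + w[1]*x[1] + b > 0 else 0

data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]

w, b = [0.0, 0.0], 0.0
for _ in range(20):                 # classic perceptron update rule
    for x, target in data:
        err = target - predict(w, b, x)
        w[0] += err * x[0]
        w[1] += err * x[1]
        b += err

assert all(predict(w, b, x) == t for x, t in data)      # learned AND

w, b = [0.0, 0.0], 0.0              # 'destroy the synapses'
assert not all(predict(w, b, x) == t for x, t in data)  # memory gone
```

There is no analogue here of the Aplysia result: once the weight vector is wiped, nothing in the unit can regenerate it, which is exactly why the paper's reinstatement finding is surprising.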

  • Marko

    “They say that it can be reactivated by means of a “reminder” stimulus…which implies that the memory wasn’t really lost when the synapses were destroyed. Rather, they say, the non-synapse memory must have persisted inside the neurons themselves.”

    These results may be explained in the light of the recent hypothesis that long-term memory could be (non-synaptically) stored in the extracellular matrix, which could also provide a necessary mechanism for the formation and maintenance of specific synapses.

    • Neuroskeptic

      That’s a very interesting suggestion.

      However I would say that such an ECM mechanism would represent, perhaps not synaptic storage per se, but certainly it would be a form of memory located amongst the synapses, rather than in the nucleus of the presynaptic cell as this paper argues.

  • Comment_Comment

    Good summary and analysis of this interesting paper. Two important points to also consider.

    First, the behavioral training used is known to *not* produce synaptic outgrowth (Wainwright et al., 2001, PMID: 12019331). This makes the interpretation of non-synaptic storage very precarious.

    Second, looking at the figures where synapses in culture were tracked over time, it is very hard to make sense of what is vs. is not identified as a synapse. There was no second rater making these decisions, and therefore no measure of inter-rater reliability. This suggests some caution in the finding of synaptic turnover in the culture experiments, as this apparent lack of specificity could possibly be due to poor reliability in identifying synaptic contacts and tracking them across measurements. For such a provocative finding, it would have been great to have evidence that the synaptic changes observed were characterized reliably.

    • Neuroskeptic

      Thanks for the comments, Comment_Comment. Your second point is especially interesting – perhaps some kind of automated image analysis (“blob detection”) would be useful here to avoid any possibility of rater bias.

  • Zenstrive

    Memories are stored in DNA..

    • Christopher Roditis

      hopefully not or your children will remember you writing this

      • Ph.C.

        Could be epigenetic, which are, in the most part, not transferred to subsequent generations.

  • Ph.C.

    eLife is an amazing open-access scientific journal. One of the things that makes it so remarkable is that they display the comments of the reviewers and the responses of the authors. The reviewers state clearly one of my main concerns with the conclusion drawn by the authors, who say "Here, we show that LTM storage and synaptic change can be dissociated…". The comment is the following:

    “The language used in the current manuscript has the potential to confound and confuse. For instance the statement that “LTM is independent of synaptic change in Aplysia” is certain to confuse. What is meant is that memory is covertly encoded and stored, possibly in the cytoplasm and possibly in the nucleus; this memory signal induces synaptic changes necessary for memory expression. For a point as subtle, specific and interesting as this, loose statements such as “memory is independent of synaptic change” are confusing as well as unproductively provocative. The exaggeration of the radical nature of these results sometimes make them appear unnecessarily paradoxical, potentially reducing their impact.”

    In my perspective, their conclusion is a very bold claim. Memory is certainly NOT independent of synaptic plasticity, but perhaps is not solely driven by it.

    • David Glanzman

      I heartily agree with Ph.C. about the excellent features of eLife as a scientific journal. Also, the review of our paper was quite thorough (indeed, perhaps the most thorough review that any of my papers has been subjected to) but also extremely fair. I highly recommend eLife to neuroscientists who might be considering whether to submit their manuscripts there.

      Ph.C. is mistaken, however, regarding the eLife reviewer’s comments; those comments do *not* refer to the sentence in the abstract of the published paper cited by Ph.C.; rather, they refer to a sentence in the original manuscript. We agreed with the reviewer’s comments and revised the sentence accordingly. The language of the final sentence, whose point Ph.C. disagrees with, was acceptable to the reviewer.

      Regarding the issue of synaptic change and memory, I certainly agree that memory involves synaptic plasticity. In fact, I have worked for the last 34 years in the field of synaptic plasticity and memory. In our eLife paper, however, we distinguish between the expression of long-term memory (LTM) and the storage of LTM. The expression of LTM undoubtedly involves synaptic alteration. But the evidence in our paper suggests that LTM is not stored as synaptic change, because the synaptic changes that occur during long-term learning can be reversed, yet the LTM persists. This is indeed a "very bold claim," and we stand by it.

      • Ph.C.

        This is an excellent reply, thank you for the clarification. I misunderstood the intended message in the quoted claim and made false assumptions accordingly. I also must admit I laughed when you said you have "worked for the last 34 years in the field of synaptic plasticity and memory"; certainly you wouldn't trow the synapses after so much time studying them!

        Looking forward to subsequent studies.

        • David Glanzman

          Thanks for your comment. I’m glad to have given you a chuckle. (Sorry, but I don’t know the meaning of the word “trow”.)

          We’re working on additional experiments now, so I hope to have some new results soon.

          • Ph.C.

            Throw away was meant to be written*.

  • Curt Welch

    I don't know much about biology, and mostly work with computer neural learning networks (not meant to actually emulate real biological neurons). But might I suggest that there are very likely at least two separate types of learning at work in the brain at the level of individual neurons. Perception-based learning, or pattern learning, is likely driven by the data, i.e. by the activity of the neurons and synapses; aka classical conditioning. The other must be a form of reinforcement learning that is driven by an external control (reward) signal; aka operant conditioning.

    The learning described in this paper, where neurons learn how many synapses to maintain under the serotonin control signal, sounds like it fits the learning needed for operant conditioning. It's a form of training individual cells "how to act".

    But if that is combined with a form of learning that's triggered by synaptic activity, then there may not be a need for any other form of "vector" memory. So if one learning mechanism controls the strength and growth of synapses based on cell A making cell B fire, or not fire, and cell B then regulates all its synapses via the second type of learning system, then activity that causes some synapses to grow will be balanced by the other learning reducing synapses elsewhere on the neuron.

    The current computer networks I experiment with require two types of learning like this working together in each cell, and it seems very reasonable that real biology must follow a similar path.

    One type of learning is needed to make the network as a whole adjust to the patterns found in the sensory data (really a type of data-compression learning). This sensory data-compression learning shows up externally as classical conditioning effects in our behavior. When two sensory signals correlate, they "wire together" as a form of data compression in the network; that is, they come to act as "the same thing" logically.

    The second type of learning, reinforcement learning, is what drives our behavior. It controls what the system "likes" to do. It defines the optimization target for our behavior. It's what makes us eat food instead of eating rocks.

    The first form is how the network learns to "see" and "understand" the world; the second is how the network learns "what" to do. To cope with the scaling problems inherent here, I believe both forms of learning must be happening at the same time, in each neuron. It can't be divided at the macro level into separate larger clusters of neurons forming two different learning "modules". Both types of learning must take place at a micro level, possibly in the same neuron at the same time.

    So it sounds to me, that this work is just helping to uncover the idea that there is more than one type of learning system at work at the same time in the brain. Not that there is only “one” type of learning, and that it has moved from the synapse, to inside the cell.

    • Neuroskeptic

      How interesting, thanks for the comment.

      Are you saying that in a neural network with those two learning mechanisms, learning can operate purely on the level of overall synaptic strengths of individual cells, with no need to operate (directly) on the vectors of connection weights?

      Or have I misunderstood…?

      • Curt Welch

        I think operant conditioning _might_ be able to operate on the overall synaptic strength of an individual cell. That is, if we think of the cell firing as the "behavior" being selected by conditioning, then when a behavior is rewarded it is strengthened, which simply means it's more likely to happen again in the future. In other words, the average activity of the cell will increase relative to the activity of other cells that have been "punished" (had their activity reduced).

        But then classical conditioning would need to operate as a different kind of learning on the same cell. Classical conditioning is basically the idea expressed as "neurons that fire together, wire together", or at least a close parallel to that concept.

        Operant conditioning would need to be triggered by some global reward signal, such as chemicals in the brain acting on large groups of neurons in parallel. But how the chemicals affect each neuron would be biased by how active the neuron had recently been: the more recently active the neuron, the more the chemical would change its behavior. This idea fits the serotonin signal as described in this paper.

        But classical conditioning could be happening as well, triggered by the activity of the synapses, with the learning implemented (for example) as changes to synaptic strength.

        So we could have one learning process at work changing individual synaptic strengths, implementing classical conditioning, and a second, reward-based learning process at work controlling the total number of synapses (and, as a result, the average activity level of the neuron).

        In the computer learning algorithms I've worked with, I've had to implement these two different types of learning in the same "neurons" using different parameters, so that one learning algorithm adjusted one parameter and the other adjusted another.

        Let me also point out (to help people understand what I’m thinking above) that I believe that the way to make a network of neurons work together, so that we see classical and operant conditioning emerge at the macro level (network as a whole), is to build the network out of smaller learning modules, that each implement on their own, classical and operant conditioning without any need to understand what the rest of the network is doing.
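Curt Welch's two-channel idea can be sketched as toy code (a hypothetical illustration of his proposal, with made-up parameters, not anything from the paper or his actual software): each unit carries a per-synapse weight vector updated by activity, plus a single reward-modulated scalar gain.

```python
# Toy unit with TWO learning channels: per-synapse Hebbian weights
# (a vector, driven by activity) and a single reward-modulated gain
# (a scalar, like a serotonin-controlled excitability setting).

class TwoChannelUnit:
    def __init__(self, n_inputs):
        self.w = [0.1] * n_inputs   # per-synapse weights (the vector)
        self.gain = 1.0             # whole-cell excitability (scalar)

    def activity(self, x):
        return self.gain * sum(wi * xi for wi, xi in zip(self.w, x))

    def hebbian_step(self, x, lr=0.1):
        # 'Fire together, wire together': each weight grows in
        # proportion to co-activity of its input and the cell's output.
        y = self.activity(x)
        self.w = [wi + lr * xi * y for wi, xi in zip(self.w, x)]

    def reward_step(self, reward, lr=0.1):
        # A global reward chemical scales the whole cell's output;
        # it carries no information about individual synapses.
        self.gain += lr * reward

unit = TwoChannelUnit(3)
for _ in range(10):
    unit.hebbian_step((1, 1, 0))   # inputs 0 and 1 repeatedly co-active
unit.reward_step(1.0)              # then a global 'reward' pulse

assert unit.w[0] > unit.w[2]       # the pattern lives in the vector
assert unit.gain > 1.0             # reward touched only the scalar
```

On this sketch, destroying and regrowing synapses under the scalar channel would indeed lose the vector channel's pattern, which is the blog post's 'vector problem' restated in code.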

    • Alan Eskuri

      I really like this analogy and it makes sense, even to a mechanical engineer with rudimentary knowledge of computer science. I’m also a bit of a neuroscience hack with a focus on autism spectrum conditions. Recent work has shown a lack of synaptic pruning in autism – so would this correlate to an inability to “unlearn”?

      • Curt Welch

        No clue. All learning is by definition also a process of unlearning; if something inhibits "unlearning", it's also going to inhibit learning to the same extent. I'm not aware that autistic people lack the ability to keep learning over time.

        It strikes me that autism (and the very little I know about it comes mostly from experience with a few autistic people) is more a case of a brain that is "tuned" so as to give it a different range of learning power than a more typical human brain. That is, our brains always limit what we can learn. Just as one person may have strong innate visual learning skills and weak auditory skills, and another strong auditory skills but weak visual skills, or one has strong concrete sensory learning (photographic memory) but very weak abstract learning while others have the reverse, our brains are tuned in many different ways to control the scope of what is easy and hard for each of us to learn.

        I would strongly suspect autistic people have a brain that is simply tuned very differently from the rest of us in the range of what they find easy and hard to learn. As we learn more about the specifics of how the brain implements these different types of learning, I think it will become clear just what sort of physical differences exist in an autistic person that give them these different sets of strengths and weaknesses. But I don't know enough about autism, or the brain, to even hazard a guess as to what is at work.

    • Anonymouse

      Classical conditioning and operant conditioning are both forms of reinforcement learning and don’t differ in whether a cue is paired with a reward, but with respect to the kind of cue that is being learned as a predictor – a voluntary action of the subject (operant conditioning) or an involuntary experience of the subject (classical conditioning).

      I don’t understand your post. This is what I think you’re saying about your ANNs: There are two learning mechanisms, which learn the contingencies of the world vs. how to manipulate those to reach your goals, like feeding on something. That would be called cognitive control and I don’t see how it relates to the paper at all.

      What you’re proposing with respect to memory formation is a decoupling of the learning of the structure of synaptic connections (“which neurons wire with which others”) and the (serotonin-driven) learning of the number (or strength) of connections. Yet, if those connections are lost, where is the information which neurons wire together stored? What is the gain of that decoupling?

      • Curt Welch

        Operant and classical conditioning certainly aren't just two different names for reinforcement learning. Only operant conditioning is reinforcement learning; classical conditioning is a type of unsupervised learning.

        Operant conditioning controls the association between stimulus and action. It controls what behavior is selected for each type of stimulus, or more accurately, the probability distribution over the behaviors selected. It works by an agent producing a behavior in response to a stimulus, which is followed by a reward or punishment that strengthens or weakens that stimulus->response association.

        So with operant conditioning (learning with rewards), we can train the agent to bark in response to a red light, and roll over in response to a green light. We do it by punishing and rewarding the behaviors after the agent has selected them.

        Classical conditioning on the other hand, is a process by which stimulus signals are linked together.

        So let's say we have three lights that act as stimulus signals and, using rewards, we train the agent to bark in response to the red light and roll over in response to the green light, but we don't train any action in response to the blue light.

        If we light up the blue light, the agent in effect just ignores it as if it were not even there. It doesn’t make the agent bark, or roll over.

        Once trained, we activate the red light and the agent barks, but we don't do any more training with rewards. We just activate the red light, and the agent barks as trained. But now we activate the blue light at the same time as the red light. Every time the red light comes on, we also turn on the blue light, and because the red light is coming on, the agent barks as trained.

        But in time, the agent, through classical conditioning, will start to treat the blue light as if it were the same as the red light, because the two have been paired together.

        And in time, we can activate the blue light without the red light, and the agent will bark just as if we had turned on the red light.

        We transfer the programming of the red light to the blue light by classical conditioning. We trained the agent to bark in response to the blue light without using any external rewards or punishments. We trained it to act as if the red light and the blue light "were the same stimulus", because for the most part we always turned the red and blue lights on at the same time.

        The two different stimulus signals (red and blue) became associated by classical conditioning.

        So how the agent interprets complex stimulus signals is controlled by classical conditioning — which is a type of learning that is driven by the correlations of the input signals.

        But what response the system produces is controlled by reinforcement — external rewards and punishment events.

        For example, the front of a cat (its face, ears, eyes, and whiskers) has almost nothing in common with its legs, claws, and tail.

        But when we see the back end of a cat, we nevertheless recognize it as a "cat". We see it as the same type of stimulus as when we see the face of the cat, even though these two visual stimulus signals are 100% different. These totally different stimulus signals have been associated as being "the same thing", aka a cat.

        Learning to do this type of pattern recognition is not a reinforcement learning problem; it requires no rewards or punishments. It's learned by association, by the fact that these two very different stimulus signals happen close together in TIME. They have high temporal correlation. The system learns that when it sees the pattern we know of as a cat face, there's a high probability it will also detect a cat tail very soon, and inversely, if it sees the cat-tail pattern, it knows there's a high probability it will soon see a cat face. The end result is that these two stimulus signals become associated because they are highly predictive of each other.

        Once associated like this, whatever S->R mapping the agent learns through reinforcement will be applied to all the stimulus signals that are closely related.

        This stimulus association through temporal correlations is, I claim, the very foundation of how perception in the brain works. And it's nothing other than classical conditioning. It's all driven by temporal associations of stimulus signals.

        Operant conditioning is driven by associations of rewards and punishments with S->R actions.

        If you can express both types of learning as one algorithm, that would be great, but I don’t grasp how that would happen (though I have mused over the idea myself). Classical conditioning works without the application of any rewards or punishments. It’s how the brain’s perception network is built from the statistical properties of the raw sensory data.

        The network can learn to detect "cats" simply by being exposed to cats, with no rewards or punishments at all. In other words, the "cat" concept is an innate property of the sensory data, not something that has to be trained into the agent by rewards.

        What sort of behavior the agent produces in response to a cat, however is trained by rewards.

        I claim that these two types of learning are both necessary and sufficient to solve the problem of strong AI: both are needed, but nothing more is needed to create artificial general intelligence (AGI).

        • sometimes_science

          You are slightly confused about classical and operant conditioning. The Rescorla-Wagner model, which is a precursor to temporal-difference reinforcement learning and which also forms the basis of our understanding of dopamine spike firing in the midbrain, is a model of classical conditioning. It explicitly predicts that if you subsequently pair your red and blue lights together after learning to bark to the red light, you will NOT learn to bark to the blue light. This is called the Kamin blocking effect, and it is one of the most important findings in experimental psychology of the last 100 years.
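          A quick sketch of how the Rescorla-Wagner update produces blocking (the learning rate and trial counts here are arbitrary choices of mine, not taken from any particular experiment): every stimulus present on a trial is updated by the *shared* prediction error, so a stimulus introduced after another one already fully predicts the US learns almost nothing.

```python
# Rescorla-Wagner: each stimulus's associative strength V is updated by
# the shared prediction error (lambda minus the summed V of all stimuli
# present on the trial).
def rescorla_wagner(trials, alpha=0.3, lam=1.0):
    """trials: list of (set_of_stimuli, us_present) pairs."""
    V = {}
    for stimuli, us in trials:
        error = (lam if us else 0.0) - sum(V.get(s, 0.0) for s in stimuli)
        for s in stimuli:
            V[s] = V.get(s, 0.0) + alpha * error
    return V

# Phase 1: red light alone is paired with the US until fully predictive.
# Phase 2: red + blue compound is paired with the same US.
phase1 = [({"red"}, True)] * 50
phase2 = [({"red", "blue"}, True)] * 50
V = rescorla_wagner(phase1 + phase2)

# Red has absorbed nearly all the associative strength, so there is no
# prediction error left for blue to learn from: the Kamin blocking effect.
print(round(V["red"], 3), round(V["blue"], 3))  # prints 1.0 0.0
```

          The point is that classical conditioning here is error-driven, not driven by raw temporal co-occurrence alone — which is exactly why blocking occurs.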

  • SergioPissanetzky

    It would be interesting to know whether the total *length* of the new dendrites is shorter than that of the original ones. It is now known that dendritic trees in brains are optimally short. Could this be the mechanism that *makes* them short? If each dendrite is replaced by an equivalent but shorter dendrite, then the total number remains the same and the tree becomes optimally short. This idea may also answer the vector problem. In any case, *the same number* is an enormous discovery.

  • David Glanzman

    My coauthors and I are grateful to Neuroskeptic for this excellent summary of our paper. Also, I agree with Neuroskeptic that a major problem for a nonsynaptic, epigenetic model of memory storage like the one we propose is the lack of a concrete mechanism for preserving synapse specificity when the lost synapses are regenerated (referred to by Neuroskeptic as the “vector problem”). In the Discussion of the original manuscript we proposed that neurons might communicate the necessary “vector” information to each other through exosomal exchange of small non-coding RNAs (microRNAs, piRNAs, etc.); the reciprocal epigenetic changes induced by the small non-coding RNAs in the neurons of a given neural network might provide the “molecular map” of the appropriate connections and their differing strengths necessary to account for our data. The reviewers judged these comments as too speculative and recommended, no doubt rightly, that they be eliminated from the final manuscript. In any case, the absence of a nonsynaptic mechanism for preserving synapse specificity is admittedly a significant difficulty for our model.

    I respectfully disagree, however, with Neuroskeptic’s statement that the vector problem doesn’t arise in Aplysia due to its minimalist nervous system. Yes, the Aplysia central nervous system is small, but it’s not that small, possessing approximately 20,000 neurons. Also, response specificity has been demonstrated for classical conditioning in the intact Aplysia (see R. D. Hawkins et al., PNAS, 1989); therefore, some degree of synapse specificity is required in the neural network that mediates this form of associative learning. It is true, however, that synapse specificity doesn’t arise in the case of our cell culture system, which comprises just one presynaptic sensory neuron and one postsynaptic motor neuron.

    • Neuroskeptic

      Many thanks for the comment! I’m glad you thought that my summary was an accurate one. It’s a fascinating study. As soon as I saw the abstract it went right to the top of my “to blog” list!

      Regarding the “vector problem”, you point out that response specificity has been observed in Aplysia. But I wonder whether this specificity would survive the reconsolidation blockade + reminder procedure that you used in this study?

      Might it be that the “recovered” memory is actually different from the original (specific) memory trace and is more of a generalized oversensitivity of the whole circuit? That might be interesting to look at.

      • David Glanzman

        Thanks very much for your interest in our paper. Much appreciated!

        We don’t know whether or not the response specificity observed in classical conditioning in Aplysia would survive reconsolidation blockade. We have not yet investigated whether the memory for classical conditioning can undergo reconsolidation, although I would guess that it certainly can. (So far, we have only looked at the effect of reconsolidation blockade on the memory for sensitization, a nonassociative form of learning.) It’s a good question.

        The question of whether the recovered memory differs from the original memory trace is important. It’s related to the question of whether the modest additional training that we found to “restore” the long-term memory (LTM) might, instead, have formed an entirely new memory. The argument against this idea is that the training did not induce LTM in naive (untrained) animals. However, one could argue that reconsolidation blockade left behind some “priming” mechanism that facilitates the induction of new LTM.

        To provide definitive answers to these questions will require repeating our cellular/synaptic investigations in the intact animal, which will not be as easy as it sounds!

  • MOnodb Bart

    What is the code in which LTM is written? There is a possible key . . .

  • Comment_Comment

    Science 2.0 — awesome that you’re online explaining and clarifying this cool paper.

    Fair point about the Wainwright paper. But is there any direct evidence that the protocol you used produces synaptic outgrowth?

    As for the classification of synapses: I was commenting on the reliability of classifying new vs. old synapses, which is the measure of central importance in the culture studies. It’s not about blinding (which helps ensure validity and was used), nor about automation (though that would be cool), nor even about the reliability of the synaptic counts (which does seem well established). My point is that only one observer classified the synapses as new vs. old, and it is not completely clear how they did it nor whether the classification is reliable. Establishing the reliability of the new vs. old classifications is essential here, and wouldn’t have required much investment of time or effort. Maybe the data are online so the classification can be crowdsourced?

    • David Glanzman

      Thank you for your comments.

      No, there is no direct evidence that the specific sensitization training protocol we used produces synaptically related structural growth in the animal; all of our morphological investigations were performed using sensorimotor cocultures. Having said that, there is a wealth of evidence from the work of Craig Bailey and Mary Chen that training protocols, like ours, that result in long-term (> 24 hr) behavioral sensitization produce enduring increases in the number of synaptic varicosities within the abdominal ganglion of intact Aplysia (see, for example, Bailey and Chen, J. Neurosci., 1989).

      Regarding the classification of synaptic varicosities as new or old, I don’t think that is as difficult a judgment to make as you believe. If you examine the sample fluorescence micrographs in our paper I think you will find that in most cases it’s pretty evident whether or not a varicosity is new or old. A skilled observer can make this judgment fairly readily. What is admittedly difficult, on the other hand, is deciding whether or not a specific thickening on an axonal branch is or is not a varicosity; that is why we developed the specific criteria for varicosity classification described in our methods. Also, we counted every varicosity that contacted the postsynaptic motor neuron, regardless of its size, as a single varicosity. We realize this lends some imprecision to our measurements, because a large varicosity in a fluorescence micrograph may actually represent several smaller varicosities that cannot be optically resolved due to the spatial spread of the fluorescence signal. (Larger varicosities also contain more active zones than do smaller ones. See S. Schacher et al., Cold Spring Harb. Symp. Quant. Biol., 1990.) This is an inherent problem in quantifying fluorescent microscopic structures.

      I’m sorry, but I don’t believe it’s useful to crowdsource neurobiological data like ours.





About Neuroskeptic

Neuroskeptic is a British neuroscientist who takes a skeptical look at his own field, and beyond. His blog offers a look at the latest developments in neuroscience, psychiatry and psychology through a critical lens.

