Evidence for “Unconscious Learning” Questioned

By Neuroskeptic | July 3, 2015 8:55 am

Can we learn without being aware of what we’re learning? Many psychologists say that ‘unconscious’, or implicit, learning exists.

But in a new paper, London-based psychologists Vadillo, Konstantinidis, and Shanks call the evidence for this into question.

Vadillo et al. focus on one particular example of implicit learning, the contextual cueing paradigm. This involves a series of stimulus patterns, each consisting of a number of “L” shapes and one “T” shape in various orientations. For each pattern, participants are asked to find the “T” as quickly as possible.

Some of the stimulus patterns are repeated more than once. It turns out that people perform better when the pattern is one that they’ve already seen. Thus, they must be learning something about each pattern.

[Image: example contextual cueing search display]

What’s more, this learning effect is generally regarded as unconscious, because participants typically cannot consciously remember which patterns they’ve seen. As Vadillo et al. explain:

Usually, the implicitness of this learning is assessed by means of a recognition test conducted at the end of the experiment. Participants are shown all the repeating patterns intermixed with new random patterns and are asked to report whether they have already seen each of those patterns. The learning effect… is considered implicit if… participants’ performance is at chance (50% correct) overall.

In most studies using the contextual cueing paradigm, the learning effect is statistically significant (p < 0.05), but the conscious recognition effect is not significant (p > 0.05). Case closed?

Not so fast, say Vadillo et al. The problem, essentially, is that the lack of recognition might be a false negative. As they put it:

Null results in null hypothesis significance testing are inherently ambiguous. They can mean either that the null hypothesis is true or that there is insufficient evidence to reject it.

In contextual cueing and other unconscious learning paradigms, a negative (null) result forms a key part of the claimed phenomenon. Unconscious learning relies on positive evidence for learning and negative evidence for awareness.

Vadillo et al. say that the problem is that negative results are

surprisingly easy to obtain by mere statistical artefacts. Simply using a small sample or a noisy measure can suffice to produce a false negative… these problems might be obscuring our view of implicit learning and memory in particular and, perhaps, implicit processing in general

They reviewed published studies using the contextual cueing paradigm. A large majority (78.5%) reported no significant evidence of conscious awareness. But, pooling the data across all of the studies, there was a highly significant recognition effect, with a Cohen’s dz of 0.31 – small, but not negligible.

Essentially, this suggests that the reason why only 21.5% of the studies detected a significant recognition effect is that the studies just didn’t have large enough sample sizes to detect it reliably. Vadillo et al. show that the median sample size in these studies was 16, and the statistical power to detect an effect of dz = 0.31 with that sample size is just 21% – which, of course, is exactly the proportion of studies that did detect one.
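
For anyone who wants to check that arithmetic, here is a minimal sketch of the power calculation in Python (using statsmodels; the exact figure will depend on the precise test assumed, e.g. one- versus two-sided):

```python
from statsmodels.stats.power import TTestPower

analysis = TTestPower()

# Power of a one-sample (paired) t-test to detect dz = 0.31 with n = 16
# at alpha = .05 (two-sided) -- roughly 0.2, in the region of the ~21%
# figure reported by Vadillo et al.
print(analysis.power(effect_size=0.31, nobs=16, alpha=0.05))

# Sample size needed to reach 80% power for the same effect size
# -- somewhere in the region of 80+ participants
print(analysis.solve_power(effect_size=0.31, power=0.80, alpha=0.05))
```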

It seems therefore that people do have at least a degree of recognition of the stimuli in a contextual cueing experiment. Whether this means the learning is conscious as opposed to unconscious is not clear, but it does raise that possibility.

Vadillo et al. emphasize that they’re not accusing researchers of using small sample sizes “in a deliberate attempt to deceive their readers”. Rather, they say, the problem is probably that researchers are just going along with the rest of the field, which has collectively adopted certain practices as ‘standard’.

This is actually a decades-old debate. For instance, over 20 years ago the senior author of this paper, David Shanks, co-authored a review (Shanks and St John, 1994) of the evidence for implicit learning in several psychological paradigms, concluding that “unconscious learning has not been satisfactorily established in any of these areas.”

I would say that in general, there is an asymmetry in how we conventionally deal with data. We hold positive results to higher standards than negative ones (i.e. we require a positive result to be <5% likely under the null hypothesis, but we don’t require a negative result to come from a test with >95% statistical power).

This asymmetry generally ensures that we are conservative in accepting claims. But it has the opposite effect when a negative result is itself part of the claim – as in this case.

Vadillo MA, Konstantinidis E, & Shanks DR (2015). Underpowered samples, false negatives, and unconscious learning. Psychonomic Bulletin & Review. PMID: 26122896

  • D Samuel Schwarzkopf

    Interesting topic! Similar points can probably be made about a much larger literature about unconscious stimulus processing in vision science. In those experiments you often include a control experiment to test whether participants were aware of some critical stimulus attributes – and non-significant discrimination performance is taken as evidence that they weren’t.

    My own way to deal with this is to have a fairly large number of trials in such control experiments. They should have the power to detect subtle above-chance performance at the single participant level. If not then you can’t really trust the findings. (This is incidentally also what is wrong with a lot of the psi literature, most notably Bem’s (in)famous experiments: I think you just can’t believe an average of 51% correct is real even with an n=100 when the number of trials *per* participant was 12!)

    In one of our as-yet unpublished experiments that studied unconscious processing we actually had subtle above-chance performance in such a test – suggesting that the participants were aware of *something* or at least aware in some trials. This could have easily been missed with a less sensitive design.
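
    To put rough numbers on the trial-count point, here is a minimal sketch (assuming a one-sided exact binomial test against chance at p < .05; the trial counts are arbitrary examples):

    ```python
    from scipy.stats import binomtest

    # Smallest proportion correct that reaches p < .05 (one-sided) for a
    # single participant in a 2AFC awareness check, per number of trials
    for n_trials in (12, 24, 100, 400):
        for k in range(n_trials // 2, n_trials + 1):
            if binomtest(k, n_trials, p=0.5, alternative='greater').pvalue < 0.05:
                print(f"{n_trials:4d} trials: need {k}/{n_trials} = {k/n_trials:.0%}")
                break
    ```

    With only a dozen trials a participant has to score in the region of 80% correct before the test can pick it up, so weak but genuine awareness is almost guaranteed to look like chance.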

    • http://blogs.discovermagazine.com/neuroskeptic/ Neuroskeptic

      Thanks, that’s a very interesting perspective.

      Playing the Devil’s Advocate for a second, though, do we need to question whether very subtle (small effect size) conscious awareness can “explain” the implicit learning effects if the latter are more robust?

      Can a small effect make us change our minds about the nature of a big effect?

      • D Samuel Schwarzkopf

        Good point. I guess I was thinking about it more in the context of the unconscious processing experiments. In those, the main effect size of interest used to argue for unconscious processing is often very small as well (although not always).

        Anyway, I am not sure that it follows that a weak awareness effect cannot result in strong learning effects. The claim about unconscious processing of any kind is that it occurs without *any* awareness. If there can be any doubt that participants were actually aware, even if just partially or in some trials, the claim is essentially refuted. At the very least you’d have to have a control experiment in which the visibility is matched to see whether the effect isn’t similar in that situation. (Sorry, I may not be very clear here – too hot to think!)

        • Temp

          Might it also be possible that when a participant in an experiment is aware of a stimulus on one trial it affects processing on other “unconscious” trials? In this way, a small effect could be parlayed into a larger one? There was a recent article in Consciousness and Cognition about such a thing.

          • D Samuel Schwarzkopf

            Not sure, but I wouldn’t rule it out. Certainly another factor that is important is that when you test for awareness you need to replicate the conditions from the main experiments as accurately as possible. There was a nice study not long ago showing that when you just test the purportedly invisible condition observers may be at chance – but when you intermix the invisible and visible trials they will perform above chance. They called that “priming of awareness”:

            http://www.ncbi.nlm.nih.gov/pubmed/24474824

      • Miguel Vadillo

        Just a brief comment about the small size of effects in awareness tests. In contextual cueing experiments the learning effect is measured across hundreds of trials. Awareness, in contrast, is measured with only a few trials (typically around 24). Given this asymmetry, it is hardly surprising that the effect size of awareness is smaller than the effect size of learning, because the latter is less affected by noise. As we discuss in the paper, this is evident even in situations where learning is clearly conscious. In “explicit” contextual cueing paradigms, the effect size of awareness can be well above d = 1. But the learning effect can easily reach d = 6. In any case, thanks for discussing our paper!
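
        A quick simulation sketch of that asymmetry (purely illustrative: the 55% “true” accuracy is hypothetical and between-participant variability is ignored) shows the same per-trial effect yielding a much smaller dz when each person contributes ~24 trials than when they contribute hundreds:

        ```python
        import numpy as np

        rng = np.random.default_rng(0)

        def simulated_dz(n_trials, true_p=0.55, n_participants=30, n_sims=2000):
            # Cohen's dz for accuracy-above-chance when each participant's
            # score is the mean of n_trials Bernoulli(true_p) trials
            dzs = []
            for _ in range(n_sims):
                scores = rng.binomial(n_trials, true_p, size=n_participants) / n_trials
                diff = scores - 0.5  # deviation from chance
                dzs.append(diff.mean() / diff.std(ddof=1))
            return float(np.mean(dzs))

        # Same underlying per-trial effect, different numbers of trials per person
        for n_trials in (24, 400):
            print(f"{n_trials:4d} trials/person -> dz ~ {simulated_dz(n_trials):.2f}")
        ```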

        • http://blogs.discovermagazine.com/neuroskeptic/ Neuroskeptic

          Thanks, that’s very interesting!

          • Miguel Vadillo

            I hope so :) Thanks!

  • http://www.sowi.uni-kl.de/psychologie Thomas Schmidt

    The whole problem has already been solved: We can demonstrate
    unconscious learning or perception WITHOUT using unconscious stimuli!
    Here is how (Schmidt & Vorberg, 2006). 1) Find a measure that
    indicates that learning or perception have taken place, e.g., some
    priming effect in response time or accuracy. 2) Measure this effect
    while varying the visibility of the critical stimulus experimentally,
    e.g., by visual masking or the control of attention. 3) Try to find
    an experimental manipulation that INCREASES the indirect priming
    effect while at the same time DECREASING the visibility of the prime.
    In vision, such double dissociations have been demonstrated using
    response priming and metacontrast masking, which can develop in
    opposite directions when the time between prime and target is varied
    (Vorberg et al., 2003, and many others). They do not require that the
    critical stimulus is “invisible”, and they generate no trouble
    with “confirming the null hypothesis”.

    • D Samuel Schwarzkopf

      I may be missing something but doesn’t this idea fail to establish that visibility really decreased for the observers? There could well be perceptual learning counteracting the masking. Moreover, how can you call this unconscious when observers are conscious? I would regard that as a binary factor. While there can be partial or subtle awareness, *any* awareness is relevant.
      I should look at your citations though – presumably that makes it clearer.

      • http://www.sowi.uni-kl.de/psychologie Thomas Schmidt

        Double dissociations can (and should) be established on the level of individual observers. The point of our paper is: Can we refute a model which says that there is only one source of information, conscious information, that informs both effects (priming measure and visibility measure)? Or are we forced to assume a second source of information? The logic is that if we observe a double dissociation, we immediately know that it cannot be the case that both measures are monotonically related to only a single source of conscious information. — Put another way: we’re not showing that the information is unconscious, we’re showing that assuming ONLY conscious information cannot explain the data.
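
        A toy rendering of that logic, with entirely made-up numbers:

        ```python
        # If both measures were monotone increasing functions of ONE underlying
        # signal, moving from condition A to B could not raise one measure
        # while lowering the other. (Hypothetical numbers, for illustration.)
        A = {"priming_ms": 10, "visibility_dprime": 1.2}
        B = {"priming_ms": 30, "visibility_dprime": 0.4}

        d_priming = B["priming_ms"] - A["priming_ms"]                   # goes up
        d_visibility = B["visibility_dprime"] - A["visibility_dprime"]  # goes down
        print("consistent with a single-source model:", d_priming * d_visibility >= 0)
        ```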

        • FH

          There is no valid logic by which double dissociations can evidence the presence of a structure or module ( http://csjarchive.cogsci.rpi.edu/2001v25/i01/p0111p0172/00000042.PDF )

          I think the display of experimental control provides all the “evidence” one would need.

          There are always design options for ‘getting rid’ of an expected null result: study the functional form of the trade-off between frequency and recall (higher frequency, higher recall) and frequency and latency (higher frequency, lower latency).

          The latencies at repetition frequencies at which recall is indiscernible from chance (starting at 0 repetitions) should follow a curve with negative growth (slope < 0, i.e. faster responding as repetitions increase) until the repetition frequency at which recall is significantly different from chance is reached. This should be possible to measure in a blocked within-subjects design.

          So a 'slope' < 0 in the 'no recall' range would indicate implicit learning… But if the gain in speed of responses is much larger at repetition frequencies of which a participant is aware (compare slopes between the recall / no-recall frequency ranges), then why 'bother' with evidencing implicit learning in contrast to explicit learning?

          They're both examples of adaptive behavior due to interaction with non-random structure in the environment. That is, after-effects of experienced events can be measured in some, but not all observables, but both are evidence of memory / redundancy somewhere in the system.

  • Wouter

    It has been said before, and I’ll say it again: it’s really time to adopt Bayesian statistics as the standard. The aforementioned problems with null results are non-existent in a Bayesian world.

    • http://www.sowi.uni-kl.de/psychologie Thomas Schmidt

      No, they are perfectly preserved in the Bayesian world. The Bayesian solution comes at the expense of new assumptions. For instance, Bayes factors do not solve the problem of “affirming the null hypothesis”, because the result will depend on assumptions about the prior distribution of effect sizes under the alternative hypothesis. Double dissociations are preferable because they make fewer assumptions about the measurement process.

    • D Samuel Schwarzkopf

      Theoretically, I agree with Thomas Schmidt here – Bayesian inference doesn’t settle the question of what level of support for H0 we should aim for in this case (which depends on the prior). Pragmatically, though, I think you are right, because the default binomial Bayesian test is probably well suited for showing this. In the experiments I described below, this would reveal inconclusive BFs – if you are erring on the side of caution, you shouldn’t interpret those results as confirming the lack of awareness.
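
      For what it’s worth, here is a minimal sketch of such a Bayes factor for a binomial awareness test, assuming a uniform Beta(1,1) prior under H1 (the counts are hypothetical, and, as noted above, the answer depends on that prior):

      ```python
      from math import comb

      def bf01_binom(k, n, p0=0.5):
          # Bayes factor for H0: p = p0 versus H1: p ~ Beta(1,1), uniform prior
          m0 = comb(n, k) * p0**k * (1 - p0)**(n - k)  # marginal likelihood, H0
          m1 = 1.0 / (n + 1)                           # marginal likelihood, H1
          return m0 / m1

      # e.g. 16/24 correct on an awareness test: BF01 comes out close to 1,
      # i.e. inconclusive -- not positive evidence of chance-level performance
      print(bf01_binom(16, 24))
      ```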

  • MNix

    I’m going to point out the dubiously common sense answer here: Subjects are not learning the configurations of the patterns at all. They are learning the locations of the T’s on the patterns and then looking in the familiar locations first when a new pattern is presented. If the pattern is a repeat, they find the T faster. I suspect that if the pattern were entirely different with the T in the same location and orientation, the time would also improve.

    In either case, when subsequently asked if they remember the repeat patterns, the honest answer is no because the subjects did not pay attention to or retain the patterns of the L’s, because they were irrelevant to the search.

