Negative Results, Null Results, or No Results?

By Neuroskeptic | June 14, 2016 2:28 pm

What happens when a study produces evidence that doesn’t support a scientific hypothesis?


Scientists have a few different ways of describing this event. Sometimes the results of such a study are called ‘null results’; they may also be called ‘negative results’. In my opinion both terms are useful, although I slightly prefer ‘null’, on the grounds that ‘negative’ tends to draw an unfavorable contrast with ‘positive’ results, whereas ‘null’ makes it clear that these are results in their own right: they are evidence consistent with the null hypothesis.

Yet there’s another way of talking about evidence inconsistent with a hypothesis – such results are sometimes treated as not being results at all. In this way of speaking, to “get a result” in a certain study means to find a positive result. To “get no results” or “find nothing” means to find only null results – which, on this view, have no value of their own, serving only to mark the absence of some (positive) findings.

This ‘non-result’ idiom is common usage in science – at least in my experience – but in my view it’s misleading and harmful. A null result is still a result, and it contributes just as much to our knowledge of the world as a positive result does. We may be disappointed by null results, and we may feel that they are not as exciting as the results we hoped to find, but those reactions are subjective responses, not properties of the results themselves. The view that null results aren’t really results lies at the root of much publication bias and motivates p-hacking.

The only true “non”-results are results of such low quality that they are uninformative, whether because of poor experimental design or errors in data collection. Such failed results may be, on the face of it, either positive or negative.
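
As a minimal illustration of why a null result is still informative, here is a short Python sketch (my own example, with made-up numbers, not part of the original post). A reasonably large two-group study whose true effect is zero will usually come out non-significant, but whatever the p-value, the accompanying confidence interval still tells us something real: the effect, if there is one at all, must be small.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Hypothetical two-group experiment in which the true effect is zero.
n = 200
control = rng.normal(loc=0.0, scale=1.0, size=n)
treatment = rng.normal(loc=0.0, scale=1.0, size=n)  # no real difference

t_stat, p = stats.ttest_ind(treatment, control)

# The 95% confidence interval for the mean difference shows what this
# "null result" actually establishes: any effect is small.
diff = treatment.mean() - control.mean()
se = np.sqrt(treatment.var(ddof=1) / n + control.var(ddof=1) / n)
ci_low, ci_high = diff - 1.96 * se, diff + 1.96 * se

print(f"p = {p:.3f}")
print(f"95% CI for the difference: [{ci_low:.2f}, {ci_high:.2f}]")
# With 200 participants per group the interval is narrow, so the study
# rules out all but small effects: a result in its own right.
```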

  • polistra24

    The null hypothesis is a peculiar artifact anyway. It was set up as a ‘starting point’ that could be disproved by one counterexample, in the hope of mapping biology and behavior onto symbolic logic. Trouble is, you can’t do that. No real question in biology or behavior is EVER certain or precise enough to be firmly disproved with one counterexample. If a question CAN be disproved so easily, it’s a trivial abstraction, not worth the expense of studying.

    • allannorthbeach

      “If a question CAN be disproved so easily,…”
      I’ve found the answer is to just disapprove of the bloody things…no question!

  • NeuroMDL

    “A null result is still a result, and it contributes just as much to our knowledge of the world as a positive result does.” – I strongly agree with the first part of this sentence, but not the second part. In practice, positive results have more inferential power than null results. If you make a positive observation (“I saw a pig flying!”), that’s fairly strong evidence that pigs fly; if you do not make a positive observation (“No flying pigs spotted”), then your result is only as good as your methods (“No flying pigs spotted in Springfield, IL, during the day, in minimally cloudy conditions”). Maybe flying pigs only exist in the rainforests of Kaua’i, or only fly at night. There are nearly always many, many reasons (even aside from flawed methods) that a result might not show up in a given experiment. And that nearly-unavoidable multiplicity of explanations makes null results fundamentally weaker than positive results. Which I think is important to acknowledge.

    This is not to say that null results aren’t valuable – the near-total absence of them in the literature has surely led to many wasted hours and dollars as different labs explore the same dead ends. I have very recent personal experience with this problem, as I have just published a paper on a null result. And it was a slog.


  • allannorthbeach

    “Non” results are far better-known as ‘fizzogs’.

  • https://egtheory.wordpress.com/ Artem Kaznatcheev

    I think it might be productive to draw some conceptual distance between “results” and “measurements”. Experimental results are usually measurements, but measurements need not be results. And a non-experimental result need not be a measurement at all. The examples of null, negative, and non-“results” that you mention are all examples of measurements. And if the method of measurement is sound then all measurements have some utility (as you mention at the end).

    However, for a measurement to become a result I think it needs to inform or develop the theory or model that is under consideration. The whole point of null/negative/non-“results” is that they tend not to make that contribution to the theory, at least not in the paper in which they are presented. As such, they are simply not results, just measurements. Of course, those same measurements might _become_ results in later work that finds ways to integrate them into the narrative of its theory/models.

    I guess my takeaway is that we should try to cut down on the overly optimistic language of “results”, even when we preface it with negatives, and instead write something like “we measured this, and found no results for our model”.


  • Peder Isager

    I absolutely agree with the main argument of the article, and I think much good could come from considering the value of p>.05 results.

    I am however concerned that researchers often misinterpret “null results” in the form of p>.05 as direct evidence for no effect. Since the p-value only reflects our uncertainty about whether the effect size is really larger than zero, a result with p>.05 only means that my study can’t confirm whether the effect is larger than zero or not. This might be because the effect is really null, of course, but it might also be because my N=5, or the variance is enormous, and my poor research design never stood a chance of detecting anything but a huge effect. The p-value cannot tell me which.

    I think null effects are most informative if they are related to theory somehow (e.g. if you tested the FlyingPigsAreEverywhere hypothesis, not finding flying pigs in Springfield might be theoretically interesting). Else I am inclined to agree with @neuromdl:disqus. Also, if I want to argue that a non-significant result be treated as evidence for no effect, I must provide some additional statistic to support this, such as a Bayes factor. Else there is no way of knowing whether my large p value stems from the real effect being null, my N being low, or the population variance being huge, since they all contribute to the p statistic (see the simulation sketch after the comments).

    Perhaps “null effect” is also a misleading concept then, in the same way that “significant effect” often is? I.e. we want to say something about the true effect size (is zero, is large, is important etc.) but use terminology that tells us nothing about this.


  • urilabob

    Given the scope of Neuroskeptic, “his own field, and beyond”, there’s some ambiguity about how widely these comments are intended to apply. My own field, computer science, can in principle be almost entirely mathematised. So it’s not uncommon to see papers with results that were provable consequences of previous knowledge, which the authors simply failed to notice (since hypotheses are often of the form “my wrinkle W will improve algorithm A in metric M”, these pointless results are heavily biased toward null hypotheses). They are unlikely to survive critical review, nor should they. While CS is an extreme, I think any highly mathematised field would have similar issues. I’m happy to accept Neuroskeptic’s views in the biological sciences (and even more so in pharmaceuticals, because of cherry-picking issues). But we do need to recognise that they are not applicable science-wide.

  • https://forbetterscience.wordpress.com Leonid Schneider

    The problem with null results is that absence of evidence is not evidence of absence. What they can disprove, though, is the methodology (when done thoroughly enough), so that other scientists will not waste time on inappropriate approaches when testing a certain hypothesis. And of course null-result studies can demonstrate that someone’s published “breakthrough-novelty-impact” result could not have been obtained by the method described, which is an intriguing finding indeed.

  • joseph2237

    Non-results are misleading. There is always something, even if it has a hundred zeros before a one. Sometimes we go looking for one result and end up with another that we think is out of range, but it is exactly that attitude which limits research to known, acceptable expectations rather than breakthroughs.
    Mathematicians may view zeros as useless, but then again zeros may mean there is something wrong or missing from the equations.
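
To make Peder Isager’s point above concrete, here is a small simulation sketch in Python (my own illustration, not part of the original discussion; the helper name `nonsig_rate` and all numbers are made up). It shows that p > .05 looks much the same whether the effect is genuinely zero or real but tested with far too small a sample, so the p-value alone cannot distinguish “no effect” from “no power”:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

def nonsig_rate(true_effect, n, trials=5000):
    """Fraction of simulated two-group studies that come out 'null' (p > .05)."""
    count = 0
    for _ in range(trials):
        a = rng.normal(0.0, 1.0, n)          # control group
        b = rng.normal(true_effect, 1.0, n)   # treatment group
        _, p = stats.ttest_ind(a, b)
        count += p > 0.05
    return count / trials

# A truly null effect with a decent sample: p > .05 about 95% of the time.
print(nonsig_rate(true_effect=0.0, n=100))
# A real, moderate effect (d = 0.5) with only N = 5 per group: p > .05
# most of the time as well, purely for lack of power.
print(nonsig_rate(true_effect=0.5, n=5))
```

Something like the Bayes factor mentioned in that comment is what would let the same data separate evidence of absence from mere absence of evidence.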

About Neuroskeptic

Neuroskeptic is a British neuroscientist who takes a skeptical look at his own field, and beyond. His blog offers a look at the latest developments in neuroscience, psychiatry and psychology through a critical lens.
