The Tufnel Effect

By Neuroskeptic | April 6, 2011 7:11 am

In This Is Spinal Tap, British heavy metal god Nigel Tufnel says, in reference to one of his band’s less successful creations:

It’s such a fine line between stupid and…uh, clever.

This is all too true when it comes to science. You can design a breathtakingly clever experiment, using state-of-the-art methods to address a really interesting and important question. And then at the end you can realize that you forgot to type one word when writing the 1,000 lines of code that runs this whole thing, and as a result, the whole thing’s a bust.

It happens all too often. It has happened to me three times in my scientific career to date, and I know of several colleagues who have had similar problems. Right now I’m struggling to deal with the consequences of someone else’s little mistake.

Here’s one cautionary tale. I once ran an experiment involving giving people a drug or a placebo. When I crunched the numbers I found, or thought I’d found, a really interesting effect which was consistent with a lot of previous work giving this drug to animals. How cool is that?

So I set about writing it up and told my supervisor and all my colleagues. Awesome.

About two or three months later, I decided for some reason to reopen the original data file, which was in Microsoft Excel. I happened to notice something rather odd – one of the experimental subjects, who I remembered by name, was listed with a date-of-birth which seemed wrong: they weren’t nearly that old.

Slightly confused – but not worried yet – I looked at all the other names and dates of birth and, oh dear, they were all wrong. But why?

Then it dawned on me and now I was worried: the dates were all correct, but they were lined up with the wrong names. In an instant I saw the horrible possibility: mixed up names would be harmless in themselves, but what if the group assignments (1 = drug, 0 = placebo) were wrongly paired with the results? That would render the whole analysis invalid… and oh dear. They were.

As the temperature of my blood plummeted I got up and lurched over to my filing cabinet, where the raw data was stored on paper. Mercifully, it was easy to correct the mix-up and put the data back together. I re-ran the analysis.

No drug effect.

I checked it over and over. Everything was completely watertight – now. I went home. I didn’t eat and I didn’t sleep much. The next morning I broke the news to my supervisor. Writing that e-mail was one of the hardest things I’ve ever done.

What had happened? As mentioned, I had been doing all the analysis in Excel. Excel is not a bad stats package and it’s very easy to use, but the problem is that it’s too easy: it just does whatever you tell it to do, even if this is stupid.

In my data, as in most people’s, each row was one sample (i.e. a participant) and each column was a variable. What had happened was that at some point I’d tried to take all the data, which was in no particular order, and reorder (sort) the rows alphabetically by subject name to make it easier to read.

How could I screw that up? Well, by trying to select “all the data” but actually only selecting some of the columns. Those columns got reordered while the others stayed put, so the rows became mixed up. And the crucial column, drug=1 / placebo=0, was one of the ones I reordered.
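
The failure mode is easy to reproduce outside Excel. Here is a minimal sketch in Python with pandas, on a hypothetical three-subject dataset with invented column names: sorting a single column in isolation silently divorces values from their rows, whereas sorting whole rows keeps everything paired.

```python
import pandas as pd

# Hypothetical dataset: one row per participant, one column per variable.
df = pd.DataFrame({
    "name":  ["Carol", "Alice", "Bob"],
    "group": [1, 0, 1],        # 1 = drug, 0 = placebo
    "score": [14, 22, 9],
})

# The failure mode: "sorting" only one of the columns. The name column is
# reordered on its own, so names no longer line up with groups and scores.
broken = df.copy()
broken["name"] = sorted(broken["name"])

# The safe operation: sort whole rows together, so every value in a row
# stays attached to its participant.
safe = df.sort_values("name").reset_index(drop=True)

print(broken)  # Alice's row now wrongly carries Carol's group and score
print(safe)
```

In pandas this mistake takes deliberate effort; in a spreadsheet, a partial selection before hitting “Sort” is all it takes.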

The immediate lesson I learned from this was: don’t use Excel, use SPSS, which does not allow you to reorder only some columns. Actually, I still use Excel for making some figures but every time I use it, I think back to that terrible day.

The broader lesson though is that if you’re doing something which involves 100 steps, it only takes 1 mistake to render the other 99 irrelevant. This is true in all fields, but I think it’s especially bad in science, because mistakes can so easily go unnoticed due to the complexity of the data, and the consequences are severe because of the long time-scale of scientific projects.

Here’s what I’ve learned: Look at your data, every step of the way, and look at your methods, every time you use them. If you’re doing a neuroimaging study, the first thing you do after you collect the brain scans is to open them up and just look at them. Do they look sensible?

Analyze your data as you go along. Every time some new results come in, put them into your data table and just inspect them. Make a graph which shows absolutely every number all on one massive, meaningless line from Age to Cigarettes Smoked Per Week to EEG Alpha Frequency At Time 58. For every subject. Get to know the data. That way if something weird happens to it, you’ll know. Don’t wait until the end of the study to do the analysis. And don’t rely on just your own judgement – show your data to other experts.
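
One cheap way to “get to know the data” as it comes in is an automated plausibility check on every new row. A minimal sketch – the column names and plausible ranges here are invented for illustration:

```python
# Declare a plausible range for each variable, then flag anything outside it.
PLAUSIBLE = {
    "age":           (18, 90),
    "cigs_per_week": (0, 400),
    "eeg_alpha_hz":  (6.0, 14.0),
}

def check_row(row):
    """Return a list of (column, value) pairs that look implausible."""
    problems = []
    for col, (lo, hi) in PLAUSIBLE.items():
        value = row[col]
        if not (lo <= value <= hi):
            problems.append((col, value))
    return problems

# Example: a subject whose scrambled date of birth produced an impossible age.
subject = {"age": 142, "cigs_per_week": 20, "eeg_alpha_hz": 10.2}
print(check_row(subject))  # [('age', 142)]
```

This doesn’t replace eyeballing the graph; it just guarantees the grossest anomalies can’t slip past unnoticed.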

Check and recheck your methods as you go along. If you’re running, say, a psychological experiment involving showing people pictures and getting them to push buttons, put yourself in the hot seat and try it on yourself. Not just once, but over and over. Some of the most insidious problems with these kinds of studies will go unnoticed if you only look at the task once – such as the old “randomized”-stimuli-that-aren’t-random issue – which has also happened to me, although it wasn’t my fault in that instance.
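
Running the task on yourself over and over is the manual version of auditing the stimulus generator directly: call it many times and check the statistics of what it produces. A sketch with invented stimulus categories, assuming the trial order is built by repeated random draws:

```python
import random
from collections import Counter

def make_trial_order(n_trials=40):
    # Stand-in for the task's stimulus randomization (hypothetical categories).
    return [random.choice(["face", "house"]) for _ in range(n_trials)]

# Audit: generate many runs and check that the categories are roughly balanced.
# A single glance at one run can easily miss a biased or non-random generator.
random.seed(0)
counts = Counter()
for _ in range(1000):
    counts.update(make_trial_order())

print(dict(counts))  # each category should be near 20,000 of 40,000 draws
```

Balance is only one property worth checking; run order, immediate repeats, and condition sequences deserve the same treatment.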

Trust no-one. This sounds bad, but it’s not. Don’t rely on anyone’s work, in experimental design or data analysis, until you’ve checked it yourself. This doesn’t mean you’re assuming they’re stupid, because everyone makes these mistakes. It just means you’re assuming they’re human, like you.

Finally, if the worst happens and you discover a stupid mistake in your own work: admit it. It feels like the end of the world when this happens, but it’s not. However, if you don’t admit it, or even worse, start fiddling other results to cover it up – that’s misconduct, and if you get caught doing that, then it is the end of the world, or of your career, at any rate.

CATEGORIZED UNDER: fMRI, methods, science, statistics
  • Stephen Wood

    Preach it, brother…

  • Anonymous

    “100 steps, it only takes 1 mistake to render the other 99 irrelevant.”

    LOL, how many thousands of lines (hundreds of thousands…) in even the most insignificant piece of software?
    Does that teach you a lesson?

  • Neuroskeptic

    Well yeah, this whole post is about lessons learned from mistakes.

    Software is a good example actually because while any mistake can cause a bug, the vast majority will be quickly noticed; forget a semicolon in C and it just doesn't compile.

    But the insidious ones are the ones that seem to work fine, but give the wrong output. Unfortunately these are especially bad in science because there is rarely a clearly-defined “right” output when you're doing research.

  • Andrew Wilson

    The thing I like about Excel is that if you use formulas and cell references, you can always retrace your steps to the raw data. My SOP is to have a worksheet with the raw, unaltered data; all transformations, reorganisations, etc happen on other worksheets that point explicitly to specific cells. Need a copy of that data point? Cell reference, not copy paste. Transforming anything? Formulas using cell references.

    The advantage is that once you've debugged the hell out of such a worksheet, all you ever need to do is paste in different raw data. If you find a mistake and need to correct it, the fix will propagate through all the data analysis.

    Your point is well taken, but Excel is far too useful when used wisely. I've cocked up my fair share of data analyses, but I always caught it because of things like the above.

  • Disgruntled PhD

    This has happened to me. I imputed some survey data once, ran all my analyses, and had almost finished a paper when I opened up my csv file and found impossible values in some of my columns.

    Fortunately, I use R and LaTeX to write papers, so changing this was a simple matter of going back to my original data, changing one line of code and pressing Meta-n s (I also use Emacs).

    My moral is: don't use Excel, don't use SPSS, use R.

  • Catherina

    Yup – keep *paper* copies – a student of ours lost half a year because she hadn't, and at some point she had (inadvertently, in Excel) assigned all the highest values to the first animal, all the second-highest values to the second animal, and so on. The R=1 did make us suspicious, but by the time we saw that, the original data had been lost…

  • Neuroskeptic

    Yeah, paper copies and frequent electronic backups stored in different places (so you don't overwrite the good data with bad data) are best.

    Emailing data to yourself is the safest way, if the data is small enough. With big data, CDs or hard drives and/or making use of your institutional backup system.

  • SteveBMD

    At first, I thought the “Tufnel Effect” might have something to do with Likert scales that go to 11.

    BTW, nice work with the umlaut.

  • veri

    Use R & TeX? R should tame your beloved Excel. You could also do manual calculations on a portion of the data (be generous with decimals) and compare that portion to the computerized results; if they match, go ahead with the rest.

  • veri

    Test the data heaps – check power, rank, Bonferroni corrections, etc. R gives you a concise summary of every calculation, so it's difficult not to catch data entry errors. You can even use plain text files instead of Excel, or both, to cross-check.

  • A Bitter Pill

    Ugh, I had that happen with Excel too. Hate that. I still use Excel for outcome measures, but I have to tell you, I double-check every single time I do a sort to make sure it sorted correctly.

  • Kitty

    Nice one, SteveBMD

  • Kevin

    1) The problem isn't so much science, as it is sloppy science. What I mean is, even though there is rarely a clearly-defined “right” answer, you can still write checks to make sure your algorithm, worksheet, whatever, behaves as it should. Some people say you should do this before you even analyze the data. For instance, if you want to know “does my data belong to class X?” you can first create some pseudo-data that is definitely from class X, and your method better detect it. Obviously you can still improperly construct that pseudo-data, but it's a much better solution. Confession: I don't always do this.

    2) At some point you have to trust people, unless you're going to learn everything about everything, especially as data analysis gets complicated.
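
Kevin's first point – feed the pipeline pseudo-data with a known effect and make sure it is detected – can be sketched like this (the analysis function and the effect size are invented stand-ins):

```python
import random
import statistics

def mean_difference(drug, placebo):
    """The 'analysis' under test: difference of group means."""
    return statistics.mean(drug) - statistics.mean(placebo)

# Positive control: pseudo-data where the drug effect is KNOWN to be +5.
# If the analysis doesn't recover roughly +5 here, something is broken.
random.seed(42)
placebo = [random.gauss(50, 5) for _ in range(200)]
drug    = [random.gauss(50, 5) + 5 for _ in range(200)]

effect = mean_difference(drug, placebo)
print(round(effect, 1))  # should land near the planted +5
```

A negative control – pseudo-data with no effect planted – is just as cheap and catches a different class of bug.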

  • Neuroskeptic

    Kevin: 1) I agree.

    2) I disagree, I think you don't have to understand someone's work to check it. You can take some known input and check that it gives the right output, treating them as a black box. OK you need to understand it a bit to know how to check it, but you don't need to know the details.

    For example, suppose you're getting someone to run lab tests on samples you collected. Even there, I have had enough bad experiences that I now routinely throw in some controls – e.g. if it's a saliva test, I put my own saliva in three of the bottles; if those three don't come back the same to within the margin of error, I know something has gone wrong.

    There does come a point where you can't check everything e.g. if you are relying on a large software package you can't check every feature.

  • Dan K

    This is really unconscionable. You should be making your data manipulation errors in LibreOffice, after which you should be making your analysis errors in R.

  • DJ

    excellent post – I sometimes wish I could drag myself away from Excel for the same reason you mention. It's just too damn easy to make that mistake of jumbling the data. I always keep a raw backup – and do regular spot checks of random data points from my analyzed files against the raw files. Another good place to store raw data in Excel format is Google Docs. But, yeah, probably better in the long run to move towards SPSS and/or R.

  • Neuroskeptic

    I'm glad to hear I'm not the first person to fall foul of the Curse Of The Excel Sort Function.

  • Anonymous

    great stuff!




About Neuroskeptic

Neuroskeptic is a British neuroscientist who takes a skeptical look at his own field, and beyond. His blog offers a look at the latest developments in neuroscience, psychiatry and psychology through a critical lens.

