Can Science Work Without Trust?

By Neuroskeptic | August 16, 2014 8:26 am

What would happen if scientists stopped trusting each other?

Before trying to answer this question, I’ll explain why it has been on my mind. Science fraud, questionable research practices, and replication have got a lot of attention lately. One issue common to all of these discussions is trust. Scientists are asking: can we trust other scientists to be honest? Is peer review based on trust? Is the act of discussing these issues itself eroding trust? What can we do to restore trust?

But what is trust, in a scientific context, and where does it come from?

Let’s consider the most common kind of scientific communication, the experimental paper. In a paper, the authors assert that they did a certain experiment, that they found certain results, and that these results imply a certain conclusion.

Now, scientists are generally not supposed to take the last step of this chain on trust. We’re supposed to be skeptical of claims that results imply particular conclusions, and we’re expected to evaluate conclusions critically on the strength of the results. Applying this kind of critical analysis is a big part of peer review.

By contrast, scientists are expected to trust that the authors are telling the truth about the methods and the results. We can challenge the authors’ interpretations, and we can even interpret the results as meaningless artifacts, but we can’t suspect the authors of lying about the hard facts of what they did and what they found. But why not? Why should I believe the authors? They might well have an incentive to lie. ‘Nice’ results get published in hot journals, and that gets you promotions, grant money, and influence. So why believe someone when they claim to have got some nice results?

I think there are three possible reasons to believe.

One reason is what I’ll call idealistic trust. In this case, we trust a given person because of who they are. We believe that they would not deceive us. ‘I can’t believe a fellow scientist would do such a thing’. Such trust is implicit. I think it’s this, idealistic, kind of trust that people are worried about when they speak of trust in science being damaged by scandals and fraud cases.

But there’s another kind of trust. I can reasonably trust your data if I can be confident that, were it fraudulent, this fraud would be discovered, sooner or later, and that you would be punished for it. In other words, I can trust you if I believe that it is not in your interests to lie. We could call this pragmatic trust. Unlike idealistic trust, this doesn’t require me to have a high opinion of you as a person. I might see you as a crook who would happily commit fraud if you thought that you could get away with it – but so long as I believe that you wouldn’t get away with it, I can trust you.

I think that both idealistic and pragmatic trust exist in science today. But it would seem that if one of these kinds of trust were to decline, the other would need to be bolstered to maintain trust in science overall. If we can’t trust each other for idealistic reasons, we’d need tougher investigation and enforcement of misconduct, to make pragmatic trust work.

Or is there an alternative? There’s transparency. Transparency removes the need for trust, by allowing readers to see the evidence with their own eyes. For example, if the authors provide the raw data, instead of just the summary results, this might allay my doubts. It’s easy to fabricate numbers in a spreadsheet. It’s harder, perhaps impossible, to fabricate fMRI data, microscope images, handwritten questionnaires. As fraud-busting statistician Uri Simonsohn put it in the title of one of his papers, when it comes to raw data we should Just Post It.
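Simonsohn’s point is easiest to see with a concrete example. The following is a minimal, hypothetical sketch (not from his paper) of one screen that posted raw data makes possible: genuine fine-grained measurements tend to have roughly uniform terminal (last) digits, so a badly skewed terminal-digit distribution in someone’s raw numbers is a prompt for closer scrutiny. The simulated data and the function below are illustrative assumptions.

```python
# Toy illustration: a terminal-digit uniformity screen on posted raw data.
# Genuine fine-grained measurements tend to have roughly uniform last
# digits; fabricated numbers often do not. All data here are simulated.
import random
from collections import Counter
from scipy.stats import chisquare

def terminal_digit_test(values):
    """Chi-square test that the last digits of the values are uniform."""
    digits = [str(v).replace('.', '')[-1] for v in values]
    counts = Counter(digits)
    observed = [counts.get(str(d), 0) for d in range(10)]
    # Null hypothesis: each terminal digit 0-9 is equally likely.
    return chisquare(observed)

# 200 simulated reaction times (ms), rounded to one decimal place
random.seed(0)
data = [round(random.gauss(450.0, 60.0), 1) for _ in range(200)]
stat, p = terminal_digit_test(data)
print(f"chi-square = {stat:.2f}, p = {p:.3f}")  # large p: no red flag here
```

None of this is conclusive on its own, but the broader point stands: no such check is even possible when only the summary means and p-values are published.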

But – would even this be enough? It may be difficult to conjure data out of thin air. But that doesn’t mean it’s difficult to misrepresent data. I might conduct an experiment using certain methods, and then present the results as if I had used different methods, implying a false conclusion. For instance, I might secretly ‘spike’ some of my samples in such a way as to change the results of my tests. Such subterfuge really happens. When it’s been suspected, it has sometimes led to Orwellian measures – people have resorted to placing cameras around the lab to record what scientists are up to.

It seems to me that if scientists stopped trusting each other, such Big Brother measures would be the only way that scientists would be able to convince each other of their claims. I suspect that few researchers would be willing to work under such conditions.

  • Cyril

    on transparency: in fields like neuroimaging, we are lacking quality-control tools – and I’d also add that we rely too often on weak statistical methods. Typically, few people check their raw data for artifacts, be they weird reaction times or abnormal time courses / weird beta values in fMRI data; such tools would increase trust in the data (if I post my data plus the quality control, you’ll trust me more). Similarly for stats: most of the time we rely on ordinary least squares without checking its assumptions (i.i.d. residuals) – if I showed the residuals, and/or showed that alternative methods give the same results, again you’d trust me more. (Well, not always: in a recent paper, the reviewer had more trust in removing outliers by standard deviation and using OLS than in removing outliers with the MAD-median rule and a trimmed-mean based method… I guess there is a long way to go.) A sketch of both checks follows below.
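    A minimal sketch of those two checks, using simulated data; the cutoff k = 2.5 and the 1.4826 MAD scaling are conventional choices, but everything else here is purely illustrative:

    ```python
    # Sketch: (1) inspect OLS residuals instead of assuming they behave,
    # (2) compare SD-based vs MAD-median outlier flagging.
    # All data here are simulated and purely illustrative.
    import numpy as np
    from scipy.stats import shapiro

    rng = np.random.default_rng(42)
    x = rng.normal(size=100)
    y = 2.0 * x + rng.normal(size=100)
    y[:3] += 8.0  # plant a few artificial outliers

    # 1) Fit a simple OLS line, then actually look at the residuals
    #    rather than assuming they are i.i.d. and roughly normal.
    slope, intercept = np.polyfit(x, y, 1)
    resid = y - (slope * x + intercept)
    stat, p = shapiro(resid)
    print(f"Shapiro-Wilk p-value for residual normality: {p:.4f}")

    # 2a) Classical rule: flag residuals beyond k standard deviations.
    k = 2.5
    sd_flags = np.abs(resid - resid.mean()) > k * resid.std()

    # 2b) Robust rule: MAD around the median. The 1.4826 factor scales
    #     the MAD to approximate sigma under normality; unlike the SD
    #     rule, it is barely distorted by the outliers it hunts for.
    med = np.median(resid)
    mad = np.median(np.abs(resid - med))
    mad_flags = np.abs(resid - med) > k * 1.4826 * mad

    print(f"flagged by SD rule: {sd_flags.sum()}, "
          f"by MAD-median rule: {mad_flags.sum()}")
    ```

    Because the planted outliers inflate the SD rule’s own threshold, the MAD-median rule typically flags them more reliably, which is exactly the point about trusting the robust method.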

  • http://neuroscimed.WordPress.com/ Pierre Mégevand

    Very good post. I think that sharing data and methods (reagents, mouse strains, computer programs) should be done systematically as part of the communication of scientific results. One of the objectives is to promote honesty and accountability, but there are other benefits, e.g. pooling results of ‘rare’ observations (e.g. those that depend on uncommon circumstances or on exceptionally expensive apparatus), balancing the inequalities in funding across scientists from different labs and countries, promoting more in-depth analysis of complex datasets that might not have been explored thoroughly the first time around. An additional important point is that most science is funded by governments, thus the general public. Data should thus be conceived of as belonging to the public. Accordingly, initiatives for data sharing should be encouraged and supported by the bodies that fund science.

  • https://twitter.com/iucns iucns

    I’m also in favor of greater transparency and data-sharing, not least because it facilitates meta-analyses. As Ben Goldacre likes to say, “sunlight makes for a good disinfectant”.

    In my view, two other forms of trust also come into play. Firstly, a different form of Pragmatic Trust: one has to put a certain amount of confidence in published results for scientific progress to work. Similarly, if one were to assume that every piece on the nightly news was fraudulent, it would be impossible to get anywhere in life and society. This is not to say that big names in science or news get everything right, just that such an extreme position would most likely be untenable.

    Secondly, IMHO there’s an aspect of Communal Trust, akin to the golden rule. Assuming that I as a scientist would not falsify or manipulate data, there is a certain assumption that other scientists would not do this either. Again, I don’t mean this as blind faith, but as trust that is given as a form of upfront credit.

    In sum, I believe that science works on a Culture of Trust without which collaboration, collegial enquiry, and scientific progress would be utterly impeded.

    • Brian Freeman

      One of the weaknesses in this (“secondly”) rationale is that it is highly subject to changing cultural influences. As the author pointed out with the “pragmatic” reason for trust:

      “…but so long as I believe that you wouldn’t get away with it, I can trust you.”

      It doesn’t matter if *I* think they wouldn’t get away with cheating. What matters is whether *they* think they can get away with it. When I was in school, cheating was loathsome and scandalous, and it rarely happened. Today? From what I hear, it’s not nearly so scandalous, and it’s very common.

  • Brian Freeman

    This well-written piece highlights the “Big Dilemma of Science”. Scientific standards are lofty — but they are only as good as the humans applying them. …And humans are very — well — *human*. We lie, cheat, steal, and most of all, we make lots of honest mistakes. By the current standards of American culture, all of the above are totally unacceptable, and the more they occur within a given academic discipline, the more distrust there is for that discipline (i.e. “guilt by association”).

    Transparency is the only reasonable answer, but often the most difficult to implement for “the masses” who don’t understand how and why statistical studies are done as they are. And frankly, I think that most of “the masses” don’t want to understand — they simply want to believe what makes them feel better about themselves. Statistical studies usually lack transparency so much that they can support any belief desired.

    But the author did miss one critical and frequent cause of scientists lying about their research: emotional investment. I think that sometimes they don’t consciously realize they are lying — they have become so invested in believing their results that they cannot allow themselves to believe otherwise. The consequences of admitting that they may have been wrong about some conclusions are simply too horrible to contemplate. This is when they may start unintentionally skewing their results, and placing more “faith and trust” in their own research than they should.

    Every scientist *should* be the harshest critic of their own work — simply to help keep themselves *objective* about it. Few are — particularly in the medical sciences — because so much of that research deeply affects the lives of others.

    • Matthew Slyfield

      “Transparency is the only reasonable answer, but often the most difficult to implement for “the masses” who don’t understand how and why statistical studies are done as they are. And frankly, I think that most of “the masses” don’t want to understand — they simply want to believe what makes them feel better about themselves. Statistical studies usually lack transparency so much that they can support any belief desired.”

      There is an answer for that. Papers heavily dependent on statistical methods should be reviewed not only by other scientists in the field, but by academic and/or professional statisticians. If a statistician won’t sign off on your statistical results, your paper is no good.

  • DS

    Transparency is a must, and online post-publication review as the default means of review could be a boon for what you have called pragmatic trust. There is nothing like worldwide embarrassment to discourage outright fraud, p-value fishing/hacking, and the ubiquitous problem of methods based on wishful thinking. Pre-publication review, in many ways, presently acts as a shield against criticism and a staging ground for such scientific malfeasance (intentional or otherwise).

  • Meca

    All science is based on this trust, and the check on it is social: the duty of any mentor to their trainees is, first and foremost, to train them in ethics. I am interested in psychometrics, and although things in that area are loose (hardly anything gets communicated any more), I think falsifying data would stand no chance there; it is a matter of “intellectual honesty”. We all know who is who in the area in which we work. That said, the idea of open access to the data as well sounds interesting to me.

  • wooter

    It seems that science has joined the ranks of professional sports, where it’s all about the money and no longer about playing fairly by the rules. It may be trite, but it’s still true that the love of money is the root of all evil. Sport before big money arrived is almost unrecognizable compared with today: sponsorships and endorsements make multi-millionaires, or even billionaires, out of guys who are good at hitting a ball with a stick. The recent scandal (and suicide) in Japan highlights not only the pressure to be at the front of research, but also the illogic of cheating and expecting to get away with it in a peer-reviewed environment.

  • skopitone

    “They might well have an incentive to lie.”
    Well, for me that’s the heart of the problem. How is it that in science, the pursuit of truth, one can have an incentive to lie?
    You also give the answer: hot papers.

    THIS is the rotten limb of science that needs to be severed, not trust. We’ll always have to rely on trust, since one can make up raw data as easily (and ever more easily) as one can make up “summary results”. Even if a GoPro were filming me doing my experiments, I’d find plenty of occasions to mess around IF I WANTED TO.

    The only way to not rely on trust would be to ask reviewers, not only to critically assess a manuscript, but to repeat each and every experiment before accepting it.

    No, the right way to restore trust is to cut the incentive to lie: evaluate researchers on the long-term impact of their research, rather than on short-term evaluations by three referees.

  • Eugene Ciancanelli

    Science exists in the world of human behavior. People and institutions like to hear good news, and rewards flow to those who carry it to us. In contrast, everyone has heard the old adage “shoot the messenger”, which describes the reward for those who bring negative news, or what is perceived to be bad news. In science, bad news can be negative results, or disparagement of some popular goal or corporate/government project. Climate change is a current example: scientists who go to any lengths to support the popular and governmental position are rewarded, while scientists who question it or produce evidence to the contrary are disparaged and persecuted. When such conditions occur, many scientists surrender their integrity to join the popular religion that masquerades as science with the blessing of government and institutions.
