Rich People May Not Be So Unethical

By Neuroskeptic | May 23, 2012 6:47 pm

There was quite the stir a few weeks back about a psychology paper claiming that rich people aren’t very nice: Higher social class predicts increased unethical behavior.

The article, in PNAS, reported that upper class individuals were more likely to lie, cheat, and break traffic laws.

However, these results have been branded “unbelievable” in a Letter to PNAS just published. Psychologist Gregory Francis notes that the paper contains the results of 7 separate experiments, all of which found statistically significant socioeconomic effects on unethical behaviour.

Those 7 replications of the effect “might appear to provide strong evidence for the claim” – one study good, 7 studies better, right? – but Francis says that actually, it’s too good to be believed.

Each of the studies was fairly small, the effects they found were modest, and they were only just significant. So the observed power of each study – the probability that a study of that size would detect the effect that it did, in fact, find – was only about 50-88%.

Think of it this way: if you took a pack of cards and discarded half of the black ones, then shuffled the remainder, a random card from the deck would most likely be red. But even so, it would be unlikely that you’d pick seven reds in a row.

The chance of all 7 studies finding a positive result – even assuming that the effect claimed in the paper was real – is just 2%, by Francis’s calculations.
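To put numbers on the card analogy: with 26 red and 13 black cards left, each draw is red with probability 2/3 (if you put the card back and reshuffle between draws), so seven reds in a row happen only (2/3)^7 ≈ 6% of the time. Francis’s arithmetic for the studies works the same way. A minimal sketch in Python – the per-study power values below are illustrative stand-ins in the 50-88% range, not the actual figures from his letter:

    # Per-study power values: illustrative stand-ins in the 50-88% range
    # Francis reports, not the actual numbers from his letter.
    powers = [0.88, 0.50, 0.57, 0.55, 0.62, 0.53, 0.51]

    joint = 1.0
    for power in powers:
        joint *= power  # chance that *every* study comes out significant

    print(f"joint probability = {joint:.3f}")  # ~0.023, i.e. about 2%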

Ow.

He concludes: “The low probability of the experimental findings suggests that the data are contaminated with publication bias. Piff et al. may have (perhaps unwittingly) run, but not reported, additional experiments that failed to reject the null hypothesis (the file drawer problem), or they may have run the experiments in a way that improperly increased the rejection rate of the null hypothesis (4).”

What might have happened? Maybe there were more than 7 studies and only the positive ones were published. Maybe the authors peeked at the early data before settling on the sample size, or took other outcome measures that showed no effect and went unreported. See also the 9 Circles of Scientific Hell.

Or maybe not. Piff et al. respond in their own Letter, firmly denying that they ran any other unpublished experiments, and saying that they “scrutinized our data collection procedures, coding protocols, experimental methods, and debriefing responses. In no case have we found anything untoward.” They go on to criticize the method Francis used to get his magic 2% figure, which they point out relies on some debatable assumptions.

Even if you buy the 2% figure, it doesn’t mean that the true effect is zero; it might be real, but exaggerated. Ultimately it all becomes rather murky and subjective, which is why I think we need preregistration of research: it would rule out such data fiddling, and also remove the possibility of false accusations of it… but that’s another story.

Francis, G. (2012). Evidence that publication bias contaminated studies relating social class and unethical behavior. Proceedings of the National Academy of Sciences. DOI: 10.1073/pnas.1203591109

  • Anonymous

    Goldman-Sachs, JP Morgan, AIG, Madoff, Facebook, etc bilked and are bilking trillions of dollars out of the targets of their ponzi schemes.

    Just because it's legal doesn't mean it is ethical.

    You get rich by being unethical. You avoid jail by CYA.

    • wiserd911

      “Goldman-Sachs, JP Morgan, AIG, Madoff, Facebook, etc bilked and are bilking trillions of dollars out of the targets of their ponzi schemes.”

      Those with power, on average, do have a greater capacity to damage others with their unethical behavior. They can steal millions rather than just hundreds.

      But you’re cherry picking your test subjects here, hardcore. Are you looking for average harm done by a group, or modal ethics of people in that group?

      There are authors, doctors, etc. who produce large amounts of wealth. A guy who makes $100,000 as a doctor and donates $10,000 is probably better for society than a guy who makes $20,000 and donates $5,000.

  • Anonymous

    Your comment is beyond dumb because you make one huge assumption and one huge implication: The assumption that all rich people got rich through unethical means, and the implication that unethical behavior therefore does not exist in people who are not rich (because they did not resort to unethical behavior and thus failed to become rich).

    Obviously, both of these statements are ludicrous. There are plenty of rich individuals who are not under investigation and likely did not use unethical behavior to attain their financial status. These “ethical” rich people likely comprise the vast majority of the wealthy population, and anyway, the burden of proof for unethical behavior should be on the accuser, rather than the accused (innocent until proven guilty and all that).

    Likewise, there are millions of examples of unethical behavior in populations that could be classified as “not rich.” I suggest checking your local jail or prison for hundreds of living, breathing demonstrations.

  • http://www.blogger.com/profile/07314450642021911177 Andy McKenzie

    You keep asking for pre-registration of scientific studies. I agree it would be good, if logistically challenging, but why not diversify your preferences a bit? E.g., ask for a prediction market on scientific ideas, where people can bet on whether they think the results of a given experiment will/would replicate or not. This has the advantage of being post-hoc (so less logistically challenging) and more democratic.

  • http://www2.psych.purdue.edu/~gfrancis/ Greg Francis

    PNAS does not encourage extended discussions of these kinds of issues, so I have written up a rebuttal of the counterarguments described by Piff and colleagues. A copy can be found at

    http://www2.psych.purdue.edu/~gfrancis/Publications/FrancisRebuttal2012.pdf

    It's somewhat technical, but the gist is that none of their counterarguments undermine the conclusion.

  • http://www.blogger.com/profile/03966730543740949237 red

    Without having read the Francis paper, I can say that post-hoc “power” calculations are meaningless. The 2% probability you quote from Francis – that all 7 studies would come out positive – is incorrect. That probability is 1.

    One has to use fairly careful language here, and it's quite easy to draw very wrong conclusions if one doesn't do so.

  • Anonymous

    Anon2

    I didn't imply “that unethical behavior therefore does not exist in people who are not rich”. You did that.

  • Nitpicker

    While I sympathize a little with him, I find that Greg Francis is waging a bit of a crusade with this publication bias argument. This is not the first time he has published such a rebuttal of other people's findings. I don't think this sort of argumentation is going anywhere. Authors will respond that they did not do anything wrong. In one recent response I read (I need to look for it), the authors actually applied the same tests to a replication, showing that their result passed the threshold of not showing evidence of publication bias.

    Maybe I am naive but what I'd say here is this: sure, a 2% probability of detecting an effect is pretty unlikely. But rather than being publication bias, couldn't this reflect the fact that the estimate of the effect sizes (and thus power) of the individual experiments is actually too low? There may very well be factors that lead to such consistently poor effect sizes even if the population effect is large.

    Just ask yourself the question: how many experiments would the authors have had to run in order to get these 7 significant but weak results? Is it likely that this is what really happened?

  • http://www1.psych.purdue.edu/~gfrancis/home.html Greg Francis

    @red: You are correct about _these_ 7 experiments having a probability of 1. More generally, one should not discuss the probability of events that have already happened. The power analysis provides a description of the probability of _a_ set of 7 experiments that have these characteristics. From that perspective one can talk about probabilities. Post-hoc power has been abused in the past, and it is not very precise. However, it is not quite meaningless. You are right about the language though, and it's possible I said it wrong in a few places. The phrasing is commonly used incorrectly in discussions of hypothesis testing.

    @Nitpicker: I provided a rebuttal to the authors of the i-Perception paper (at the link you provided). I think the case for bias is pretty convincing. The number of needed studies is actually not that large if the studies are run incorrectly. The easiest way to do it is with an improper sampling technique called optional stopping. I describe this in my rebuttal, and the link is in my earlier comment. There are actually a lot of reasons to suspect that effect size estimates are too big rather than too small.
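    A minimal simulation of the optional stopping Francis describes (every number here – starting group size, step, cap, number of simulations – is an arbitrary illustration, not taken from his rebuttal): under a true null, re-testing as subjects are added and stopping at the first p < .05 inflates the false-positive rate well above the nominal 5%.

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(0)

        def optional_stopping_rate(n_start=10, n_max=50, step=5, n_sims=5000):
            """False-positive rate when a two-group t-test is re-run every
            `step` subjects under a true null, stopping as soon as p < .05."""
            hits = 0
            for _ in range(n_sims):
                a = list(rng.standard_normal(n_start))
                b = list(rng.standard_normal(n_start))
                while True:
                    if stats.ttest_ind(a, b).pvalue < 0.05:
                        hits += 1  # "significant" despite no true effect
                        break
                    if len(a) >= n_max:
                        break
                    a.extend(rng.standard_normal(step))
                    b.extend(rng.standard_normal(step))
            return hits / n_sims

        print(optional_stopping_rate())  # roughly 0.13, far above 0.05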

    I would not describe my analyses as a crusade. I do think that scientific results should be subject to criticism.

  • http://www.blogger.com/profile/06647064768789308157 Neuroskeptic

    Andy McKenzie: That's a great idea as well. But I am currently banging the registration drum and you can only bang one drum at once.

    Greg Francis: Thanks for the comments!

  • http://www.blogger.com/profile/16444028162526426333 Marcus Munafo

    There's another way of framing this – the fact that the observed power is close to 50% in most cases indicates that the p-values for these experiments were very close to the magic 0.05 threshold.

    Calculating the sample size to achieve 80% power in a replication attempt should therefore indicate the need for a much larger sample size.

    The fact that the sample size of the subsequent experiments is smaller than in the original experiment (with the caveat that they weren't direct replications) suggests that at the very least the authors were very lucky, even if the effect size estimate from the first experiment was accurate and not an over-estimate.
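    To put this framing in numbers (the effect size below is a hypothetical stand-in, not an estimate from Piff et al.), solving the standard two-sample power calculation both ways shows that a study with ~50% observed power needs roughly double the sample to reach 80% power at the same effect size:

        # Hypothetical effect size, not from the paper.
        from statsmodels.stats.power import TTestIndPower

        solver = TTestIndPower()
        d = 0.52  # Cohen's d, a made-up stand-in for the original estimate

        # Per-group n implied by ~50% power, i.e. a result just under p = .05
        n_50 = solver.solve_power(effect_size=d, power=0.5, alpha=0.05)
        # Per-group n a replication would need for the conventional 80% power
        n_80 = solver.solve_power(effect_size=d, power=0.8, alpha=0.05)

        print(round(n_50), round(n_80))  # ~29 vs ~59 per group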

    This doesn't necessarily indicate publication bias, data massaging or anything like that – with enough labs around the world running a sufficient number of studies, some will happen upon a sequence of results like this just by chance. We just never get to hear of the ones that don't work out….

  • Nitpicker

    @Greg: thanks for the reply. I saw your response to the authors yesterday when I found the link. It's a nice response and I like that you are making efforts not to be hostile.

    I can see how optional stopping might exacerbate the problem a lot. I do wonder if this isn't really underlining a main problem with classical stats, though, as I have no doubt that the authors' intentions were in the right place. The experiments link together and test related hypotheses. And nobody will probably ever know whether there was any optional stopping here or not.

    Regarding my crusade comment, you are right that results should be up for scrutiny, but my point was really that simply going after one such example after another probably isn't very helpful. It backs people into a corner and you get these rebuttals and counter-rebuttals, but in the end I don't know if this resolves anything. I do like that your conclusion suggests cooperation on the matter. Such efforts may result in a positive change.

  • Ivana Fulli MD

    Andy McKenzie 23 May 2012 20:54

    ///why not (…) ask for a prediction market on scientific ideas, where people can bet on whether they think that the results in a given experiment will/would replicate or not. This has the advantage of being post-hoc (so less logistically challenging) and more democratic.///

    This “more democratic and less logistically challenging” proposal of yours is simply fascinating, but how will it work in the real world?

    Precisely:

    1) Who would get a vote in your democratic election, and would some voters get to be “more equal than others”?

    2) How could one cast a “scientific” vote with confidence if negative studies are allowed to remain hidden from public view?

    About your “less logistically challenging since post-hoc” claim, I will just have to trust that you are more clever and educated in stats than a poor middle-aged clinician.

  • Ivana Fulli MD

    Greg Francis 24 May 2012 01:06

    ///I would not describe my analyses as a crusade. I do think that scientific results should be subject to criticism.///

    Not only that, but research grants, research positions, and academic tenure should be given only to people able to understand that – even in psychology and neurosciences.

  • omg

    Science testing the ethics of rich folks… lol.

  • Ivana Fulli MD

    omg,

    More like science publications testing the ethics and skills of scientists, to my mind – but it's still always nice to benefit from your sense of humor.
