“Positivity Ratio” Criticized In New Sokal Affair

By Neuroskeptic | July 16, 2013 6:04 pm

British psychology student Nick Brown and two co-authors have just published an astonishing demolition of a top-ranked paper in the field of positive psychology: The Complex Dynamics of Wishful Thinking.

One of the authors of the critique is Alan Sokal, the physicist who, in 1996, famously wrote a parody of then-fashionable postmodernist theorizing and had it published as a serious paper in a cultural studies journal, thus sparking years of controversy.

It might happen again. The target this time is the ‘critical positivity ratio’ – the idea that if your ratio of positive to negative emotions is over a certain value, 2.9013, then you will ‘flourish’; any lower and you won’t.

The ‘critical positivity ratio’ is a popular idea. Fredrickson and Losada’s 2005 paper on it has been cited a massive 964 times on Google Scholar, just for starters.

And yet – that paper is complete rubbish. As are Losada’s previous papers on the issue. I criticize a lot of papers myself, but this one really takes the biscuit. It’s an open and shut case.

As Brown et al write, the idea of a single ‘critical ratio’ that determines success or failure everywhere and for everyone is absurd in itself:

The idea that any aspect of human behavior or experience should be universally and reproducibly constant to five significant digits would, if proven, constitute a unique moment in the history of the social sciences.

But even were there a magic ratio, it wouldn’t be 2.9013. The whole analysis in the 2005 paper was based on taking a poorly-described dataset and then making it fit a mathematical model, purely by means of elementary misunderstandings.

Losada observed that positive and negative emotions change over time, and claimed that we can model this process in the form of a Lorenz system. The Lorenz system is a set of mathematical equations famous for being pretty (ooh!).

There are infinitely many Lorenz systems, based on three set-up ‘parameters’, each of which can be any number. It turns out that Losada set two of those three parameters to the values used by a geophysicist in 1962, who picked them purely to make a pretty illustration for his paper about air flow.

If you set up a Lorenz system in exactly this way, and set it running, you can get a number out: 2.9013. This number is meaningful only within this particular system, with those particular parameters.
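Here’s a minimal sketch of that setup in Python – textbook dynamics only, with names of my own choosing, not Losada’s actual derivation (Brown et al reconstruct that step by step). With the two borrowed parameters fixed, standard stability analysis hands you a ‘critical’ value of the third parameter at which the system flips from settling down into chaotic butterfly-wandering – the flip that, as Brown et al recount, Losada identified with the languishing/flourishing divide:

```python
# A minimal sketch of the Lorenz system, with the two borrowed parameters
# fixed at the 1962 values (sigma = 10, beta = 8/3); r is the third one.
import numpy as np

SIGMA, BETA = 10.0, 8.0 / 3.0

def lorenz_step(state, r, dt=0.01):
    """One Euler step of the Lorenz equations with control parameter r."""
    x, y, z = state
    dx = SIGMA * (y - x)
    dy = x * (r - z) - y
    dz = x * y - BETA * z
    return state + dt * np.array([dx, dy, dz])

def final_state(r, steps=20000, start=(1.0, 1.0, 1.0)):
    state = np.array(start, dtype=float)
    for _ in range(steps):
        state = lorenz_step(state, r)
    return state

# With sigma and beta fixed, linear stability analysis puts the onset of
# chaos at r = sigma * (sigma + beta + 3) / (sigma - beta - 1) = 470/19.
r_critical = SIGMA * (SIGMA + BETA + 3) / (SIGMA - BETA - 1)
print(f"chaos onset: r = {r_critical:.4f}")  # 24.7368...

print("r = 20:", final_state(20.0))  # settles down to a fixed point
print("r = 28:", final_state(28.0))  # wanders the strange attractor forever
```

Change either of the borrowed parameters and that ‘critical’ value moves with them – which is rather the point.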

Yet by means of an epic series of assumptions, Losada declared this meaningless quantity to be the Key to Happiness and Success. There’s loads more detail in the Brown et al paper, and it’s surprisingly readable for something so depressingly stupid.

As Brown et al say:

One can only marvel at the astonishing coincidence that human emotions should turn out to be governed by exactly the same [Lorenz] equations that were derived as a deliberately simplified model of convection in fluids, and whose solutions happen to have visually appealing properties.

An alternative explanation – and, frankly, the one that appears most plausible to us – is that the entire process of “derivation” of the Lorenz equations has been contrived to demonstrate an imagined fit between some rather limited empirical data and the scientifically impressive world of nonlinear dynamics.

But why has it taken eight years for someone to point this out, given the size of the claim combined with the paucity of the evidence?

[The 2.9013 critical positivity ratio] would, if verified, surely require much of contemporary psychology and neuroscience to be rewritten; purely on that basis we are surprised that, apparently, no researchers have critically questioned this claim, or the reasoning on which it was based, until now.

The Emperor’s New Clothes analogy is horribly overused, but in this case, it seems apt – or at least, I hope so.

The alternative is worse: that no-one spoke out simply because no-one in the field of positive psychology could see anything wrong with it.

On that note, it would obviously be wrong to dismiss all of positive psychology research just because of one bad paper. However, positive psychologists do have a case to answer, for letting this get 964 citations.

For example, the guru of the field, Martin Seligman, quoted the Losada 2.9 ratio in a talk, although he did warn that it should not be taken as universally valid.

Everyone who cited this either did so without understanding it, or didn’t bother to check.

Brown NJL, Sokal AD, & Friedman HL (2013). The Complex Dynamics of Wishful Thinking: The Critical Positivity Ratio. American Psychologist. DOI: 10.1037/a0032850

  • William Booth

    They’re so wrong.

    Everyone should know it is 42.

    • http://blogs.discovermagazine.com/neuroskeptic/ Neuroskeptic

      42.00019

      • http://www.ncbi.nlm.nih.gov/pubmed heedless

        Only in Windows 95

        • tonyr62

          You win the Obscure Reference that Anybody under the Age of 45 will not Understand Award!

          • Charles Miller

            How many kids understand: “You sound like a broken record”?

    • Patrick Carroll

      Indeed. The number you get when you multiply six by nine.

    • http://najoll.wordpress.com/ NJ

      Yes: in a sense (admittedly a fairly vague one), Douglas Adams anticipated this act of debunking!

  • http://petrossa.me/ petrossa

    That’s what you get when you mistake psychology for a science. When I read the article on excess bias http://www.plosbiology.org/article/info%3Adoi%2F10.1371%2Fjournal.pbio.1001609 my first thought was that they might as well just use it as a template for anything in high-interest fields of science.
    The old “Why Most Published Research Findings Are False” http://www.plosmedicine.org/article/info%3Adoi%2F10.1371%2Fjournal.pmed.0020124 will hold true for as long as humans do science.

    • http://www.facebook.com/don.meaker Don Meaker

      Yet every psychology student is tested regarding Freud’s Id, Ego, and Superego. Compare that with chemistry tests on “dephlogisticated air”.

      • http://petrossa.me/ petrossa

        Yeah, you’re right: chemistry tests on “dephlogisticated air” are more logical and trustworthy than the ramblings of an addict. At least they sound somewhat scientific, whilst Freud’s completely missing the point – making up some convoluted abstract concept of what he believed was the human mind (condemning untold millions to suffer the ‘therapies’ of borderline quacks) – doesn’t come even close to anything scientific.

        • Rooger

          And yet meta-analyses show the efficacy of psychotherapy. “That’s what you get when you mistake psychology for a science.” If you’re bringing up Freud in regards to psychology, you’re obviously getting your awesome insight from television. Take a psych research methods course AND maybe do a little reading on the history of science. You’ve got this!

          • http://petrossa.me/ petrossa

            Lol. I wrote a piece on that. Links go in the spam box so if you want to read it, go to my homepage and read May 2012: Psychology has tested psychology. It is great.
            If you check the efficacy against the studies made by psychologists the confirmation bias factor gets to be exponential. It’s a total circular logic. ACME says that ACME products are the best.

      • http://blogs.discovermagazine.com/neuroskeptic/ Neuroskeptic

        I was a psychology student once and I wasn’t tested on Ids etc. 30 years ago maybe, but nowadays, Freud’s nonsense is mostly forgotten… replaced by new nonsense.

  • http://blogs.discovermagazine.com Ryan Tham

    I don’t think it’s a hoax – or if it is, the authors have been committed to the prank for nearly a decade:

    http://www.positivityratio.com/

    • Charles Miller

      Alas, if someone can make a lot of hay with a goofy idea, what motivates them to stop? They’ll just keep on presenting the false science until someone steps up and asks good questions. Careers can be made around junk science. Just troll the diet-book section of your local book store.

  • Pingback: Sokal Strikes Again | Choice & Inference

  • http://neuroautomaton.com/ Zachary Stansfield

    This appears to be an extreme case, but I am starting to believe that this sort of nonsense is very common in many fields. My sense is that modern academics are either too happy to cite the conclusions of a big paper without critically evaluating it, or they simply cannot be bothered to write a critique of nonsense.

    Nonsense would almost seem like an obvious target for academics looking to hit pay dirt. But if large parts of your field are built upon faulty assumptions and rhetorical flourishes to disguise mountains of circular logic, then it must be pretty difficult indeed to build a critique without sinking your own work…

    • RogerSweeny

      That is certainly the case in education research.

    • ThomasVeil

      Just wondering here as a non-scientist: When the paper was published, doesn’t it mean it was peer-reviewed? And isn’t peer-review exactly the idea that other scientists can rely on it without rechecking it? I assume if you cite a lot of studies, it would be next to impossible to evaluate all papers.

      On the other hand – I would also think that it invalidates all studies that rely on it. I’ve read that that is how it works with data points usually: your final error margin is at least as big as the biggest error margin among the data points. Typically it’s even bigger… so if you use one faulty study to base your work on (and probably some more), the conclusion would likely be wrong.
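      (That rule of thumb holds, at least, for quantities that are added together with independent errors – the usual quadrature rule. A toy sketch, with made-up numbers:)

      ```python
      # Toy illustration: independent error margins combined in quadrature.
      # The combined error on a sum is never smaller than the largest input error.
      import math

      input_errors = [0.5, 1.2, 3.0]  # hypothetical error margins on three inputs

      combined = math.sqrt(sum(e ** 2 for e in input_errors))
      print(combined)                  # about 3.27 – bigger than any single input
      assert combined >= max(input_errors)
      ```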

      • Charles Miller

        One of the most devious ways of publishing is to select data so that one’s pet notion or model is upheld. This is hard to catch, as the “offending” data is just left out.

        I did hear of a case of this (from someone who rotated through a lab). It is one of the most rotten things to have happen.

    • Charles Miller

      Certainly the widespread availability of computers, plotting and analysis programs can help turn garbage into pretty garbage.

      It’s a little frightening to hear older people (say 40+) state with great admiration that their grade school child or grandchild can make a PowerPoint presentation while in 4th grade.

      Big whoopee.

      Yes, they do book reports in my daughter’s school. But the kids still might not know what a sentence is (let alone that most difficult concept: The Paragraph).

  • Sanjay Srivastava

    Do not neglect Fredrickson’s balanced, thoughtful response. She concedes, in an entirely open-minded and undefensive manner, that the math was wrong. She then sorts through what is baby and what is bathwater:

    “It bears underscoring that the claims Losada and I made in our 2005 AP article (Fredrickson & Losada, 2005) were supported by three interwoven elements: psychological theory, mathematical modeling, and quantitative data. Here I unthread the now-questionable element of mathematical modeling from this braid, which leaves us in territory familiar to most psychological scientists, that at the interface of theory and data. While perhaps not as compelling as the trio of theory and data buttressed by mathematical modeling, the resulting duo nevertheless remains a strong and dynamic one.”

    http://psycnet.apa.org/psycarticles/2013-24731-001.pdf

    • http://blogs.discovermagazine.com/neuroskeptic/ Neuroskeptic

      Fredrickson’s response is a fine defense of the baby that remains after the bathwater is thrown away. It doesn’t explain how the baby got so deep in the bathwater in the first place – or what the lessons to be drawn from this are.

      • David Palmer

        Nor does it explain why the baby is a transparent liquid.

        (I assume. I am positive that I don’t want to spend $12 and however long it takes to read her justification. How positive am I? 2.9013 positive.)

      • boballab

        To figure out how the baby got so deep in the bathwater is simple: they held it under until it stopped moving. When it comes to statistics, if you torture the data long enough it will eventually give you what you want.

    • Dale Barr

      OK, so Fredrickson is astute enough to dissociate herself from the mathematics that formed the basis of her publication with Losada, now that Brown et al. have exposed it for what it is (cf. Harry Frankfurt, 2005). To me, her defense just sounds self-serving. The noble response would have been to step up and retract the paper in question. After all, she has essentially admitted that she had insufficient understanding of the mathematics that formed the basis of the paper, and has made no credible defense of its primary claim.

      It irks me that the 965 citations to the paper (and counting) still contribute to Fredrickson’s very high ranking among psychologists on Google Scholar, and will do so forever more. The Brown et al. paper is but a tiny needle in a giant citation haystack for future researchers to sift through. It also irks me that the laypersons who happen to read “Positivity” and are led to believe that the 3-to-1 ratio has a “scientific” basis (by the subtitle of the book, no less!) have little chance of actually discovering the falsity of this claim, or at least that the author has updated her view to the more plausible, but less sciency-sounding, idea that more positivity is (usually) better for you—depending on the context, of course!

      I am glad that American Psychologist published the critique, but now that the paper in question has been shown to be fatally flawed, isn’t a retraction in order?

      • http://blogs.discovermagazine.com/neuroskeptic/ Neuroskeptic

        It’s hard to say but personally if I were her, I might want to retract it – for the sake of the rest of her work.

        The ‘Losada math’ is currently a major blot on her published work.

        She ought to cut it out and make a clean break – which is what she tries to do in her response to the Brown et al critique, but so long as the paper is still out there with her name on it, the association will continue.

  • calling all toasters

    Papers with dubious methodology may be rampant, but it kind of boggles my mind that a paper whose conclusions are prima facie absurd got published. Then again, it IS American Psychologist.

    • http://blogs.discovermagazine.com/neuroskeptic/ Neuroskeptic

      Yes – like I said, I read a lot of flawed papers, but this is just another level.

      • http://www.ncbi.nlm.nih.gov/pubmed heedless

        Was there nobody involved in the review who had ever taken college level math or CS?

        My first thought on seeing the image was “I’ve seen that somewhere before.”

        • Charles Miller

          My first thought: That is a pretty picture, but how is that complex relationship justified?

  • Pingback: Instapundit » Blog Archive » SCIENCE: Psychology Paper On “Positivity Ratio” Demolished….

  • stephen barron

    Sounds like models “proving” anthropogenic climate change.

    • jhertzli

      Let’s see… In response to an article about a paper misusing weather-related math in psychological research, you’re comparing it to global-warming research. Does this mean weather-related math is being misused to analyze weather?

      OTOH, some psychologists have tried commenting on the supposed psychology of global warming. I suppose that would be an example of the misuse of psychology research in weather analysis.

      • Smack

        No,

        I don’t think he was trying to compare the two as an apples to apples analysis seeking to glean some insight into psychology from the ‘science’ of climate change. I think he was referring to the similarities in manipulation of data sets and mathematical modeling to push a pre-determined result and call it science.

      • http://www.facebook.com/don.meaker Don Meaker

        Edward Lorenz showed in his 1963 paper “Deterministic Nonperiodic Flow” that the Navier–Stokes equations, which describe fluid flow with changes in temperature and density, are chaotic due to their nonlinearity. That means nontrivial prediction from a finite set of past states is not possible. So yes, that would mean that weather or climate models that pretend to predict future temperature/density states from past states would be invalid.
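        (A quick sketch of that sensitive-dependence point, using Lorenz’s classic 1963 parameter values – the step size and horizon here are arbitrary choices:)

        ```python
        # Two Lorenz trajectories started a hair apart (1e-9 in x) end up in
        # completely different places: the butterfly effect in a dozen lines.
        import numpy as np

        def lorenz(s, sigma=10.0, r=28.0, beta=8.0 / 3.0):
            x, y, z = s
            return np.array([sigma * (y - x), x * (r - z) - y, x * y - beta * z])

        def run(start, dt=0.001, steps=40000):
            s = np.array(start, dtype=float)
            for _ in range(steps):
                s = s + dt * lorenz(s)  # simple Euler integration
            return s

        a = run([1.0, 1.0, 1.0])
        b = run([1.0 + 1e-9, 1.0, 1.0])
        print(a, b, np.linalg.norm(a - b))  # the end states bear no resemblance
        ```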

  • Byron

    “Social Text” is NOT a sociology journal.

    Sociology has enough problems without being saddled with that.

    • http://blogs.discovermagazine.com/neuroskeptic/ Neuroskeptic

      Wooops! Fixed.

  • dave72

    Losada should help the global warming hysterics develop some new models. They don’t know $hit about mathematical models either.

    • http://www.facebook.com/don.meaker Don Meaker

      Edward Lorenz did. Check ‘Deterministic Nonperiodic Flow.’

  • gonzo

    **cough** global warming **cough**

    • http://blogs.discovermagazine.com/neuroskeptic/ Neuroskeptic

      Wow, there’s a whole crop of comments from climate skeptics! Did the zoo just install Wifi in the ape enclosure?

      Actually, that’s not fair. You could easily train an ape to tell the difference between a line pointing up and a horizontal one.

      • docmerlin

        And this is why bad methodology sticks around… because people who don’t know that field (like yourself) automatically assume that the people who are claiming bad methodology are cranks.
        Climate research, in general, is full of horrible methodology. It’s not quite as bad as what you described for this paper, but it’s still pretty terrible.

      • jhertzli

        I think we can match each other fool for fool. AGW is almost as good as HBD in provoking nonsense from both sides.

  • Pingback: “Positivity Ratio” Criticized | Boulder is a Stoopid Place

  • http://chicagoboyz.net/ TMLutas

    We are massively underfunding replication studies by independent people who would win serious reputation points by knocking down any bogus studies they find. I would think that knocking down or replicating others’ studies would be a natural starting point for teaching how to do studies, but we don’t really do it.

    • richard40

      How about this idea: the journals that now publish studies would have a replication quota, where 50% of the papers they publish from now on are not new studies, but studies that attempt to replicate or critically examine existing studies. And studies that clearly demonstrate replication failure, or clearly discredit an existing study, would be given publication priority within the replication quota.

      • http://chicagoboyz.net/ TMLutas

        Fine, if you can swing it, but I see plenty of practical problems with it on First Amendment grounds. Instead, consider the idea of making an open access journal, setting up templates for students to contribute to, and letting them submit replicating papers via this mechanism, with significant rewards for causing actual retractions.

        • richard40

          I would not mandate it by law, because of the free speech issues you cited; it’s just a reform the journals themselves should voluntarily adopt. One thing we could do, though, if any of the research is federally funded: require that half the funded research go to replication, with provisions to favor grants that will disprove a known journal paper.

          • http://chicagoboyz.net/ TMLutas

            This tack is better but still runs aground on two counts:

            1. The journals don’t have the money to do it
            2. The journals demonstrably aren’t interested in doing it.

            An outside force with different incentives might actually work. The question is how to structure the incentives, and who would do the work with the paltry amount of funds likely available.

          • richard40

            Money is not a factor for the journals. They have money to publish a fixed number of papers a year, and can choose which ones those are, so why not choose more replication papers?
            You are right that the journals may not do it for other reasons, like sensational new findings bringing in more readers than replications would. Also, perhaps their submitters prefer writing up new findings to replications, although that dynamic might change if replications got published at a higher rate.
            Your idea of a journal dedicated to replications is also reasonable, although, as you yourself ask, who would pay for it?

          • http://chicagoboyz.net/ TMLutas

            Your objection on the who would pay front is also reasonable. I think that students will pay. Taking down bogus science would demonstrate real ability in the scientific realm and be an advantage in searching for jobs. Divided up among, let’s say, the US’ scientific undergrads and grad students, the cost might end up being a penny per head, in other words trivial.

            I do not actually know how much such a journal would cost and it is not my field. The numbers may not work out, but somebody who knows the detailed issues should run the calculations.

          • richard40

            Not sure about assessing involuntary fees on the students. Why not just charge fees to those getting papers published? And why science grads – isn’t the main problem area this article cited psychology and sociology, not conventional science?

          • http://chicagoboyz.net/ TMLutas

            I was thinking of this as part of a class lab fee and replication being part of the standard training of undergrads in science.

          • richard40

            Yes, but why not start your program with sociology and psychology grad students, where this article says there is the most serious problem with lack of replication? Why change things and assess an extra fee on science grads, where things are generally being done right?

    • Charles Miller

      I believe this is not so much an issue of replication, but a failure of peer review, which is a more serious problem. The results of the paper could have easily been replicated – by use of the same unjustified mathematical model. And a 2nd pool of peer reviewers might miss the problem again.

      The issue is one of the appropriateness of using a relatively complex model to “explain” data. I studied neurophysiology for 20 years and never had any data set that I would have fit circles to, mainly because such a function implies mathematical “memory” and needs justification.

      The problem, from my outsider’s POV, is that a fancy mathematical model was inappropriately used. Period. Unless the peer-review process is solid, the replication process won’t work, but will waste time and money.

      Frankly, I believe this example should serve as a warning to not go too far with data.

      • http://chicagoboyz.net/ TMLutas

        I think we agree that somebody questioning the justification of the model is what was needed in this case. Are we likely to get the needed examination through replication studies or enhanced peer review? I see replicators as just as likely, and perhaps more likely, to question the underlying assumptions as peer reviewers are. Reviewers are anonymous in general and thus suffer little loss of reputation for backing poor science. Replicators would have their names attached to their papers and thus would be incented to do a better job in questioning assumptions.

        No human system is perfect, but pushing replication is much more likely to create improvements than trying to push improvements in peer review.

        • Charles Miller

          I think it is much more likely for peer review or Q&A at meetings to catch this rather than a general call for replication studies.

          I agree that there is a bias against replication; very hard to get federal funding.

          But in this case, WHY replicate, when the model was never justified, nor could be? This was a bad paper, a bad use of not-understood models.

          This should be a wake-up call for those in non-quantitative areas of research that it is not okay to create fluff with curve-fitting programs with no knowledge of the rules for using curve (model) fits. The literally loopy model was an arbitrary choice and speaks poorly of the field. The question is, I think, does this field want to police itself and know its limits?

          The authors needed a math consultant and it seems that the whole field needs some training in the use of mathematical models, if that paper can get 1000 citations.

          • http://chicagoboyz.net/ TMLutas

            Something’s generally not working right in science, as we’re undergoing a general rise in fraudulent findings (still low, but a troubling slope to that curve). You ask why replicate? I say why not? It’s obvious that more and more nonsense is getting past the existing systems you are putting forward, so why not replicate and, as a first step, make sure that the models make sense? Examining assumptions would have caught this. I think that examining assumptions is the first step in replication. You seem to think that replication is something that’s done blindly.

          • Charles Miller

            But why replicate a really bad study that has obvious flaws? Why not write a rebuttal letter (which in most journals is taken very seriously)?

            Replicating this stinker paper gives it a legitimacy it does not deserve. Frankly, it would be a waste of time and resources.

            To your point about falsification… yes, this is a very troubling aspect. It seems to affect medical science / bioscience more than other areas.

            Personally, I think this is what happens when universities continue to exert more and more pressure to publish and even turn a paper into a saleable item (read: Translational Research). Unfortunately, the NIH has become infatuated with Translational Research… it simply isn’t good enough to ask a good research question and proceed to address it.

            After hearing Neil deGrasse Tyson speak on science in the U.S., I am convinced that we are not a country with in-the-bones respect for science or scientists. Like so much in our society, it has become commodified… just another commodity. But that is NOT the mindset of the truly curious (and persistent) researcher. Universities are in the process of killing the goose that lays the golden eggs, particularly as they downgrade more and more research positions to “contingent employee” positions. It’s a recipe for further degradation of science in the U.S.

          • http://chicagoboyz.net/ TMLutas

            I’m not going to branch out because this thread is simply too long already, though it is tempting to point out your off topic errors.

            Of course you don’t continue to do a replication study when you spot bogus models. You shift over to a rebuttal, which is what the subject of the original article did once he was convinced that he had found a true bogus model. The question is how do you increase the eyeballs on studies so that they are actually examined. Replication studies would do that. You haven’t laid out how your peer review/Q&A sessions would accomplish it better.

          • Charles Miller

            Probably not the most endearing way to start a reply, by “tempting to point out your off topic errors.”

            I stand by my view. It would speak poorly of anyone who chose to replicate a stinker study.

            Best to ya.

          • http://chicagoboyz.net/ TMLutas

            The US bashing and Galileo ahistorical fantasy were not called for.

            Best to you as well.

          • Charles Miller

            “US bashing?” “Galileo ahistorical fantasy”?

            It might be better to just agree to disagree. You think Galileo reaped pecuniary rewards with his gravitational experiments? Or do you think I’m just making those experiments up?

            Perhaps just consider that you might not have a strong grasp of the state of science in the U.S. Do you have any understanding of the traditional funding success rates for an NIH grant and the current success rates? And I am specifying the success rate for any funding cycle.

          • http://myindigolives.wordpress.com/ Ellie K

            He is NOT bashing the USA! We are EXCEPTIONAL, except for some silly folks that Charles mentioned. You need not point out his “off topic errors” because there were none. Charles and the NeuroSkeptic are correct. Period. End of story.

          • http://blogs.discovermagazine.com/neuroskeptic/ Neuroskeptic

            Agreed, replication is all very well but in this case there’s no meaningful finding to try and replicate.

  • Ben Franklin

    But it all made for such a nice story… Isn’t that the motto of most endeavors in the postmodern age?

  • Smack

    “The whole analysis in the 2005 paper was based on taking a poorly-described dataset and then making it fit a mathematical model, purely by means of elementary misunderstandings…”

    This quote reminds me of some other ‘science’… Perhaps global cooling? No, that was outdated after the 1970s. Perhaps global warming? No, that seems to have become outdated after the projected models failed to come anywhere near the actual observations. Perhaps global climate weirding? Isn’t that the newly accepted dogma? I don’t know, since I have trouble keeping up with the latest ‘trends’ in perspective manipulation.

    • http://blogs.discovermagazine.com/neuroskeptic/ Neuroskeptic

      I believe the new dogma is that global average temperatures are rising. Except it’s not new because that’s been the dogma for decades, and it’s not a dogma because it’s based on the fact that global average temperatures are rising.

      • richard40

        They stopped rising 15 yrs ago though, which none of the climate models predicted.

        • calling all toasters

          Oh, look, it’s the 1998 Truthers!!!!

  • http://www.likelihoodofconfusion.com Ron Coleman

    OK, but it’s a little ironic that at least this article considers this to be a meaningful metric: “The ‘critical positivity ratio’ is a popular idea. Fredrickson and Losada’s 2005 paper on it has been cited a massive 964 times on Google Scholar, just for starters.”

  • richard40

    One of the big problems is that publishing dynamics work against papers that criticise other papers or try to replicate studies. So nobody in the field tries to critically examine another finding, or tries to replicate it, because that paper never gets published. Only the papers with the next startling new (unreplicated, unexamined) finding get published.
    This does not happen in physics or chemistry, where any startling new finding is always subjected to replication and criticism, and those papers get published too. For example, there was that big temporary news flash with cold fusion, but it did not last long, because replication failures and critical papers soon shot it down.
    I suspect a similar thing – a lack of replication and criticism – is going on too often in climate science as well.

  • Pingback: Science Hoaxer Blasts Paper Over Bogus Math | Living Biology

  • Pingback: Science Hoaxer Blasts Paper Over Bogus Math «

  • Lisa Sansom

    You might enjoy reading Barb Fredrickson’s rejoinder as well. While Fredrickson clearly admits that she can’t speak to the math, she does speak to the other psychological empirical studies that she (and others) have conducted to try to find a “tipping point” ratio. It is a very professional and open commentary, and furthers the science and her thinking on the matter. Both parties are open and professional, and they say that this is how science progresses – that it is “self-correcting” and only grows through this sort of discussion and replication (or not). To characterize a published article as “rubbish” and only critique, without adding constructively to the discourse, is to deny science its true and honest pursuit. It does you no credit, and doesn’t help with the discovery process. I hope that you will acknowledge Barb Fredrickson’s constructive response with as much energy as (or three times more than?) you gave to the Brown et al. paper.

    • http://blogs.discovermagazine.com/neuroskeptic/ Neuroskeptic

      I’m going to cover Fredrickson’s response in a follow-up post but in brief: Neither Brown et al, nor I, criticized Fredrickson’s work as a whole, the focus was always the Losada equations.

      It is very possible that Fredrickson’s research is excellent overall, but that would not change the fact that this one 2005 paper was, and I struggle to think of a more polite term, rubbish.

    • Charles Miller

      There is the basic problem of the choice of the mathematical model. In this case, the authors used a non-function, which, in itself, brings up problems. I’d think the very shapes of the plots (closed loops!) would alert one to a rather unusual system.

      The fact that two of the three model parameters were not even explored is definitely an issue, also. It suggests that the complexity of the chosen model was not justified.

      To me, this seems like an issue where the authors had access to curve-fitting routines on their plotting program and experimented with them without sufficient knowledge. A danger of the more powerful analysis software packages that are readily available.
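      (A toy illustration of that danger, with made-up data that has nothing to do with the paper’s actual dataset: give a curve-fitting routine enough free parameters and it will “fit” pure noise perfectly.)

      ```python
      # Ten points of pure random noise, "fitted" exactly by a ten-parameter
      # polynomial: an impressive-looking curve with zero scientific content.
      import numpy as np

      rng = np.random.default_rng(0)
      x = np.linspace(0.0, 1.0, 10)
      y = rng.normal(size=10)            # the "data": pure noise

      coeffs = np.polyfit(x, y, deg=9)   # 10 parameters for 10 points
      residual = y - np.polyval(coeffs, x)
      print(np.max(np.abs(residual)))    # ~0: a "perfect" fit to nonsense
      ```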

      • http://myindigolives.wordpress.com/ Ellie K

        Very polite response and likely too! I believe Neuroskeptic is absolutely correct, but you have described a plausible scenario as to HOW it could have happened.

  • Pingback: I’ve Got Your Missing Links Right Here (20 July 2013) – Phenomena: Not Exactly Rocket Science

  • Pingback: God's iPod - Uncommon Descent - Intelligent Design

  • Pingback: Nothing positive about this ratio : Dangerous Intersection

  • Pingback: Positivity ratio all wrong | Random (and not) Musings

  • Pingback: Positivity: Retract The Bathwater, Save The Baby - Neuroskeptic | DiscoverMagazine.com

  • Pingback: The Best Stuff We’ve Read This Week | Stuff You Should Know

  • Rooger

    I also heard negative talk about her lab’s statistical findings before this stuff ever came up. “You play in dirt, you get dirty.”

  • Pingback: Gjør positiv psykologi oss lykkelige og mindre (p)syke? | Psykologisk behandling & Psykoterapiforskning

  • Pingback: Suhtarv, mis võib muuta sinu elu ja teisi akadeemilisi tarkusi @ Erinevad signaalid

  • Pingback: When the Information Isn’t Good, Nothing’s Good: The Unpopular Path of Questioning Science | Grant Atkins

  • Pingback: (False?) Positive Psychology Meets Genomics - Neuroskeptic | DiscoverMagazine.com

  • Pingback: (False?) Positive Psychology Meets Genomics | Nagg

  • Pingback: (False?) Positive Psychology Meets GenomicsFresh News Today | Fresh News Today

  • Pingback: The Sound of Schools: to Catch the Light | Schools & Ecosystems

  • http://www.happiness1st.com/ Happiness 1st

    Anyone who understands what makes humans sustainably happy would immediately recognize, as I did, that no ratio would work for everyone. A positively focused person who deliberately applies skill-based techniques to maintain her state of happiness could be happy without outside reinforcement/compliments/propping up.

    Someone who has developed self-critical habits of thought could receive a hundred times the number of positive comments Losada recommended and still not feel good, because her own beliefs would contradict the positive reinforcement and it would not be received.

    For example, if you tell someone who believes he is ugly that he is handsome, his mind does not receive or believe the compliment. His mind brushes it aside, giving it a variety of meanings (she wants something, she is just trying to make me feel better, etc.)

    Any ratio would have to be specific to an individual and even that could change over time if the person was deliberately cultivating a better perspective or in an abusive relationship where self-esteem was declining or a wide variety of other situations.

    I don’t need to analyze the data to see the flaws in this number. I would ask what underlying understanding of human happiness the researchers were missing that led to this conclusion.
