You Can't Derive Ought from Is

By Sean Carroll | May 3, 2010 10:27 am

(Cross-posted at NPR’s 13.7: Cosmos and Culture.)

Remember when, inspired by Sam Harris’s TED talk, we debated whether you could derive “ought” (morality) from “is” (science)? That was fun. But both my original post and the followup were more or less dashed off, and I never did give a careful explanation of why I didn’t think it was possible. So once more into the breach, what do you say? (See also Harris’s response, and his FAQ. On the other side, see Fionn’s comment at Project Reason, Jim at Apple Eaters, and Joshua Rosenau.)

I’m going to give the basic argument first, then litter the bottom of the post with various disclaimers and elaborations. And I want to start with a hopefully non-controversial statement about what science is. Namely: science deals with empirical reality — with what happens in the world. (I.e. what “is.”) Two scientific theories may disagree in some way — “the observable universe began in a hot, dense state about 14 billion years ago” vs. “the universe has always existed at more or less the present temperature and density.” Whenever that happens, we can always imagine some sort of experiment or observation that would let us decide which one is right. The observation might be difficult or even impossible to carry out, but we can always imagine what it would entail. (Statements about the contents of the Great Library of Alexandria are perfectly empirical, even if we can’t actually go back in time to look at them.) If you have a dispute that cannot in principle be decided by recourse to observable facts about the world, your dispute is not one of science.

With that in mind, let’s think about morality. What would it mean to have a science of morality? I think it would have to look something like this:

Human beings seek to maximize something we choose to call “well-being” (although it might be called “utility” or “happiness” or “flourishing” or something else). The amount of well-being in a single person is a function of what is happening in that person’s brain, or at least in their body as a whole. That function can in principle be empirically measured. The total amount of well-being is a function of what happens in all of the human brains in the world, which again can in principle be measured. The job of morality is to specify what that function is, measure it, and derive conditions in the world under which it is maximized.
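As a purely schematic illustration of the program just described, here is a toy sketch; every number and function in it is an invented assumption, not anything the post (or Harris) actually specifies:

```python
# Toy rendering of the "science of morality" program sketched above.
# All values and the aggregation rule are illustrative assumptions.

# Pretend each person's well-being has already been reduced to one
# empirically measurable number, and list three achievable "worlds":
worlds = {
    "world_1": [2.0, 2.0, 2.0],   # three people's well-being levels
    "world_2": [5.0, 1.0, 1.0],
    "world_3": [3.0, 3.0, 1.5],
}

def aggregate(well_beings):
    """One arbitrary choice (total well-being) among many possible rules."""
    return sum(well_beings)

# The job of morality, on this view: find the achievable world that
# maximizes the aggregate.
best = max(worlds, key=lambda name: aggregate(worlds[name]))
print(best)  # world_3, whose total (7.5) is highest under the sum rule
```

The sketch makes the structure explicit: an individual well-being function, an aggregation rule, and a maximization step, each of which the program would have to supply.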

All this talk of maximizing functions isn’t meant to lampoon the project of grounding morality on science; it’s simply taking it seriously. Casting morality as a maximization problem might seem overly restrictive at first glance, but the procedure can potentially account for a wide variety of approaches. A libertarian might want to maximize a feeling of personal freedom, while a traditional utilitarian might want to maximize some version of happiness. The point is simply that the goal of morality should be to create certain conditions that are, in principle, directly measurable by empirical means. (If that’s not the point, it’s not science.)

Nevertheless, I want to argue that this program is simply not possible. I’m not saying it would be difficult — I’m saying it’s impossible in principle. Morality is not part of science, however much we would like it to be. There are a large number of arguments one could advance in support of this claim, but I’ll stick to three.

1. There’s no single definition of well-being.

People disagree about what really constitutes “well-being” (or whatever it is you think they should be maximizing). This is so perfectly obvious, it’s hard to know how to argue for it. Anyone who wants to argue that we can ground morality on a scientific basis has to jump through some hoops.

First, there are people who aren’t that interested in universal well-being at all. There are serial killers, and sociopaths, and racial supremacists. We don’t need to go to extremes, but the extremes certainly exist. The natural response is to simply separate out such people; “we need not worry about them,” in Harris’s formulation. Surely all right-thinking people agree on the primacy of well-being. But how do we draw the line between right-thinkers and the rest? Where precisely do we draw the line, in terms of measurable quantities? And why there? On which side of the line do we place people who believe that it’s right to torture prisoners for the greater good, or who cherish the rituals of fraternity hazing? Most particularly, what experiment can we imagine doing that tells us where to draw the line?

More importantly, it’s equally obvious that even right-thinking people don’t really agree about well-being, or how to maximize it. Here, the response is apparently that most people are simply confused (which is on the face of it perfectly plausible). Deep down they all want the same thing, but they misunderstand how to get there; hippies who believe in giving peace a chance and stern parents who believe in corporal punishment for their kids all want to maximize human flourishing, they simply haven’t been given the proper scientific resources for attaining that goal.

While I’m happy to admit that people are morally confused, I see no evidence whatsoever that they all ultimately want the same thing. The position doesn’t even seem coherent. Is it a priori necessary that people ultimately have the same idea about human well-being, or is it a contingent truth about actual human beings? Can we not even imagine people with fundamentally incompatible views of the good? (I think I can.) And if we can, what is the reason for the cosmic accident that we all happen to agree? And if that happy cosmic accident exists, it’s still merely an empirical fact; by itself, the existence of universal agreement on what is good doesn’t necessarily imply that it is good. We could all be mistaken, after all.

In the real world, right-thinking people have a lot of overlap in how they think of well-being. But the overlap isn’t exact, nor is the lack of agreement wholly a matter of misunderstanding. When two people have different views about what constitutes real well-being, there is no experiment we can imagine doing that would prove one of them to be wrong. It doesn’t mean that moral conversation is impossible, just that it’s not science.

2. It’s not self-evident that maximizing well-being, however defined, is the proper goal of morality.

Maximizing a hypothetical well-being function is an effective way of thinking about many possible approaches to morality. But not every possible approach. In particular, it’s a manifestly consequentialist idea — what matters is the outcome, in terms of particular mental states of conscious beings. There are certainly non-consequentialist ways of approaching morality; in deontological theories, the moral good inheres in actions themselves, not in their ultimate consequences. Now, you may think that you have good arguments in favor of consequentialism. But are those truly empirical arguments? You’re going to get bored of me asking this, but: what is the experiment I could do that would distinguish which was true, consequentialism or deontological ethics?

The emphasis on the mental states of conscious beings, while seemingly natural, opens up many cans of worms that moral philosophers have tussled with for centuries. Imagine that we are able to quantify precisely some particular mental state that corresponds to a high level of well-being; the exact configuration of neuronal activity in which someone is healthy, in love, and enjoying a hot-fudge sundae. Clearly achieving such a state is a moral good. Now imagine that we achieve it by drugging a person so that they are unconscious, and then manipulating their central nervous system at a neuron-by-neuron level, until they share exactly the mental state of the conscious person in those conditions. Is that an equal moral good to the conditions in which they actually are healthy and in love etc.? If we make everyone happy by means of drugs or hypnosis or direct electronic stimulation of their pleasure centers, have we achieved moral perfection? If not, then clearly our definition of “well-being” is not simply a function of conscious mental states. And if it isn’t, what is it?

3. There’s no simple way to aggregate well-being over different individuals.

The big problems of morality, to state the obvious, come about because the interests of different individuals come into conflict. Even if we somehow agreed perfectly on what constituted the well-being of a single individual — or, more properly, even if we somehow “objectively measured” well-being, whatever that is supposed to mean — it would generically be the case that no achievable configuration of the world provided perfect happiness for everyone. People will typically have to sacrifice for the good of others; by paying taxes, if nothing else.

So how are we to decide how to balance one person’s well-being against another’s? To do this scientifically, we need to be able to make sense of statements like “this person’s well-being is precisely 0.762 times the well-being of that person.” What is that supposed to mean? Do we measure well-being on a linear scale, or is it logarithmic? Do we simply add up the well-beings of every individual person, or do we take the average? And would that be the arithmetic mean, or the geometric mean? Do more individuals with equal well-being each mean greater well-being overall? Who counts as an individual? Do embryos? What about dolphins? Artificially intelligent robots?

These may sound like silly questions, but they’re necessary ones if we’re supposed to take morality-as-science seriously. The easy questions of morality are easy, at least among groups of people who start from similar moral grounds; but it’s the hard ones that matter. This isn’t a matter of principle vs. practice; these questions don’t have single correct answers, even in principle. If there is no way in principle to calculate precisely how much well-being one person should be expected to sacrifice for the greater well-being of the community, then what you’re doing isn’t science. And if you do come up with an algorithm, and I come up with a slightly different one, what’s the experiment we’re going to do to decide which of our aggregate well-being functions correctly describes the world? That’s the real question for attempts to found morality on science, but it’s an utterly rhetorical one; there are no such experiments.
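The arbitrariness in those questions can be made concrete with a small sketch (all numbers invented): several perfectly reasonable aggregation rules disagree about which of two hypothetical worlds is better, and no observation adjudicates between the rules themselves.

```python
# Illustrative only: three candidate rules for aggregating well-being,
# applied to invented "worlds" (lists of individual well-being levels).
from math import prod

def total(ws):     return sum(ws)
def average(ws):   return sum(ws) / len(ws)
def geometric(ws): return prod(ws) ** (1 / len(ws))

world_a = [5, 5, 5, 5]          # modest, equal well-being
world_b = [10, 10, 10, 0.1]     # higher total, one person miserable
world_c = [4, 4, 4, 4, 4, 4]    # more people, each slightly worse off

# Total well-being prefers B over A; the geometric mean prefers A over B.
assert total(world_b) > total(world_a)
assert geometric(world_a) > geometric(world_b)

# Total well-being prefers C over A; the average prefers A over C.
assert total(world_c) > total(world_a)
assert average(world_a) > average(world_c)
```

Each rule is internally consistent; the disagreement is about which rule to adopt, and that is precisely the question no experiment can settle.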

Those are my personal reasons for thinking that you can’t derive ought from is. The perceptive reader will notice that it’s really just one reason over and over again — there is no way to answer moral questions by doing experiments, even in principle.

Now to the disclaimers. They’re especially necessary because I suspect there’s no practical difference between the way that people on either side of this debate actually think about morality. The disagreement is all about deep philosophical foundations. Indeed, as I said in my first post, the whole debate is somewhat distressing, as we could be engaged in an interesting and fruitful discussion about how scientific methods could help us with our moral judgments, if we hadn’t been distracted by the misguided attempt to found moral judgments on science. It’s a subtle distinction, but this is a subtle game.

First: it would be wonderful if it were true. I’m not opposed to founding morality on science as a matter of personal preference; I mean, how awesome would that be? Opening up an entirely new area of scientific endeavor in the cause of making the world a better place. I’d be all for that. Of course, that’s one reason to be especially skeptical of the idea; we should always subject those claims that we want to be true to the highest standards of scrutiny. In this case, I think it falls far short.

Second: science will play a crucial role in understanding morality. The reality is that many of us do share some broad-brush ideas about what constitutes the good, and how to go about achieving it. The idea that we need to think hard about what that means, and in particular how it relates to the extraordinarily promising field of neuroscience, is absolutely correct. But it’s a role, not a foundation. Those of us who deny that you can derive “ought” from “is” aren’t anti-science; we just want to take science seriously, and not bend its definition beyond all recognition.

Third: morality is still possible. Some of the motivation for trying to ground morality on science seems to be the old canard about moral relativism: “If moral judgments aren’t objective, you can’t condemn Hitler or the Taliban!” Ironically, this is something of a holdover from a pre-scientific worldview, when religion was typically used as a basis for morality. The idea is that a moral judgment simply doesn’t exist unless it’s somehow grounded in something out there, either in the natural world or a supernatural world. But that’s simply not right. In the real world, we have moral feelings, and we try to make sense of them. They might not be “true” or “false” in the sense that scientific theories are true or false, but we have them. If there’s someone who doesn’t share them (and there is!), we can’t convince them that they are wrong by doing an experiment. But we can talk to them and try to find points of agreement and consensus, and act accordingly. Moral relativism doesn’t imply moral quietism. And even if it did (it doesn’t), that wouldn’t affect whether or not it was true.

And finally: pointing out that people disagree about morality is not analogous to the fact that some people are radical epistemic skeptics who don’t agree with ordinary science. That’s mixing levels of description. It is true that the tools of science cannot be used to change the mind of a committed solipsist who believes they are a brain in a vat, manipulated by an evil demon; yet, those of us who accept the presuppositions of empirical science are able to make progress. But here we are concerned only with people who have agreed to buy into all the epistemic assumptions of reality-based science — they still disagree about morality. That’s the problem. If the project of deriving ought from is were realistic, disagreements about morality would be precisely analogous to disagreements about the state of the universe fourteen billion years ago. There would be things we could imagine observing about the universe that would enable us to decide which position was right. But as far as morality is concerned, there aren’t.

All this debate is going to seem enormously boring to many people, especially as the ultimate pragmatic difference seems to be found entirely in people’s internal justifications for the moral stances they end up defending, rather than what those stances actually are. Hopefully those people haven’t read nearly this far. To the rest of us, it’s a crucially important issue; justifications matter! But at least we can agree that the discussion is well worth having. And it’s sure to continue.

  • TimG

    Well, we have to agree on our assumptions if we want to prove *any* truth, whether it’s a truth about morality or Euclidean geometry or whatever. And science too is based on assumptions, albeit fairly minimal ones (in the sense that almost everyone believes them) like “inductive reasoning is valid”. It seems to me that with morality the problem is just that there’s a lot of disagreement about what the *right* assumptions are… maybe that’s your point.

  • Colin McFaul

    You can actually go a little further with this claim:
    “3. There’s no simple way to aggregate well-being over different individuals.”
and say that there is no way, simple or not, to aggregate well-being. Arrow’s Impossibility Theorem proves that, aside from certain trivial situations, there is no way to assemble multiple people’s (presumably coherent) moral preferences into a coherent societal moral preference.
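The flavor of Arrow’s result can be illustrated with the classic Condorcet cycle (voters and options invented for illustration): each pairwise majority vote is decisive, yet together they yield an incoherent “societal preference.”

```python
# Three voters' rank orderings over options A, B, C (first = most preferred).
# This is the standard Condorcet-cycle example, not anything from the post.
rankings = [
    ["A", "B", "C"],
    ["B", "C", "A"],
    ["C", "A", "B"],
]

def majority_prefers(x, y):
    """True if a majority of voters rank x above y."""
    votes = sum(1 for r in rankings if r.index(x) < r.index(y))
    return votes > len(rankings) / 2

# Every pairwise vote is a clean 2-to-1 majority, yet they form a cycle:
assert majority_prefers("A", "B")
assert majority_prefers("B", "C")
assert majority_prefers("C", "A")
```

A society following these majorities would prefer A to B, B to C, and C to A, so no coherent ranking exists, which is the kind of trouble Arrow’s theorem generalizes.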

  • Jim Lippard

    I’m not convinced that the parallel arguments about epistemic norms are at a different level of description–your argument shows that they are distinct, but not that they aren’t analogous. Epistemic norms, like morality, are associated with sciences that are far, far less mature than the physical sciences–the social and cognitive sciences–and the project of naturalizing epistemology has backed away from Quine’s proposal of using science as a replacement for the normative philosophical component, as opposed to the still-live and quite fruitful proposal of using it as a supplement. That strikes me as quite analogous to the situation in morality. There are also still relativists in both domains, and those who argue for moral realism and moral progress as well as those who argue against scientific realism and scientific progress. Epistemic norms for science, like moral norms, do not have universally agreed-upon goals–there’s maximization of truth (itself a concept with multiple competing accounts), unification of mathematically elegant theories, finding instrumentally successful (predictive) theories, and gaining evidentially probable explanations, for example.

  • amdahl

    @Colin: You’ve misread Arrow’s Impossibility Theorem. It states that no system can aggregate individual preference orderings into a social preference ordering that meets certain reasonable criteria. It does not rule out interpersonal comparisons of utility; it just means that we could not do so through a democratic system in which people give rank-ordered preferences. If, hypothetically, a central planner could have access to each individual’s utility function and weight them against each other, the social utility function would simply be the sum of weighted individual utilities. Such a social utility function would meet ALL of Arrow’s criteria as long as each individual’s utility is given some positive weight.
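The weighted-sum proposal can be sketched as follows (utilities and weights invented for illustration); notice that the ranking of social states flips with the choice of weights, and nothing empirical fixes the weights themselves.

```python
# Sketch of a cardinal weighted-sum social utility function.
# All utilities and weights below are purely illustrative assumptions.

def social_utility(utilities, weights):
    assert all(w > 0 for w in weights)  # every individual gets positive weight
    return sum(u * w for u, w in zip(utilities, weights))

# Two candidate states of the world, with utilities for three people:
state_1 = [3.0, 3.0, 3.0]   # equal, modest utility
state_2 = [5.0, 5.0, 0.5]   # higher for two people, much lower for the third

equal_weights = [1.0, 1.0, 1.0]
print(social_utility(state_1, equal_weights))        # 9.0
print(social_utility(state_2, equal_weights))        # 10.5  (state_2 wins)

protective_weights = [1.0, 1.0, 5.0]  # weight the worst-off person heavily
print(social_utility(state_1, protective_weights))   # 21.0  (state_1 wins)
print(social_utility(state_2, protective_weights))   # 12.5
```

So even granting cardinal, measurable utilities, the social ordering depends entirely on a prior, non-empirical choice of weights.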

    @Sean: You’ve failed to demonstrate that ought can be derived from is. Nowhere in your article do you actually derive OUGHT. The fact that individuals attempt to maximize their own utility (whether true or false) has no normative implications. It is a statement of what is. So to derive anything normative from that is begging the question.

  • efp

    I think very few people would recognize the above definition of morality. I don’t know about deriving “ought” from “is”; I find the notion itself incongruous. But to even consider the question you have to understand the “is” of “ought,” i.e., answer the question: what is morality? All indications are that there is a distinct cognitive process for making moral judgments, as one would expect for a social animal that had to get along in groups long before formal systems of ethics came about. Pinker wrote a nice NYT Mag article on the subject a while back (link). I tend to call the emotive/instinctual responses of a person to questions of conduct “morality,” reserving “ethics” for the theoretical systems constructed to explain the former. Since one’s emotional responses don’t follow logical rules, much ink has been wasted trying to make a system that matches them. What you state above is basically that an ethics built on a vague notion of maximizing utility will be an inconsistent mess, which is no surprise.

  • Peter Morgan

    I think you’re awfully close to jumping the shark. “These may sound like silly questions, but they’re necessary ones if we’re supposed to take morality-as-science seriously.” I think they are far from necessary. The reduction of the meaning of human existence to a single variable, well-being, is remarkable. It’s one way to go, just as the reduction of the meaning of human existence to how much money you have in the bank is a way to go. You don’t mention whether the dynamics of well-being are linear or nonlinear, but surely it’s “necessary” to discuss the dynamics?
    Surely, please, we can retain a slightly higher-dimensional discussion? Health and happiness, perhaps? Do we really have to decide what single-variable function of health and of happiness we want to optimize before we start? What of the complexities of measurements that are acceptable to behaviorists, instead of single almost self-assessments of internal state? Is it “necessary” that we not consider those? It shows that I’m no psychologist, and the mathematician has taken over in my simplistic identification of the number of degrees of freedom in your model as “the” problem, but pl-ease.

  • Lord

    I think the question is how ought arises from what is in the first place.

  • Jim Lippard

    amdahl: Not clear to me why you thought Sean was trying to derive ought from is, since his main point in the post is that it’s not possible.

  • Ernie M. Brewer

    I agree that a science of morality is possible, through neuroscience, psychology, and evo-devo biology, but philosophy too has a place in this basic template for a 21st century science. Well-being is a great metaphor for the possible emergence of a Being of human qualities.

  • Matt

    You know, I think you’ve actually won me over with this. Your first point is the most compelling one, from my pov. If I honestly evaluate my life, I think the most instructive and long-term beneficial events have been the most horrible and painful ones, from an “absolute” sense. The stuff that, if I were a social moralist, I would by definition be attempting to minimize in the population at large. I don’t know, now I’m worrying I’m opening a whole other can of worms here, but something seemed to click for a minute anyway. FWIW.

  • Emil Karlsson

    The problem with Dr. Carroll’s argument is that it does not distinguish “what brings well-being” from “what people think brings well-being”. These are not necessarily identical. The fact that people do not agree about the definition of well-being, or about whether well-being is a worthy goal, does not show that no such definition exists or that well-being is not a worthy goal. People disagree about evolution; does this mean that evolution is somehow invalid?

    As Carrier and Boyd have demonstrated, psychopaths are largely irrelevant to moral normativity. If a horse is one day born with two heads due to some mutations, does this mean that it is wrong to say that horses in general have one head? Morality is more like biology and less like mathematics. There are certain normative procedures for growing corn, but given some bizarre weather or climate conditions, these might not apply; this does not mean that they suddenly become useless or invalid.

    Lastly, epistemic justifications are in some sense identical to moral justifications. Imagine the following conversation between a biologist (B) and a creationist (C).

    C: Is there any reason why I ought to accept evolution?
    B: Yes, there are tons and tons of scientific evidence!
    C: So you are saying that I ought to accept evolution (“ought”) because of the evidence (“is”)?
    B: Yes.
    C: But now you are trying to derive an ought from an is! You cannot do that, says Hume.
    B: Err…
    C: So, indeed, there is no reason why I ought to accept evolution!

    *the creationist smugly walks away*

    An “epistemic ought” has the same functional outcome as a “moral ought”. Epistemic oughts only make sense if we agree to a number of presuppositions, such as “truth is better than falsehood”, “reason is more virtuous than faith”, “an outside world exists”, etc. Without these, science shatters.

  • Phil Plait

    This is far outside my normal realm of thought, so forgive what may be a naive question. But is it really not possible to aggregate the “well-being” — assuming we can define it, which I am unsure of — over individuals? There are many emergent properties in the real world that pop out of aggregated (and sometimes random) individual behavior: sand flowing down a dune, for example, or radioactive decay of a large sample over a long period of time.

    Sean, the next time we meet, I suspect we’ll have an interesting discussion.

  • greg

    Not directly related, but there have been some interesting studies in which cognitive scientists have posed various applied morality questions (do you save 10 people or one? what if the one is a baby? do you still sacrifice the one if you have to physically push him into the path of the train to stop it from hitting the 10? etc) to people while taking images of their brain activity. I don’t recall the specific results, so I’ll have to see if I can dig up the papers or some of the coverage about them.

    I don’t know if any of the studies checked variations between different cultures or not, but I think testing like that might help scientifically evaluate moral issues (especially regarding cultural variances vs inherent morality questions.)

  • Dreamer

    There is an added complexity to all this too — determining what’s best for our collective “well-being” today versus what’s best for our collective well-being in the future (and not just in the near future, but a future only the descendants of our grandchildren will see).

    Given that we value the well-being of our living descendants highly, much of that well-being is inexorably tied-in with our own well-being anyway, but the further into the future we go, the more tenuous that link becomes. And thus what may seem good and moral today (in terms of well-being), may not be considered so when looked back on from 100 years into the future.

    For example, if the worst predictions of global warming turn out to be true then maximizing our collective well-being for the present generation could be in direct and devastating conflict with the well-being of the third or fourth generation down the road. Now, if we only need to make small sacrifices today to avert the worst effects of global warming in the future, then the knowledge that we have done something to help the families of our great-grandchildren is probably enough to offset any collective loss of well-being felt from those sacrifices. But what if it was determined that the only way to stop a global catastrophe affecting billions of lives 100 years from now was to radically change our way of life today? (e.g. banning gas-driven cars, increasing the tax on fossil fuels many times over, restricting air travel, etc.)

    I doubt anywhere near enough people would be able to factor in the well-being of future generations in that case (until the crisis is almost upon us), and yet that future generation, say, 100 years from now—the one afflicted by the disastrous changes in Earth’s climate—would almost certainly look back on our generation as one that was deeply selfish and immoral.

    It is true that human beings can and do make great sacrifices (even unto death) when they believe they are doing it for the greater good, but that is almost always when the threat to ourselves and our loved ones is imminent. Such considerations of morality/well-being tend to break down when that’s not the case.

    (Note: Please don’t make my example an excuse for debating GW on this thread — it is just an illustration, nothing more.)

  • Dreamer

    Once you have put any degree of thought into this issue, it’s easy to see why so many people like the idea of a supreme being dictating a series of moral absolutes for everyone to follow. It actually doesn’t make the reality of moral choices and their impact any less messy in the end, but it sure does take a lot of the worry and complexity out of it.

  • greg

    But is it really not possible to aggregate the “well-being” — assuming we can define it, which I am unsure of — over individuals?

    I would say that as the number of people aggregated increases, the overall ‘value’ of each individual’s well-being would necessarily decline, unless it is a purely neuro-chemical or otherwise physical datum (such as could be manipulated via plugging everyone into a Matrix-esque system which maximizes the physical chemical processes). No two people have identical opinions, so there is always some degree to which they would conflict, even if only a minor one, such as two otherwise identical people except one prefers A-Rod to Jeter. So instead of 100% well-being, they are at 99.999999999%. Add in a third otherwise identical person who favors some other Yankee, and the well-being maximum has been reduced to 99.999999998%.

  • DaveH

    It’s a subtle distinction, but this is a subtle game.


    My approach is far less subtle. First ‘science project’ I would suggest is to determine whether the human race is the greatest cause of suffering.

    ‘To be or not to be’ may or may not be an immature question, but “Should WE be?” is the mature collective question that the human race is almost certainly not ready to face.

  • Dan L.

    I agree with several other commenters here that the “is/ought” divide is at least partially synthetic — at the very least because as Emil Karlsson points out, deciding whether or not to believe a particular proposition is precisely deriving an ought (“Ought I believe?”) from an is (whatever the reasons given).

    That’s not to say that I think we can scientifically derive some optimal moral code, or even an optimal system of government. But I don’t think this is really because of some a priori philosophical distinction between facts and values so much as the fact that any moral code or system of government exists to discourage people from pursuing their own best interests, or what they perceive to be so at the time.

    What I’m saying is that the exact same problem noted by Carroll and others about finding an optimal set of values extends to science as well: one has to presuppose an ontology amenable to the explanations engaged in by scientists. Whether we’re dealing with epistemology or ethics, is or ought, the truth of any proposition is dependent upon exactly those presuppositions which cannot be proven. So the necessity of presuppositions is not what causes the is/ought divide — it applies equally in both cases.

    The only reason epistemological propositions seem to be on surer footing than ethical propositions is that within science, everyone has agreed on the same ontology. In discussing ethics, there is no comparable near-universal set of presuppositions. That is, the distinction is not an a priori philosophical one, but a practical, sociological one. If everyone could agree on definitions and measurements for human well-being the same way they can about, say, an electron, Harris would be absolutely right that we could derive an “ought” from an “is.” (Conversely, if there were a sizeable school of physicists who disbelieved in the notion of matter-waves, what “is” an electron wouldn’t be so clear as it is for us.)

    Whether the outcome of such a program would actually be better than what we have is another question entirely.

  • Matt Tarditti

    As is often the case, I think this one comes down to semantics. If you define science as something based in the empirical (and ONLY the empirical) arena then you are of course going to have a hard time arguing that there is something empirically verifiable about morality.

    But I’m not convinced that science, the study of “is” as Sean has defined it, MUST involve empirical data. Does psychology deal with measured, independently verifiable data sets to advance its theories? Is string theory ever going to be verifiable through a measurable quantity? Neither of these even has an imagined quantity through which to make measurements. Nevertheless, we apparently have no problem grouping both under the larger umbrella of “science”.

    If empirical requirements are lifted from the study of morality, then I believe that we will end up looking at, primarily, the biology behind happiness as tempered by the requirements of the categorical imperative (the Golden Rule).

  • DaveH

    Does psychology deal with measured, independently verifiable data sets to advance its theories?

    Yes. Perhaps you’re thinking of psychotherapy.

    Your example of string theory is a practical, not an ‘in principle’, problem. Besides which, the answer may well be yes to that too.

    @ Dan L, 18 – On your supposed equivalence of presuppositions: Yes, science proceeds on the extrapolation of the principle that knowledge of a state of affairs facilitates a response to that state of affairs. You are free to try not to gather knowledge of states of affairs, or to attempt truly ignorant/random action, at any time you like. There’s no ought to it. There is a moral argument that you should believe facts that are true, but that moral argument is not represented by EK’s trite dialogue; it is not that you should believe true things simply because they are true.

    I don’t think it is possible to get very far at all with the argument that presuppositions of science are equivalent to ethical presuppositions, unless you subscribe to the radical view that there is no ‘is’. This is a million miles away from the position of Sam Harris, of course.

  • Steve Esser

    You say and repeat that you’re presenting arguments for why it cannot be done in principle. What I read in the meat of the paragraphs, though, are lists of questions and challenges which summarize how difficult it is for you to imagine the project being successful.

  • Matt Tarditti

    Thank you for the response, DaveH. But to avoid a future repeat of my apparent mistakes, what is the “in principle” empirical data for psychology? I’m not trying to be facetious, but I am honestly at a loss to describe how psychology is a science, even though I believe it is a science.

  • lix

    Dan L. – Deciding whether or not to believe a proposition is NOT deriving an ought from an is. Not in the moral sense of “ought”, which is how it’s used here. Believing is not something we do because we think we ought to morally. We say we believe things when we are firmly convinced that they are true. That has nothing to do with morality.

    It’s true we often say “I ought to believe X” – what people usually mean by this is not that we feel a moral obligation to believe X, but that we think the evidence supports X despite our inclination to disbelieve it. Generally speaking, people think it’s in their ultimate interests to believe things that are true, and generally speaking, people think evidence is ultimately more reliable than their intuitions. Hence this is a self-interested “ought”, not a moral “ought”.

    You can certainly derive self-interested “ought” from “is” if you know your preferences well, but that has nothing to do with morality and is absolutely not “ought” in the sense Sean refers to.

  • Dan L.

    @DaveH, 20:

    If there is a division between “is” and “ought,” it doesn’t get there by baldly asserting that it is there. “Ought” is neither precisely defined nor used in an esoteric context where its meaning would be clear.

    When a proposition is asserted, we need to figure out whether to believe it or not. We might rely on the say-so of friends and loved ones, the perceived ignorance or expertise of the source, etc. There is no one optimal formula; agreeing with the expert against the charlatan always brings with it the risk that the expert is wrong and the charlatan happens to be right. Often, believing a proposition despite the say-so of experts will be remembered as courageous if that proposition turns out to be true after all. Since there is no one right way of figuring out whether to believe a particular proposition or not, we need some set of criteria or heuristics.

    These will not resemble scientific laws. They are values. Not necessarily moral values (I actually think they are, but I’m not going to try to make that case). However, since they are nonetheless values, I think that “ought” applies.

    I don’t think it is possible to get very far at all with the argument that presuppositions of science are equivalent to ethical presuppositions, unless you subscribe to the radical view that there is no ‘is’. This is a million miles away from the position of Sam Harris, of course.

    Yeah, of course — I think you may have missed my point. Which was: “is” questions are just as subject to presuppositions as “ought” questions; the difference lies in the fact that the presuppositions needed for the “is” questions readily yield falsifiable propositions, which makes it easy to sort out “good” and “bad” sets of scientific presuppositions (theories, vaguely). If ethical presuppositions could yield propositions that are falsifiable in the same way as scientific presuppositions, the “is”/”ought” divide would disappear, because we would all start tossing out ethical presuppositions that were obviously invalid, and the average set of ethical presuppositions would start to show a lot less variation. Which is kind of how science works: you start with a set of theories and throw them out as their predictions are falsified.

    So I agree with Harris that the “is”/”ought” divide is synthetic, but I disagree that we can derive an optimum moral code using the methodology of the sciences. I agree with Carroll that trying to use science to derive an ethical system is a fool’s errand, but I disagree that “is” and “ought” only apply in fundamentally different contexts.

    Why? Because “is” and “ought” are linguistic conventions and have no normative power over the universe — the same problem that comes up during almost any argument with a philosopher. You can talk about necessary conditions until you’re blue in the face, but if I find a real-world counterexample (perhaps a situation in which deciding what to believe is itself an expression of moral values?) it’s the philosophical theory and not the universe itself that is in need of revision.

  • DaveH

    what is the “in principle” empirical data for psychology?

    Matt, I don’t understand what you mean by that question.

    You’re not completely at a loss to describe how Psychology is (or can be) a science, because you yourself mentioned the use of measured, independently verifiable, data sets to advance theories. That’s not a complete definition, but it’s a start.

  • Dan L.


    @lix: I disagree utterly. First of all, it’s not clear that deciding whether or not one “ought” to believe a proposition is not a moral decision. I’m rather sure that it is at least sometimes a moral decision; my first inclination is to argue that it is pretty much always such, though often without much consequence.

    Second of all, it’s not clear that the “moral sense of ‘ought'” is entirely distinct from other uses, which should be clear from my assertion above. For example, you bring in a “self-interested ought.” I’m pretty sure this is just the moral ought, perhaps expressed by someone with a more relaxed set of moral values.

    Unlike the vaccination question, there is no definite right or wrong here. You don’t know — or get to decide — what ought means any more than I do. So please stop condescending and just consider this as a different approach to the problem than yours. Maybe you can learn something from it.

  • Mark

    @Matt Tarditti: The empirical data gathered in psychological research is abundant. Behavioural data, such as reaction times or accuracy measures, as well as data on psychological constructs, such as personality measures. Both classes of data certainly meet the criteria for being measured and independently verifiable.

    I think the ought/is argument is an important one, but I think it’s equally important (or perhaps more so, at least for any consequentialists out there) to acknowledge that the scientific work Harris and others are calling for is already well underway. It’s just not presented as the science of morality – it’s presented as the thing we’re all actually talking about – the science of well-being, Positive Psychology. While it is sometimes presented in a feel-good unscientific way, in principle it can answer most of the silly-sounding-but-necessary questions, given appropriate assumptions or answers to the remaining questions.

    Here’s an example of some of that work:

  • DaveH

    “is” and “ought” are linguistic conventions and have no normative power over the universe

    Then, all distinctions are synthetic, since everything is spoken of using a linguistic convention. Thank you, Wittgenstein – I feel that if you’re going to say stuff like that, you might as well not say it!

  • lix

    @Dan L.: Sorry my answer appeared condescending, it was not intentionally so. It still seems to me there’s a clear distinction between moral and self-interested use of “ought”.

    Morality is about what one should do at a broad level, encompassing society’s interests and potentially non-utilitarian values and goals as well. Self-interest, in contrast, is simply about how to satisfy one’s personal preferences, and is therefore relatively easy to evaluate.

    Here’s an example of a self-interested ought:
    “I want an apple. But the apples are not here; they’re in the kitchen. So I ought to go to the kitchen and get an apple.”
    Here’s an example of a self-interested ought in the context of belief:
    “I’d like to believe in unicorns. But, I can’t find any evidence of unicorns, and I really don’t want to believe falsehoods. So I ought not believe in unicorns.”

    In both cases, “ought” is used simply to derive logical consequences of existing knowledge in the context of known preferences.

    Here’s an example of a moral ought:
    “I ought to love my enemy.” In this context, the “ought” is not based on logic or any utilitarian goal; it’s just something some people feel is important, regardless of the context or consequences. The problem is, different people and cultures have very different feelings about these things, so there is no simple or universal preference function. As well as having different basic values, different people place different weights on other peoples’ interests, and some groups believe in absolute moral systems that don’t even allow for preference functions.

    It’s true, I have heard some people say things like “I ought to believe in God”, which sounds like a moral ought applied to a belief. But in my experience, those were always people who had grown up in a strong religious context and found atheism logically compelling but feared the social consequences they would face if they admitted it. So that comes back to self-interest again – just an unusual case where believing a lie is a self-interested act. Similarly, when someone says: “I ought to trust her” – what is really meant is “I don’t entirely trust her, but I fear the consequences of my mistrust and therefore think it’s in my interests to suppress it”.

  • Gordon

    Well, maximizing well-being is simply John Stuart Mill’s Utilitarianism. Also, saying you cannot quantify ‘well-being’ basically says that what neuroscientists are doing with fMRI and PET scans is also not science. I think that the problem between Sean and Sam is that Sam is a neuroscientist and Sean is a physicist, and the cultures and definitions are somewhat different. Sure, morality is a concept, not a particle. It is related to a mind-state. That does not mean that it isn’t part of objective reality so long as human cultures exist, in the same way that, for example, hunger and thirst exist. Sean is just being a positivist reductionist, which, by the way, is OK.

  • Mark Sloan

    The methods of science are fully capable of telling us what the common underlying principle or principles are (if any exist) of all cultural moral standards and common moral intuitions of mentally normal people.

    For instance, is there a hypothesis about what this underlying principle might be that meets normal criteria for scientific utility better than any other hypothesis? These criteria might include explanatory power for cultural moral standards and moral puzzles and predictive power for common moral intuitions, universality, no contradictions with known facts, and so forth.

    If such new scientific knowledge is found, it could be examined to see if it, like any scientific knowledge, could be exploited for our benefit.

    The evolution of morality literature shows the leading candidate for the underlying principle of moral cultural standards and moral intuitions is that they are heuristics and strategies for exploiting the benefits of cooperation for the cooperating group. If all moral behaviors are expected to, on average, produce benefits for the group, then it might be a rational choice to accept the burdens of such a definition of morality. No magic oughts (no sources of justificatory force beyond reason) would be required.

    So there may be no necessity to derive ‘ought’ from ‘is’.

    It may also be a good idea to drop the idea that somehow the well-being of conscious beings is a definition of the goal of moral behavior that is justifiable as based in science. It isn’t. It is not even competitive.

  • Charon

    @Matt Tarditti: DaveH was using “in principle” for string theory, not psychology. The latter already has lots of “in practice” examples. They abound, but I would recommend Patricia Churchland’s Brain-Wise: Studies in Neurophilosophy. It has many examples of psychological experiments giving evidence about hard problems like the nature of consciousness, our perception of color, our mental representations of the world, etc.

    Psychology isn’t physics. There’s no simple place to start, like pendulums or planetary orbits, and people are much more complicated than electrons. So there’s been a lot of guessing in the past. Still, even the most cursory examination of modern psychology reveals a strong empirical component.

  • Doug

    Yes, You can.
    You can derive ought from is, as long as you are clear on the meaning of morality. In order for there to be a morality (ought), there has to be an action that is taken that is either moral or not. And a human being will make the choice of whether or not to perform that action. That’s ethics. All this aggregated stuff is politics. So let’s lay the politics aside for the moment.

    So what is IS? What is, is that humans are living creatures that must use their physical and mental abilities to survive and thrive in the world. A guy on an island does what he must to survive. He picks fruit, kills fish and land creatures, and he is moral, because he is doing what he needs to do to survive. Now, we must agree that no human has more intrinsic right to survive than any other. If we cannot agree on this, there is no morality, for we can choose to do whatever we want to whomever we choose. Thus, when he comes into society, he has to deal with other people, with the same survival rights as he, some of whom may have something he needs, some of whom want what he has. Now, he has a choice. He can take what he wants by force, or he can trade. There is only one MORAL choice, based on what IS.

    Let’s look at the morally correct choice. He trades. This unique (yet common) action enables him to give up something for something he wants more. Both parties have different values, but both get what they want in the end. The trade takes this differing value into account, and it is just the differing values between people that make this trade possible.

    Your paper confuses individual morality with societal choices. So now let’s talk politics. But now you are talking about someone making choices for someone else, without knowing their values. This is where you get hung up. You cannot imagine a system that encompasses everyone’s values, or even makes them all feel best. Well, you need not look for a system that accounts for people’s differing values. The process by which people openly trade goods or services or money for the things they need (that maximize their well-being) already exists. It is called capitalism. No need for “right thinking people” to decide what would be best for everyone else. And this Ought is actually derived from the IS. The system of trade and money was not invented, or decreed. It grew from the behavior of humans acting rationally to each other’s benefit, and it has thrived because it has provided a way for humans to survive and thrive to become the dominant species on the planet.

  • SteveN

    Sean says “In the real world, we have moral feelings, and we try to make sense of them. Moral relativism doesn’t imply moral quietism.” But Sean seems to be saying that moral relativism is unavoidable, which I agree is distressing. Clearly it’s impossible to come up with a universal definition of collective well-being, but an interesting question is how exactly, in the final analysis, do people arrive at their own “internal moral judgments”? People are making moral judgments all the time. There must be underlying reasons for a particular “moral choice” that could in principle be understood scientifically, if we understood this person’s complete history and biology. A person believes certain moral choices are optimal for him/her (self-interest), and of course people will differ on self-interest. But exactly what is the input (biological and environmental) that determines our “moral” choices? In other words, there are scientific reasons why people make the moral judgments that they do. Perhaps someone believes that the “greatest good” is the taste of a hot fudge sundae. His brain is wired to like hot fudge sundaes (is), so he eats them (ought). A definition of collective self-interest may never be possible, but an individual’s “ought” perhaps can be objectively understood.

  • spyder

    This is not intended at all to be snarky, but there are very skilled scientists working on these problems who need to be heard. Jim Lippard’s point in comment 3 is based, in part, on the developments of scientific studies of consciousness, particularly AI efforts. Researchers at Stanford, for example, are trying to encode human traits such as charity and humor into AI machines. There are advanced efforts at the University of Texas on tracking and training AI towards understanding metaphor. These sorts of studies are going on all over the planet, and it is only a matter of time before we have the formulae for ought and is, and a whole lot more.

  • Jim Lippard

    greg (#13): You may be talking about work by Joshua Greene et al. at Harvard, about which you may want to read Selim Berker’s “The Normative Insignificance of Neuroscience,” _Philosophy and Public Affairs_ 37, no. 4 (2009), pp. 293-329 for a critique.

  • Bee

    Isn’t this discussion a century old or so? What’s the meaning of the utility function, and can it be aggregated? Didn’t economists settle on the view that it’s not happiness, that it can’t be aggregated, and that you don’t have to aggregate it anyway?

    This just reminds me I wrote a semi-finished paper on this issue which is still waiting to be finished. Why did you have to remind me of this??? To make a long story short, it supports your point of view. There’s actually people who think that one should measure brain activity to determine somebody’s happiness as an objective measure. Just imagine that, if your neurons happen to fire less your opinion counts less. If you’re interested in a draft of my paper, pls send me a note, hossi at nordita dot com, I’d be interested in your opinion. And maybe I’ll finish it at some point…

  • Jason Dick

    Well, I think I would take a very different tack in order to attempt to approach morality in as objective a manner as possible. Here are the steps I would take:

    1. Assume that people have their own views on morality, on which behaviors are desirable or undesirable, on what outcomes are desirable or undesirable.
    2. Assume that the “best” morality for an individual stems from minimizing contradictions between the various desired behaviors and outcomes.

    Note that if it so happens that we all share a basic “moral grammar”, as some have put it, such that there are broad areas of agreement, then merely reducing the contradictions in our own moral attitudes (a process to which science is uniquely suited) automatically reduces areas of disagreement between individuals. Science can also tell us whether or not this common moral grammar exists, and anthropological studies have, as near as I can tell, borne this out.

    Also note that simply maximizing individual utility, though it will not always maximize global utility, does tend to go largely in this direction. This has been borne out in various ethical games that have shown that in games where people have to interact over and over again, the strategies that do best are those that make it so that other people in the game are most likely to deal fairly with them: even if the game is such that cheating offers potential huge benefits, other members of the game remember the cheating, preventing such cheating from working properly.
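    The iterated games mentioned above can be made concrete with a small sketch (not from the comment itself; strategies and payoffs are the standard Axelrod-style iterated prisoner's dilemma, with illustrative numbers). It shows how, over many repeated encounters, reciprocal strategies outscore pure cheating even though the cheater wins any single first encounter:

```python
# A tiny Axelrod-style round-robin tournament (illustrative sketch).
# Payoffs: mutual cooperation 3/3, mutual defection 1/1,
# sucker's payoff 0 vs. temptation 5.
from itertools import combinations

PAYOFF = {('C', 'C'): (3, 3), ('C', 'D'): (0, 5),
          ('D', 'C'): (5, 0), ('D', 'D'): (1, 1)}

def tit_for_tat(opp_history):
    # Cooperate first, then mirror the opponent's last move.
    return opp_history[-1] if opp_history else 'C'

def grudger(opp_history):
    # Cooperate until the opponent defects once, then defect forever.
    return 'D' if 'D' in opp_history else 'C'

def always_defect(opp_history):
    return 'D'

def play(s1, s2, rounds=200):
    h1, h2, score1, score2 = [], [], 0, 0
    for _ in range(rounds):
        m1, m2 = s1(h2), s2(h1)     # each sees only the other's history
        p1, p2 = PAYOFF[(m1, m2)]
        h1.append(m1); h2.append(m2)
        score1 += p1; score2 += p2
    return score1, score2

def tournament(strategies, rounds=200):
    totals = {s.__name__: 0 for s in strategies}
    for a, b in combinations(strategies, 2):
        sa, sb = play(a, b, rounds)
        totals[a.__name__] += sa
        totals[b.__name__] += sb
    return totals

print(tournament([tit_for_tat, grudger, always_defect]))
```

    The defector beats each cooperator head-to-head by a few points, but the cooperators' mutual gains leave it far behind in the overall standings, which is the dynamic the comment describes.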

  • AlexK

    Hello Sean

    If I understand it correctly, your argument is basically: don’t call it science if you can’t construct an experiment to test the theory. It’s not that the pursuit of scientifically proving morality good or bad is the problem; it’s just that any response that doesn’t answer this question is basically wrong. I’d like to add that I would love it to be true, in order to make the world a better place, but I have a few difficulties accepting it.

    I think it’s impossible to know because: let’s say I lead a perfect life that maximizes my potential for well-being, ingenuity, and creativity, in a world where everyone does the same. What about all the ingenuity, beauty, and creativity that come from hardship? How can you measure which one is more beneficial (or greater)?

    I’ve come to the conclusion that this is the exact reason why it can’t be called science, but the topic is a very interesting philosophical one, and as you said, everyone derives his own conclusions.

    But finally, I believe the furthest we can go in our pursuit to understand morality, or what is best scientifically, is the realization that diversity is necessary and no one road is ideal. This knowledge creates a new feeling, attitude, and perspective, and once all people experience this, maybe then we could reconsider the question.

    Thank you for convincing me of this.

  • Bee

    typo: it’s hossi at nordita dot org

  • Sebastien

    Interesting post. I agree with your conclusions, but I would state the argument in a slightly different way.

    Science is the domain of what is, while morality is the domain of what should be (or ought to be). These are simply two very different things. “What the world should be” is a value judgment, a preference, not some basic fact about the universe.

    Let’s look at it this way: the observable universe is something like 10^88 particles in a particular configuration. Now, morality is basically about saying “the universe should be in configuration A instead of in configuration B”. Wait… what? Why should it be? Because some people – perhaps most people, perhaps even every single person in the world – think so. Ok, but that’s only a fact about people, not about the universe. I mean… that A is better than B is not written in the stars. The universe simply is, and it doesn’t give a damn what configuration it’s in. In other words, reality has no moral dimension.

    Or to put it differently: science only describes what is, and that includes no judgment whatsoever. Therefore, science can tell you “everybody wants A” or “everybody would be happier under A” or “A would maximize human well-being”, but it stops there. The judgment, the “ought”, the “therefore we should do this” must be added for it to become morality, but such an act is outside the scope of science. Science doesn’t make any judgment; it only describes the world.

    Which is sort of why I’m not crazy about your arguments. Imagine that:

    – Everybody could agree on the same definition of well being.
    – Everybody agreed that maximizing utility was the goal of morality.
    – We could agree on a way to aggregate well-being.

    So what? That would only mean that we agree. “What should be” would still remain a human perspective that goes beyond a description of what is.

    And don’t get me wrong: it’s obvious that science has enormous potential for morality. Science can help us understand the origin of our moral preferences. For instance, here is a fascinating piece by Steven Pinker on the moral instinct.

    And once we agree that some outcome is preferable to some other one, science is by far the best tool we have to determine the best way to reach that outcome. But once all that is said and done, the fact remains that the basic premise of morality – that some things should be preferred to others, that there is such a thing as “ought” – is not a part of an empirical description of reality.

  • Ben

    I’m a little late to this game, but I’ll try to say more in a bit. Here’s a quick reply:

  • Matt Tarditti

    I wish I could have responded earlier, but so be it…
    True that physics and psychology both have well-documented, empirical foundations. But here is what troubles me about Carroll’s implied definition of “empirical” as applied to this argument:
    “Two scientific theories may disagree in some way … Whenever that happens, we can always imagine some sort of experiment or observation that would let us decide which one is right.”
    When I read this, it almost seems like Carroll is limiting science to knowledge about the universe that can be obtained in an algorithmic sense. If A is true, then C must be true. But if B is true, then D must be true. But in psychology, if A is true, then C is probably true, but D could also be true. It obviously comes down to statistics. But if psychology and physics can be scientifically described in terms of statistics, what prevents morality (as Carroll has defined it) from falling into the same realm?

    If no one wants to engage that argument, I understand… it’s a dead horse. But Sebastien’s post at #41 is great. I hope that there is a coming rebuttal to his argument, not because I disagree, but I’m just wondering how you construct a rebuttal to it in the first place.

  • Steve Esser

    FWIW, a simple rebuttal on this “is-ought” business:

    #41: “that some things are to be preferred to others…is not part of an empirical description of reality.” I think this is wrong. Our make-up includes a huge bundle of various preferences – they are natural facts about us.

    Our moral impulses are natural facts, too. There is good work underway fleshing out the evolutionary origins of our moral sense. Our development of a science which seeks in part to develop and fine-tune these impulses doesn’t introduce anything supernatural.

    The “ought” here is the coupling of the facts about experiential well-being (and its causes) with our moral desires – but these latter are also natural facts!

    (And yes, this is all very speculative and the difficulties are immense, etc., but pointing those things out is not an argument that it is impossible in principle).

  • kravien


    First of all, founding morality in the new science may be horrible – we answer scientifically questions about all other life forms and that justifies and makes possible and efficient our behavior, called largely the conquest of nature. What if we did that to ourselves? What if you could say a child will probably not be happy or just because of his genes or something like that – you would have the justification to kill him.

    Second of all, the important matters could neatly be summarized thusly: what are the alternative views of happiness? Which are compatible with the new science and which not? In fact, he simply says let’s use science and forget about everything else. What if science is not the source of – or compatible with – the greatest possible happiness? I am not declaring it is, but it seems like the question is worth asking…

    Third of all, the problem of moral relativism is political: if there is no justification for moral judgment, laws are baseless, and therefore justice is nothing but the advantage of the stronger: there is no truth to moral problems – everything that matters to us – therefore whoever can impose his will is justified or at least is not susceptible to blame. If moral relativism, by some liberal pipe-dream, implied moral quietism, we would be safe perhaps. But it does not – and it justifies what we may call alternative lifestyles, but used to be called horrors. Consider that the problem with quantifying well-being is that it may feel good to love a beautiful woman – but also to see others tremble with envy or jealousy. It might feel good to eat fine dishes or contemplate your worthy children; but also to fight wars and oppress people. In fact, maybe tyranny just feels best…

  • BlackSun

    This is a recitation of a traditionalist view, which cannot manage to see human beings for what they are: complex systems of biomachinery. Because of all the complexity and the opaqueness of human motivation, it seems we cannot even arrive at a consistent set of principles for deriving morality.

    But this is false. We simply haven’t gotten there yet. Once the human brain is reverse engineered and understood, once commonalities are established between people with *seemingly* different moral viewpoints, we will begin to unravel this mystery.

    Bottom line, our moral instincts are shorthand about what our minds have analyzed to be the best methods for human flourishing. We can rightly exclude brain pathology from this discussion. Just as we would exclude any piece of broken machinery from the analysis of functioning models.

    As Sam Harris correctly pointed out, morality is concerned with the well-being of conscious creatures. It is a departure from one set of goals of our biological machinery–that of amoral reproduction and domination of genetic competitors–to another finely tuned set of goals. Civilization and prosperity have finally given our altruistic and cooperative natures a means of expression. Empathy provides some measure of understanding of those who are suffering. Mirror neurons tell us we should care about them.

    We are fortunate enough to have available to us a wealth of information about conscious systems, and we are soon to get a lot more. To imply that no consistent pattern or theory exists in this data is laughably short-sighted.

    I’m willing to concede this is a human-centric viewpoint. But since human flourishing is tied in with the flourishing of ecosystems which include other species, science based morality would inform a broader-based view of ecosystem and social sustainability. This is the new science of morality, and it is in its infancy. I fully expect the naysayers to continue until such time as the discipline becomes better established. It will be a cooperative effort between neurologists, sociologists, anthropologists, psychologists, zoologists and environmental scientists, to name a few. This is the human equivalent of a “theory of everything.”

    The fact that a “theory of everything” is elusive hasn’t stopped physicists from looking for it, nor should it slow, even in the smallest degree, our progress toward a scientific understanding of morality. Like many other objections to science, this seems to be largely about other disciplines not wanting to cede power to a new objective regime they cannot control. That is where the study of morality is headed–toward the realm of evidence which may challenge all of us to redefine and abandon long-cherished but outworn beliefs.

  • Pingback: Darwiniana » Carroll vs Harris

  • Pingback: Black Sun Journal » In Support of a Scientific Morality

  • Quine

    I am with Sam on this one. First off, others have dragged Hume and the is/ought into the discussion; Sam never claimed to have overturned Hume. Part of this is because the “ought” in Hume is the provably optimum ought, which, again, Sam does not claim. It is trivial to show that you can get an “ought” from an “is” if it does not have to be correct (you could roll dice). Sam is asking us to use the knowledge we have obtained from the scientific method to engineer a better moral system than what so much of the world has inherited from bronze age scripture.

    I also want to stress that this is not science, it is engineering. Science did not tell us to get rid of smallpox, we decided that was a “good” thing to do, and used the knowledge about what smallpox was, gathered through the scientific method, as a basis to engineer a method to get rid of it. We are currently engineering methods to reduce malaria. Can we engineer methods to improve the lives of the people of the world by making changes to the moral codes handed down from the past? Almost certainly. Can we prove an optimum? No, not even in principle. Well, if we can use scientific knowledge of the world to do better, shall we let the (provably unobtainable) perfect be the enemy of the good (or at least better)?

  • Sam D

    Hume’s skepticism is irrational mainly because it can be applied to everything we know, including the scientific method. According to Hume, when we claim that an object falls due to gravitational force, we are making an assumption, because we do not know for a fact that it will happen. Similarly, morality cannot be epistemically objective because it is based on assumptions. But I believe this is a bad argument, for two reasons.
    First, skepticism has its limits: if we do not accept some axioms, progress and improvement in thinking are not possible.
    Second, similar to the law of gravity, there are laws that can be derived from human nature. For example, freedom of speech is a moral right that every human being can enjoy regardless of their culture, because language is an innate human capacity.

  • David

    I actually disagree with the original definition of morality: that it is *maximizing* the sum of well-being over everyone. Surely there are other functions we could consider, such as *maximin*: maximizing the minimum well-being of anyone. Otherwise I think you can always construct examples where, for example, murder is justified to benefit some group.
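    A toy sketch, with made-up numbers (none of these figures come from the thread), of how the two aggregation rules David contrasts can disagree:

```python
# Two hypothetical distributions of individual well-being scores.
society_a = [9, 9, 9, 9, 1]  # most people thrive, one person suffers badly
society_b = [6, 6, 6, 6, 6]  # everyone is moderately well off

def total(wellbeing):
    """Sum aggregation: add up everyone's well-being."""
    return sum(wellbeing)

def maximin(wellbeing):
    """Maximin aggregation: judge a society by its worst-off member."""
    return min(wellbeing)

# The sum rule prefers society A (37 vs. 30); maximin prefers B (6 vs. 1).
print(total(society_a), total(society_b))
print(maximin(society_a), maximin(society_b))
```

    Nothing in the data picks out which rule is correct; choosing between them is itself a moral judgment.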

  • MedallionOfFerret

    Good post, Sean. I came to CV for science; I get a bonus like this post every so often. Keep it up.

  • Ronan

    This is nice; that first “Is/Ought” post got me thinking on the subject (the first time I had ever heard of the is/ought conundrum, in fact), and now…back we come again. I’d argue that deriving the Ought from the Is isn’t necessary, because the ought already is; or rather, there are a whole bevy of oughts running around, in the form of everyone’s individual ideas of what should be. I don’t quite see why one should have to worry about whether they’re “true” (whatever that means, in this context), because regardless of that they exist. Seems like it would be sensible to follow along with those oughts, and do one’s best to make sure that whatever oughts one encounters, or can deduce to exist in other people, are followed through with–because, again, trying to figure out what context they should be true in seems difficult, impossible, or nonsensical. They exist, and resisting them or ignoring them is even more pointless (from a purely nihilistic point of view, mind) than following them, so…Hey, why not?

  • William Sidell

    What is existence?

    Existence is what is and is defined by the individual.

    What is science?

    Science is the attempt by an ‘exister’ to understand the existence that he believes he occupies by using certain tools.

    What is a tool?

    A tool is an instrument that controls the way an ‘exister’ perceives his existence (whether or not what he perceives is actually true).

    Why is this of consequence?

    An ‘exister’ defines his existence.
    Existence is defined by science.
    Science is defined by tools.
    Perception is the resultant of tools.
    Tools define what is for an ‘exister’.
    Ought is a subset of is.

  • GTChristie

    Much of the “is/ought” debate can be simplified by admitting (or submitting) that “morality” is not a form of knowledge. There may be empirical things we can know or learn about morality, which might tell us what morality “is,” but that exercise does not inform actual judgments; knowing what a judgment is does not crank out judgments themselves. Hume’s separation of is from ought was brilliant but sometimes we don’t get the upshot: judging is a process informed by something other than facts (there are no moral facts). This doesn’t render judgment (or ethics or morality) impossible. It just makes judgment a product of something other than science.

  • DaveH

    Human Flourishing sounds like some nasty process in an episode of Dr Who :-D.
    On the other hand, Flourishing could be a stimulating form of BDSM.

  • BigMKnows

    I suggest a formal debate. The Carroll vs Harris Debate!

  • Kevin

    Excellent post.

  • Vincent

    Sean, you have made a few mistakes.

    “There’s no single definition of well-being”.

    How is this a problem *in principle*? As Sam has pointed out, there is no single definition of ‘health’, but that hasn’t been a problem for medical science. And even if a single definition were needed for Sam’s case, the non-existence of this definition is a problem *in practice*, not in principle. We can easily imagine that the day may come when we will settle on a definition of ‘well-being’.

    “…what is the experiment I could do that would distinguish which was true, consequentialism or deontological ethics?”

    This is a strawman. Sam has repeatedly said that when he uses the word ‘science’, he uses it in a broad sense, encompassing all fields of rational inquiry (e.g., philosophy, history, mathematics, and science). There may not be an *empirical* test to distinguish among rival moral theories, but there may be *conceptual* or *philosophical* arguments that could do so.

    “There’s no simple way to aggregate well-being over different individuals.”

    Again, this may be a problem in practice, but how is it a problem in principle? Sam’s argument does not hinge on whether we could currently aggregate well-being across individuals. Even if different people experience well-being in different ways, that’s not a problem. Different people enjoy different types of music. To place all individuals in a state of musical enjoyment would not be to play one type of music to all of them. Rather, it would be to play to each individual the type of music that he or she favors. Similarly, we could in principle arrange our societies so as to cater to the different well-being requirements of different people.

    “…it would generically be the case that no achievable configuration of the world provided perfect happiness for everyone”

    This is another strawman. Sam has never claimed that his argument hinges on the possibility of providing perfect happiness for everyone. Sam has explicitly said that he conceives of a moral landscape with different peaks and troughs. The peaks do not represent *perfect* happiness for *everyone*. Rather, the peaks represent the maximum *possible* happiness.

    “…pointing out that people disagree about morality is not analogous to the fact that some people are radical epistemic skeptics who don’t agree with ordinary science”

    Ordinary science is based on axioms (e.g., we must be logically coherent in our descriptions, we may base predictions on past observations). Sam is arguing that moral reasoning is also based on axioms (namely, that we should pursue well-being and avoid suffering). Those who reject the axioms of ordinary science have no way to construct a rational body of knowledge about the world. Sam is arguing that those who reject his axioms of moral reasoning have no way to construct a rational set of moral imperatives. Indeed, what would it mean to construct a set of supposedly ‘moral’ imperatives that would, for instance, assign rights and duties to inanimate objects, or advocate the greatest possible misery for all conscious creatures?

  • Par la Grâce de Dieu, NIKOLAI III, EMPEREUR et Autocrate de toutes les

    One of the most frightening things about the Nazis is not that they had different moral standards from those of the rest of us: it is that they didn’t. They knew that what they were doing was wrong — and in fact that’s a major reason why they went ahead and did it. See eg Himmler’s “secret speech”.

  • RichardW

    @Phil Plait

    I think Sean has misstated the problem slightly. Perhaps we could invent an aggregation scheme, if we chose objectively measurable criteria for well-being. But there is no objective basis for choosing one such aggregation scheme over another. It may seem obvious to choose a scheme of universal equality, i.e. one where each individual’s well-being counts equally. But why should we? We can’t even ask that question because it presupposes a prior moral standard (a prior should/ought). Many of us do support such a scheme–at least when it comes to basic human rights–but the ultimate reasons for us to do so are our subjective preferences.

    Moreover, while we may support equal basic rights for all humans, few of us actually treat all humans as having equal moral value. We naturally give preference to the well-being of our own children over children in a distant country, and we don’t generally feel that we are being immoral in doing so. I don’t think we would accept a moral standard that told us that we (as individuals) had a moral duty to consider the well-being of all children equally.
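    A minimal sketch of RichardW’s point, again with entirely made-up numbers: the weights in an aggregation scheme themselves encode a prior moral choice, and two internally consistent weightings can reverse the verdict.

```python
# Well-being scores for four children: [my_child_1, my_child_2, distant_1, distant_2].
outcome_a = [9, 9, 2, 2]  # I direct resources to my own children
outcome_b = [6, 6, 6, 6]  # I spread resources around equally

def aggregate(scores, weights):
    """Weighted sum of individual well-being scores."""
    return sum(s * w for s, w in zip(scores, weights))

equal_weights = [1, 1, 1, 1]  # every child counts the same
kin_weights = [2, 2, 1, 1]    # my own children count double

# Equal weighting prefers outcome B (24 vs. 22); kin weighting prefers A (40 vs. 36).
print(aggregate(outcome_a, equal_weights), aggregate(outcome_b, equal_weights))
print(aggregate(outcome_a, kin_weights), aggregate(outcome_b, kin_weights))
```

    Both computations are perfectly objective once the weights are chosen; what no measurement supplies is the weights.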

  • Cartesian

    For me the cartesian morality is based on the usual medicine and psychology/psychiatry for a lot of things, so it is based on science.

  • greg

    @Jim Lippard (36) – thanks for the reference and the critique of Greene’s work. For anyone who is interested, there is a working draft of the paper Jim mentioned available here. Final version is behind a journal paywall.

  • RichardW

    P.S. If we focus too much on how to measure well-being, or how to aggregate across individuals, there’s a danger of losing sight of the logically prior question: why do we want such a formula in the first place? What’s it for?

    When we come up with scientific laws, there are two fairly obvious answers to “what’s it for?” We want scientific laws because we (a) want to understand how the world works, and/or (b) want to control the world.

    But there is no equivalent answer to the question of what this formula for maximising well-being is for. To say “it’s for maximising well-being” is tautologous. The only reasonable answer is that maximising well-being is something that people want to do. In other words, it’s a subjective matter about what people want. And different people are likely to have different preferences for what constitutes maximum well-being, whose well-being is most important and even whether well-being is the only thing that matters.

  • Yair

    Sean, I’m with you on the debate but you shot yourself in the foot with

    The job of morality is to specify what that [global well-being] function is, measure it, and derive conditions in the world under which it is maximized.

    IF you define morality this way*, half your questions go away. For example, while you “see no evidence whatsoever that they all ultimately want the same thing”, that’s saying that (psychological and neuroscientific) evidence CAN convince you otherwise – so the theory is scientific.

    Another example: “Now, you may think that you have good arguments in favor of consequentialism. But are those truly empirical arguments?” Well, they are analytic – morality AS DEFINED ABOVE simply is consequential. I don’t need empirical evidence that morality is consequentialist any more than I need empirical evidence to establish that the Schroedinger equation is linear.

    You let Harris define morality to be what he wants it to be. And if you do, it’s an empirical science. The real question is what morality is – once you settle on some naturalistic metaethics, ethics indeed becomes a science.

    * I’m assuming here that you have an implied “we all want and should maximize global well-being” as part of the moral theory.

  • DaveH

    Sam is arguing that moral reasoning is also based on axioms (namely, that we should pursue well-being and avoid suffering)

    Avoid suffering? Are you sure? Some attention might be paid to the insights of our friend Nietzsche. Avoid causing suffering? That would keep some of your pieces on the board, at least, but such presumptions are fraught with problems, as Sean (as he has with most of the comments made) has already highlighted.

    I also feel like mentioning that I really hate IKEA.

  • Pingback: Can Science Answer Moral Questions? (Pt. 2) - Science and Religion Today()

  • Pingback: Bruin Alliance of Skeptics and Secularists » BASS Meeting VI()

  • Vincent

    @ DaveH (comment 66)

    Yes, I’m sure. Avoiding suffering lies at the heart of sensible moral reasoning. If we get into the nitty-gritty details, then of course we can identify cases where we ought to endure a degree of suffering to obtain greater overall well-being (e.g., lifting weights at the gym is painful, but ultimately is good for your health and releases pleasurable endorphins). No one’s saying that to reach a peak of well-being we must never suffer. But clearly we ought to avoid needless suffering. There is a conversation to be had about how to maximize our well-being with minimal concomitant suffering, and Sam Harris is trying to lay the foundations for this conversation to take place.

  • tumbledried

    I think it probably is possible to answer moral questions based on the grounds of solid quantitative logical reasoning. However, for one thing, it is necessary to make assumptions in order to get such models to work – nothing new here, of course, just an instance of the incompleteness theorem. Also, I think, even with very simple assumptions (such as that players in some form of game are bayesian decision makers) the level of difficulty involved in building a convincing logical foundation is quite high. But I don’t think that this should stop people from trying, even if the answers at the end of the day are limited rather than absolute in extent.

    My current thoughts on the matter are that in order to get a proper description of such dynamics one needs to look at double categories, or triple categories at the very least (like 2-categories or 3-categories, but with a bit more structure). Then one needs to build on top of this a tensor theory, and then somehow use it to find appropriate Nash-type equilibria.

  • DaveH


    Clearly, needless suffering is not needed. By definition. But our relationship with suffering is far more complex and intrinsic to experience than you indicate. Various forms of suffering we even call entertainment. I question what you call an axiom.

    The discipline of suffering, of GREAT suffering–know ye not that it is only THIS discipline that has produced all the elevations of humanity hitherto? The tension of soul in misfortune which communicates to it its energy, its shuddering in view of rack and ruin, its inventiveness and bravery in undergoing, enduring, interpreting, and exploiting misfortune, and whatever depth, mystery, disguise, spirit, artifice, or greatness has been bestowed upon the soul–has it not been bestowed through suffering, through the discipline of great suffering?

    Is that not sensible? Is he not talking about the spirit of adventure, fortitude, endurance, Tragedy, Horror, The Blues… ?

  • Science

    Yes, morality won’t ever be a strictly scientific discipline but it can and should benefit from rigorous studies of consequences of various moral frameworks. Science won’t supplant morality but it can and should inform our moral choices.

  • Craig Ewert

    Vincent said:

    Avoiding suffering lies at the heart of sensible moral reasoning

    Avoiding suffering is at the heart of one kind of moral reasoning, but not of all of them.

    Look at the recent movie “Troy”. For Achilles and the Myrmidons, achieving glory is the heart of their moral code, and suffering, their own and anyone else’s, is incidental.

    Look at the recent incidents of “honor killing”. They happen because, for those men, honor is the heart of morality, and it trumps suffering, both their daughters’ and their own.

  • Craig Ewert

    Ronan said:

    I’d argue that deriving the Ought from the Is isn’t necessary…

    This is almost brilliant. Given the lack of a (nearly) universal connector between the two domains (Sam Harris says: you Ought to do what promotes the well-being of everyone, but a billion Muslims say: you Ought to do what Allah through Mohammed has commanded, etc), we should give it up as a lost cause.

    The reason everyone wants to link Ought to Is, is that nearly everyone (except schizophrenics and creationists) broadly agrees on Is, but there are dozens (hundreds?) of competing Oughts with millions and billions of adherents each.

    The analogy for Is would be if everyone in Europe thought the world was flat, and thousands of them claimed to have actually been to the edge, and seen the turtle underneath.

  • costanza

    Yawn! David Hume put this “in the can” a long time ago.

  • Pingback: Nietzsche’s Revenge: PZ Myers v. Sam Harris on whether science can assist a person in deriving an “ought” from an “is” « Prometheus Unbound()

  • Josep

    Henri Poincaré once wrote something like:

    “From premises in indicative you can not derive conclusions in imperative.”

    Seems clear enough, and brief!

  • czrpb

    Sean said: “You’re going to get bored of me asking this, but: what is the experiment I could do that would distinguish which was true, consequentialism or deontological ethics?”

    Doubtful that you will even read this, but this is where I am finally convinced you are on the wrong side here: My opinion is that 25, 50, 100 years from now this will be answerable and therefore you will be seen on the wrong side of this issue.

    I see this as no different than any other advancement in morality where the early defenders had little more than a greater share of empathy and foresight: Slavery/Slave trade, women’s rights, animal rights, etc.

    Taking animal rights: at some point in the past, you *would* have been on logical/scientific grounds if you objected to the statement “The monkey brain *is* similar to the human brain w/r/t pain sensation, and we (humans) dislike pain, therefore we *ought* not cause pain to monkeys with experiment X” with “We do not have enough *scientific* evidence that monkey brains and our brains are so similar; therefore we do not know if monkey ‘pain’ is like our own, and that is not a good enough argument to convince me to stop experiment X.” You would have been right scientifically then, but wrong morally. And returning to our time, you would be seen as on the wrong side.

    Finally, I am quite happy to condemn you *now* for your inability to recognize what I think will be commonly recognized in the future, and for this reason: I celebrate people like Thomas Paine, Lucretia Mott, Emma Goldman, Thomas Clarkson, etc. for their stands at the time, and condemn those who did not take those stands. Yes, I do consider there to be a trend of “moral advancement” in history, and that there are outliers who are looked back on as more “advanced”. I consider this whole discussion similarly.


  • The Amateur Scientist

    The author ironically commits several logical fallacies while attempting to point out a logical fallacy, the is-ought fallacy, in Mr. Harris’ work. The first is the fallacy of presupposition: he presupposes that morality IS an ought and not an is. Starting with this fallacy allows him to commit the false-analogy fallacy, in which he equates morality and science with the “is” and “ought” of the is-ought fallacy.

  • Pingback: The Science of Morality, Part I: You *CAN* Derive ‘Ought’ From ‘Is’ « Becoming Gaia()

  • Jason Streitfeld


    I’ve enjoyed your responses to Sam Harris, but I have one concern. At times, you seem to argue for noncognitivism–the view that moral judgments are not judgments of fact, and so cannot be either true or false. Yet, you also seem to support moral relativism. These two approaches have some similarities, but they are not compatible. Moral relativists maintain that moral judgments are factual–they can be true or false–but that their truth or falsity is determined locally, and cannot be extended to other people. Noncognitivism is a much stronger position, I think.

    This relates to a problem with Harris’ approach. He suggests that the issue here is a choice between moral relativism and moral realism: Sam Harris and the Moral Realism/Moral Relativism Myth.

    The majority of non-theistic philosophy professors and Ph.D.s are neither moral realists nor moral relativists. (Though there are many non-theistic moral realists, despite Harris’ claim to being the lone gunman here.) Harris ignores this, hand-waving away the philosophical terminology and the ideas it represents (such as noncognitivism), favoring a dumbed-down and ignorant discourse. This can only hurt the debate.

    Harris plays into the religious moralists’ hands by suggesting that science must provide the sort of foundation for morality which theists demand. As you suggest, such a foundation is neither possible nor necessary for a robust morality. Atheists are no worse off than theists on this front. I would say atheists are (or should be) better off, since they (should) understand why morality does not require theoretical justification. Harris is not helping any.

  • Pingback: A Lame Claim « The Signal in the Noise()


Cosmic Variance

Random samplings from a universe of ideas.

About Sean Carroll

Sean Carroll is a Senior Research Associate in the Department of Physics at the California Institute of Technology. His research interests include theoretical aspects of cosmology, field theory, and gravitation. His most recent book is The Particle at the End of the Universe, about the Large Hadron Collider and the search for the Higgs boson. Here are some of his favorite blog posts, home page, and email: carroll [at] .

