Does This Ontological Commitment Make Me Look Fat?

By Sean Carroll | June 4, 2012 8:39 am

3:am magazine (yes, that’s what it’s called) has a very good interview with Craig Callender, philosopher of physics at UC San Diego and a charter member of the small club of people who think professionally about the nature of time. The whole thing is worth reading, so naturally I am going to be completely unfair and nitpick about the one tiny part that mentions my name. The interviewer asks:

But there is nothing in the second law of thermodynamics to explain why the universe starts with low entropy. Now maybe it’s just a brute fact that there’s nothing to explain. But some physicists believe they need to explain it. So Sean Carroll develops an idea of a multiverse to explain the low entropy. You make this a parade case of the kind of ontological speculation that is too expensive. Having to posit such a huge untestable ontological commitment to explain something like low entropy at the big bang you just don’t think is worth it.

There is an interesting issue here, namely that Craig likes to make the case that the low entropy of the early universe might not need explaining — maybe it’s just a brute fact about the universe we have to learn to accept. I do try to always list this possibility as one that is very much on the table, but as a working scientist I think it’s extremely unlikely, and certainly it would be bad practice to act as if it were true. The low entropy of the early universe might be a clue to really important features of how Nature works, and to simply ignore it as “not requiring explanation” would be a terrible mistake, even if we ultimately decide that that’s the best answer we have.

But what I want to harp on is the idea of “ontological speculation that is just too expensive.” This is not, I think, a matter of taste — it’s just wrong.

Which is not to say it’s not a common viewpoint. When it comes to the cosmological multiverse, and also the many-worlds interpretation of quantum mechanics, many people who are ordinarily quite careful fall into a certain kind of lazy thinking. The hidden idea seems to be (although they probably wouldn’t put it this way) that we carry around theories of the universe in a wheelbarrow, and that every different object in the theory takes up space in the wheelbarrow and adds to its weight, and when you pile all those universes or branches of the wave function into the wheelbarrow it gets really heavy, and therefore it’s a bad theory.

That’s not actually how it works.

I’m the first to admit that there are all sorts of very good objections to the cosmological multiverse (fewer for the many-worlds interpretation, but there are still some there, too). It’s hard to test, it’s based on very speculative physics, it has a number of internal-consistency issues like the measure problem, and we generally don’t know how it would work. I consider these “work in progress” types of issues, but if you take them more seriously I certainly understand. But “wow, that sure is a lot of universes you’re carrying around” is not one of the good objections.

When we’re adding up our ontological commitments (i.e., the various beliefs about reality we are willing to hypothesize or even accept), the right way to keep track is not to simply add up the number of objects or universes or whatevers. It’s to add up the number of separate ideas, or concepts, or equations. There are an infinite number of integers, and there are only a finite number of integers between zero and a googol; that doesn’t make the former set somehow ontologically heavier. If you want to get fancy, you could try to calculate the Kolmogorov complexity of the description of your theory. A theory that can be summed up in fewer words wins, no matter how many elements are in the mathematical structures that enter the theory. Any model that involves the real numbers — like, every one we take seriously as a theory of physics — has an uncountable number of elements involved, but that doesn’t (and shouldn’t) bother us.
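
To make the bookkeeping concrete, here is a toy sketch (my own, purely illustrative; nothing about the physics hangs on it): two tiny programs, one generating the infinite set of non-negative integers and one generating the merely enormous finite set below a googol. Measured by description length, a crude stand-in for Kolmogorov complexity, the two weigh about the same, even though the sets they generate could not differ more in size.

```python
import inspect

def all_integers():
    """Yield every non-negative integer: an infinite set."""
    n = 0
    while True:
        yield n
        n += 1

def integers_below_googol():
    """Yield the integers in [0, 10**100): a finite but vast set."""
    yield from range(10**100)

# A crude stand-in for Kolmogorov complexity: the length of each
# description's source text. The two are comparable, even though the
# sets they describe differ "in size" by an infinite amount.
print(len(inspect.getsource(all_integers)))
print(len(inspect.getsource(integers_below_googol)))
```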

By these standards, the ontological commitments of the multiverse or the many-worlds interpretation are actually quite thin. This is most clear with the many-worlds interpretation of quantum mechanics, which says that the world is described by a state in a Hilbert space evolving according to the Schrodinger equation and that’s it. It’s simpler than versions of QM that add a completely separate evolution law to account for “collapse” of the wave function. That doesn’t mean it’s right or wrong; but it doesn’t lose points because there are a lot of universes. We don’t count universes, we count elements of the theory, and this one has a quantum state and a Hamiltonian. A tiny number! (The most egregious version of this mistake has to belong to Richard Swinburne, an Oxford theologian and leading figure in natural theology, who makes fun of the many-worlds interpretation but is happy to accept a completely separate, unobservable, ill-defined metaphysical category into his ontology.)
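
Just to make the smallness of that inventory vivid, here is a minimal numerical sketch (a toy two-state example of my own; the Hamiltonian is made up and stands in for nothing physical): the complete specification is one state vector and one Hamiltonian, the only dynamical rule is unitary Schrodinger evolution, and no collapse law appears anywhere.

```python
import numpy as np
from scipy.linalg import expm

# The entire ontology of the toy theory: a state in a (2-dimensional)
# Hilbert space and a Hamiltonian. The numbers are invented.
psi0 = np.array([1.0 + 0j, 0.0 + 0j])   # start in the state |0>
H = np.array([[0.0, 1.0],
              [1.0, 0.0]])              # a simple off-diagonal coupling

def evolve(psi, H, t, hbar=1.0):
    """Exact Schrodinger evolution: psi(t) = exp(-i H t / hbar) psi(0)."""
    return expm(-1j * H * t / hbar) @ psi

# Branch weights |amplitude|^2 at a few times; at t = pi/4 the state is
# an equal superposition, and nothing in the code ever collapses it.
for t in (0.0, np.pi / 4, np.pi / 2):
    print(round(t, 3), np.round(np.abs(evolve(psi0, H, t)) ** 2, 3))
```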

The cosmological multiverse, while on much shakier empirical ground than the many-worlds interpretation, follows the same pattern. The multiverse is not a theory, it’s a prediction. You don’t start with the idea “hey, let’s add an infinite number of extra universes!” You start with our ideas of general relativity, and quantum mechanics, and certain plausible field content, and the multiverse comes out, like it or not. You can even get a “landscape of different vacua” out of very little theoretical input; Johnson, Randall and I showed that transitions between states with different numbers of macroscopic spatial dimensions are automatic in a theory with just gravity, vacuum energy, and an electromagnetic field, while Arkani-Hamed et al. showed that the good old real-world four-dimensional Standard Model coupled to gravity supports a landscape of different vacua that depends on the masses of the neutrinos. The point is that these very complicated cosmologies arise from very simple theories, and it’s the theories we should be judging, not their solutions.

The idea of a multiverse is extremely speculative and very far from established — you are welcome to disagree, or even better to ignore it entirely. But please disagree for the right reasons!

  • http://mattleifer.info Matt Leifer

    “I’m the first to admit that there are all sorts of very good objections to the cosmological multiverse (fewer for the many-worlds interpretation, but there are still some there, too).”

    I find it amusing that you, as a cosmologist, believe this, but I, as someone who works on the foundations of quantum theory, see it the other way round. On my understanding, we have pretty good evidence for believing inflation, and pretty good reasons for believing that the best way of implementing inflation involves a cosmological multiverse. Those “other universes” would be unambiguously real, in just the same sense that ours is.

    On the other hand, the many-worlds multiverse is only compelling if you believe that the wavefunction should be treated ontologically, as a literal description of reality, but there are many compelling arguments that suggest it should be treated as an epistemic object, more like a probability distribution. The supposed “killer argument” that wavefunctions can interfere is not compelling, because it has been shown that interference can arise naturally in an epistemic interpretation. Therefore, I would say that the many-worlds multiverse rests on much shakier ground than the cosmological one, and that is even before we start to think about probabilities.

  • Jeremy

    As far as I understand as a layman, hypotheses of this sort don’t just posit an ever-branching “tree” structure, but also the fact that one particular branch of the tree is “indexed” as “real” or “ours” (or whatever – it’s importantly unlike the other branches). The tree itself may not be complicated, but the fact that one particular branch of it is set apart from the others calls for explanation. Isn’t the explanation of that extra fact every bit as demanding as an explanation that does not involve an ever-branching tree structure? And if the extra fact is not addressed at all, it looks to me as if the tree itself is just non-explanatory “bloat”.

  • Craig McGillivary

    How big a role does Occam’s Razor actually play in cosmology in practice? Do physicists spend a lot of time measuring the complexities of their theories?

  • Moshe

    I think this is not a fair criticism. I also tend to think that what should count is not the number of ontological units (whatever that may mean), but the number of independent assumptions that go into the theory. So, as far as the existence of other universes, or other branches of the wave function, is concerned, I agree with your sentiment. But, you and others are trying something harder — to explain some features of our own universe, or our own branch of the wave function, in terms of some structure on the space of all possible worlds/branches. For that you need to know many things beyond the mere existence of other universes: e.g. what those other universes look like, what made them come into being, and even what questions make sense in this context. I think that the amount of uncertainty in all these questions currently (and perhaps even in principle) makes the properties of all the other universes/branches (or their distribution, which amounts to the same thing) independent moving parts of the theory. In this context, I think it is not ridiculous to complain about excessive ontological baggage, or (if you don’t care for ontology since you don’t know a sharp definition of “existing”) about the lack of “compression” in any multiverse-based explanation; these are basically isomorphic questions.

    • https://plus.google.com/118265897954929480050/posts Sean Carroll

      Moshe — I take your point, but I don’t think they are truly isomorphic. Basically you are making the (fair) criticism that we don’t know much about the multiverse, so there’s a certain vagueness in what the other regions are like, so you can get away with a lot. But that’s a criticism of our current state of theory-building, and would hopefully be a temporary condition; if we knew the underlying theory better, that vagueness would dissipate. So I still think it’s wrong to pinpoint the problem as one of ontological extravagance.

      Craig — cosmologists do it all the time (as do all scientists), but usually just very informally.

      Jeremy — it’s true that you do need that extra bit of information, and I would provisionally agree that it should be taken into consideration in one’s ontological accounting. But it’s just one fact, not an infinite number of universes, so it doesn’t count as all that much.

      Matt — vive la différence!

  • Josh

    Sean, I really appreciate your willingness to critique what I call “deontological arrogance”: the idea that it is more likely that a question is too corrupt to have a meaningful answer than that there is an explanation. This reminds me of the “North of the North Pole” answer that used to be one of the most common replies given to the layperson as to “what happened before the Big Bang?” Honest experts, I contend, recast this awkward layman’s question, which is sometimes simply dismissed as a corrupt chicken-and-egg recursion argument, into a fuller discussion of models that potentially avoid Big Bang singularities or other proposals that may show time-like worldlines could extend beyond a 13.7 billion year old “initial state”.

    Another example might be the heliocentric models of Copernicus and Kepler, which lacked an explanation as to why the Sun was at the center of the universe. It really took a Newtonian perspective to explain its prominence in the solar system in terms of gravitational mass. I can imagine Copernicans and Keplerians responding with deontological assurances before Newton that the Sun being at the center was a bald fact of our universe that need not be explained, any more than the question as to why the Earth had a single moon or a 23 degree axial tilt. While it was to some extent not really possible to answer the question as to why heliocentrism “worked” until Newton, the question wasn’t really corrupt, and we have pretty good explanations today that aren’t just “the question is meaningless”.

  • Moshe

    I am not going to argue too hard for any ontological position, since in my current state of knowledge any question of ontology, as applied to regimes far from our daily experience, is ill-defined (or at least I cannot make sense out of it). But, I do think that the criticism of epistemological extravagance is not all that different, and I tend to sympathize with it, and also tend to think that it is not just a matter of our current knowledge but more of a matter of principle. For example, even in the context of simple toy models in which every concept is finite and calculable, it is not clear to me what a multiverse based prediction might look like (e.g. what physical principle picks the measure, and what prevents me from picking another one).

  • AI

    Sean: “By these standards, the ontological commitments of the multiverse or the many-worlds interpretation are actually quite thin. This is most clear with the many-worlds interpretation of quantum mechanics, which says that the world is described by a state in a Hilbert space evolving according to the Schrodinger equation and that’s it…”

    A theory is not just mathematics but also interpretation, which is needed to relate that mathematics to the real world. In the case of many-worlds the interpretation is so excessive and ill-defined as to completely outweigh any benefit from having one less assumption about measurement.

    Similarly with the early universe, it’s preferable to have one puzzle – why the entropy was low – to countless new questions concerning properties and nature of the hypothetical unobservable portions of the multiverse which might somehow “explain” low entropy.

    In general, an explanation which introduces more questions than it answers is not good. The goal of science is to explain more with less, not the other way around.

  • http://vacua.blogspot.com Jim Harrison

    I have no idea why Callender thinks of the multiplicity of worlds as ontological extra baggage, but I think lots of folks who balk at the multiverse or the many-worlds interpretation of quantum mechanics are simply falling victim to the asparagus fallacy. (I’m glad I don’t like asparagus because if I liked it, I’d eat it; and I can’t stand the stuff.) The notion that this world is, after all, the world is common sense so it’s easy to privilege this belief even if you know that common sense intuition isn’t really an argument in cosmology or physics. The universe (or multiverse) is under no obligation to respect our sensibilities, but it’s hard to process that fact. We claim to consider all alternatives equally; but when it comes to it we find ourselves rejecting some options, not because they are more complicated, but simply because they are more foreign to our usual way of thinking.

  • http://www.math.columbia.edu/~woit/blog Peter Woit

    Sean,

    You’re creating a straw man argument. The quote explicitly states that the problem is the “huge untestable ontological commitment.” The “untestable” qualifier which you drop is crucial. Despite what you seem to think, everyone understands the concept of a simple, testable set of ideas leading to a multiverse scenario, and no one has a problem with this, philosophically or scientifically. Moshe explains the real issues better than I can.

    Where this ends up is with the “work in progress” argument, and there you need to identify progress towards having a legitimate, testable framework. The question of whether there’s a healthy research program here making progress, or just a set of excuses for a failed one, is what the argument about the multiverse is actually about, not the naive straw arguments you prefer to debate.

    • https://plus.google.com/118265897954929480050/posts Sean Carroll

      Peter, despite what you seem to think, I explicitly mention testability as a legitimate problem right there in the post.

  • http://www.math.columbia.edu/~woit/blog Peter Woit

    Sean,

    Again, a straw argument. I’m not claiming you don’t acknowledge that testability is a legitimate problem. What I’m pointing out is that when you write:

    “But what I want to harp on is the idea of “ontological speculation that is just too expensive.” This is not, I think, a matter of taste — it’s just wrong.”

    you are ignoring the words immediately before and after the ones you quote. A fuller quote is:

    “…the kind of ontological speculation that is too expensive. Having to posit such a huge untestable ontological commitment to explain something…”

    The “the kind of” modifier which you drop is explained by the “huge untestable”, and you just drop the “untestable” part in order to be able to claim “it’s just wrong”.

  • Aiya-Oba

    Cosmos is absolute state of relative multiverse. -Aiya-Oba (Philosopher).

  • Jeremy

    I think Occam’s “Razor” would be better understood as Occam’s “weighing scales” – with “unexplained mysteries” in one pan, balanced against “explained mysteries” in the other pan. The reason why we don’t want to multiply entities beyond necessity is that doing so usually adds more to the “unexplained mysteries” pan than is counterbalanced by the “explained mysteries” pan.

    It’s easy to see that sheer “number of entities” is often irrelevant. For example, consider the apparently accelerating expanding universe. If we could explain it by positing twice, thrice – or a million times – as many atoms of the sort we are already familiar with, we wouldn’t hesitate to do so. The problem is positing “dark matter”: we’re unfamiliar with it. It isn’t clear at all that we are balancing Occam’s “scales” in the right way.

    There are often situations in which we clearly tip the balance the way we want. For example (in my opinion) positing prions explains some brain diseases by getting rid of more mystery than it adds. (I say “in my opinion”, because there are differences of opinion here, and it’s often hard to call.)

    My main problem with “multiverse”-type theories is not that “entities” are multiplied beyond necessity, but that Occam’s “scales” may be tipped the wrong way.

    By analogy, suppose I am a pre-Darwinian, and I have my own half-baked “theory of evolution”. It says that all life is descended from a single original life form, and that random variations occur, some of which die out, some of which survive. So far so good.

    But instead of describing the mechanism of natural selection, this half-baked theory says every possible variation that can occur actually does occur – somewhere or other on the vast number of other planets in the universe. We on Earth just happen to inhabit a planet in which giraffe necks and peacock tails happen to be long.

    It seems to me that this wouldn’t be an explanation worth the name. It tips the scales the wrong way. The real problem is not the “number of entities”, but the extra weight added to the “unexplained mysteries” pan of the scales.

  • Chris

    Entropy is a measure of the number of available states in the system. At the Big Bang the universe was much much smaller, so the number of available states is also much much smaller, hence entropy starts at a minimum. Now the universe is bigger, many more available states, so the entropy can increase.

    Also (and I admit this sounds crazy and probably is) if the entropy of the universe was at a minimum at t=0, wouldn’t the temperature of the universe be 0 Kelvin at the instant of the Big Bang?

  • http://quantummoxie.wordpress.com Ian Durham

    Since I like beating dead horses, I still don’t see why a low entropy early universe is all that strange if we interpret entropy as a measure of “possibility” (as opposed to probability, per se) or as a scaled number of configurations (this is all apart from the fact that there seem to be slightly disparate uses of the term entropy floating around out there to begin with, so you’d need to specify which one you are talking about). Complexity, on the other hand, does need some serious explaining. But I think that perhaps part of the problem is hinted at in one of your own slides from the FQXi conference that compares entropy and complexity. In fact, I’m still not really sure what that slide was meant to convey (no offense). From my standpoint, the graph of complexity on that slide is far more mysterious than the graph of entropy (why does complexity eventually decrease?). Of course, there is some intuitive sense of what complexity is that is built into this, but the only attempt to clearly define complexity that I am aware of is by Scott Aaronson.

    As for the multiverse concept or many worlds or whatever, as numerous people have pointed out (including Jeffrey Barrett and David Albert as well as David Wallace), there are some issues underlying precisely what we mean by a “universe” or “world” that have yet to be worked out. Thus it seems a little premature to use the multiverse to solve a problem (that actually might not even be a problem in the first place) when its own foundation is not well-defined. It feels too much like a house of cards.

  • http://philosophyfaculty.ucsd.edu/faculty/ccallender/ Craig Callender

    Hi Sean,

    I think that you’re right that we should make a distinction between what the philosopher David Lewis called *quantitative* simplicity and what he called *qualitative* simplicity. Multiplying kinds of things in a theory seems a theoretical vice, whereas multiplying instances of a kind doesn’t seem so bad. As the intuition is usually put: when considering the “price” of positing the electron, no one objected that there would then be too many! So people typically think that quantitative simplicity is no theoretical virtue at all. And as you point out, on that measure multiverses and Everettian branches don’t count as bloated at all.

    That said, I think that there are cases in the history of science where quantitative simplicity *did* count (and seemingly rightly). I’m no expert on this, but I know that some philosophers of science have written on cases where quantitative simplicity did function as a virtue, e.g., the number of particles (of a given kind) invoked in saving a conservation principle. Intuitively that seems like it could be right — don’t posit more than you need, whether instances or kinds. Perhaps it would be worth thinking about these cases where quantitative simplicity seems to matter and seeing whether anything remains of the “too fat” objection to Everett or multiverses. Surely if I have two hypotheses, H and H’, and they both predicted the same phenomena, except that one posited multiverses and the other didn’t, people would go for the one without. So quantitative parsimony must count for *something*?

    The real question, of course, is need. And especially in the case of inflation, where one might get empirical predictions and experiments, it’s hard to make any in-principle objection. Maybe I should have distinguished between two different motivations for multiverses (I think Tim Maudlin recently did this in an interview I read after I provided my answers). One is a kind of “fine-tuning” motivation. I’ve definitely seen this style of motivation at physics conferences, as I’m sure you have. I’m just against people “knowing” that the low entropy constraint is a big problem a priori, for that closes off some possibilities. That’s my target–I hope the rest of the interview makes that clear. The other is a mechanism that will explain some empirical phenomenon and that mechanism has the by-product of predicting lots of pocket universes. The former seems a lot more controversial than the latter, which is part and parcel of successful science.

    • https://plus.google.com/118265897954929480050/posts Sean Carroll

      Craig– yeah, I tried to make clear that there was an interesting argument to be had (about what the low entropy of the early universe tells us), but that I wasn’t talking about that in this blog post, just nitpicking on a common but incorrect (in my view) application of parsimony.

      I would be interested in knowing of an example where quantitative simplicity actually did count, especially if it were more than just a tiebreaker between two otherwise equivalent theories. I can’t think of any off the top of my head.

      I think that if we could predict exactly the same phenomenon with and without the multiverse (and with the same simplicity of equations and concepts), people would quite rightly tend to prefer the non-multiverse version. But I would argue that “extravagance of ontological commitment” would not be a good reason for doing so. The good reasons would be the other ones I mentioned — testability, vagueness, well-definedness, etc., if those indeed were worries in this hypothetical scenario. If they were not, I don’t think we’d be justified in rejecting the multiverse, other than on grounds that we are prejudiced against things we can’t see.

  • Igor Khavkine

    Hi, Sean. In this post and on other occasions you express the view that the low entropy of the early universe is likely not just a historical accident but a clue toward some yet unknown principle of how the universe works. I’m curious about the reasoning that led you to this opinion. I have to confess that, thinking about this question myself, I cannot find a sufficiently close analogy from what I know of how other known and tested physical principles have been discovered. Thus I personally tend to gravitate toward the “historical accident until proven otherwise” point of view. However, I’m genuinely interested in the details of the reasoning that brings someone to the opposite opinion.

    • https://plus.google.com/118265897954929480050/posts Sean Carroll

      Igor– well, there is a measure on the space of states. Defining entropy relies on that measure. Saying that the early universe had a low entropy is saying that it is in a very tiny region of the space of states. If we hadn’t observed anything about the universe, but were told what its space of states looked like, we would have expected it to be more likely to be high-entropy than low-entropy. The fact that it’s not suggests that there is possibly more going on than a universe just picked out of a hat. It’s not that the measure is some sort of “correct” probability distribution (it’s clearly not), but that its failure suggests that we can learn something by trying to explain this feature of the universe.

      It’s easy to find analogies. Say you’re on a beach where there are both light and dark grains of sand. If the grains are all mixed together, there’s nothing to be explained. If all the dark grains are piled in one particular location, you figure there is probably some explanation.
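
      To put toy numbers on the analogy (my own back-of-the-envelope sketch; the grain counts are invented): treat a strip of beach as 2N slots holding N dark and N light grains, and count what fraction of all arrangements has the dark grains piled in one contiguous block. That macrostate occupies a fantastically tiny region of the space of states, which is why finding the beach that way cries out for explanation.

      ```python
      from math import comb

      # 2N slots, N dark grains: comb(2N, N) equally weighted arrangements.
      # "All dark grains piled together" = one contiguous block of N dark
      # grains, which can start at any of N+1 positions.
      for N in (5, 10, 50):
          total = comb(2 * N, N)
          piled = N + 1
          print(f"N={N}: fraction piled = {piled / total:.3g}")
      ```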

  • Brett

    Sorry, Sean, but your argument falls short. It’s wrong because I don’t like it. Also, I’m sorry to inform you but your ass looks fat in those jeans; your wife is lying to you. (Neither comment is meant to be a factual statement.)

  • Tony Cusano

    Sean, does the multiverse theory conserve all of the theoretical truths embodied in relativity and in QM? Rovelli and Smolin don’t seem to think so (see this month’s edge.org interview with Rovelli). It seems to me that conserving facts we know to be true is not the same as failing to imagine new perspectives on those facts. It seems like criticizing a theory because it postulates ideas not anchored in previously successful theories is a valid criticism that encompasses a notion of excess ontological baggage. However, I am asking because I am not sophisticated enough to know if I am approaching this based on a valid interpretation of the postulates of MV/MW theory.

  • Igor Khavkine

    Sean, thanks for the explanation. I recognize this argument. But, for the record, I personally don’t find it convincing. Continuing the analogy, the concentration of dark grains of sand is, from our perspective, unusual because we can compare to many other instances of isolated beaches where the two grain kinds are mixed. This and other similar analogies don’t take into account that, on the other hand, we have only one universe.

  • Brett

    aww man, somebody doesn’t like me.

    As far as a multiverse is concerned, so many issues come up. I personally believe in the concept of a multiverse. From the perspective of our universe being finite, I can’t imagine there being “nothing” beyond the boundaries of our universe. That just doesn’t compute for me. From the perspective of our universe being infinite, vacuum energy doesn’t seem to work, and the whole concept of quantization into any sort of variance seems meaningless and impossible without some sort of boundary.

    I think you are correct though, Sean. Ignoring the low entropy of the early universe without a good reason is not very bright. The change in entropy suggests many major implications about Nature. What physicist would not be compelled to try and explain this? I would argue that a changing state of entropy, which can be summed up as ‘Dynamics’, is the foundation of Nature. Explaining why the universe started out with low entropy would be critical to understanding why dynamical systems (the universe) exist. I think Craig Callender is taking the stereotypical philosopher’s approach to all hard questions.

  • http://www.facebook.com/hal.swyers Hal S

    I think this debate is somewhat silly. It effectively boils down to a question of whether one believes that quantum mechanics and gravitation can be unified. If one takes the position that they can be unified, then one currently must resort to a “stringy” view of the universe, and one must identify mechanisms by which the cosmological constant takes a small value (which in some sense is basically a question similar to whether model parameters, such as mass etc, are correlated in totality or are independent). The alternative is that gravity and quantum mechanics are independent, such as discussed in this paper:

    http://www.cs.okstate.edu/~subhashk/quantijtp.pdf

    I for one subscribe to unification, and do not believe the cosmological constant is a wholly independent parameter. Justification for this point of view is that we actually do have theories that are robust enough to unify gravity and quantum mechanics. So while there is a certain amount of independence as symmetries are broken and parameters take on observed values, there is still ultimately some sort of functional dependence that restricts those values (e.g. some correlation function). The evidence is simply overwhelming that our universe is fundamentally quantum, and that “waviness” is absolutely fundamental as well.

    So my advice? Stop trying to make classical analogies to things that are clearly not classical. Accept that classicality is a low-energy phenomenon, or rather an emergent phenomenon stemming from existence in a sufficiently mixed state.

  • John R Ramsden

    Excluding multiverse and many worlds notions by treating the universe as a concept instead of an instance of a broader concept isn’t the only error those guys may be committing. They also defy observation, or reasonable extrapolation of the obvious fact that practically everything in nature occurs in vast profusion, despite this in many cases not being at all obvious or plausible to earlier observers. So why should universes be any different?

    As for the low initial entropy conundrum, it is equally obvious to a multiverse believer such as myself that it must somehow emerge from the maximal entropy of an earlier universe generation. The only way I can see to resolve this paradoxical (even apparently nonsensical) conclusion is that once a universe reaches a maximal entropy, it rescales as if “searching out” asymmetries and the latter start a new generation.

    On that assumption, a multiverse scenario could be experimentally verified if there were any way to distinguish a Big Bang inflation which diluted existing asymmetries (the conventional view) from the complementary picture, sketched above, in which any asymmetries above a certain scale had been ironed out. The latter need not contradict the idea of a minimal scale; this could still exist at any given scale, and indeed might need to for the necessary apparent “clean break” between universe generations to be possible.

  • http://www.facebook.com/hal.swyers Hal S

    Just as an illustration of correct thinking, consider the Mott problem. This is a nice quantum analogy one can use for why one would think that all the masses of fundamental particles might be correlated at some level (rather than randomly distributed). In this example Mott and Heisenberg were able to show that the variability was in the ultimate determination of which path was being taken, not that the points detected would be random. As an analogy it is useful, since one would expect that the rest masses of fundamental particles should follow some sort of function in some configuration space, and not be randomly determined.

    http://en.wikipedia.org/wiki/Mott_problem

  • Lino D’Ischia

    This entire discussion strikes me as the epitome of what is happening in science: the descent into the nonsensical. Science, in most respects, is concerned with what we understand as ’cause and effect,’ with the set of possible ’causes’ delimited to those that we know. What is so troublesome about the invocation of ‘multiverses’ is their entire unknowability–we’re talking about entirely different universes (!), unknown regions completely cut off, by definition, from our own! Why don’t we just say that giant green fairy monsters brought about the initial low entropy of our universe? Of course, this is complete nonsense. But, if you reply that we can discount this possibility because we have no way of proving these monsters exist, I simply have to answer that we do have proof of their existence: low initial entropy. Circularity in reasoning has now entered the scene. How do we ever usher it out? To me, this should be a great cause of concern for all thinking scientists.

  • Juzer

    Sean, re: # 19, would the multiplying number of epicycles to keep up with more and more accurate measurements in the Ptolemaic theory of planetary motion count as a disqualifier?

    • https://plus.google.com/118265897954929480050/posts Sean Carroll

      Juzer– it depends. If you were just adding completely independent epicycles, specified by new parameters each time, then yes, it would make things more complicated. But if you had some pattern that connected all the epicycles (a geometric series or something like that), it could still be quite simple. It’s all about how much information is required to specify the theory.
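
      A quick sketch of that bookkeeping (a toy model of my own, not Ptolemy’s or anyone’s real epicycle fit): a tower of fifty epicycles whose radii follow a geometric series and whose rates are harmonics of one base frequency. However many epicycles you stack on, you only ever hand the theory three numbers plus a cutoff.

      ```python
      import numpy as np

      def patterned_epicycles(t, r0=1.0, q=0.5, w0=1.0, n_epicycles=50):
          """Position traced by a tower of epicycles with radii r0 * q**k
          and angular rates w0 * (k + 1): the whole tower is fixed by
          r0, q, and w0 (plus the cutoff n_epicycles)."""
          k = np.arange(n_epicycles)
          radii = r0 * q ** k
          angles = w0 * (k + 1) * t
          return np.sum(radii * np.cos(angles)), np.sum(radii * np.sin(angles))

      # Fifty epicycles, three parameters of input:
      print(patterned_epicycles(t=0.3))
      ```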

  • ComeOn

    Agree with Lino (comment 29), except why do you think this is happening all over science? Can you point to any research in Biology or Chemistry where this problem of “unknowability” is present, permitting a descent into the nonsensical? You cannot. In fact, it is only in some esoteric branches of physics that this happens, such as string theory and many-universes cosmology. The bread and butter of particle physics is concerned with what we can observe in colliders, so this issue does not arise.

  • http://jbg.f2s.com/quantum2.txt James Gallagher

    Hi Sean,

    If the Universe had begun in a high entropy state we’d still have worlds with beaches with dark patches of sand, although they’d be VERY rare.

    In this case would you invoke Jesus/God/Church to explain why we are the only such improbable planet?

    No, of course you wouldn’t. The universe probably started in a low entropy state cos that’s the way things start, really. I mean, it would be a lot of effort to create a high entropy starting state.

  • Archie Pelago

    Callender *isn’t* really saying that the arrow of time doesn’t need explaining. His main point is contained in the following:

    “suppose we judge the constraint on initial conditions to be lawlike. (I think that there are some powerful arguments for this.) Then all the universes that don’t begin in a low entropy state are, strictly speaking, unphysical and have zero probability. The initial state is then hardly monstrously unlikely (hence demanding explanation), but rather has probability one!”

    which is very sensible indeed. Though I would very much like to hear what his powerful arguments are.

  • Meh

    #29 and #31: It wasn’t until the mid 20th century that we discovered that our galaxy was not the entire universe. In 60 years, we went from thinking that our galaxy was the entire universe to KNOWING that there are as many galaxies in the universe as there are stars in our galaxy, if not far more.

    Sean is saying that he won’t rule out the possibility that it may be something that we will never know (because IMO, that’s what bright minds do; they keep all possibilities open), but that we probably will find out.

    The field of medicine is pretty pathetic compared to what it could be; given that most doctors will throw pills at every problem you have. “no, my runner’s knee doesn’t hurt anymore now that I’m high as a kite”. We have no solid grasp of “nutritional science”; have you ever tried to get your full day’s worth of potassium? It’s impossible. Physics is different from other sciences because we admit what we don’t know. Physics goes beyond the knowledge of biology or chemistry. My passion for physics started in my high school chemistry class when I asked my teacher to explain why covalent bonds behave the way they do; he had no answer.

    Biology is further explained by Chemistry. Chemistry is further explained by Physics. How can you complain about the honesty of physicists when you are forced to turn to them for the knowledge that is beyond your field of study?

    Physics is further explained by discovery.

  • http://empiricalperspectives.blogspot.com/ James Goetz

    First, Craig appears off when he says that low entropy at the Big Bang needs no explanation. But perhaps the brute fact is that physics might never have that explanation, which is far different from saying initial low entropy needs no explanation.

    “There are an infinite number of integers, and there are only a finite number of integers between zero and a googol; that doesn’t make the former set somehow ontologically heavier.”

    Second, Sean, if a multiverse hypothesis depends upon the concept of fundamental time, then an infinite number of integers look fat. For example, nobody should doubt the existence of an infinite number of past and future Planck time coordinates independent of phenomena, but nonetheless, an infinite number of Planck times could never have elapsed (regardless of whether the length of Planck time intervals varies throughout a multiverse). Likewise, since fundamental time requires a past infinite elapse of time, fundamental time is impossible. The elapse of time emerged.

    Given the above, a multiverse hypothesis with fundamental time is infinitely heavy while a multiverse hypothesis with emergent time might work if it has no other impossible obstacles.

  • John R Ramsden

    @Meh (#35) Well said, and your initial observation illustrates the first point I made in #27.

    Although Lino (#29) may have a point at present, his attitude sounds very much like that of the guy who lamented that the composition of stars must remain forever unknown – only a year or two before the development of spectroscopy. ;-)

  • Meh

    Thanks John,
    I was thinking about this last night, and remembered the question that I always come up with when discussing a multiverse: what defines a universe? Is it a collection of dimensions? Would parallel universes just be copies of our own? If that’s the case, then what is the boundary that separates the two universes? Is it just scale? If it’s just the measure of entropy, as Lino #29 says, then it really just depends on the parameters of the system we define. The entropy of a black hole, our solar system, and an intergalactic region of space could philosophically/hypothetically all be different universes if entropy is the defining parameter.

  • Emil

    When I think about the multiverse I remember a problem we solved in first-year college physics: why does light travel in a straight line? The solution involved considering that light takes all possible paths between two points (it can go all the way to the moon and back before hitting my eye), but only along the straight line is the interference constructive. Of course I do not expect to measure any change away from the straight path, and I think this is equivalent to saying that those other paths do not exist – but that does not prevent the light wave function from being defined at any point in our universe (both in time and in space). So the multiverse seems like a nice construction that allows us to explain our observations, but it also says that other worlds have no influence on how we experience our world.

  • Jordan

    The original postulation of a single neutrino with spin 1/2 emitted during Beta decay rather than two neutrinos with spin 1/4, three neutrinos with spin 1/6, etc., seems a case where we favored the quantitatively simpler hypothesis.

    For a little discussion, see: http://plato.stanford.edu/entries/simplicity/#QuaPar

  • Itai Bar-Natan

    By the way, I’d like to ask: is the Big Bang really a low entropy state? It has all of the energy uniformly distributed throughout all of space, which seems to me like a good way of maximizing entropy rather than minimizing it. The fact that it evolves into a low-entropy state seems to come from the fact that the expansion of the universe is non-adiabatic; if the universe expanded slowly enough, atoms would have time to reach Boltzmann-like isotope ratios, and density inequalities would have time to balance themselves rather than forming stars and galaxies.

  • A H

    “The cosmological multiverse, while on much shakier empirical ground than the many-worlds interpretation, follows the same pattern.”

    There can be no empirical evidence for the many-worlds interpretation, or any other sound interpretation of quantum mechanics. This sloppy use of philosophical terms really undermines your arguments.

  • Todd

    I absolutely love the wheelbarrow metaphor. Occam’s Razor is great but – what with its near-mystic “cutting both ways” and all – it lacks the concrete attraction of Carroll’s Wheelbarrow.

  • http://nojesusnopeas.blogspot.com James Sweet

    A related thought experiment — and I apologize that I can’t remember who I am stealing this from, maybe Stephen Hawking, maybe Kip Thorne, hell maybe even Sean, I honestly don’t remember — is this:

    If a spaceship begins travelling away from you at the speed of light, and you know it will travel in that same direction and same speed indefinitely, which is the most parsimonious universe? One in which that spaceship continues to exist, even though you will never even in principle be able to catch up with it or observe it? Or one in which the spaceship ceases to exist?

    I would argue the former, even though it requires more “ontological commitments” by Callender’s definition.
