Turtles Much of the Way Down

By Sean Carroll | November 25, 2007 2:37 pm

Paul Davies has published an Op-Ed in the New York Times, about science and faith. Edge has put together a set of responses — by Jerry Coyne, Nathan Myhrvold, Lawrence Krauss, Scott Atran, Jeremy Bernstein, and me, so that’s some pretty lofty company I’m hob-nobbing with. Astonishingly, bloggers have also weighed in: among my regular reads, we find responses from Dr. Free-Ride, PZ, and The Quantum Pontiff. (Bloggers have much more colorful monikers than respectable folk.) Peter Woit blames string theory.

I post about this only with some reluctance, as I fear the resulting conversation is very likely to lower the average wisdom of the human race. Davies manages to hit a number of hot buttons right up front — claiming that both science and religion rely on faith (I don’t think there is any useful definition of the word “faith” in which that is true), and mentioning in passing something vague about the multiverse. All of which obscures what I think is his real point, which only pokes through clearly at the end — a claim to the effect that the laws of nature themselves require an explanation, and that explanation can’t come from the outside.

Personally I find this claim either vacuous or incorrect. Does it mean that the laws of physics are somehow inevitable? I don’t think that they are, and if they were I don’t think it would count as much of an “explanation,” but your mileage may vary. More importantly, we just don’t have the right to make deep proclamations about the laws of nature ahead of time — it’s our job to figure out what they are, and then deal with it. Maybe they come along with some self-justifying “explanation,” maybe they don’t. Maybe they’re totally random. We will hopefully discover the answer by doing science, but we won’t make progress by setting down demands ahead of time.

So I don’t know what it could possibly mean, and that’s what I argued in my response. Paul very kindly emailed me after reading my piece, and — not to be too ungenerous about it, I hope — suggested that I would have to read his book.

My piece is below the fold. The Edge discussion is interesting, too. But if you feel your IQ being lowered by long paragraphs on the nature of “faith” that don’t ever quite bother to give precise definitions and stick to them, don’t blame me.

***

Why do the laws of physics take the form they do? It sounds like a reasonable question, if you don’t think about it very hard. After all, we ask similar-sounding questions all the time. Why is the sky blue? Why won’t my car start? Why won’t Cindy answer my emails?

And these questions have sensible answers—the sky is blue because short wavelengths are Rayleigh-scattered by the atmosphere, your car won’t start because the battery is dead, and Cindy won’t answer your emails because she told you a dozen times already that it’s over but you just won’t listen. So, at first glance, it seems plausible that there could be a similar answer to the question of why the laws of physics take the form they do.
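As an aside, the Rayleigh-scattering answer even comes with numbers attached. Here is a quick back-of-the-envelope sketch (the wavelengths are round illustrative figures, not precise values): scattered intensity goes as 1/λ⁴, so blue light scatters several times more strongly than red.

```python
# Rayleigh scattering: scattered intensity scales as 1 / wavelength^4.
# Illustrative round-number wavelengths: blue ~450 nm, red ~650 nm.
blue_nm = 450.0
red_nm = 650.0

# Ratio of scattering strengths: blue relative to red.
ratio = (red_nm / blue_nm) ** 4
print(f"Blue light Rayleigh-scatters about {ratio:.1f}x more strongly than red")
```

That factor of roughly four is why the sky overhead looks blue while directly transmitted sunlight at sunset looks red.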

But there isn’t. At least, there isn’t any as far as we know, and there’s certainly no reason why there must be. The more mundane “why” questions make sense because they refer to objects and processes that are embedded in larger systems of cause and effect. The atmosphere is made of atoms, light is made of photons, and they obey the rules of atomic physics. The battery of the car provides electricity, which the engine needs to start. You and Cindy relate to each other within a structure of social interactions. In every case, our questions are being asked in the context of an explanatory framework in which it’s perfectly clear what form a sensible answer might take.

The universe (in the sense of “the entire natural world,” not only the physical region observable to us) isn’t like that. It’s not embedded in a bigger structure; it’s all there is. We are lulled into asking “why” questions about the universe by sloppily extending the way we think about local phenomena to the whole shebang. What kind of answers could we possibly be expecting?

I can think of a few possibilities. One is logical necessity: the laws of physics take the form they do because no other form is possible. But that can’t be right; it’s easy to think of other possible forms. The universe could be a gas of hard spheres interacting under the rules of Newtonian mechanics, or it could be a cellular automaton, or it could be a single point. Another possibility is external influence: the universe is not all there is, but instead is the product of some higher (supernatural?) power. That is a conceivable answer, but not a very good one, as there is neither evidence for such a power nor any need to invoke it.

The final possibility, which seems to be the right one, is: that’s just how things are. There is a chain of explanations concerning things that happen in the universe, which ultimately reaches to the fundamental laws of nature and stops. This is a simple hypothesis that fits all the data; until it stops being consistent with what we know about the universe, the burden of proof is on any alternative idea for why the laws take the form they do.

But there is a deep-seated human urge to think otherwise. We want to believe that the universe has a purpose, just as we want to believe that our next lottery ticket will hit. Ever since ancient philosophers contemplated the cosmos, humans have sought teleological explanations for the apparently random activities all around them. There is a strong temptation to approach the universe with a demand that it make sense of itself and of our lives, rather than simply accepting it for what it is.

Part of the job of being a good scientist is to overcome that temptation. “The idea that the laws exist reasonlessly is deeply anti-rational” is a deeply anti-rational statement. The laws exist however they exist, and it’s our job to figure that out, not to insist ahead of time that nature’s innermost workings conform to our predilections, or provide us with succor in the face of an unfeeling cosmos.

Paul Davies argues that “the laws should have an explanation from within the universe,” but admits that “the specifics of that explanation are a matter for future research.” This is reminiscent of Wolfgang Pauli’s postcard to George Gamow, featuring an empty rectangle: “This is to show I can paint like Titian. Only technical details are missing.” The reason why it’s hard to find an explanation for the laws of physics within the universe is that the concept makes no sense. If we were to understand the ultimate laws of nature, that particular ambitious intellectual project would be finished, and we could move on to other things. It might be amusing to contemplate how things would be different with another set of laws, but at the end of the day the laws are what they are.

Human beings have a natural tendency to look for meaning and purpose out there in the universe, but we shouldn’t elevate that tendency to a cosmic principle. Meaning and purpose are created by us, not lurking somewhere within the ultimate architecture of reality. And that’s okay. I’m happy to take the universe just as we find it; it’s the only one we have.

CATEGORIZED UNDER: Philosophy, Science
  • http://www.math.columbia.edu/~woit/wordpress Peter Woit

    Actually, in the short piece you link to I don’t blame anyone; I just point out that Davies’s claim that the mood among physicists is shifting in favor of the anthropic principle doesn’t reflect the reality that the great majority of serious physicists don’t want anything to do with it. It’s my impression this is true even within the string theory community.

  • Harvey

    Hi,

    This is completely off topic but I was wondering if you could help me?

    I am in an online debate with a Biblical creationist and since he has brought in Quantum Mechanics I thought that I would ask your advice.

    He states “Current scientific assumptions (including those underpinning the evolutionist viewpoint) are increasingly being undermined by quantum science.”

    and

    “Some insist that genuine understanding demands explanations of the causes of the laws, but it is in the realm of causation that there is the greatest disagreement. Modern quantum mechanics, for example, has given up the quest for causation and today rests only on mathematical description.” (This was taken from the Encyclopaedia Britannica.)

    That would sort of make the Lemon test in the Dover trial rather redundant, wouldn’t it?

    and on

    “I then raised the question as to what impact QM could have if causation is no longer an issue for science – it could indirectly open the door to ID as a viable theory as it was the causation that kept it out of the classroom, ref Dover. ”

    and

    “Now that QM has set the precedent, why can ID not use the same arguments to get into the science class?”

    and

    “why must ID have causation but according to Encyclopaedia Brit, Quantum Mechanics has “abandoned the search for causation???”

    I am a layman in terms of science. I am up on most creationist fallacies and feel confident enough to discuss biology, paleontology, etc., but quantum mechanics is a bit beyond me. From the little I can gather, the length scales in quantum theory and evolution are so far apart that comparing them makes as much sense as measuring the distance between the earth and the sun with a 10-inch ruler… but trying to explain that is another matter.

    Any help/advice/hints in answering him would be greatly appreciated.

    Regards

    Harvey

  • http://badidea.wordpress.com Bad

    I’m hardly distinguished in distinguished company, but my own layperson’s response to Davies is here.

    I think most of the critics are having all the same sorts of feelings. Mostly, we’re realizing that maybe Davies has a profoundly different and ambitious idea of what “science” is than the rest of us.

  • http://badidea.wordpress.com Bad

    Harvey, the ID question actually has a pretty obvious answer regardless of any particular knowledge of QM. To even GET to the claim that “first cause” ID is a useful explanation of anything, you have to first posit that everything must have causation. Once you’ve done that, ID cannot spin around and declare that the principle doesn’t apply to it: that the designer is an exception. If you can have exceptions, then why can’t we all offer our exceptions, like making the universe itself an exception?

    But all of that only makes sense within the context of THAT ARGUMENT. If you’re just talking about science, period, then there is no solid rule in the first place to be had that everything must have a cause, or at least that we should always be capable of determining that it had a cause (in QM, most people think the last two statements are basically indistinguishable for all practical purposes, since if you just plain can’t find a cause, there is no way to tell which is true).

    He’s confusing the implications of accepting his argument (and the contradiction that then results) with science in general, which may or may not accept his argument.

    As to the rest of his claims, I’m hardly an expert on QM, but I know enough about it to know that the vast majority of claims made about what QM implies for science or reality are BS. The problem with QM is that it’s so weird that it doesn’t lend itself to much of anything that’s analogous to the macro-world.

  • Matt

    This subject of the big “why” questions is the reason I love reading Galileo’s scientific work so much. In his early writings and commentary, he expressed utter disdain for those who spent time “philosophizing about nature.” Those ethereal questions of “why is what is the way it is” were a waste of time in his mind.

    An investigator of nature’s primary purpose was to discover and describe the patterns one finds in natural phenomena.

    Sean seems to say everything else that can be said in response, so best to end this here.

  • http://quantumfieldtheory.org nigel cook

    Paul Davies openly admits at http://aca.mq.edu.au/PaulDavies/prize.htm

    I was awarded the 1995 [million dollar] Templeton Prize for my work on the deeper significance of science. The award was announced at a press conference at The United Nations in New York. The ceremony took place in Westminster Abbey in May 1995 in front of an audience of 700, where I delivered a 30 minute address describing my personal vision of science and theology. … [Irrelevant waffle about involvement of British Royalty and politicians in religion.]

    I enjoyed at least one of Davies’s books at school, The Forces of Nature, 2nd ed., 1986. What first warned me that Davies was obsessed with orthodoxy and interested in suppressing the scientific facts of physics was the following claim of his on pages 54-7 of his 1995 book About Time:

    Whenever I read dissenting views of time, I cannot help thinking of Herbert Dingle… who wrote … Relativity for All, published in 1922. He became Professor … at University College London… In his later years, Dingle began seriously to doubt Einstein’s concept … Dingle … wrote papers for journals pointing out Einstein’s [SR] errors and had them rejected … In October 1971, J.C. Hafele [used atomic clocks flown around the world to defend SR] … You can’t get much closer to Dingle’s ‘everyday’ language than that.

    It turned out that Hafele’s paper didn’t defend SR at all, quite the opposite. Hafele in Science, vol. 177 (1972) pp 166-8, for the analysis of the atomic clocks uses G. Builder (1958), ‘Ether and Relativity’ in the Australian Journal of Physics, v11, p279, which concludes:

    … we conclude that the relative retardation of clocks … does indeed compel us to recognise the causal significance of absolute velocities.

    Dingle’s claim in the Introduction to his book Science at the Crossroads, Martin Brian & O’Keefe, London, 1972:

    … you have two exactly similar clocks … one is moving … they must work at different rates … But the [SR] theory also requires that you cannot distinguish which clock … moves. The question therefore arises … which clock works the more slowly?

    was therefore validated by Hafele’s results, since Builder’s analysis is identical to Dingle’s, contrary to the ridicule dished out by Davies.

    The underlying message from Davies is that mainstream fashionable consensus, not factual evidence, defines what science is.

    [BTW, Einstein did get absolute motion wrong in Ann. d. Phys., v17 (1905), p. 891, where he falsely claims: ‘a balance-clock at the equator must go more slowly, by a very small amount, than a precisely similar clock situated at one of the poles under otherwise identical conditions.’ For the error Einstein made see http://www.physicstoday.org/vol-58/iss-9/pdf/vol58no9p12_13.pdf Einstein repudiated this in general relativity, e.g., he writes: ‘The special theory of relativity … does not extend to non-uniform motion … The laws of physics must be of such a nature that they apply to systems of reference in any kind of motion. … The general laws of nature are to be expressed by equations which hold good for all systems of co-ordinates, that is, are co-variant with respect to any substitutions whatever (generally co-variant).’ – Albert Einstein, ‘The Foundation of the General Theory of Relativity’, Annalen der Physik, v49, 1916 (italics are Einstein’s own).]

  • efp

    I’ve been asking myself why, why would Davies send such a silly, sloppy piece to be published by the NYT. What purpose does it serve? It undermines science, by telling faith-heads like ID proponents that science is just another religion. He may have more subtle points (not very), but that’s the message most people will take away from the piece and he knows it. Then it occurred to me: he did it to sell his book. That’s tapping into a huge market. Perhaps he’ll cash in on the church lecture circuit like the Discovery Institute hacks? Brilliant, nicely done!

  • http://badidea.wordpress.com Bad

    Nigel, I don’t see why getting an award from Templeton or being sympathetic to their values and hopes should necessarily be a black flag, though certainly there is some legitimate criticism to be had of the way Templeton pushes its funding and message. Davies has defended it on several occasions, and while I’m not particularly eager for or interested in the goals, the defenses are not altogether unreasonable for people who do have religious inclinations and want to know if science can better direct where they’re pointing.

    Unfortunately for Davies, this OpEd really weakens his positions and defenses of that sort of program considerably, playing to exactly the sorts of legitimate fears many scientists have about science/religion “mixers.”

  • http://scienceblogs.com/pharyngula/ PZ Myers

    Hang on there… “PZ” isn’t a very colorful moniker. It’s just short.

    If bloggers get colorful monikers, where’s yours?

  • http://blogs.discovermagazine.com/cosmicvariance/sean/ Sean

    Anything with a “Z” in it is presumptively colorful. Even if you were born with it.

    I’m still working on a good moniker. Something imposing yet playful, like “Galileo Doombabble, Destroyer of Solecisms.” But I was too impatient to let that stop me from blogging.

  • http://www.sunclipse.org Blake Stacey

    “PZ” can be a colorful moniker if you’re a synaesthete! (-:

    As for the Templeton racket. . . by Janus, that sounds like easy money!

  • http://www.geocities.com/aletawcox/ Sam Cox

    I’ve always seen science as the process of observing, describing, measuring and relating. Any conclusions we draw are suspect, and subject to the results of the continuing refinement of our measurements.

    Speculating as to where the “laws” of physics (and nature in general) come from is of dubious value, for our conclusions are built on the results of relating a body of evidence which is constantly being refined. More important, perhaps, our conclusions are influenced by our own culture and point of view.

    Some time ago, I read Paul Davies’s “The Mind of God”. At the end, Paul asserts that “We are truly meant to be here”… an anthropic statement as much as an inference about the possible existence of a deity.

    Since any cosmic models “beyond Einstein” must be inclusive in their ability to describe relativistic phenomena, for example, it might be possible, with appropriate fear and trepidation, to draw a few general conclusions from what is presently understood about the nature of our existence, consciousness in general, and the relationship of informational complexity to the universe as a whole.

    To my mind, it all comes back to how the universe is observed. Fish see it one way, Dogs, with their acute sense of smell “see” it another. We as humans have our own unique way of viewing the cosmos. Each of our points of view is as unique as our individual fingerprints.

    Science is a unifying “language”…a human way of approaching and solving problems which must be strictly conformed to if it is to remain meaningful and useful. We all have “reasons” for “believing” the way we do; that is to be expected, for our frames of reference are different.

    However, in science, our intellectual concepts must be reduced to mathematical models and these models must be verified by increasingly rigorous testing in the field. The concepts and models (our “laws” of nature) are manmade. If they can be verified, they are possibly useful in the construction of a technology- even understanding the nature of our existence- and all scientists are interested. If not, our concepts remain but a personal opinion.

  • Jason Dick

    The whole “ultimate cause” line of argument is inherently dishonest. Consider this situation:

    Imagine that we have discovered the “theory of everything”: we have found the correct theory for unifying gravity and quantum mechanics. This theory is simple and beautiful; it reduces to a single equation which can itself be derived from a single physical principle. The person who wants to argue for the existence of God then states, “But what is your explanation for the existence of that physical principle? It must be God!” No, this is nonsense. You don’t get a free pass like that. Yes, there must be an ultimate explanation; at some point you reach an explanation for which there is no explanation. But to claim that any such explanation that is not God is not valid is just plain irrational.

    In fact, I contend, attempting to stick a being like God in as a “first cause” is itself fundamentally irrational. First, God, in the way it is typically defined, is a being that is itself unexplainable. So, in essence the argument is that the ultimate explanation is itself a mysterious entity which cannot be properly described. This is, of course, nonsense: if you don’t know what it is that is holding the place of an explanation, then you haven’t explained anything at all.

    Then there’s the problem that the explanation is itself monstrously complex. That is, if we consider the way people typically think of a deity, they think of one that is anthropomorphic, at least in the capacity to make decisions, and as such it “explains” the universe because it decided to make the universe as it is. Such a decision-making capacity requires tremendous complexity, making any such being that could fill the place of a decision-making creator God even more complex than that which it explains, reducing the whole edifice to a non-explanation in yet another way.

    As for potential ultimate explanations, I really like Max Tegmark’s mathiverse:
    http://space.mit.edu/home/tegmark/toe_frames.html

    The idea is pretty simple: perhaps the underlying principle at the heart of it all is nothing more than, “All mathematical structures have physical existence.” Certainly this is a very simple principle, enough such that I sincerely doubt that we can do better. The question remains as to whether or not it’s correct, and if we ever find the mathematical structure that is isomorphic to the region of the universe which we observe (the “theory of everything”), perhaps we will be able to say whether or not this mathiverse makes any sense.

  • http://exploringourmatrix.blogspot.com James McGrath

    Isn’t the decision not to take meaning and purpose as cosmic principles as much a ‘leap of faith’ as the decision to treat them that way? The best that religious viewpoints can offer is symbols of this transcendent mystery, but I don’t see how extrapolating from our experience of meaning to the meaningfulness of existence is any more or less justified than extrapolating from our experiences of meaninglessness. Indeed, I wonder if there is any worldview that can fully do justice to both.

    I do agree wholeheartedly that meaning is something we give to the universe, and is something that is not intrinsically connected to our explanations about the universe, to the extent that we have any. Well said!

  • jeff

    Paul Davies argues that “the laws should have an explanation from within the universe

    Seems to me that this is wrong even without assuming a God. (Aren’t there cosmologies where this universe is embedded within another?)

    The final possibility, which seems to be the right one, is: that’s just how things are.

    This will always be an unsatisfactory explanation, since it is in the nature of science to explain. If someone asks you why the sky is blue, should you say “that’s just how things are?” A more honest answer would be “we don’t know”. It’s best not to pretend to know what you don’t.

  • http://countiblis.blogspot.com Count Iblis

    The multiverse theory is increasingly popular, but it doesn’t so much explain the laws of physics as dodge the whole issue. There has to be a physical mechanism to make all those universes and bestow bylaws on them. This process will require its own laws, or meta-laws. Where do they come from? The problem has simply been shifted up a level from the laws of the universe to the meta-laws of the multiverse.

    I don’t understand this. Except for a probability distribution over the set of all universes, there are no “meta laws”. In the Tegmark ensemble a universe is just a mathematical model. The notion of “physical mechanism” doesn’t apply to the ensemble of universes. Physics is what an observer (a self-aware mathematical substructure, according to Tegmark) experiences in his universe.

  • Sergey

    I wonder if there is any solid proof that the values of the physical constants in far away galaxies are the same as here, or that they have not changed since the moment of the big bang? I am not sure, but I presume that the answer is negative and there is no proof. If so, then our reliance on the known values of the fundamental constants when we reason about the remote past or remote future is an act of belief or faith.
    I bet that a logician used to dealing with classical possible-worlds semantics for logics of knowledge and belief would tend to describe both belief and faith within the framework of the modal logic KD45. Let us see how the situation looks from the standpoint of modal epistemic logic.

    In modal logic of knowledge there is an axiom “if Know(F) is true then F is true”, and this axiom does not hold in the logic of belief. Both operators Know and Believe have similar semantics:

    “Know(F) is true in world M” means that F holds in all worlds accessible from given world M

    “Believe(F) is true in world M” means that F holds in all worlds accessible from given world M

    The difference between the two cases is that in the logic of knowledge the relation “R’ is accessible from R” has to be reflexive (which is the guarantee that F is true in M as long as Know(F) is true in M), while in the logic of belief this relation does not have to be reflexive. In both logics the accessibility relation is roughly interpreted as ranging over “all conceivable worlds.”

    Now, if it is logically possible that the fundamental constants are different in far away galaxies, yet in every situation we consider when reasoning about far away galaxies the fundamental constants are the same, then our selection of the set of possible worlds stipulates that we are working in the framework of the logic of belief. Bang! Davies was not wrong in some sense (at least from the standpoint of modal logic; see for yourself if this can be useful).
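    Sergey’s Know/Believe distinction can be sketched in a few lines of code. This is a hypothetical toy Kripke frame (the world names and the fact F are invented for illustration), not a full KD45 axiomatization: a box operator holds at a world iff the fact holds at every accessible world, and only a reflexive accessibility relation forces the fact to be true at the world itself.

```python
def box_holds(world, accessible, fact):
    """Box(F) holds at `world` iff F holds at every world accessible from it.
    `fact` is the set of worlds where F is true; `accessible` maps a world
    to the set of worlds it can 'see'."""
    return all(v in fact for v in accessible[world])

# Worlds where F = "the constants are the same as here" is true:
F = {"here", "nearby_galaxy"}

# Knowledge-style frame: reflexive at "here", so Know(F) there entails F there.
know_acc = {"here": {"here", "nearby_galaxy"}}

# Belief-style frame: "far_galaxy" does not see itself, so Believe(F) can
# hold at a world where F is actually false.
believe_acc = {"far_galaxy": {"here", "nearby_galaxy"}}

print(box_holds("here", know_acc, F))           # True, and F holds at "here"
print(box_holds("far_galaxy", believe_acc, F))  # True: we believe F there...
print("far_galaxy" in F)                        # ...but False: F fails there
```

    This is exactly the gap described above: dropping reflexivity turns knowledge into mere belief, so reasoning about distant galaxies under fixed constants is belief-like in the KD45 sense.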

    I think that the most fundamental difference between science and religion is that in science we can change our beliefs while remaining faithful to our occupation, but if a believer of some religion changes his beliefs, he finds that he has changed his religion. A set of beliefs defines a religion, but science cannot be defined by a fixed set of beliefs.

  • http://tyrannogenius.blogspot.com Neil B.

    First, folks, if Tegmark were right, there’d be no reliable continuity (lawful patterns continuing into the future), because of all the “universes” where attractive forces follow some rule other than 1/r^2 (or 1/r^(N-1), if there are N large space dimensions) and so on. After all, they are describable – I just did. Our chance of being in a description that really simulated lawfulness long-term would be negligible, even if we were in such a model up to this point.

    Jason Dick: You are simply wrong that we have to describe something clearly for it to make sense as an explanation. Negative and wide-ranging notions, like “not X” can be coherent logical concepts. As long as what I am saying is somewhere in a class that would fit the bill I am looking for, I don’t need to characterize it any more narrowly. As for God being complex: you don’t understand the idea of the plenum, and of intelligence etc. as potential rather than some particular structure. Ironically, physicists should know better, because of “the vacuum” being able to generate virtual particles of all kinds without having structure. In any case, we really can’t expect any explanation of things to be simplistically accessible in the same way as the things themselves, regardless of whether it’s about “God” or some other scheme, since all of them have to “reach outside” the given and straightforwardly comprehensible in some sense eventually.

    Davies: Davies is right that there’s a strong faith component to science: the idea that the universe is susceptible to scientific methods. To a large extent that’s true, and we know it because of what we’ve already accomplished. But there is no reason to expect a priori or for any other reason I know, that everything about it would be. Why should it be? Does it have some obligation to do that? That would be a kind of ironic perspective coming from people who gripe about “anthropocentrism” wouldn’t it? (And don’t ask me to prove that it isn’t, for the claimant is the one with the burden of proof, not to be confused with whoever seems more outside some mainstream or orthodoxy versus the inside.)

    But then there is the claim described here as “…a claim to the effect that the laws of nature themselves require an explanation, and that explanation can’t come from the outside.” Assuming that’s a fair characterization (certainly well intentioned, but given Davies’s non-simplistic way of parsing things, I’m going to double check): this time I don’t agree with him at all. To me, it’s clearly the other way around: the laws of nature have to come from outside. The one thing this universe can’t do is justify its own laws independently, because of the problem of existential preferability among all possible choices, much discussed among philosophers (but not widely appreciated among scientists, who are – despite their pretensions to being philosophically adept – mostly philosophical near-illiterates).

    I give Sean credit for being suspicious of that opinion of Davies, presumably holding open the door for the idea that the source of the laws being the way they are *does* come from outside the universe. That’s a good expression of open-mindedness here, assuming it’s genuine and won’t be hemmed away later due to fear it helps ideas of God etc. (I don’t see why so many of you feel so *driven* to fight against any notion of existential dependency of the universe. If that would make science harder to do, tough luck: you have no right to assume material facts from matters of convenience.)

    One of the commenters well put it thus in a past thread, regarding this stuff having no reason to exist as such without an overarching foundation of being:

    Garth Barber on Nov 13th, 2007 at 9:06 am in “Please Tell Me What “God” Means”

    … And I am entitled to hold my opinion: “I define God as the author and guarantor of the laws of science – the agent that (constantly) “breathes fire into the equations, making a universe for them to describe.”…

  • Pingback: SF Diplomat

  • John Merryman

    Safe to say we are nowhere near ultimate explanations, so the only real issue is whether the current set of axioms can be further reduced. Is this clog in the drain entirely due to factors beyond our control, or are there institutional strictures that prevent objective review of previous assumptions? I made a few arguments in a previous thread that were very basic in their reasoning, such as that time is a function of motion, like temperature, rather than a dimensional basis for it, like space, so it would seem likely anyone here would be able to set me straight. Yet the only one with the fortitude to address them was Jason, and his defense ultimately boiled down to this: if I wasn’t able to describe the problem in mathematical terms, it was meaningless. Which I pointed out was a retreat into formula, not a rebuttal.

    So, the question is, are our limits entirely due to the abilities of our knowledge, or does the fog of politics play some part as well?

    I may well be lowering the average intelligence of this discussion, but sometimes what we think we know is more dangerous than what we don’t, since so much has been invested in it.

  • Belizean

    Davies is only technically correct. Faith is an integral property of physics and religion in the same sense that heat is an integral property of ice and molten lead.

    Neil B. wrote:

    …the laws of nature have to come from outside. The one thing this universe can’t do is just justify its own laws independently…

    That’s the crux of the matter. The naturalistic position is that there is no outside, the supernaturalistic position is that there is.

    Note that the “outside” must be supernatural. Otherwise it would (by definition) be intelligible to us and thereby part of the “inside”.

  • http://badidea.wordpress.com Bad

    JasonD, I like to put it like this:

    Saying that God did it is basically saying

    “A hypothetical being that can do anything at all did it in a way we don’t understand.”

    Claiming that this explains anything at all is indeed nonsense. It can explain anything merely by definition (despite never actually explaining anything) and hence explains nothing. Of COURSE something that can do ANYTHING could have done this thing. But that doesn’t explain how it was, in particular, done, and that answer could work for anything at all.

  • Reginald Selkirk

    (Bloggers have much more colorful monikers than respectable folk.)

    Have you met Dr. Lionel Tiger?

  • Juergen

    Just a few words on the laws of nature:
    The laws of nature have to be the way they are in order for us to observe them.
    Were they off by just a fraction (in, say, the gravitational or electromagnetic couplings), there would be no observer, or at least not us, to question them. Ergo the laws of nature do NOT need an explanation! They are as they are; were they any different, the universe we live in would be very different.
    A different universe, on the other hand, would have different laws of nature!

  • Reginald Selkirk

    Neil B. said: Davies is right that there’s a strong faith component to science: the idea that the universe is susceptible to scientific methods. To a large extent that’s true, and we know it because of what we’ve already accomplished.

    I think the usual definition of “faith” in these discussions is “belief in the absence of, or even in spite of, supporting evidence.” What does it mean to say that we have “faith” in the evidence? Isn’t that absurd? Hypothesis and experimentation have been proceeding now for centuries, and the evidence is accumulating. Wouldn’t another word be more suitable, such as “confidence” or “trust”? What definition of “faith” are you using that you can make fit both religion and science? Because if you are using different definitions, that is cheating.

  • http://countiblis.blogspot.com Count Iblis

    First, folks, if Tegmark were right, there’d be no reliable continuity (lawful patterns continuing into the future) because of all the “universes” where attractive forces change into being some other rule than 1/r^2 (or r^(N-1) if N is large space dimensions) and etc. After all, they are describable – I just did. Our chance of being in a description that really simulated lawfulness long-term would be negligible, even if we were in such a model up to this point.

    Such universes require more information to describe. One has to assume that universes that can be described with fewer bits are more likely. Note that an observer is itself a mathematical model that is simulated by a brain, which in turn is described by the laws of physics.

    You can think of an observer as living in the simulation that the brain is computing. But the effective laws of physics of this virtual world are extremely complicated. In this virtual world the qualia we can experience are fundamental physical objects.

    This world exists as a universe in its own right, but because you need an enormous amount of information to describe it, we don’t find ourselves there; rather, we see ourselves being simulated in this universe.
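    The “fewer bits” weighting above can be sketched as a Solomonoff-style prior, weighting each candidate universe description by 2^(-length in bits) and normalizing; the candidate names and bit counts below are purely illustrative, not from the comment:

```python
# Toy Solomonoff-style prior: weight each candidate "universe description"
# by 2^(-length in bits), then normalize.  Shorter descriptions dominate.

def complexity_prior(description_lengths):
    weights = {name: 2.0 ** -bits for name, bits in description_lengths.items()}
    total = sum(weights.values())
    return {name: w / total for name, w in weights.items()}

# Hypothetical description lengths (in bits) for three kinds of universe:
lengths = {
    "uniform 1/r^2 law": 100,
    "law that flips at one moment": 150,   # must also encode *when* it flips
    "patchwork of ad hoc rules": 300,
}

prior = complexity_prior(lengths)
# Simple laws swamp the measure: the uniform-law universe gets
# essentially all the prior weight.
assert prior["uniform 1/r^2 law"] > 0.999
```

    The point of the sketch is only that under such a weighting, universes whose laws change (and so need extra bits to say when and how) are exponentially suppressed relative to universes with one uniform law.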

  • Not Required

    Sean C said: “The final possibility, which seems to be the right one, is: that’s just how things are.”

    I guess SC is aware that many philosophers respond in just this way when talking about the low-entropy conditions at the beginning of the universe. I’m sure that SC would dismiss that. But why? That’s just how things were at the beginning of time.

    In fact, “that’s just how things are” is not so far from “It’s God’s Will”. Both of them are cop-outs.

  • Elliot

    the haiku master
    is embedded silently
    within the haiku

    e.

  • Cynthia

    In one sense, Paul Davies leans towards the proponents of ID, meaning the non-secular anthropists. In another sense, though, he leans towards the ID opponents, meaning the secular anthropists.

    But because some IDers appear rather pleased with Davies’ message (or at least see his message in a positive light), this may serve as evidence that Davies’ anthropic leanings are slightly non-secular in origin.

    http://www.uncommondescent.com/intelligent-design/taking-science-on-faith/

    In the meantime, though, I’ll reserve judgement as to which way Davies leans till I hear what secular anthropists (namely the theorists studying the stringy Landscape) have to say about his message. But as long as the non-secular anthropists see ID as reality while the secular anthropists see ID as an illusion, the secular ones (Raphael Bousso, Joseph Polchinski, and Lenny Susskind, to name a few) won’t be too pleased to hear that Davies’ message pleases some IDers…

    Be mindful, though, despite the recent slew of media attacks on strings, I’m still of the strong opinion that the stringy Landscape offers, far and away, the best approach to the cosmological constant problem, not to mention the best approach to the origins of our pocket Universe!

  • John Merryman

    Three dimensions of space are simply a coordinate system centered on the point where three lines cross, and the same space could be described by any number of such coordinate systems/frames. Therefore the motion of any particular coordinate system, relative to the other systems, isn’t an additional dimension, but a process of interaction, where any action is matched by an equal and opposite reaction, so that to the hands of the clock, it is the face going counterclockwise. There is no dimension of time, as one direction is balanced by the other. Motion in space creates a series of events, so that as one is replaced by the next, the previous recedes into the past. The physical reality isn’t moving along this narrative, it is creating it.

    If no one is willing to stand up and defend four dimensional spacetime, how much of the rest of what is being discussed is the modern equivalent of arguing how many angels can dance on the head of a pin?

  • bob

    Why is so much time spent here talking about gods? It may be arrogant to say so, but really, the subject borders on the infantile.

  • bob

    Okay, that came out sounding like a criticism of Cosmic Variance, but my point is actually just the opposite: I love this blog for its science and just don’t see any point in discussing gods.

  • graviton383

    Do you think we’ll STILL be having these arguments in 300 years??

  • http://realityconditions.blogspot.com Alejandro

    Cynthia: According to Davies’s book (which I reviewed and criticized a while ago), he disagrees with both secular anthropicists (who put the “ultimate turtle” on a random selection from a multiverse) and IDers (who put it on the will of a transcendental deity). His preferred explanation for why the universe is what it is, and friendly to life, is some sort of teleological principle inherent in the laws of nature rather than transcendent to them. He offers little articulation of how this principle would work, and even fewer arguments in support of it.

  • Cynthia

    Alejandro: Thanks so much for clearing up some of my misconceptions regarding Davies’ views on the Cosmos! And I must commend you on putting together an excellent review of “Cosmic Jackpot.” Even if Davies isn’t very good at creating his own philosophy, I think I’ll still read this book of his, just because he’s good at explaining the philosophies of others.

    BTW, I stumbled upon this talk of his at Fermilab 10/05 which seems to serve as a rough draft for “Cosmic Jackpot”:

    http://vmsstreamer1.fnal.gov/VMS_Site_03/Lectures/Colloquium/051005Davies/index.htm

  • http://zhogin.narod.ru Ivan

    Math or magic/miracle?
    Some people (e.g. physicists) prefer math, because math gives predictions (right or wrong, it depends); magic can produce only illusions (of hope).

    It is far from true that just any math is suitable for our reality.

    (1) It seems we need a field theory; otherwise there will be some form of action at a distance, which is a kind of magic (hard spheres with Newtonian gravitation; that description was not acceptable even to Newton himself). Such magic can look pretty only in computer games, or in sci-fi books and movies.

    (2) That fundamental (or low-level) field theory should be NON-LINEAR; otherwise Achilles would not feel or see his pet turtle (moreover, neither would exist). It should also be hyperbolic (and well-posed, with D>=4), and preferably should provide solutions carrying digital information, say topological charges and quasi-charges, leading to quantum-like phenomenological (upper-level) models.

    (3) Last but not least: solutions in general position (of this theory) should be eternal (like sin(wt) for the pendulum, where all moments of time have equal rights); and this requirement is very, very difficult to satisfy (gradient catastrophe and singularities). Today I know of only one theory that meets all these demands.

    Assume we have a perfect computer (not without intellect and some sense of humor, if a bit artificial). That computer is not interested in what we call our fields (amplitude of probability, metric field, or something else): “no metaphysics, please”.
    He/she/it only asks:
    “Please give me your equations. Pragmatism, yeah?”
    “Oh, you know, we still have to quantize gravity, so at the moment we do not have a closed and self-consistent theory…”
    “Well, if you are so convinced that you should do that, then nothing can be done about it (nichego ne popishesh’).”

  • Michael T

    What strikes me as interesting in the discussion of Davies’s article, and indeed of “The Cosmic Jackpot”, is that it is firmly based in the Christian narrative (actually one can include all of the Abrahamic religions in this context). I think it is important to recognize the boundaries and constraints this necessarily imposes upon the dialog. That said, if you were a Taoist you wouldn’t be having this discussion; as a matter of fact, Davies would not even have written such a piece. But I digress.

    I hate to do this, but to quote scripture: Paul defined faith (in one of many translations) as “the assured expectation of realities not beheld”. Is it then fair to say that a scientific assertion unable to be tested falls within the founder of Christianity’s very definition of faith?

  • Harvey

    Bad

    Thanks for taking the time out to answer my question.

    many thanks

    Harvey

  • Thomas Larsson

    When I was in grad school, I heard a talk by Davies. I definitely got the impression from senior people that Davies was somebody that one should laugh at, like Fritjof Capra. But maybe they were just envious that he had made a bundle on his books. This was around or before the first string theory revolution, so string theory had nothing to do with it.

  • Pingback: Tro, vetenskap och ett synnerligen rationellt resonemang om det bortom det rationella [“Faith, science, and an exceptionally rational argument about what lies beyond the rational”] « 1 är inte ett stort tal

  • Jason Dick

    First, folks, if Tegmark were right, there’d be no reliable continuity (lawful patterns continuing into the future) because of all the “universes” where attractive forces change into being some other rule than 1/r^2 (or r^(N-1) if N is large space dimensions) and etc. After all, they are describable – I just did. Our chance of being in a description that really simulated lawfulness long-term would be negligible, even if we were in such a model up to this point.

    This is completely false, in a number of regards:
    1. Mathematical structures don’t change. They are a particular way because of the axioms used to generate them, and cannot change. Therefore any complex substructure that is capable of understanding the universe it is in will necessarily see a universe that has understandable, mathematical laws, and there would be a most fundamental representation of those laws that was completely invariant.
    2. Just because you can describe something doesn’t mean that that something is a mathematical structure. In order for it to be so, it must be free from contradiction, not just something that can be imagined.

  • tyler

    Oooh, a Tegmark paper brawl! I have *so* been waiting for this. Go!

  • http://www.gregegan.net/ Greg Egan

    1. Mathematical structures don’t change. They are a particular way because of the axioms used to generate them, and cannot change. Therefore any complex substructure that is capable of understanding the universe it is in will necessarily see a universe that has understandable, mathematical laws, and there would be a most fundamental representation of those laws that was completely invariant.

    That’s the simplest possibility, but I’m not sure that other possibilities are entirely ruled out. For example, suppose that a Turing machine, or any other system with computational universality, is taken to be the true underlying mathematical system. (For those who don’t believe a Turing machine can support consciousness, feel free to augment it with whatever extra structure is needed.)

    It’s not hard to imagine a Turing machine running a simulation of a world with, simultaneously, conscious beings with consistent memories, but variable and/or inconsistent “laws of physics”. Computer games do the latter all the time: there is usually no underlying set of universal rules, just a patchwork of ad hoc approaches to different phenomena. The conscious participants would then have to be supported by separate algorithms, rather than being supported by what they perceived as the laws governing the world around them.

    Now this is obviously a very ugly and inelegant class of mathematical structures, but if all mathematical structures exist, these ones certainly include conscious inhabitants, and would have to be counted.

    That said, though Tegmark talks about measures on the space of all mathematical structures, I don’t think he’s ever proposed an actual candidate for such a measure. Personally, I’m of the opinion that the measure is irrelevant! If all structures get to exist, then even if Turing machines with patchwork physics vastly “outnumbered” structures with uniform laws of physics, then I think it’s committing the selection fallacy to say that we learn anything by noticing that we’re not in the majority class. When the hypothesis says the minority class must exist along with the majority, P(at least one observer finds themself in the minority class | hypothesis) = 1, so the probability of the hypothesis is unaltered by the observation. IMO, Tegmark’s hypothesis is aesthetically appealing but completely untestable.

  • http://name99.org/blog99 Maynard Handley

    This is what happens when you go all squishy and start accepting statements like “religious people are just as intelligent as atheists, and deserve the same level of respect”.
    Dawkins and Hitchens have the right idea.

  • Peter Shor

    graviton says

    Do you think we’ll STILL be having these arguments in 300 years??

    I think this is the question which distinguishes philosophy from science, and I would bet that this is philosophy.

  • http://countiblis.blogspot.com Count Iblis

    I don’t think the argument will go on for 300 years. A century from now we’ll have intelligent machines more powerful than the human brain. Humans will then be replaced by machines. From the perspective of intelligent machines, the Tegmarkian way of looking at things is more natural than it is for us…

  • Jason Dick

    It’s not hard to imagine a Turing machine running a simulation of a world with, simultaneously, conscious beings with consistent memories, but variable and/or inconsistent “laws of physics”. Computer games do the latter all the time: there is usually no underlying set of universal rules, just a patchwork of ad hoc approaches to different phenomena. The conscious participants would then have to be supported by separate algorithms, rather than being supported by what they perceived as the laws governing the world around them.

    Then the invariant laws would be those that determine how the “laws of physics” change with time. Heck, you might even have something as complex as a self-referencing algorithm for the change of the laws, such that the laws can, in effect, modify themselves. But there would still be an invariant algorithm somewhere.

    That said, though Tegmark talks about measures on the space of all mathematical structures, I don’t think he’s ever proposed an actual candidate for such a measure. Personally, I’m of the opinion that the measure is irrelevant! If all structures get to exist, then even if Turing machines with patchwork physics vastly “outnumbered” structures with uniform laws of physics, then I think it’s committing the selection fallacy to say that we learn anything by noticing that we’re not in the majority class. When the hypothesis says the minority class must exist along with the majority, P(at least one observer finds themself in the minority class | hypothesis) = 1, so the probability of the hypothesis is unaltered by the observation. IMO, Tegmark’s hypothesis is aesthetically appealing but completely untestable.

    Whenever we deal with whether or not a theory is correct, we must necessarily deal with probabilities. This means, yes, that it is possible we are incorrect. Therefore it only makes sense to rule out a theory when the probabilities against it are so astronomical that we might as well consider it impossible.

    For example, if you found that there were two types of mathematical structure in which intelligent life could potentially evolve (structure A and structure B), and 10^10 more intelligent observers would evolve in A than in B, but we observe that we live in B, then we expect that there’s something wrong with our theory. A theory which instead predicts that there will be as many observers in structure A as in B, or more observers in B, is vastly more likely to be correct.
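    This observer-counting argument amounts to a small Bayes update under a self-sampling assumption (treat yourself as a random draw from all observers a theory predicts); a sketch with illustrative numbers, where "theory1" and "theory2" are the two hypothetical theories being compared:

```python
# Bayesian comparison of two theories under a self-sampling assumption:
# treat yourself as a random draw from all observers a theory predicts.
#
# Theory 1: 10^10 times more observers in structure A than in B.
# Theory 2: equal numbers of observers in A and B.
# Observation: we find ourselves in structure B.

prior = {"theory1": 0.5, "theory2": 0.5}          # agnostic prior
likelihood_B = {"theory1": 1.0 / (1.0 + 1e10),    # P(random observer is in B)
                "theory2": 0.5}

unnormalized = {t: prior[t] * likelihood_B[t] for t in prior}
total = sum(unnormalized.values())
posterior = {t: p / total for t, p in unnormalized.items()}

# The observation "we live in B" pushes nearly all posterior weight
# onto the theory that doesn't make us wildly atypical.
assert posterior["theory2"] > 0.9999
```

    Note that the whole calculation hinges on treating ourselves as a random draw from the observer pool, which is exactly the premise disputed later in the thread.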

  • http://msm.grumpybumpers.com Coin

    Do you think we’ll STILL be having these arguments in 300 years??

    Of course we will. We were having them in 1707! Why wouldn’t one expect to be having them in 2307 as well?

  • http://tyrannogenius.blogspot.com Neil B.

    About faith in science: “Faith”, like lots of conceptual things, isn’t well defined. I prefer as a general definition, something you believe because of some attraction but in the *absence* of positive evidence (N.E.T. presence of negative evidence.) Remember that added traits can be assigned to a subcategory, so what you would still believe even in the face of contrary evidence could be called “blind faith” etc. The faith that *everything* about the universe can be discovered scientifically is “faith” because that degree of scope *is not* evidenced, despite all the things we have gotten such a handle on already (fallacy of presuming a trend *must* continue? Uh, some don’t.) In fact, it looks more like “blind faith” to me, since e.g. we already know that the specific time when a Co-60 nucleus will decay is not discoverable or explainable. Calling it “probability” doesn’t keep that from having those implications. Every method, every tool, has its pros and cons, its capabilities and its limitations. There was no reason to assume that the process used for discovering “laws” would lend itself to answering why they are like that to start with. It is an act of faith, of assuming apples from oranges, to think it will, regardless of how many particular processes have been explained *in terms of* the laws of nature.

    I can accept that many of you are suspicious of “metaphysics”, and I have no problem with someone believing or not in whatever is yet undecided or undecidable. But can’t you see the hypocrisy of blithely throwing around “other universes” and variable or other laws of physics if you say you don’t accept what isn’t scientifically accessible? Those things are just not accessible to current or maybe any level of science. Why aren’t you demanding laboratory proof of other universes, “the landscape”, other laws of physics in action, etc? I say – you don’t because of the common vulgar practice of letting “your own” get away with whatever they need to, and only complaining when the other side does it (like in political partisanship, which this argument parallels too much.) Some have accused philosophical theologians (PTers) like me of being “dishonest” – well, I have described real dishonesty. (But it’s OK for someone who admits to doing philosophy to use such ideas, since there’s no self-contradiction. Tough.)

    As for whether we should be able to describe “God” etc., no: the best argument is that the universe is not existentially self-sufficient and therefore “something else” is responsible. Such negative arguments are valid avenues of reasoning – we do not, again, have to get any precise handle on what is left once a given postulate (the universe *is* self-sufficient) is rejected. To those throwing around clearly uninformed and naïve pretensions of what the best arguments in theological philosophy are like, I can only say: you just don’t know how it’s done. Finally, don’t refer to PT as if it were “ID” – AFAICT, the IDers think of intervention in nature, even I suppose of creatures being made whole like Venus on the half-shell or Adam from dust. PTers consider the issues of existential dependency, why laws are what they are, etc.: it has nothing to do with altering what’s here. (Sure, “if God existed maybe He could do that, right?”; and if nature wasn’t orderly, maybe it could act like that anyway too, doh – I am working off the apparent lack of *evidence* that it does, just like you are; only the interpretation of the “Why” is different.) Comparing PT to ID is like comparing liberals to communists. (BTW, many of the militant atheists/anti-metaphysicians remind me of harsh talk-radio bullies like Rush Limbaugh and even Ann Coulter – it’s the same tough-guy/gal, anti-sentimentalist (the other side are soft-hearted sissies, we are cold-hearted and red-blooded real men, etc.) bullying instinct at work. I don’t mean that this is evidenced by failure to accept a given theological argument, but it’s out there.)

  • http://magicdragon.com Jonathan Vos Post

    I like Greg Egan’s comment! Of course, there may be a way to locally test Tegmark’s theory, by probing whether or not our universe is logically consistent. Which calls to mind a story called “Luminous” … by Greg Egan.

    Davies opens a can of multidimensional worms by inter-relating Theology, Math, and Physics. This he does without axiomatizing his Theophysics and Theomathematics.

    Here’s my first cut at classifying, enumerating, and unifying some of the arguments made in this thread.

    Let G = “God exists.”
    Let M = “Math works (is consistent, etc.).”
    Let P = “The physical universe exists.”

    Not sure how to draw the symbol in HTML, so let’s use the word “proves” instead of the turnstile symbol from Proof Theory.

    We have 6 metaphysical stances relating these three statements, for each of which I make a brief comment:

    P proves M (and Applied Math is more “real” than nonphysical abstract Math).

    P proves G (Deist and Creationist argument that the beauty and harmony
    of the cosmos prove the glory of the creator).

    G proves P (Spinoza’s theory that the universe exists “in the mind of God”).

    G proves M (God is the ultimate mathematician; Blake’s etching of God as Geometer).

    M proves G (specious Mathematical “proofs” of the existence of God).

    M proves P (Tegmark’s theory that we live inside a mathematical object).

    Pairs of these can give isomorphisms related to Medieval and Galileo’s
    claim that the Book of Nature and the Bible are two different views of
    the same thing.

    We can also provide 6 Unifications of Theomathematics and Theophysics:

    P proves M proves G
    P proves G proves M
    G proves P proves M
    G proves M proves P
    M proves G proves P
    M proves P proves G

    which can, if a loop is valid (such as M proves P proves G proves M),
    collapse God, Math, and Physics to equivalence.

    Many open questions remain. For example: does “The Unreasonable Effectiveness of Mathematics in the Natural Sciences” by Eugene Wigner suggest that M proves P, or that P proves M?

    I don’t think there is any new content here; just an original
    notational way to classify a large body of writings from disparate
    authors.

    Of course, some of the arguments that I classify have intrinsic
    historical importance, or literary merit.

    For example:

    “The famous beginning of Psalm 19 announces that the heavens declare
    the glory of God and the sky declares his handiwork.”

    When I classify that as “P proves G” something has clearly been lost
    in translation.

    “Desert Storm: Understanding the Capricious God of the Psalms,” by James Wood, The New Yorker (Books), October 1, 2007:

    “What is God like? Is he merciful, just, loving, vengeful, jealous? Is
    he a bodiless force, a cool watchmaker, or a hot interventionist, a
    doer with big opinions, a busy chap up in Heaven? Does he, for
    instance, approve of charity and disapprove of adultery? Or are these
    attributes instead like glass baubles that we throw against the statue
    of his invisibility, inevitably shattering into mere words? The
    medieval Jewish thinker Maimonides thought that it was futile to
    belittle God by giving him human attributes; to do so was to commit
    what later philosophers would call a category mistake. We cannot
    describe his essence; better to worship in reverent silence. ‘Silence
    is praise to thee,’ Maimonides wrote, quoting from the second verse of
    Psalm 65….” [truncated]

    Needless to say, many of the “complaints, fears, hopes…prayers,
    songs, incantations…soliloquies” are mutually inconsistent, and much
    writing on these subjects is internally contradictory.

    And so, I think, is the Davies argument. Of course, he’s made over a megabuck from such arguments, so avoiding contradiction may not be his goal.

  • http://www.gregegan.net/ Greg Egan

    But there would still be an invariant algorithm somewhere.

    Sure. But my point is that it’s conceivable that the underlying regularity would be completely inaccessible to us, and once you allow Turing machines in the Tegmark hypothesis, you’ve predicted essentially every set of (computable) observations, however chaotic and irregular.

    For example, if you found that there were two types of mathematical structure in which intelligent life could potentially evolve (structure A and structure B), and 10^10 more intelligent observers would evolve in A than in B, but we observe that we live in B, then we expect that there’s something wrong with our theory.

    This is intuitively appealing, but I think it’s wrong. You stress probability, but Tegmark’s hypothesis leads to all its outcomes with certainty. If Tegmark’s hypothesis predicts our universe with certainty (and I think it does), and also another universe in which life is far more common by a factor of 10^10 (which I expect it also does, since you get to fiddle not only with the Drake equation but also fundamental constants), then why does our failure to live in the second universe falsify, or even lower the likelihood of, Tegmark’s hypothesis? When we make an observation, it is not preceded by a process in which we are selected at random from the pool of all conscious beings. When there are multiple universes containing conscious life, there are a multitude of observations — and the observations we make don’t need to be picked from some giant cosmic barrel in order for us to make them. Under such hypotheses, we are not testing (and can not test) what a “typical” observer will see.

    I think the false intuition here arises by comparison with probabilistic theories that deal with a finite number of trials. If theory A tells us that we have a fair coin, while theory B tells us that it’s biased for heads, then if we toss the coin 10 times and see 10 heads, we are entitled to favour theory B. But what’s different here is that only 1 in 2^10 of the possible outcomes is generated. Tegmark’s hypothesis generates all outcomes, not a random sample. The conditional probabilities are all equal to 1, so Bayes’s theorem gives us nothing with which to refine our prior probabilities. (Well, there might be some conditional probabilities of 0, if you can think of something that literally can’t happen within any mathematical structure.)
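    The contrast can be spelled out numerically: in the coin case the likelihoods differ between theories, while in the all-outcomes case every likelihood is 1 and the posterior never moves. A sketch with illustrative numbers (the bias value 0.9 is made up for the example):

```python
# Bayes updating: a sampling theory vs. an "all outcomes occur" theory.

def posterior(prior_a, like_a, like_b):
    """Posterior probability of theory A given the observed data's
    likelihood under each of two theories."""
    pa = prior_a * like_a
    pb = (1 - prior_a) * like_b
    return pa / (pa + pb)

# Coin case: theory A = fair coin, theory B = heads-biased coin (p = 0.9).
# Observing 10 heads in 10 tosses strongly disfavours the fair coin.
p_fair = posterior(0.5, 0.5 ** 10, 0.9 ** 10)
assert p_fair < 0.01  # fair-coin theory is nearly ruled out

# All-outcomes case: both theories generate *every* outcome with certainty,
# so the likelihood of any observation is 1 under each, and Bayes is inert.
p_all = posterior(0.5, 1.0, 1.0)
assert p_all == 0.5   # posterior equals prior: no update
```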

    There’s a similar flawed intuition behind the Boltzmann Brain argument, which claims that we should favour cosmological parameters that rule out the future evolution of Boltzmann Brains, on the basis that if they eventually come into existence and outnumber us, that would render us vastly atypical. To which I think the correct response is “so what?” If theories with parameter set A and parameter set B both give comparable probabilities for planet-bound life like us to exist at all, then it’s irrelevant whether or not set A also predicts a far future full of 10^10 more conscious vacuum fluctuations than there ever were instances of conscious planetary life.

    Sean mentioned a paper a while back, “Are We Typical?” by Hartle and Srednicki, that deals with some of these issues, though since it doesn’t address Tegmark it doesn’t confront head-on what it means when a hypothesis predicts essentially all possible observations.

  • John Merryman

    There’s an old African saying that if you want to travel fast, go alone, but if you want to travel far, go with a group.

    The problem, especially in the chaos of the modern world, is defining and motivating the group. That’s where arbitrary beliefs are useful. They separate the true believers from everyone else.

    Science does this subconsciously as well. That is what four-dimensional spacetime is: the frame and the direction, with no conflict or paradox. Remember, Christianity didn’t become the state religion of Rome because Jesus was such a nice guy, but because Constantine had a vision of the cross as a war totem. What better to get everyone focused and moving in the same direction than the crosshairs of a two-dimensional coordinate system?

    Look at cosmology. It has coalesced around a theory of the universe as a single entity, going from start to finish, and anytime the data is contradictory, some new energy, force, or additional theory is assumed; the theory itself is never questioned. Is that the epitome of true belief, or what?

    The problem is that it is just another house of cards or bubble. Usually these grow until they just can’t anymore; then it all falls flat as a pancake and everyone wonders how they ever thought pets.com, or that house, or that theory could have been that important, or that expensive. Someday, in the not too distant future, there will be a lot of Christians wondering why they didn’t get raptured before everything fell apart, but today’s cosmologists won’t have that epiphany. The economy supporting their endeavors to find the edge of the universe will collapse before it gives them that next, bigger telescope they need, and they will go to their graves believing, hoping someday the proper instruments will be built.

  • http://www.geocities.com/aletawcox/ Sam Cox

    The parameters and constraints of a given model determine the reasonable possibilities for conscious existence in the cosmology.

    An eternal universe of infinite mass in tandem with flat space can eventually, by conjecture of a certain kind, produce anything and everything…everything is possible.

    An eternal universe of finite mass with closed space and invariant frames of reference still has great potential to produce complexity, and possesses built-in engineering constraints which make the development of complexity plausible.

    Any cosmology which is finite in time, finite in mass and limited in spatial extent is highly unlikely to contain high levels of informational complexity.

    Note in paragraph 2 the phrase “conjecture of a certain kind”. I imply that such a cosmology is fatally flawed. Complexity requires well-defined cosmological constraints to be formed, conserved, and evolve, just as organic evolution on Earth is only possible for fish while bodies of water exist, or for higher animals so long as the atmosphere contains oxygen, or as the initial origin of life required certain substances (amino acids, etc.) in pools of water and probably lightning.

    The idea that overall universal entropy can decline and informational complexity increase in an open soup of quantum fluctuations is highly suspect…infinite or not, perhaps ESPECIALLY if it is infinite!

    It is interesting that when we have so little understanding of the development of information and complexity in our own universe, we try to escape our dilemma by imagining infinite additional sets of universes, the existence of which is only suggested by assuming our present hypotheses are correct.

    Any scientist needs to develop a profound respect for the universal existence of inorganic information and organic high complexity in the universe. We also need to remind ourselves, as we construct models, of the necessary quantum connection between observation and existence.

  • http://bodbrain.blogspot.com Aaron

    I am not a scientist, but I don’t recall it ever being said that scientists have faith that the universe behaves according to “rational” physical laws. Quantum mechanics seemingly defies rationality, as do aspects of General Relativity (black holes). Of course, our understanding of rationality is biased by our own evolution. We’ve identified a few rules which the universe obeys. But the fact that the universe obeys them does not make them rational; it just means it obeys them. We don’t need faith to believe this because we have evidence.

    Five hundred years ago, the reigning rationality was that the Sun revolved around the Earth. It is only rational in retrospect that the Earth revolves around the Sun. In this case, religion had to bend to science, but the science did not change. Religion will have to continue to redraw the bounds of faith to explain the new truths of science.

    Why does our universe obey these rules? The potential explanations are innumerable, at least as many as there are habitable planets in the universe, but the observation, the rule, remains valid in each case. It doesn’t matter if the rule is rational; it only matters that you can observe it.

  • http://www.iidb.org RBH

    Davies’ blather exemplifies the fact that a sentence that has the syntactic form of an interrogative is not necessarily a sensible question.

  • Pingback: Woit’s loaded terms; Sean’s self-professed unbias; both a facade of hard science « Society with Jimmy Crankn

  • http://quasar9.blogspot.com/ Quasar9

    Santa does not exist
    but you can visit Santa’s Grotto

    at the click of a ‘mouse’ – and almost at the speed of light
    birds do it, bees do it, even turtles do it…

  • Jud

    Sean, it seemed to me your post treated the questions “Could the fundamental laws of physics possibly be otherwise?” and “Does the universe have meaning and/or purpose?” as equivalent. I wonder if this equivalency is of necessity, i.e., even if we learn that the laws of physics could not be otherwise, does this necessarily bear at all on whether the universe has meaning or purpose?

    My instinct would be to answer in the negative, but I must admit I haven’t thought much about it.

  • http://quasar9.blogspot.com/ Quasar9

    The universe does not behave according to our pre-conceived ideas. It continues to surprise us.

    One might not think it mattered very much, if determinism broke down near black holes. We are almost certainly at least a few light years, from a black hole of any size. But, the Uncertainty Principle implies that every region of space should be full of tiny virtual black holes, which appear and disappear again. One would think that particles and information could fall into these black holes, and be lost. Because these virtual black holes are so small, a hundred billion billion times smaller than the nucleus of an atom, the rate at which information would be lost would be very low. That is why the laws of science appear deterministic, to a very good approximation. But in extreme conditions, like in the early universe, or in high energy particle collisions, there could be significant loss of information. This would lead to unpredictability, in the evolution of the universe.

    To sum up, what I have been talking about, is whether the universe evolves in an arbitrary way, or whether it is deterministic. The classical view, put forward by Laplace, was that the future motion of particles was completely determined, if one knew their positions and speeds at one time. This view had to be modified, when Heisenberg put forward his Uncertainty Principle, which said that one could not know both the position, and the speed, accurately. However, it was still possible to predict one combination of position and speed. But even this limited predictability disappeared, when the effects of black holes were taken into account. The loss of particles and information down black holes meant that the particles that came out were random. One could calculate probabilities, but one could not make any definite predictions. Thus, the future of the universe is not completely determined by the laws of science, and its present state, as Laplace thought. God still has a few tricks up his sleeve.

    That is all I have to say for the moment. Thank you for listening.

    Does God Play Dice? by Professor Stephen Hawking

  • http://scilearn.blogspot.com/ Freiddie

    I would usually consider such questions as something beyond science. So, for a typical scientist, it’s not their job to ask such overwhelming questions. What I think is that if we keep asking ourselves questions that bore deeper and deeper and become more and more fundamental, we end up falling into a bottomless pit. It’s just like that: once we find a set of general rules, we question them and form new sets of rules to explain those. Then we go further, question these rules, and form another, more fundamental set of rules to explain them. It’s like proving in mathematics: you need theorems to prove theorems, and axioms to prove the most fundamental theorems. At the end of the day, you start questioning those axioms and find that you are digging into an infinite cycle of proving. This is how we are studying the universe. But we have to know where to stop, or end up falling into a bottomless pit of reasoning.

  • Chemicalscum

    Tegmark’s hypothesis generates all outcomes, not a random sample. The conditional probabilities are all equal to 1, so Bayes’s theorem gives us nothing with which to refine our prior probabilities.

    Exactly as I said in an earlier thread “Everybody’s got to be somewhere”.

  • http://countiblis.blogspot.com Count Iblis

    Greg, it is still the case that the same observer (observer = mathematical model = universe in its own right) can be found embedded in different larger universes. When we do physics we try to infer something about this larger universe.

    We can ask about the probability distribution of some variable we are about to observe, given all the knowledge stored in our brains so far. This should be well defined in principle…

  • http://countiblis.blogspot.com Count Iblis

    Santa does not exist

    Not true, see here :)

    If this sentence is true, then Santa Claus exists.
    We need not believe, beforehand, that the sentence is true or that Santa Claus exists. But we can ask, hypothetically, if the sentence is true, then does Santa Claus exist?

    If the sentence is true, then what it says is true, namely that if the sentence is true, then Santa Claus exists. Therefore the answer to the hypothetical question must be yes: Santa Claus does exist if the sentence is true. However, that is exactly what the sentence states: not that Santa Claus exists, but that he exists if the sentence is true, which is just the hypothetical answer just established. Therefore the sentence is true after all, and since we have established that Santa Claus exists if the sentence is true, and that it is true, it follows that Santa Claus must exist.

  • http://tyrannogenius.blogspot.com Neil B.

    Why does anyone even refer to our universe as being *representable* by a mathematical structure, much less “being” one? I mean really, look at the wave function and its collapse: you can’t even model it properly because of simultaneity problems, maybe the issues of Renninger negative-result measurements, unreliable detectors, etc. And don’t tell me it’s OK because it’s just a representation, etc.: no, if you really “believe in electrons” then “something” comes out of a nucleus or electron gun, and then appears at some spot and not anywhere else – and yet multiple shots show an interference pattern. It is absurd as a realist/mathematical concept. (BTW, multiple worlds and decoherence are BS anyway, since they still don’t get us to the actual localization itself from waves; they just hypocritically work off the collapse already being taken for granted and then work it into their treatment of wave interaction and evolution.)

    Also: I wasn’t implying that everyone who is against the idea of God etc. is a hack, any more than I think that everyone who is a “conservative” is a hack – but the parallel is right there: like the difference between sincere strict constructionists and cynical neocons. It’s much like the difference between sincere atheists or doubters (many of you here), with what appear to them to be perfectly good arguments (and some aren’t bad), versus manipulators (yes, often unconsciously, of course) of notions of multiple worlds etc. to discredit God concepts, while pretending to uphold the old rational/positivist tradition that would have rejected both as unknowable or “meaningless.” I think Stenger is one of the worst at that – he wants to admit other worlds, while conveniently corralling possible alternatives into all being kind of like ours, instead of appreciating that unleashing “existability” opens a huge can of worms – maybe even God Herself.

  • D

    How surprising is it that people who argue about God and science, without having a real understanding of what either one is, get confused?

  • Pingback: What They're Saying About Davies' Op-Ed - Telic Thoughts

  • tyler

    Egan, your arguments concerning the misuse of probabilistic arguments are the most cogent I have ever encountered. You have greatly helped me clarify my thinking on this important issue and I thank you for that. The Boltzmann argument, and its many parallels and restatements in other areas of modern debate, have always struck me as profoundly missing the point, and I appreciate your ability to articulate why this is true so clearly.

  • http://www.gregegan.net/ Greg Egan

    Count Iblis (#62), if you want to define all observers with the same subjective history to be “one observer”, that’s fine by me — it’s really a semantic issue, but this is a supportable choice of definition.

    But when this “one observer” with N different “threads” in N different universes makes a fresh observation, under Tegmark’s hypothesis you end up with exactly the same result as ever: there are now m different observers for the m results of the experiment, consisting of N_1, N_2, N_3, … N_m “threads” respectively, and the values of N_i are irrelevant, because even if N_1 << N_2 << … N_m, so long as N_1 is non-zero it is still a certainty — not an outcome with probability N_1/N — that result 1 of the experiment will be observed by someone. When I do this experiment and find result 1, I still have nothing to go on except:

    P(there will be someone with the history I had prior to the experiment who sees result 1 | Tegmark’s hypothesis) = 1

    I never had empirical access to N before, and I have no empirical access to N_1 now, or to any of the N_i, or to m, the number of ways I’ve been split. If I am trying to figure out whether the universe is governed by Tegmark’s hypothesis, or by some other model that predicts at least one observer seeing result 1 for the experiment I just did, then I have no basis for rejecting Tegmark’s hypothesis.

    I am not entitled to say “Gosh, under Tegmark’s hypothesis what are the odds that *my* consciousness ended up in this minority branch of my subjective future?”; that’s as misguided as saying “What are the odds that *I* get to be this particular one of the six billion humans on Earth that I actually am?”

    As Chemicalscum put it, “Everybody’s got to be somewhere”. Unless they’re nowhere. The only way you can try to falsify Tegmark is by constructing a potential observation (or set of observations, as large as you like) that Tegmark would predict literally no observer seeing. Given that we expect any real model of physics to be a sub-case of Tegmark’s grand catalogue, I’d be amazed if such a test can be devised, and even more amazed if the result falsified Tegmark’s hypothesis!
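
The Bayesian dead-end Egan describes can be made concrete in a few lines of Python (a sketch; the hypothesis labels and the 50/50 priors are made up purely for illustration):

```python
# A minimal sketch of Egan's point: Bayes' theorem updates priors by
# multiplying by likelihoods and renormalizing.  When every hypothesis
# assigns probability 1 to "at least one observer sees this result",
# the update does nothing at all.

def posterior(priors, likelihoods):
    """Posterior proportional to prior * likelihood, normalized over hypotheses."""
    unnorm = {h: priors[h] * likelihoods[h] for h in priors}
    z = sum(unnorm.values())
    return {h: u / z for h, u in unnorm.items()}

# Made-up 50/50 priors for Tegmark's hypothesis vs. some rival model.
priors = {"tegmark": 0.5, "rival": 0.5}

# Both models predict with certainty that someone observes result 1,
# so both likelihoods are 1 and the posterior equals the prior.
post = posterior(priors, {"tegmark": 1.0, "rival": 1.0})
assert post == priors
```

Only a hypothesis that assigned some observation a likelihood strictly less than 1 (or exactly 0) could be moved by the data, which is exactly the falsification loophole Egan points to.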

  • http://tyrannogenius.blogspot.com Neil B.

    Greg, it’s not that hard to cast doubt on Tegmark’s hypothesis of (apparently) radical modal realism (or at least, of “mathematical structures”). As I’ve said before, the number (roughly) of describable universes (“possible worlds”, PWs) is much larger than the number of nice clean ones with simple and continued laws of physics. In other words, there are many more PWs with sloppy laws of attraction like 1/r^2.1223, not even consistent between particles or in time (since we can describe that – I just did!), or filled with “electrons” of slightly or greatly varying masses, etc. Well, even if we have to “find ourselves” in a PW conducive to life, the chances are that even then, many features would still be sloppy, and not elegant “laws of physics.” Even worse, once we got to this point, there are many more PWs where things wouldn’t continue as they had before (just as, among toss-regimens of coins that reach 50 heads in a row out of 100 total tosses, there are more where the remaining tosses vary in all kinds of ways than ones which continue to come up heads). And don’t tell me, as some did hereabouts, those aren’t really “mathematical structures” because they aren’t continuous functions: matrices are real math, and so are unrelated numbers, and there’s Fourier analysis, which can handle one function spliced onto another or chunks of unrelated hills and valleys, etc.

    Our being in a “nice elegant universe” is absurdly unlikely from the point of view of wild pan-realism for PWs, so I say there’s “Management” of some sort, regardless of just what sort of thing that is.

    PS: since it’s easier here: I still have trouble with that extra acceleration of a laterally moving particle in the planar mass field being consistent with the equivalence principle. After all, the components of acceleration can be compared separately, so the total being adjusted to conserve energy wouldn’t keep the accelerations of relatively moving bodies from being “different.” Also, what if it’s a stationary (relative to the plane) but rotating ring – how fast does that fall? At the rate appropriate to the rim velocity? Doesn’t that have problems? Thanks.

  • http://www.gregegan.net/ Greg Egan

    Neil, you haven’t engaged at all with my argument on Tegmark. I agree with you that his hypothesis predicts many “non-elegant” universes. Unfortunately the numbers are invisible to us, and hence irrelevant. Tegmark’s hypothesis is untestable metaphysics; it is not refuted by the elegance of the universe, but given that it’s irrefutable in principle I am obviously not arguing that it should be treated as science. Equally, your claim about “Management” is untestable metaphysics. Go ahead and believe whatever you feel like believing, but the bottom line is you have no logical basis with which to persuade anyone else to share those beliefs.

    On the equivalence principle and different accelerations, consider this analogy. Pick a point P on a sphere, and consider all the geodesics — all the great circles — that pass through P. You should find it trivially easy to prove that the rate at which they “accelerate away from” each other, evaluated at P, is zero.

    To make this more precise: take two geodesics through P, and travel a distance s away from P along both of them, reaching points Q and R. Compute the length of the geodesic QR (this isn’t actually unique, but there’s a unique sensible choice when close to P). The second derivative of the length of QR with respect to s, evaluated at s=0, is zero.

    This is not something special about the sphere, it is a property of all geodesics on smooth manifolds, including the world lines of free-falling particles in GR. When two particles in free fall pass each other, if you then ask how far apart they are after a proper time of tau has passed for both of them, the second derivative of that distance wrt tau, evaluated at tau=0, will be zero. That is what the equivalence principle demands, and that’s what basic differential geometry guarantees.

    How, then, can someone using a certain coordinate system measure different accelerations for the coordinates of these particles?

    Go back to the sphere, and adopt the usual latitude and longitude for the coordinate system. Let P be the point with longitude 0, latitude 45 degrees south. Adam, travelling due south from P will have a constant longitude 0, and a latitude that is a linear function of the distance s that he has travelled. Both coordinates, in this case, obviously have second derivatives wrt s of zero.

    Now look at the coordinates of Eve, travelling from P along a great circle that also passes through, say, the point on the equator at 45 degrees east. The second derivative of Eve’s latitude and longitude wrt s will not be zero at P.

    Yet despite having these different “coordinate accelerations”, Adam and Eve measure no mutual acceleration away from each other at P.
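
Egan's sphere example can be checked numerically in a few lines (a sketch; the 60-degree separation between Adam and Eve and the step size h are arbitrary choices, not anything from the comment itself):

```python
import math

def geodesic(P, u, s):
    """Point at arclength s along the great circle from P with unit tangent u."""
    return tuple(math.cos(s) * p + math.sin(s) * t for p, t in zip(P, u))

def sphere_dist(A, B):
    """Great-circle distance between two points on the unit sphere."""
    dot = max(-1.0, min(1.0, sum(a * b for a, b in zip(A, B))))
    return math.acos(dot)

# P at latitude 45 degrees south, longitude 0, as in the example above.
lat = math.radians(-45.0)
P = (math.cos(lat), 0.0, math.sin(lat))
north = (-math.sin(lat), 0.0, math.cos(lat))  # unit tangent due north
east = (0.0, 1.0, 0.0)                        # unit tangent due east

u_adam = tuple(-c for c in north)             # Adam heads due south
angle = math.radians(120.0)                   # Eve leaves 60 deg from Adam
u_eve = tuple(math.cos(angle) * n + math.sin(angle) * e
              for n, e in zip(north, east))

def f(s):
    """Separation of Adam and Eve after each travels arclength s."""
    return sphere_dist(geodesic(P, u_adam, s), geodesic(P, u_eve, s))

# One-sided second difference of f at s = 0.  It should vanish as h -> 0,
# even though the latitude/longitude coordinates accelerate differently.
h = 1e-3
second_diff = (f(0.0) - 2.0 * f(h) + f(2.0 * h)) / h**2
```

With these numbers f(s) grows essentially linearly (the leading coefficient is 2 sin 30° = 1), and the second difference comes out near zero, consistent with an exact value of zero at s = 0.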

  • http://www.gregegan.net/ Greg Egan

    Neil, I suspect what’s causing a lot of your confusion regarding GR is that you’re expecting the second rates of change of a particle’s coordinates to be the components of some kind of acceleration vector. That’s only true of Cartesian coordinates in flat spacetime. In GR, the only meaningful acceleration vector is the covariant derivative of the 4-velocity, and its components differ from the second derivatives of the particle’s coordinates by terms which involve the Christoffel symbols. Free-falling particles all have zero acceleration vectors, while of course their coordinates will generally have non-zero second derivatives. What’s more, you can’t take those coordinate second derivatives for two different particles and subtract them to find a “relative acceleration”, as if you were dealing with vectors.

    If this is all Greek to you, I’m afraid it’s not very practical for me or anyone else to try to explain the whole framework of GR in a series of off-topic blog comments. If you ever get serious about GR, try reading Sean’s online lecture notes.

  • http://tyrannogenius.blogspot.com NB

    Greg, yes the modal realist type claims are indeed untestable metaphysics, but I can still critique them on the basis of “If we assumed so-and-so was right, what can we say then…” I wasn’t replying to your specific take, just to your apparent general question: can we find consequences of Tegmark’s view that don’t sit with what we find ourselves in?
    Your description of GR is baffling, and all I can say is: the semi-popular discussions sure don’t make any of those weird issues clear about acceleration etc. I still would like to know: what happens to the free-falling rotating ring, or even more complex objects with parts moving at different speeds?

  • jeff

    I am not entitled to say “Gosh, under Tegmark’s hypothesis what are the odds that *my* consciousness ended up in this minority branch of my subjective future?”; that’s as misguided as saying “What are the odds that *I* get to be this particular one of the six billion humans on Earth that I actually am?”

    Not only six billion humans, but all consciousnesses (great and small) in all multiverses ;)

    Whether or not it’s misguided depends on whether you view it phenomenologically or “objectively”. If you imagine someone else asking themselves the why-am-I-me question, it seems silly – “of course you’re you, who else would you be?” However, if you ask the question of yourself, it becomes much more involved. So which is the right perspective? Unfortunately, reality is not, and has never been, separate from your consciousness. The perspective where you imagine someone else asking the question is only a model in *your* mind, whereas asking it of yourself is much more immediate and real.

    I like the why-I-am-me question because it leaves you between a rock and a hard place. You can make the question go away by accepting solipsism, but that is shocking and profound in its own right. If you reject solipsism, then you have a very difficult (if not impossible) question to answer.

  • http://countiblis.blogspot.com Count Iblis

    Reply to Greg #68,

    I agree that for each of the possible outcomes there is an observer experiencing it (with probability 1). However, we can apply this reasoning to any stochastic experiment (multiverse or no multiverse), and I think you would not argue against the use of conventional probabilities in these cases.

    E.g., let’s consider a single observer defined as some algorithm/model embedded in some larger mathematical structure which is not uniquely defined when we specify the observer.

    Suppose the observer throws a coin 10,000 times and records how many times the coin lands heads up. The possible values range from zero to 10,000. In a multiverse setting all these possible outcomes are realized. For any n ranging from zero to 10,000 there is an observer with probability 1 who finds that the number of times the coin landed heads up is n.

    The reason why all the outcomes will be realized is that the information stored in the observer’s brain before he throws the coin does not contain enough information to fix the outcome of the coin tosses. So, the same observer will be located in different universes where the initial conditions yield different outcomes.

    In a single universe setting, only one of the possible outcomes will be realized, of course. In that case there will therefore be an observer that observes whatever he is observing with probability 1. All other outcomes are observed with probability zero.

    So, Multiverse or no Multiverse, this notion of probability is not of much use. Clearly we do need a notion of certain states being more likely than other states. In this case we know that the number of heads is approximately distributed according to a normal distribution with mean 5000 and standard deviation 50. The observer will observe a result between 4800 and 5200 with more than 99.99% certainty.

    In the Multiverse case we can say that more than 99.99% of the observers will observe an outcome in the range from 4800 to 5200. Since you don’t know in which universe you are before you throw the coins (i.m.o., you really are everywhere), you can be 99.99% sure that you’ll be one of the observers who observe an outcome in the range from 4800 to 5200.

    Now, the case of the single universe is slightly more awkward, as you have to appeal to counterfactual initial conditions to justify saying that you’ll observe an outcome between 4800 and 5200 with 99.99% certainty.

    In the single-universe setting you need to appeal to “equally likely” but counterfactual initial conditions to justify the probability distribution, so one could argue that the probability distribution is more natural in the multiverse setting.

    Another way of looking at this is in terms of entropy. An outcome of 5000 can be realized in 1000!/(500!)^2 = approximately 2.7*10^(299) ways, while an outcome of zero can be realized in only 1 way. All these possible realizations are equally likely.

    Perhaps one can justify doing statistics with all the observers, even though each observer can find himself in only one state, because the outcomes do not affect the observers in such a way as to change them irreversibly. If you observe 5023 tails up, you can imagine forgetting about that the next day.

    If you think about the number of heads the next day, you will do a measurement on your long-term memory and you’ll recall the number 5023. Before you recall that number you would be identical to all the other copies who observed different outcomes and are not constantly aware of the number (and who are identical to you in all other respects that you are aware of). So it is in fact a new measurement.

    At any time when you do not think of the number you are not located in one universe where there was a definite outcome. The ensemble of different outcomes is thus always relevant.
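
The statistics quoted above (mean 5000, standard deviation 50, more than 99.99% of outcomes between 4800 and 5200) are easy to verify exactly in Python, without even appealing to the normal approximation (a quick check, not part of the original comment):

```python
import math

# 10,000 fair tosses: mean 5000, standard deviation 50.
n = 10000
mean = n / 2
sd = math.sqrt(n * 0.5 * 0.5)   # = 50.0

# Exact probability of landing in [4800, 5200] heads (inclusive),
# summed with Python's big-integer combinatorics.
total = sum(math.comb(n, k) for k in range(4800, 5201))
prob = total / 2**n
```

The exact sum gives a probability a little above 0.9999, so the comment's 99.99% figure holds.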

  • http://scipp.ucsc.edu/~aguirre Anthony A.

    Greg Egan (#51):

    The sort of reasoning you advocate here (and that Hartle and Srednicki wrote so nicely about) is seductive due to the annoying problems it solves, but I think it also severely (and I suspect unnecessarily) curtails our ability to use a given piece of data to distinguish between cosmological models. For example, as I argue
    here, by this reasoning no set of data that you have in hand can distinguish between our universe and a 500 kg thermal ball of gas that exists forever. (That is, while the usual Boltzmann’s brain problem is banished, another one appears to take its place.)

    Anthony

  • http://countiblis.blogspot.com Count Iblis

    Correction:

    An outcome of 5000 can be realized in 10000!/(5000!)^2 = approximately 1.59*10^(3008) ways, while an outcome of zero can be realized in only 1 way. All these possible realizations are equally likely.
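
The corrected figure checks out exactly with Python's big-integer combinatorics (a quick verification, not part of the original comment):

```python
import math

# C(10000, 5000) = 10000!/(5000!)^2 as an exact big integer.
count = math.comb(10000, 5000)
digits = str(count)

# Roughly 1.59 * 10^3008: 3009 digits, beginning with "159".
assert len(digits) == 3009
assert digits.startswith("159")
```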

  • http://www.gregegan.net/ Greg Egan

    Neil Bates (#73) wrote:

    Your description of GR is baffling, and all I can say is: the semi-popular discussions sure don’t make any of those weird issues clear about acceleration etc. I still would like to know: what happens to the free-falling rotating ring, or even more complex objects with parts moving at different speeds?

    I’m afraid you’ve proved several times that any effort I put into calculating something like that is wasted, because you don’t know how to interpret results in GR — after complaining that “nobody told you it was like this”, you then go and ram the answers into yet another Newtonian kludge. If you’re genuinely curious, go and learn what you need to learn.

  • Pingback: » Davies on blogosphere

  • Jason

    Greg Egan,

    This is intuitively appealing, but I think it’s wrong. You stress probability, but Tegmark’s hypothesis leads to all its outcomes with certainty. If Tegmark’s hypothesis predicts our universe with certainty (and I think it does), and also another universe in which life is far more common by a factor of 10^10 (which I expect it also does, since you get to fiddle not only with the Drake equation but also fundamental constants), then why does our failure to live in the second universe falsify, or even lower the likelihood of, Tegmark’s hypothesis?

    Here’s the thing. No matter what the real multiverse is like, if it is a multiverse, the vast majority of observers will exist in what we might call a “typical” universe. Therefore, it is not unreasonable to put good money on us being in a “typical” universe. It may be wrong, but as long as we can show that the probabilities are suitably astronomical before throwing out a hypothesis, this is unlikely to be the case.

    I think the false intuition here arises by comparison with probabilistic theories that deal with a finite number of trials. If theory A tells us that we have a fair coin, while theory B tells us that it’s biased for heads, then if we toss the coin 10 times and see 10 heads, we are entitled to favour theory B. But what’s different here is that only 1 in 2^10 of the possible outcomes is generated. Tegmark’s hypothesis generates all outcomes, not a random sample. The conditional probabilities are all equal to 1, so Bayes’s theorem gives us nothing with which to refine our prior probabilities. (Well, there might be some conditional probabilities of 0, if you can think of something that literally can’t happen within any mathematical structure.)

    Right. So it’s exactly the same as this situation:

    Imagine that Sean has a red ball and a blue ball. For whatever reason, he wants to give us each one of the two, without us knowing beforehand which one. Given our ignorance of Sean’s decision-making process, we should each naturally bet on a 50/50 chance of getting either the red or the blue ball.

    Now imagine that Sean has one million blue balls, but only one red ball. He gives out all of his balls to various people, and you and I are each one of those people. What probability would you place on obtaining the red ball? Remember that every single ball is handed out. But we are still forced to think of it in a probabilistic manner because each of us sees only *one* of them. And then, if you obtain the red ball, what are you going to think? With a million-to-one chance of obtaining the ball purely randomly with uniform weighting, would it not make more sense if Sean’s method of choosing whom to give the ball to somehow favored you?

    Think about it this way: if Sean gives the balls out randomly, then you have a 1/10^6 chance of obtaining the red ball. But if Sean decides that he wants to give the ball out to only frequent commenters on his blog, then your chances were much better to begin with. If you got the red one, then, isn’t this scenario more likely?

    Of course, as I said, we might be misled by this, so it requires rather careful analysis of the probabilities to ensure that being misled is as unlikely as possible. In this case, for instance, there’s around a 1/10,000 chance or so (assuming 100 frequent posters) that one of the frequent posters would have been chosen randomly, so we don’t really have much reason to favor the idea that that was the method Sean used. Furthermore, one would want to have independent confirmation of the theory. One does expect there to be a few exceptional things about anybody’s life, and there’s no reason to necessarily expect one to be related to another.

    In sum, let’s take the following scenario, comparing two competing theories using the weak anthropic principle:

    Theory A:
    1. 10^10 times as many observers in a type of universe that is very different from our own.

    Theory B:
    1. 10 times as many observers in a type of universe that is very different from our own.

    If the theories are otherwise equivalent, we would have strong reason to suspect that theory B is correct. But it would be foolish to stop there: we should seek other, independent methods of distinguishing between the two theories.

  • http://www.gregegan.net/ Greg Egan

    Anthony, Count Iblis, Jason, thanks for the comments. You’ve made me think twice about Hartle & Srednicki, though I’m not yet prepared to renounce their approach. Anthony wrote (here):

    So in my mind, the question of how we can reason in ‘multiverse’ cosmology in a way that (a) actually allows us to effectively discriminate between models, but (b) does not lead to any weird paradoxes, is still very much open.

    which makes me less convinced than before that there’s necessarily one right answer here. I’d recommend that people read Anthony’s whole post, along with Hartle & Srednicki, before concluding that this matter is all neatly tied up, either way.

    Count Iblis, when it comes to coin tosses in a single universe, I have no problem with adopting the strategy that I should assume a biased coin if I see an improbable run of heads and tails. Under either Copenhagen QM or classical mechanics with initial conditions free of weird conspiracies, if Alice and Bob are both shown multiple runs of 10,000-coin tosses that are sometimes from fair coins and sometimes from biased ones, then if Alice adopts a strategy of guessing that the coin is biased when the results are sufficiently improbable for a fair coin, she will certainly guess correctly more often than Bob, who ignores the data. And we all want to adopt a strategy that helps us guess the truth as often as possible.

    But even when you simply switch to a multiverse version of the coin toss scenario, I don’t think everything stays exactly the same, and it’s certainly not quite as obvious what our goal should be. If I stick to Alice’s strategy, I believe that will maximise the number of versions of me across the multiverse who guess the fairness of the coin correctly — with the versions weighted equally by microstate, i.e. each version who sees a different head/tail run gets counted separately. But what if these versions of me aren’t shown the run sequence, just told the total number of heads? Does that change anything? Should someone who was told there were 5000 heads really count 10^3008 times more than someone who was told there were none? I’m not saying I can’t see some logic in doing so, but it begins to seem a lot more subjective to me at this point. It’s obvious that when I’m one person I want to guess correctly as often as possible, but when I am (or there are) many people, it’s not quite so compelling a case to say “I want as many people in the multiverse as possible to guess correctly”, especially when there are different ways open to us to count the numbers of people.
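    The 10^3008 figure is just the binomial coefficient counting the head/tail microstates compatible with “5000 heads”, versus the single all-tails sequence; it can be checked in a few lines:

```python
import math

# log10 of the binomial coefficient C(10000, 5000): the number of
# 10,000-toss sequences with exactly 5,000 heads, compared with the
# single all-tails sequence.
n, k = 10_000, 5_000
log10_binom = (math.lgamma(n + 1) - math.lgamma(k + 1)
               - math.lgamma(n - k + 1)) / math.log(10)
print(int(log10_binom))  # 3008 -- hence the 10**3008 weighting factor
```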

    And when we switch from guessing whether a coin was fair or not to guessing which of two multiverse theories is true, maximising the number of people who guess the correct theory can lead to absurd results.

    Suppose theory A predicts that there is a single universe much like ours, while theory B predicts that there are 10^10 universes much like ours. In all other respects, they make identical predictions. In order to compare strategies for guessing which is true, we invoke a higher-level multiverse, in which the level 1 multiverse obeys theory A 50% of the time, and theory B 50% of the time. For concreteness, suppose the level-2 multiverse contains 10 level-1 multiverses obeying theory A, and 10 obeying theory B.

    The strategy “Always guess theory B” will lead to 10 * 10^10 universe-populations of people who guessed the true theory of their multiverse correctly. Random guesses would yield only half that number (plus a relatively tiny additional amount for the correct guesses of theory A). But despite the success on those terms of the “Always guess theory B” strategy, I do not believe there is any good reason to prefer theory B.
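    The tallies in the thought experiment can be written out explicitly (this is just the counting, using the numbers assumed above):

```python
# Level-2 multiverse from the thought experiment: 10 level-1 multiverses
# obey theory A (1 universe each) and 10 obey theory B (10**10 each).
pops_and_truths = [(1, 'A')] * 10 + [(10**10, 'B')] * 10

# "Always guess theory B": every population in a theory-B multiverse
# guesses its own multiverse's theory correctly.
always_b = sum(pop for pop, truth in pops_and_truths if truth == 'B')

# Random guessing: in expectation, half of every population is right.
expected_random = sum(pop for pop, _ in pops_and_truths) // 2

print(always_b)         # 100000000000  (10 * 10**10)
print(expected_random)  # 50000000005   (half, plus the tiny theory-A part)
```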

    Jason wrote:

    But it would be foolish to stop there: we should seek other, independent methods of distinguishing between the two theories.

    That’s good advice, and with luck it will eventually settle the Boltzmann brain issue (though some cosmologists go so far as to say that if contemporary evidence ends up pointing to a future full of Boltzmann brains, there must then be some unknown mechanism that will come along and destroy the universe before that happens.) A lot of the silliness of anthropic reasoning stems from the complete absence of all other data; it’s the first iteration of a process of refinement that needs to run for tens or hundreds of cycles to lead anywhere meaningful.

    But I’m beginning to suspect that even the issue of what evidence would count as falsifying Tegmark will remain disputed for centuries.

  • http://tyrannogenius.blogspot.com Neil B.

    But I’m beginning to suspect that even the issue of what evidence would count as falsifying Tegmark will remain disputed for centuries.

    Sure enough, but I still think you folks don’t really “get” the full scope of the modal realist concept: “everything exists” – I mean, every stinking configuration of whatever, that is an element of the Platonic mindscape. But the comments here seem to show the provincialism of still imagining you’re reliably in a universe with physical consistency and are just fiddling with details (like those playing with “landscapes” which depend on particular physical theories.) Note that a real “physical theory” involves a certain expectation that the substrate of the/a universe has some dependable character, instead of the sort of “describable thing” I mentioned that just acts any old way and need not be consistent from one time to another. (BTW “time” per se versus just 4-d structures of world lines, isn’t really logically definable: the latter is just like a tinkertoy sitting on a table with no “past-present-future”…) Well, that sort of assumption is just what Davies meant IIUC, and so he’s pretty right on target and shouldn’t be calumniated so much.

    Greg, if you can suffer just one more point about GR and the EP, and this is more about permissible torturing of semantics than physics per se: Sure, there’s an “acceleration” defined as you describe, which is zero for a free-falling particle (what an accelerometer measures, from relative inertial forces between adjacent masses like test body and spring-container.) But if you had completely avoided coordinate accelerations of the type I meant (equivalent to progress relative to floor levels, such as the significant banality of whether they hit a floor at the same time), then your statement that “the acceleration” of the transversely-moving body was larger by the factor (1 – v^2/c^2) would ironically have no meaning! But as I *read* the descriptions of the EP as a “semanticist” who expects clear exposition, not hidden behind “we know that what it really means is different from what it sounds, and that’s our secret to find out from grinding away at upper crust references”: it says that “gravitational fields can be transformed away in tiny regions by accelerations.” Well, that means that if I have a falling little chamber, the particles passing right by each other at the top will not reach the floor at the same time if they have relative motion. No matter how you defined “acceleration” for upper-class consumption, that *is* a way to tell the difference because it is a definable result. And I still want to know how a spinning ring or disk falls, please. Thank you.

  • Jason Dick

    As for the Boltzmann Brain issue, part of the difficulty is how to measure probabilities in the first place. For example, let’s say that the laws of the universe are such that as a given region of the universe ages, Boltzmann Brains will eventually vastly outnumber real observers. But suppose we have eternal inflation, and the “proper” measure of probability is to take the number of observers (real or BB) in an equal-time slicing of the universe. In this situation, real observers will vastly outnumber BB observers at any given time, because young regions always vastly outnumber old ones in equal-time slicings of the universe in the context of eternal inflation.

    Another possible solution would be to look at the rate of generation of new patches of inflation. Provided the production rate is high enough, even if inflation is not eternal, young regions will still vastly outnumber old ones because new ones are always being generated.

  • http://tyrannogenius.blogspot.com Neil B.

    Greg: BTW I don’t imply that you are responsible for any confusing or crusty use of terms and framing in GR, or that it’s bad faith in that tradition either – it’s just rough on the non-insiders.

  • Jason Dick

    Update on the Hartle & Srednicki paper. One of the central claims of the paper is the following:

    We have data that we exist in the universe, but we have no evidence that we have been selected by some random process. We should not calculate as though we were.

    Resorting to probabilistic descriptions is not a statement that we are selected by a random process. Rather, it is a method of encapsulating our ignorance.

    To argue by analogy, consider quantum mechanics. Within quantum mechanics, there is no need to resort to a probabilistic description. The theory is, without the assumption of wave function collapse, a perfectly deterministic theory. But, as Everett showed in 1957, one can derive the appearance of wave function collapse from this perfectly deterministic theory. Resorting to probabilities is our method of encapsulating our ignorance as to what portion of the wave function of the universe represents “us”.

    By a similar token, we do not know which universe we should have found ourselves in, and, as a result, the use of probabilistic methods in the context of the weak anthropic principle encapsulates this ignorance.

  • http://tyrannogenius.blogspot.com Neil B.

    Jason: Sure, we don’t *know* what sort of universe we “should have found ourselves in,” but, given some background assumptions, we can guesstimate some things – and why should we expect to do any better, or why should critics consider that a fatal flaw? I consider theoretical perfectionism to be a version of the straw-man fallacy. All of this is just guesstimation – I say, accept that and just play with it, instead of either pretending it’s a slam dunk or playing anal-retentive logical priss.

    As for anyone “showing” that collapses can be incorporated into the deterministic wave function: I humbly submit (on the most general terms of proper semantics and logical hygiene) that he could not have done so. We still don’t know how the wave converts into or is also manifested as localization (hey, it’s just a wave, there is no inherent *mathematical* connection to localizations.) Just saying the localizations are spread over every possible place and we just end up in/as one of them is a copout, based on taking the observed effect for granted to begin with and then pretending one is explaining it from above (it’s a form of circular reasoning, even if rather subtle.) The same goes for the BS operations of decoherence and “many worlds” as “explanations” of “apparent” collapse. And BTW, how do those concepts deal with the wave redistribution forced by Renninger negative result measurements, and even worse, how does the wave respond to reports by unreliable detectors? I’m still waiting for a good answer to the last question.

  • http://www.gregegan.net/ Greg Egan

    Neil wrote:

    [The Equivalence Principle] says that “gravitational fields can be transformed away in tiny regions by accelerations.” Well, that means that if I have a falling little chamber, the particles passing right by each other at the top will not reach the floor at the same time if they have relative motion. No matter how you defined “acceleration” for upper-class consumption, that *is* a way to tell the difference because it is a definable result.

    Tiny regions of space-time, not tiny regions of space.

    At a given event, E, in space-time, you can adopt a coordinate system based on all the geodesics that pass through that event. These are known as Riemann normal coordinates.

    All timelike geodesics through E, in this coordinate system, are linear functions of proper time:

    x^i = c^i tau

    just as they’d be in flat space-time. The connection coefficients at E in this coordinate system vanish, along with the derivatives of components of the metric at E. In other words, by choosing a coordinate system at E based on how particles move in free-fall, you demonstrate that an infinitesimal region of space-time resembles flat space-time, in essentially the same way as a small part of the Earth’s surface resembles the flat geometry of a plane.

    On the Earth, if you pick a lamp post L and draw all the great-circle geodesics through it, you can use that to map points in a small piece of Euclidean space to a small region around L. [In detail: Pick two orthogonal directions at L that you want to be your x and y directions. For a point P in Euclidean space with Cartesian coordinates (x,y), find the unit vector u at L with coordinates (x,y)/|(x,y)|, then follow the geodesic at L whose tangent is u for a distance |(x,y)|. That takes you to f(P), where f is our map from a small neighbourhood of the origin in Euclidean space to a neighbourhood of L. Essentially the same construction works in space-time, but you have to do things slightly differently to take account of the existence of spacelike, null, and timelike directions.]
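    The lamp-post construction is easy to experiment with numerically. A minimal sketch, assuming for simplicity that L sits at the north pole of a unit sphere (the general case only adds a rotation):

```python
import numpy as np

# The map f from a neighbourhood of the origin in the Euclidean plane to
# a neighbourhood of L = (0, 0, 1) on the unit sphere, built from geodesics.
def f(x, y):
    rho = np.hypot(x, y)  # |(x, y)|
    if rho == 0.0:
        return np.array([0.0, 0.0, 1.0])
    ux, uy = x / rho, y / rho  # unit tangent direction at L
    # Follow the great circle leaving L in direction (ux, uy) for distance rho:
    return np.array([np.sin(rho) * ux, np.sin(rho) * uy, np.cos(rho)])

# The geodesic distance from L to f(x, y) is exactly |(x, y)|:
L = np.array([0.0, 0.0, 1.0])
p = f(0.3, 0.4)
dist = np.arccos(np.clip(p @ L, -1.0, 1.0))
print(dist)  # 0.5 up to rounding, i.e. |(0.3, 0.4)|
```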

    If you take two geodesics at L, there is no reason to expect the second derivatives of their latitude or longitude (wrt distance along the geodesics) to be equal; it’s only in the geodesic-based coordinate system at L that everything is linear. Similarly, if you take two test particles that pass through the event E, there is no reason to expect the second derivatives of their coordinates (wrt proper time) in some arbitrary coordinate system to be equal. If you follow geodesics far from L, the absence of relative acceleration between them at L is not a promise of any special relationship between their latitude and longitude after you’ve gone some distance s along both. Similarly, after two test particles have passed at E, the absence of relative acceleration between them at E is not a promise that they will strike some distant third object (e.g. the planar mass we’ve been considering) “at the same time” in some particular coordinate system.

    Now suppose we have an elevator in free fall above a planar mass, and two test particles A and B. At a certain time, t=0, adopt coordinates based on the centre of the elevator and the orientation of its walls. For a short interval of time, particle A will just sit at x=y=z=0. Particle B, with transverse motion, will (for the same short interval) have elevator coordinates well approximated by x=vt, y=z=0. It will not hit the elevator floor at any small value of t (and of course neither will particle A). Rather, it will fly in a straight line until it hits the wall of the elevator, just as it would in flat space-time.

    The equivalence principle is a local statement about events in space-time close to E in both space and time. If someone in a bad popular science book has written something incompatible with this, take it up with the author, but please stop hallucinating conspiracies by relativists to lock you out of their club. Relativists have bent over backwards to make the subject accessible even to people too tight to buy a single textbook.

    And I still want to know how a spinning ring or disk falls, please.

    Then learn how to calculate it yourself. If you study enough GR to be able to do that, there’s a reasonable chance you’ll also end up understanding what the results of such a calculation would mean.

  • http://tyrannogenius.blogspot.com Neil B.

    OK, Greg, I will look into this and at least avoid griping about GR as such (here to you anyway) until I know enough to do that much. I do feel compelled to point out something about the falling elevator above the planar mass, which I see as a general consistency issue:
    Given “g”, the elevator reference floor will of course have the down-falling mass just resting right at that level. But I think you misdescribed the behavior of the particle with transverse motion. It will, relative to the floor, have “an acceleration” in that common sense of d^2y/dt^2 (well, the sense by which you originally defined the difference!) of: -(v^2/c^2)g, and thus describe a classic parabolic curve. That is not appropriately characterized as “flying straight” like it would in flat space-time, even momentarily, any more than it would be appropriate for a bullet just fired straight out in a gravity already equal to that same value, but without such a velocity-dependent effect.

    Yes, you talk about tiny intervals of space and time together, etc., but “accelerations” are respectable and measurable instantaneous conditions of functions. So we could *know* that we weren’t just “floating in space” without using tidal differences. Well, maybe that doesn’t violate the EP anyway, but I will continue to consider it a rather shaky principle until I am satisfied that the whole shebang really justifies its value and framing.

  • http://www.gregegan.net/ Greg Egan

    But I think you misdescribed the behavior of the particle with transverse motion.

    You’re wrong. The equivalence principle says x=vt, y=z=0. These formulas are correct to second order in t: the instantaneous second derivatives of the elevator coordinates for the test particle are all zero, at t=0.

    You’re making false assumptions about general coordinate systems when you claim this is incompatible with different second derivatives for the two test particles’ coordinates, in the coordinate system fixed to the planar mass.

    Why don’t you try verifying the claims I made about geodesics on a sphere? You only need some simple vector geometry and calculus to do that.

    (1) Consider two geodesics passing through the point P. Travel a distance s along each geodesic, arriving at the points Q and R respectively. Compute the great-circle distance from Q to R. Prove that its second derivative as a function of s, evaluated at s=0, is zero.

    Hint: Without loss of generality, make P the north pole, and the first geodesic the prime meridian. The great-circle distance between two points on a unit sphere is the arccos of the dot product of the vectors from the centre of the sphere to the points. So I am asking for the second derivative wrt s of the function:

    arccos[ (sin s, 0, cos s).(sin s cos r, sin s sin r, cos s) ]

    where r is the constant longitude of the second geodesic (in radians), and the distance s we’ve travelled from the north pole gives us the co-latitude of both Q and R, because we’re assuming a unit sphere.

    (2) Compute the latitude and longitude of Q and R, as functions of s [here it would be a loss of generality to put P at the north pole, so that makes the calculation a bit harder]. Compute the second derivatives of these latitudes and longitudes, evaluated at s=0. Note that in general these will not be the same for Q and R.

    Once you believe both these results (and if you don’t currently believe them, do the calculations), then you ought to understand why there is no contradiction between the equivalence principle and the different coordinate accelerations of the test particles.
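    A numerical check is no substitute for the proof being asked for, but it is a quick sanity test of claim (1), using the arccos formula quoted above:

```python
import math

def great_circle_sep(s, r):
    # Great-circle distance between the two points reached by travelling a
    # distance s along geodesics of longitude 0 and longitude r from the
    # north pole of a unit sphere (Egan's arccos formula).
    dot = math.sin(s) ** 2 * math.cos(r) + math.cos(s) ** 2
    return math.acos(max(-1.0, min(1.0, dot)))

r, h = 1.0, 1e-3

# One-sided second-derivative estimate at s -> 0+, using f(0) = 0:
# f''(0+) ~ (f(2h) - 2 f(h)) / h**2.
second = (great_circle_sep(2 * h, r) - 2 * great_circle_sep(h, r)) / h ** 2
print(abs(second) < 1e-2)  # True: the separation is linear to second order in s

# The first derivative, by contrast, tends to the non-zero constant
# sqrt(2 (1 - cos r)), the angle between the two initial directions:
print(round(great_circle_sep(h, r) / h, 4))  # 0.9589 for r = 1
```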

  • Jason Dick

    As for anyone “showing” that collapses can be incorporated into the deterministic wave function: I humbly submit (on the most general terms of proper semantics and logical hygiene) that he could not have done so.

    He did. It’s called quantum decoherence:
    http://en.wikipedia.org/wiki/Quantum_decoherence

    That you call this “BS” in no way impacts the result. Take quantum mechanics, don’t include any wave function collapse axiom, and you get the appearance of wave function collapse as a result of interactions.

  • http://tyrannogenius.blogspot.com Neil B.

    OK Jason, I will replicate my comment from “Things Happen, Not Always for a Reason” for your convenience:

    40.

    Jason, go and read and understand my critiques of the decoherence scam elsewhere on these threads (compare to what I just said about the equally wooly “multiple worlds” racket.) Specific localizations are what we actually observe, unless you are BSing us with that “illusion” and “appearing” conceit which violates all classic standards of empirical frankness. A wave which doesn’t collapse is just a wave, period, forever, not one or even a bunch of localizations (separated from each other by literally God only knows what – do you?) If they decohere, they would just forever stay “waves” which aren’t in the same relationship as before, unless you assume the consequences to begin with, that you were trying to prove. None of the mathematics of waves per se does or even *can* express or contain the localizations (since mathematical structures can’t produce true randomness, they are in effect “deterministic”! – so-called “random variables” are fiat entities of discourse about probabilities in general, not a genuine, formed machinery that can give us actual sequences.)

    Collapses/localizations are a bizarre and logically absurd feature of the like-it-or-not *universe* we actually live in, for honest folk to acknowledge first and foremost even if *maybe* explainable in a sincere sense someday. Decoherence is a circular argument using the surreptitious putting in by hand of the very events it is presuming to explain.

    BTW, what do you think of Greg Egan’s argument, first: that the acceleration of transversely moving bodies in the field of a very extended/”infinite” planar mass is higher than that of bodies falling straight down, and second: that those kinds of accelerations really don’t matter for purposes of defining comparative acceleration in the equivalence principle, despite our clear ability to use that to show the distinction in terms of progression relative to a “floor”?

  • http://www.gregegan.net/ Greg Egan

    Neil

    I’ll make one last attempt to get you to actually think about curved spacetime and non-Cartesian coordinates.

    Consider all the geodesics that pass through the north pole. That’s easy: they are the lines of longitude. Call longitude phi and co-latitude (the angle measured from the north pole) theta.

    We can set up some nice coordinates in which these geodesics are locally linear. Define x = theta cos(phi), y = theta sin(phi). Geodesics are lines of fixed phi, phi=phi_0, so in our new (x,y) coordinates they will take the form y = tan(phi_0) x. In other words, they look just like straight lines. Taking the second derivative of y wrt x, we get zero. All geodesics look straight in this way, but note that we’re not just using the vacuous observation that “every smooth curve is a straight line to first order”. A non-geodesic curve passing through the north pole would not have the equation y = k x to second order; there would be a quadratic term as well.

    You can perform an analogous construction in space-time. In place of geodesics through the north pole of a sphere, use geodesics through the event E in space-time where the world lines of two free-falling test particles intersect. The spacetime coordinates you get this way will let you describe every geodesic through E with a linear equation, with no quadratic terms. That’s what anyone inside a free-falling elevator would measure: all the test particles would be seen to travel along straight lines with uniform velocities.

    Now, go back to the sphere, and look at all the geodesics that pass through another point: call it P, and place it at 0 degrees longitude, 45 degrees latitude (and 45 degrees co-latitude). Of course the north pole wasn’t special, and we could construct the same kind of nice x and y coordinates here that made the geodesics linear, with a bit more work, but we won’t do that. Instead, we want to know how the geodesics look using phi and theta as coordinates. On a map of a small region, latitude and longitude look almost Cartesian, so you might think you’d get linear equations for the geodesics in those coordinates too.

    But you don’t. Suppose you take a geodesic that passes through P, and hits the equator at longitude phi_E. Close to P (phi=0, theta=45 deg), to second order in phi such a geodesic is described by:

    theta(phi) = 45 deg + (1/2) cot (phi_E) phi + (1/4) cosec(phi_E)^2 phi^2

    So not only is there a non-zero quadratic term, it’s different for different geodesics.
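    This expansion can be checked against the exact great-circle equation; a short sketch, using the standard fact that a great circle satisfies cot(theta) = A cos(phi) + B sin(phi), with A and B fixed by passing through P and crossing the equator at phi_E:

```python
import math

def colatitude(phi, phi_E):
    # Exact co-latitude along the great circle through P (longitude 0,
    # co-latitude 45 deg) crossing the equator at longitude phi_E.
    # The two conditions fix A = 1 and B = -cot(phi_E).
    cot_theta = math.cos(phi) - math.sin(phi) * math.cos(phi_E) / math.sin(phi_E)
    return math.atan2(1.0, cot_theta)  # theta = arccot(cot_theta)

def quadratic(phi, phi_E):
    # The second-order expansion quoted above.
    cot = math.cos(phi_E) / math.sin(phi_E)
    cosec2 = 1.0 / math.sin(phi_E) ** 2
    return math.pi / 4 + 0.5 * cot * phi + 0.25 * cosec2 * phi ** 2

phi_E = 1.0  # equator crossing at longitude 1 radian
for phi in (0.01, 0.02, 0.04):
    err = abs(colatitude(phi, phi_E) - quadratic(phi, phi_E))
    print(err < phi ** 3)  # True: the two agree to second order in phi
```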

    Of course the geodesics through P are no different from those through the north pole. They are still locally linear when described in sufficiently nice coordinates — the kind of coordinates a town planner might use if she doesn’t care about latitude and longitude but just wants to make distances and straight lines as easy to describe mathematically as possible.

    But anyone who has a reason to insist on describing things in terms of the coordinates phi and theta will find that these geodesics are quadratic functions theta(phi), with different quadratic coefficients for different geodesics.

    Equally, in the case of the falling test particles, anyone who is fixed relative to the planar mass and using coordinates in which the mass is stationary at z=0 will find z(t) for the falling test particles to be quadratic, and the quadratic coefficient in each case will be different.

    This is neither “torturing semantics” nor logically contradictory. It is simply a description of different measurements made with different coordinate systems. Is the second derivative of co-latitude as a function of longitude different for different geodesics that pass through P? Absolutely. Do people travelling along these geodesics close to P observe any mutual “acceleration” as a consequence of this fact? Absolutely not; as far as they’re concerned, everything measurable is linear to second order.

    If you understand (and preferably verify by your own calculations) what’s happening on the sphere, what’s happening in curved spacetime will no longer seem so strange.

  • Jason Dick

    Neil B.,

    How does the appearance of collapse “violate all standards of empirical frankness”? There really is no question that quantum mechanics, without any axiom of collapse, has the appearance of collapse upon interaction of a wave function with the environment. Furthermore, we have seen this appearance of collapse turn on slowly through experimental tests.

    And yes, this is a fully deterministic view of quantum mechanics. The randomness stems directly from the appearance of collapse, and is purely an artifact of us viewing the world from within the system described. The “frog” view, if you will, has this appearance of randomness, while the “bird” view has nothing of the sort.

  • http://tyrannogenius.blogspot.com Neil B.

    Jason, talking about “the appearance” of collapse is sophistry. There *is* collapse, by any honest accounting. That is just what happens in our public, shared experience, which is the basis for all genuine science. It is not an “axiom” because it actually happens, and it is not “deterministic” by definition because we can’t predict where the hits will be or when (or can *you*?) If theorists want to matherbate (sic) with ideas inside their self-absorbed little heads, I don’t care, but I don’t want them telling me that the empirical structure is just “an illusion” (whatever that means) because they in their infinite wisdom know what the universe “ought” to be like. That is as repulsive as the Aristotelian Scholastics who discarded experimental evidence that didn’t fit into the teachings of Aristotle. That is one of the things science was supposed to rise above. How ironic.

  • http://badidea.wordpress.com Bad

    “I’ll make one last attempt to get you to actually think about curved spacetime and non-Cartesian coordinates.”

    Imagine this sentence said by a wizened old schoolmaster with spectacles, slowly drumming his smacking ruler against his palm, while a sheepish Neil stands there in an oversized British schoolboy uniform, looking at his own nervously shuffling feet and biting his lip.

    Ok, don’t, because it’s silly. But I couldn’t help imagining it.

  • http://tyrannogenius.blogspot.com Neil B.

    Bad, I gotta admit that’s cute, but really: what do you know or think about Greg’s specific claim of the higher acceleration of the transverse-moving body in the field of the extended planar mass (said not to be like a simple uniform field), not just the GR concepts in general (no pun intended.) I hadn’t heard of that, and I’m just trying to get a second opinion. Greg can rest and not feel like going another round (just yet, heh.) Did you at least try to appreciate my objections to that idea, and to the consequences? Sometimes the students make good Socratic pokes.

  • Jason dick

    Jason, talking about “the appearance” of collapse is sophistry. There *is* collapse, by any honest accounting.

    The thing you have to recognize is that all we have are the results of experiments. That is, all that we can be sure of is the appearance of collapse. The thing to do, then, when two different theories predict the same experimental outcome, is to apply Occam’s Razor: consider the theory with fewer hypothetical entities as the more likely.

    And so, if quantum mechanics without an axiom of collapse can explain all experiments that show collapse, then it is highly unlikely for the axiom of collapse to describe reality. But fortunately, it doesn’t end there. In fact, quantum decoherence is not always sudden: often the number of states that the interaction decomposes the system into is small enough, and the change is small enough, that the decoherence is merely partial instead of total. So we can see the “collapse” turn on slowly by carefully dialing the interaction that causes the decoherence. There is no way that this could happen with an axiom of collapse, as you either measure something or you don’t. There is no in between. And this has been tested.
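    A standard toy “pointer” model (a textbook-style sketch, not the specific experiments alluded to above) shows how the coherence term is suppressed gradually rather than all at once:

```python
import numpy as np

# A system qubit in the state (|0> + |1>)/sqrt(2) rotates each of n_env
# environment qubits by an angle theta only when the system is |1>.
# Tracing out the environment multiplies the system's off-diagonal
# density-matrix element by the overlap <E_0|E_1>**n_env = cos(theta/2)**n_env.
def offdiagonal(n_env, theta):
    e0 = np.array([1.0, 0.0])                              # env qubit if system is |0>
    e1 = np.array([np.cos(theta / 2), np.sin(theta / 2)])  # ... if system is |1>
    return 0.5 * float(e0 @ e1) ** n_env                   # 0.5 from the superposition

# Coherence fades smoothly as more environment qubits get entangled,
# which is the "dialing" behavior described above:
for n in (0, 10, 100, 1000):
    print(n, offdiagonal(n, 0.3))
```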

  • http://arunsmusings.blogspot.com Arun

    Human beings have a natural tendency to look for meaning and purpose out there in the universe.

    Actually, meaning and purpose questions can be posed only in a religious framework (religion here indicates a system with Christianity as an exemplar); and cultures that don’t have religion (e.g., Buddhism is not a religion) don’t pose “meaning of life” questions.

    Therefore human beings don’t have a natural tendency to look for meaning and purpose in the universe, because those questions did not arise universally, but only in religious cultures, simultaneous with or after the rise of the religion.

    (The idea that religion is a human cultural universal is a nice piece of theology masquerading as knowledge, the argument is too big to fit in the margin, but the argument is now available on the web in the ebook here:

    http://colonial.consciousness.googlepages.com/theheatheninhisblindness
    )

  • Jason Dick

    Arun,

    I might agree that people may not have any tendency to pose general “meaning of life” questions, but it seems perfectly clear that each of us is very much interested in the meaning and purpose of our own life. Of course, this meaning is whatever we make it to be: there is no meaning imposed externally. There is no objective meaning. Our purpose is what we choose it to be. And this is, I think, a far more uplifting sense of purpose than one imposed from the outside by some inexplicable deity.

  • http://tyrannogenius.blogspot.com Neil B.

    Jason sayeth:

    The thing you have to recognize is that all we have are the results of experiments. That is, all that we can be sure of is the appearance of collapse.

    Yes, that’s what we have, but no sensible thinker considers the objective results to be merely “an appearance” in any sane sense. Collapse is not “an axiom,” it is what happens.

    The thing to do, then, when two different theories predict the same experimental outcome, is to apply Occam’s Razor: consider the theory with fewer hypothetical entities as the more likely.

    If anything deserves to be called “hypothetical” it is the wave, not the collapse which is the “given.” We don’t even know what it means to say that the wave functions “exist” per se, but the collapses are little spots right there on a screen etc. How could “shut up and calculate” folks be brushed off so glibly, regardless of whether you agree with them?

    QM without an “axiom” of collapse is, as I said, just waves staying waves forever and in one universe – if MW and decoherence say otherwise, then they are playing tricks with the logical and semantic framing of the issues. The experiments you mention are worth reflecting on, but I think they just show that the tendency to collapse (which is still an actual event each time) is a variable based on interactive parameters, not any big deal.

    PS: My regards to you and others for gracefully bearing the brunt of my orneriness and florid language at times. Oh – I want your opinion on the differential acceleration wrangle as well!

    As for your second post, you’d probably like the sentiments in “The Fall of Freddie the Leaf” by Leo Buscaglia, which I favorably review in the Huckabee thread.

  • Jason Dick

    Yes, that’s what we have, but no sensible thinker considers the objective results to be merely “an appearance” in any sane sense. Collapse is not “an axiom,” it is what happens.

    This is an improper evaluation of the evidence. Again, the appearance of collapse is all that we can observe. This appearance can be described through one of two possible mechanisms:

    1. There is actual collapse.
    2. There is no collapse, but the underlying behavior causes observers to see collapse.

    For many observations, these two are completely indistinguishable. Though one might expect the underlying behavior to lead to subtle differences in some experiments (this appears to be the case with quantum decoherence), even without such differences one can determine which of the two hypotheses is more likely to be correct by asking which requires fewer assumptions. The answer is that option two has one fewer axiom, and is therefore to be preferred by default.

    If anything deserves to be called “hypothetical” it is the wave, not the collapse which is the “given.” We don’t even know what it means to say that the wave functions “exist” per se, but the collapses are little spots right there on a screen etc. How could “shut up and calculate” folks be brushed off so glibly, regardless of whether you agree with them?

    Actually, we do know what it means to say that wave functions exist: it means that in between measurements, the particles in question obey the relevant wave equations. Through many repeated observations, we have demonstrated that this is correct, at least to a very good approximation.
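    As an idealized illustration of “obeying the relevant wave equation between measurements” (a textbook free-particle setup with units ħ = m = 1; the grid sizes and times are my own choices): a Gaussian wave packet evolved with the exact momentum-space propagator of the free Schrödinger equation keeps its total probability fixed while its width steadily grows, with no collapse ever occurring in the evolution itself.

    ```python
    import numpy as np

    # Free-particle Schrodinger evolution of a Gaussian wave packet,
    # done exactly in momentum space (units with hbar = m = 1).
    N, L = 512, 40.0
    dx = L / N
    x = np.linspace(-L / 2, L / 2, N, endpoint=False)
    k = 2 * np.pi * np.fft.fftfreq(N, d=dx)

    psi0 = (2 / np.pi) ** 0.25 * np.exp(-x**2)   # normalized Gaussian at rest

    def evolve(psi, t):
        """Apply the free propagator exp(-i k^2 t / 2) in momentum space."""
        return np.fft.ifft(np.exp(-1j * k**2 * t / 2) * np.fft.fft(psi))

    for t in [0.0, 1.0, 3.0]:
        psi = evolve(psi0, t)
        norm = np.sum(np.abs(psi) ** 2) * dx                     # total probability
        width = np.sqrt(np.sum(x**2 * np.abs(psi) ** 2) * dx)    # rms spread
        print(f"t={t:.1f}: norm={norm:.4f}, width={width:.3f}")
    ```

    The norm stays at 1 while the width grows from 0.5: the deterministic wave evolution on its own spreads the packet but never localizes it.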

    QM without an “axiom” of collapse is, as I said, just waves staying waves forever and in one universe – if MW and decoherence say otherwise, then they are playing tricks with the logical and semantic framing of the issues.

    Yes, QM without any axiom of collapse is just waves staying waves forever. Yes, there is just one universe. But due to the interactions that exist within quantum mechanics, observers that are themselves described by that same quantum mechanics necessarily observe collapse: different components of the same wave function lose coherence with one another after many interactions, and can no longer interfere.

    This is very much like thermodynamics. Thermodynamics is an empirical set of laws that was derived directly from experiments. But we also know that thermodynamics can be derived by taking into account the specific properties of the individual components of the system and taking the large-number limit. This derivation of thermodynamics from statistical mechanics shows us that, for example, the tendency towards equilibrium turns out to be only approximate. If you take a box of air, then no matter the initial conditions, it will tend towards a nearly uniform distribution. Provided the box is large and the air dense enough, statistical mechanics predicts that the deviations from uniformity will be so small, or take so long to appear, that we will be incapable of detecting them.
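    The shrinking of those deviations with system size can be shown with a toy simulation (my own sketch, with each “molecule” independently landing in either half of the box standing in for real dynamics): the typical deviation of the left-half fraction from 1/2 falls off like 1/√N.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def typical_deviation(n_particles, n_trials=1000):
        """Scatter n_particles uniformly into a box n_trials times and return
        the average absolute deviation of the left-half fraction from 1/2."""
        left_counts = rng.binomial(n_particles, 0.5, size=n_trials)
        return np.abs(left_counts / n_particles - 0.5).mean()

    for n in [100, 10_000, 1_000_000]:
        print(f"N={n:>9}: typical deviation from uniformity ~ {typical_deviation(n):.6f}")
    ```

    For a macroscopic box of air, N is of order 10²³, so the fluctuations are utterly undetectable, which is why thermodynamics looks exact in practice.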

    The relationship between quantum decoherence and wave function collapse is exactly analogous to that between statistical mechanics and thermodynamics.

    The experiments you mention are worth reflecting on, but I think they just show that the tendency to collapse (which is still an actual event each time) is a variable based on interactive parameters, not any big deal.

    And what is the mechanism for this? What is the underlying physics of this mechanism?

    Quantum decoherence offers this without any additional assumptions. How many more assumptions will you add to the theory to replicate quantum decoherence just to avoid the many worlds interpretation?

  • Jason Dick

    Slight correction:
    By “after many interactions” above I meant “after many sorts of interactions”, not after a large number of interactions.

  • Jason Dick

    Oops, and I forgot to answer your question again. To be honest, I haven’t followed it closely. However, there is an arbitrariness between accelerations and gravitational fields. This arbitrariness is guaranteed by the equivalence principle, which states that at a single point the two are indistinguishable. By transforming between different coordinate systems related to one another by accelerations, one can change whether a felt acceleration is attributed to acceleration or to gravity.
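    A minimal numerical rendering of that statement (uniform fields only; the heights and times are toy numbers of my own choosing): a ball released inside a rocket accelerating at 9.8 m/s² in empty space approaches the floor on exactly the same schedule as a ball dropped in a uniform 9.8 m/s² gravitational field, so no local experiment with the ball distinguishes the two descriptions.

    ```python
    def height_in_rocket(t, h0=2.0, a=9.8):
        """Inertial frame, no gravity: the released ball stays put while
        the rocket floor accelerates upward at a. Returns height above floor."""
        floor = 0.5 * a * t**2
        return h0 - floor

    def height_in_gravity(t, h0=2.0, g=9.8):
        """Frame at rest in a uniform field g: floor fixed, ball falls."""
        return h0 - 0.5 * g * t**2

    for t in [0.0, 0.25, 0.5]:
        print(f"t={t:.2f}s  rocket: {height_in_rocket(t):.4f} m  "
              f"gravity: {height_in_gravity(t):.4f} m")
    ```

    The two columns agree at every time, which is the pointwise indistinguishability the equivalence principle asserts; over extended regions, tidal effects would break the degeneracy.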

  • http://www.gregegan.net/ Greg Egan

    Neil

    I’ve written up my analysis of the planar mass here:

    Weak-field GR near the centre of a light planar mass

    If you ever succeed in getting anyone else who is competent in GR to consider this matter, it might be more productive to point them to this page than to offer them a paraphrase of my conclusions. I doubt you’ll find anyone else willing to wade through the detailed calculations, but anyone who actually knows GR will get as far as the bold-faced summary in the introduction and tell you that whatever the quantitative details, the qualitative statement here is obvious.

    I’ll repeat what I noted earlier: the arxiv paper you found, The general relativistic infinite plane, also finds velocity-dependent accelerations in the static frame tied to the mass. The detailed formulas are different because the detailed space-time geometries are different, but the general phenomenon is, clearly, not absent even in solutions with exact planar symmetry.

  • http://tyrannogenius.blogspot.com Neil B.

    Greg, thank you so much for all your effort and patience (maybe you enjoyed Bad’s little fantasy…). I am honored to have stimulated a web paper; I have already skimmed it, and will study it when I have time. I can’t wait to see, to the extent I’m able, how well you handle what looks to me like a contradiction: that the transverse-moving mass will accelerate away from a given floor level in a free-falling elevator (since *you* claimed its “acceleration” was “different”, not me!) while the down-falling mass won’t, and yet the two still don’t “really” have usefully different accelerations (not in *any* perspective at all?) for determining that one is not in a true, out-in-space IRF (IIUYC). It’s odd that I got no critique from others here.

    PS – You show savvy, yet call yourself “a science fiction writer” – PhD, but just didn’t get into the work? Just curious. And are you fully orthodox? I could almost swear some of that stuff about accelerating elastic looks somewhat idiosyncratic. tx

  • http://www.gregegan.net/ Greg Egan

    that the transverse-moving mass will accelerate away from a given floor level in a free-falling elevator (since *you* claimed its “acceleration” was “different”, not me!) while the down-falling mass won’t, and yet still not “really” have usefully different accelerations (not in *any* perspective at all?)

    Apparently you still haven’t really looked at what I wrote about the analogous situation with geodesics on a sphere. If you ever really think about that, and understand it — draw some pictures, do some calculations, whatever it takes for you to grasp it — you will stop harping on about this non-contradiction. There’s nothing further about this on the web page because I have no reason to remind a general audience that the equivalence principle is true, and is not contradicted by this (or any other) prediction of General Relativity.

    No, I don’t have a PhD, just a BSc in Maths, but I taught myself GR from Misner, Thorne and Wheeler. The relativistic elasticity material on my web site is entirely orthodox, and as far as I can check is consistent with a PhD on the subject that I cite, but of course I can make no promise that everything is free of errors.

  • John Merryman

    Dennis Overbye has an article on this in the New York Times today, so I thought I’d open the thread back up.

    http://www.nytimes.com/2007/12/18/science/18law.html?8dpc

    Maybe both alternatives — Plato’s eternal stone tablet and Dr. Wheeler’s higgledy-piggledy process — will somehow turn out to be true. The dichotomy between forever and emergent might turn out to be as false eventually as the dichotomy between waves and particles as a description of light. Who knows?

    I would like to argue for both alternatives, as two sides of a cosmic convection cycle, where the expanding energy of the quantum world is disconnected and discontinuous, random and microcosmic. But like heat it is always expanding. It is like the future: invisible, but always drawing us forward.
    The classic macrocosmic world we live in, meanwhile, is reductionistically deterministic and lawful, orderly and mechanistic, but subject to entropy and gravity: it is collapsing and falling away into the past, even though it’s the only reality we can directly observe.
    This relationship isn’t just about physics; all sorts of processes can be understood in terms of the energy rising up as the structured order slowly, or sometimes rapidly, crumbles. For those looking for guidance, Complexity Theory covers much of this ground, with its dichotomy of top-down order and bottom-up process/chaos. Those of us out in the larger world can see it in any number of ways: rising unstructured youth and the crumbling order of age; dynamic societies replacing prior civilizations; political movements toppling as the ground moves under them.
    Particles are energy that has started to contract, and waves just wash over us. Like strings and their vibrations, we are always trying to put these two elements in the same equation, and they just don’t fit. Maybe it’s trying to tell us something. Maybe Tao knows more than Moses.

  • bipolar2

    Hello: sorry about how dogmatic the stuff below sounds. But, this is hardly the place to explain or refine these notions. bipolar2

    ** “I have no need of that hypothesis” – Laplace **

    “Materialism”, “certainty”, “uniformity”, “induction”, “determinism”, “scientific law”, “universal causality” are as dead as god — the belief in them is no longer believable.

    Nor is a god hypothesis necessary. Was it only 200 years ago that Laplace supposedly said this to Napoleon?

    The brief rebuttal is

    1. There is no such process as “induction” from “the facts” of nature.
    2. There are no necessary empirical truths. (No science is certain.)
    3. Every empirical statement must be falsifiable in principle.
    4. To be part of science, an empirical statement must be testable, hence refutable.
    5. “Materialism” is no part of science.
    6. Mathematics makes models. Models, however refined, are not reality.

    What follows from these now well-known propositions:

    1. No part of science presupposes any “uniformity of nature.” (No faith needed!)
    2. There are no “laws” in science — no need for a “law giver” or any “source.”
    3. If any religion makes an empirical claim; then, it could be false.
    4. In order to be considered scientific, empirical claims made by religion must specify conditions under which they can be tested — that is, show how they could be falsified.
    5. “God” doesn’t do mathematics. Mathematics doesn’t “describe” or “explain” the world.

    In practice, what does science have to say about arrogant religionists:

    With respect to science vs. western bible-based monotheism, the relationship is strongly asymmetrical in favor of science. Science is the arbiter of which statements about the world, empirical statements, are or are not “known” — that is, are given the always provisional metalinguistic accolade, ‘true.’ (What is the value of truth — Nietzsche’s question is still important.)

    True empirical statements are ‘methodologically fit’ according to the relevant testing procedures within science itself. This is the real meaning of ‘the scientific revolution’: in what sphere is power vested, who shall decide what is true, and by what criteria?

    Neither ‘ethical fitness’, as in Heraclitus and his Stoic followers, nor ‘theological fitness’, as in Plato and his Christian followers, is any longer considered a viable principle for assessing the truth of an empirical statement.

    Methodologically, whenever so-called “sacred” writings make claims about the natural world, they are subject to exactly the same forces of potential refutation as any other empirical claim. There is no “executive privilege” for god.

    bipolar2
    © 2007

  • Pingback: Sufficient Reason | Cosmic Variance | Discover Magazine

  • Pingback: “I understand nothing” | Cosmic Variance | Discover Magazine

Cosmic Variance

Random samplings from a universe of ideas.

About Sean Carroll

Sean Carroll is a Senior Research Associate in the Department of Physics at the California Institute of Technology. His research interests include theoretical aspects of cosmology, field theory, and gravitation. His most recent book is The Particle at the End of the Universe, about the Large Hadron Collider and the search for the Higgs boson. Here are some of his favorite blog posts, home page, and email: carroll [at] cosmicvariance.com .
