Humankind's Basic Picture of the Universe

By Sean Carroll | November 5, 2006 11:45 pm

Scott Aaronson has thrown down a gauntlet by claiming that theoretical computer science, “by any objective standard, has contributed at least as much over the last 30 years as (say) particle physics or cosmology to humankind’s basic picture of the universe.” Obviously the truth-value of such a statement will depend on what counts as our “basic picture of the universe,” but Scott was good enough to provide an explanation of the most important things that TCS has taught us, which is quite fascinating. (More here.) Apparently, if super-intelligent aliens landed and were able to pack boxes in our car trunks very efficiently, they could also prove the Riemann hypothesis. Although the car-packing might be more useful.

There are important issues of empiricism vs. idealism here. The kinds of questions addressed by “theoretical computer science” are in fact logical questions, addressable on the basis of pure mathematics. They are true of any conceivable world, not just the actual world in which we happen to live. What physics teaches us about, on the other hand, are empirical features of the contingent world in which we find ourselves — features that didn’t have to be true a priori. Spacetime didn’t have to be curved, after all; for that matter, the Earth didn’t have to go around the Sun (to the extent that it does). Those are just things that appear to be true of our universe, at least locally.

But let’s grant the hypothesis that our “picture of the universe” consists both of logical truths and empirical ones. Can we defend the honor of particle physics and cosmology here? What have we really contributed over the last 30 years to our basic picture of the universe? It’s not fair to include great insights that are part of some specific theory, but not yet established as true things about reality — so I wouldn’t include, for example, anomalies canceling in string theory, or the Strominger-Vafa explanation for microstates in black holes, or inflationary cosmology. And I wouldn’t include experimental findings that are important but not quite foundation-shaking — so neutrino masses don’t qualify.

With these very tough standards, I think there are two achievements that I would put up against anything in terms of contributions to our basic picture of the universe:

  1. An inventory of what the universe is made of. That’s pretty important, no? In units of energy density, it’s about 5% ordinary matter, 25% dark matter, 70% dark energy. We didn’t know that 30 years ago, and now we do. We can’t claim to fully understand it, but the evidence in favor of the basic picture is extremely strong. I’m including within this item things like “it’s been 14 billion years since the Big Bang,” which is pretty important in its own right. (A quick numerical sketch of how this inventory fixes the age appears just after this list.) I thought of a separate item referring to the need for primordial scale-free perturbations and the growth of structure via gravitational instability — I think that one is arguably at the proper level of importance, but it’s a close call.
  2. The holographic principle. I’m using this as a catch-all for a number of insights, some of which are in the context of string theory, but they are robust enough to be pretty much guaranteed to be part of the final picture whether it involves string theory or not. The germ of the holographic principle is the idea that the number of degrees of freedom inside some region is not proportional to the volume of the region, but rather to the area of its boundary — an insight originally suggested by the behavior of Hawking radiation from black holes. But it goes way beyond that; for example, there can be dualities that establish the equivalence of two different theories defined in different numbers of dimensions (à la AdS/CFT). This establishes once and for all that spacetime is emergent — the underlying notion of a spacetime manifold is not a fundamental feature of reality, but just a good approximation in a certain part of parameter space. People have speculated about this for years, but now it’s actually been established in certain well-defined circumstances.
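
As promised in item 1, here is a minimal numerical sketch of how the inventory pins down the age: integrate the Friedmann equation for a flat universe with the rounded values Omega_m = 0.3 (ordinary plus dark matter) and Omega_Lambda = 0.7. The Hubble constant, taken here as a round 70 km/s/Mpc, is an assumed input rather than part of the inventory itself.

    from math import sqrt
    from scipy.integrate import quad

    # Flat LCDM age: t0 = (1/H0) * integral_0^1 da / (a * sqrt(Om/a**3 + OL))
    Om, OL = 0.30, 0.70                  # matter (ordinary + dark), dark energy
    H0 = 70 * 1000 / 3.086e22            # 70 km/s/Mpc, converted to 1/s
    age, _ = quad(lambda a: 1.0 / (a * sqrt(Om / a**3 + OL)), 0, 1)
    print(age / H0 / 3.156e16)           # age in Gyr; comes out near 13.5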

A short list, but we have every reason to be proud of it. These are insights, I would wager, that will still be part of our basic picture of reality two hundred years from now. Any other suggestions?

CATEGORIZED UNDER: Science
  • http://www.kea-monad.blogspot.com Kea

    “by any objective standard, has contributed at least as much over the last 30 years as (say) particle physics or cosmology to humankind’s basic picture of the universe.”

    Go Scott! He’s onto it.

    Sorry Sean, but you’re really losing this battle. The Holographic principle comes from monadic dualities in higher topos theory, which is all about quantum logic. And as for your breakup of matter components – it’s just plain wrong. The dark matter is black holes, which is also understood in terms of quantum computation.

  • http://brahms.phy.vanderbilt.edu/~rknop/blog/ Rob Knop

    Wow. I mean, the statement just strikes me as odd. Computer science (at least the parts that aren’t mislabeled engineering) is rightfully a subfield of Math… just as astronomy, nowadays, is rightfully a subfield of (mostly) Physics.

    Now, yes, math is the substrate upon which Physics is etched… but Physics is more about our Universe than Math is. If you think about things the way Max Tegmark does, Math is about every conceivable Universe. Me, our universe is so damn cool by itself that I’m happy just trying to understand it.

    Now, I know Aaronson wants us to confine ourselves to theoretical physics and cosmology — but I maintain that that is impossible, and also unreasonable. Theoretical cosmology has long been informed by experiment. Hell, ask Clifford — even String Theory is informed by experiment. But in astronomy in particular, the observers and theorists seem to talk to each other a lot, more so than in many disciplines. You see theorists in observing proposals, for example, and sometimes they’re leading them.

    So I’ll violate the terms of the challenge and plunge merrily ahead (and while I’m at it, I’ll sometimes go over 30 years):

    * the scale of our Universe. 400 years ago, we knew about the Solar System, and the stars were a mysterious unaddressable “firmament.” 100 years ago we had the Great Debate, shortly after which we had shown for sure that there are many, many galaxies just like our own, and that our Universe is, not to put too fine a point on it, butt-huge. You can sit around all day doing thought experiments about packing boxes into the boot of a car (unless you’re on this side of the pond, in which case you pack boxes into the boot of an elephant or something), but all of that is just brain-play compared to actually seeing and knowing just how huge and wild our Universe is.

    * the resolution of Olbers’ paradox. The Universe has a finite age. And it’s expanding, so redshift provides another handy helper to the resolution of the paradox. (Either could do it alone, of course, but we know both are there, so, hey, fly with it.)

    * By the way, did I mention that the Universe is expanding? Who’d’ve thunkit? Not Einstein. Pretty cool, huh? Now, Einstein could have figured it out just sitting in his room with a pencil, but instead he tuned his equations to a precarious, unstable balance to make it not happen, because expansion seemed absurd to him… but it’s probably fair to say that the expansion of the Universe is a triumph of cosmology, and maybe even of theoretical cosmology.

    * Where the chemical elements came from. The finite-aged Universe just gave us Hydrogen and Helium, and a little trace deuterium, lithium, and beryllium. (Not enough of the latter three to warrant capitalizing their names.) Meanwhile, theoretical nuclear astrophysicists worked out where all of the rest came from… that is, stars, living and dying. Damn cool, if you ask me. Anybody who isn’t horribly anemic can be 100% sure that some of the atoms in their body have been through a supernova. Maybe that isn’t as esoteric as Dumblefinger’s 18th Dimensional Hypothesis About Nosepicking, or something else I really don’t understand, but damn it’s cool, and it’s demonstrably about our real, physical, world. (If you prick us, do we not bleed?)

    * Neutrinos have mass!!!! We figured this out and proved it by looking at the Sun!

    * The inventory thing Sean talked about above.

    * We know the age of the Universe back to some “we don’t know how to think before here” point to within 5%.

    * We can explain Galileo’s apocryphal experiment through Einstein’s relativity… and Einstein’s relativity was for a long time “proven” only via astronomical observations (Mercury, lensing, etc.). Of course, nowadays, GR is the stuff of engineering (GPS, and the next version of ntp I intend to write).

    * On the most fundamental level, reality is stochastic, in complete contrast to the mental models of the world we’ve evolved in our brains. How bizarre is that? Did anybody expect that? What’s more, if it weren’t for all that quantum stuff and transistors followed by integrated circuits, there wouldn’t even be a field called “computer science” (although math well predates all of that, of course). I guess cosmology can’t lay any claim to this one, but hey, it’s Physics.

    * There is a supermassive black hole at the center of any respectable galaxy. 10 billion years ago, all of those black holes spent periods of something like 10 million years shining as quasars (or wimpier cousins) as they were being fed. What’s more, the processes associated with all of this limited the buildup of stars in the galaxy. And supermassive black holes are way cooler than anything I can code up in Perl or Lisp or something like that. Hu-ah!

    Computer Science is to Physics as Madden 2007 is to Superbowl XLII. Except that I’m way better at Madden (well, 2005) than I would be playing real football. But otherwise. You know. Analogy.

    Or something.

  • http://brahms.phy.vanderbilt.edu/~rknop/blog/ Rob Knop

    The dark matter is all black holes?

    Did I miss a memo?

  • Pingback: Galactic Interactions » Blog Archive » My science is more fundamental than yours!

  • http://www.kea-monad.blogspot.com Kea

    Did I miss a memo?

    Don’t worry. You’re not the only one.

  • http://math.ucr.edu/home/baez/ John Baez

    Rob Knop wrote:

    * the resolution of Olber’s paradox. The Universe has a finite age.

    * By the way, did I mention that the Universe is expanding?

    * Where the chemical elements came from.

    These were understood more than 30 years ago. Scott cleverly stacked the deck by starting his clock in 1986, right around when fundamental theoretical physics stalled out. Since then most of the progress in fundamental physics has come from observations in astronomy – but these items you mention above come from an earlier era.

    So, Scott wins this game. If he’d pitted theoretical computer science against math or biology, he would have had a much tougher time.

  • Thomas Larsson

    The main thing that we didn’t know 30 years ago is that the standard model is so good. Pretty much all exotic extensions that people have thought of – GUTs, technicolor, susy, extra dimensions, … – have been ruled out or are at least looking increasingly unnatural. That is an important discovery, even if it is a negative one.

  • http://math.ucr.edu/home/baez/ John Baez

    Whoops – I can’t subtract. Scott actually started his clock ticking in 1976, not 1986. This makes his job harder: the Standard Model was busily being confirmed then. If he’d gone back to 1970, when the Standard Model was being formulated, he would have been in serious trouble.

  • http://www.kea-monad.blogspot.com Kea

    But the Standard Model cannot be properly understood without operads and other categorical beasts. Scott definitely wins.

  • http://www.scottaaronson.com Scott Aaronson

    Thanks, Sean! John Baez gets it exactly right — I did pick my timeframe very carefully. Of course, even when we restrict to the last 30 years, Lambda>0 and the holographic principle make for some serious competition. But I stand by my subjective and ultimately meaningless claim!

  • Pingback: danvk.org » Four things

  • http://brahms.phy.vanderbilt.edu/~rknop/blog/ Rob Knop

    Since 1976, we’ve got

    * three families of neutrinos, confirmed

    * neutrino oscillations -> neutrino mass

    * gamma ray bursters at cosmological distances

    * age of the universe known to 5%

    * Sean Carroll gets PhD

    * inflation

    * cmb isotropic, fluctuations right for structure growth

    * cmb dipole, we know which way we’re going

    * dark matter = cold, dark matter = non-baryonic, dark matter = real

    * spatial curvature flat

    * universe accelerating, thus dark energy

    * smbh at core of every big galaxy ; bh/bulge relationship

    * elliptical galaxies = result of galaxy mergers

    * planetary systems ubiquitous, including ones not like ours

    * gravitational waves seen from decay of binary pulsar

    * ed witten being smarter than you (independent of the definition of “you”)

  • http://www.scottaaronson.com Scott Aaronson

    One other thing, Sean. You write:

    The kinds of questions addressed by “theoretical computer science” are in fact logical questions, addressable on the basis of pure mathematics. They are true of any conceivable world, not just the actual world in which we happen to live.

    That might be less true than you think. Polynomial-time Turing machines are indeed a mathematical construct, but the reason people studied them in the first place is that they believed they accurately modelled what can efficiently be computed in the physical world. If large-scale quantum computers are built (and assuming that factoring is hard for classical computers), that belief will have been experimentally falsified.

    Of course, a large part of physics also consists of assuming a model and then working out its mathematical consequences. But I agree with you that, on the whole, theoretical computer scientists place much less emphasis on the model’s being “physical,” and much more emphasis on mathematical minutiae like the probabilities summing to unity. :)
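
    By the way, to make the factoring example concrete: the quantum part of Shor’s algorithm is period-finding, i.e. computing the order r of a mod N, and turning that period into factors is just classical number theory. Here’s a minimal Python sketch, with a brute-force loop standing in for the quantum subroutine (so this code is, of course, not efficient):

        from math import gcd

        def find_period(a, N):
            # Stand-in for the quantum step: brute-force the order of a mod N.
            x, r = a % N, 1
            while x != 1:
                x = (x * a) % N
                r += 1
            return r

        def factor_from_period(N, a):
            r = find_period(a, N)
            if r % 2 == 0 and pow(a, r // 2, N) != N - 1:
                return gcd(pow(a, r // 2) - 1, N), gcd(pow(a, r // 2) + 1, N)
            return None   # unlucky choice of a; try another

        print(factor_from_period(15, 7))   # (3, 5)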

  • http://blogs.discovermagazine.com/cosmicvariance/sean/ Sean

    Well, let’s keep our eggs in the right baskets. Proofs about what polynomial-time Turing machines can do will be applicable to any polynomial-time Turing machines, regardless of what the laws of physics happen to be. The connection between such hypothetical constructs and the real physical world is a completely separate question (regardless of what the motivations may have been), and that may indeed depend on the laws of physics.

    Is there a concrete example of a profound insight about the physical world — one that wouldn’t have been relevant had the laws of physics been different — from theoretical computer science over the last 30 years? E.g. from quantum information theory? Would anyone like to defend a claim that our ability to factor large numbers using a quantum computer contributes as much to our basic picture of the universe as dark energy, or emergent spacetime? (Just among us friends.)

  • http://www.scottaaronson.com Scott Aaronson

    Would anyone like to defend a claim that our ability to factor large numbers using a quantum computer contributes as much to our basic picture of the universe as dark energy, or emergent spacetime? (Just among us friends.)

    Can I be your friend? :) If so, I’ll be happy to defend that exact claim. Shor’s algorithm didn’t change quantum mechanics, but along with a few other results from the mid-nineties, it did represent a completely new way of thinking about quantum mechanics. Here are some examples of statements that are “obvious” from a post-Shor perspective:

    (1) To really understand QM, you need to consider entangled states of hundreds or thousands of particles, not just two or three.

    (2) On the other hand, the basic conceptual features of QM can be not only understood, but studied in great detail, without ever encountering such concepts as boson, fermion, energy, commutator, or wave-particle duality.

    (3) Entanglement, far from being “spooky”, is a quantifiable resource like energy or time. (A short numerical illustration follows this list.)

    (4) Feynman’s path-integral formalism basically boils down to the statement that BQP (the class of problems solvable efficiently by a quantum computer) is contained in PP (Probabilistic Polynomial-Time).

    (5) Schrödinger’s cat is not a very interesting example of a large entangled state. A much more interesting example is the “cluster state”: basically, a 2D lattice of spins subject to pairwise nearest-neighbor Hamiltonians.

    (6) Despite the fact that amplitudes vary continuously, quantum information can in principle be protected against noise for an arbitrarily long time — and for that reason, should be thought of as digital rather than analog.

    (7) When studying spectral gaps in condensed-matter systems, often the key question to ask is whether they decrease polynomially or exponentially as the number of particles goes to infinity.
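
    To illustrate (3), as promised: here’s a minimal numpy sketch that computes the entanglement entropy of a Bell pair by tracing out one qubit. The answer is exactly one ebit, a number you can count, interconvert, and spend, just like joules or seconds:

        import numpy as np

        psi = np.zeros(4)
        psi[0] = psi[3] = 1 / np.sqrt(2)     # (|00> + |11>)/sqrt(2)
        rho = np.outer(psi, psi)             # pure two-qubit density matrix
        rho_A = rho.reshape(2, 2, 2, 2).trace(axis1=1, axis2=3)  # trace out B
        p = np.linalg.eigvalsh(rho_A)
        S = -sum(x * np.log2(x) for x in p if x > 1e-12)
        print(S)                             # 1.0, i.e. one ebit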

    Let me make a falsifiable prediction: that over the next few decades, statements like the above will come to seem as obvious to “mainstream” physicists as they now seem to Shorians — and that this will affect, not only how they talk about foundational issues, but also what experimental goals they consider worth reaching and how they teach QM to undergrads.

  • Thomas Dent

    “the Standard Model cannot be properly understood without operads and other categorical beasts”

    I must have missed another one of those memos. “To: S.Weinberg, S.Glashow, etc., etc. You don’t properly understand the Standard Model. Go away and learn your operads. Sincerely, Category theorists.”

    Aaronson compensated for the 30-year limit by letting in ‘the Universe’ – just the place where cosmology and astrophysics have got us so much further. It seems to me that all his examples are not particularly about this ol’ Universe, but actually about our picture of the methods by which reasoning and logic can operate.

    As he said:

    “discoveries about the capacities of finite beings like ourselves to learn mathematical truths.”

    Then where is ‘the Universe’? Maybe he means the universe as seen by mathematicians, in which the important constituents are proofs, algorithms and computations.

    Is the most important thing about the cosmological constant the fact that (if it really is constant) it disallows computations involving more than 10^122 bits?

  • http://www.scottaaronson.com Scott Aaronson

    It seems to me that all his examples are not particularly about this ol’ Universe, but actually about our picture of the methods by which reasoning and logic can operate.

    Quantum computing has taught us, if it wasn’t clear already, that “the methods by which reasoning and logic can operate” (or at least feasibly operate) depend on the laws of the universe. The limits of feasible computation are not knowable a priori; they can only be discovered empirically. I realize that much of what I’m saying is controversial, but I hope everyone can agree at least on that.

    The way I usually think is in terms of equivalence classes on universes, where two universes are to be identified if they support the same sorts of computation and communication. I know that’s not the only way to think about physics, but it’s a way that’s already been very fruitful for quantum information, and I expect that it will become more prevalent in the future. You should try it sometime!

    Maybe he means the universe as seen by mathematicians, in which the important constitutents are proofs, algorithms and computations.

    Yes.

    Is the most important thing about the cosmological constant the fact that (if it really is constant) it disallows computations involving more than 10^122 bits?

    Yes.
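
    For the curious, that 10^122 is just the Bekenstein-Hawking entropy of the de Sitter horizon: its area in Planck units, divided by 4. A back-of-envelope check, with Omega_Lambda ~ 0.7 and H0 ~ 70 km/s/Mpc as assumed round inputs:

        import math

        G, c, hbar = 6.674e-11, 2.998e8, 1.055e-34   # SI units
        H0 = 70 * 1000 / 3.086e22                    # 1/s
        Lam = 3 * 0.7 * (H0 / c)**2                  # cosmological constant, 1/m^2
        r = math.sqrt(3 / Lam)                       # de Sitter horizon radius
        A = 4 * math.pi * r**2                       # horizon area
        l_p2 = hbar * G / c**3                       # Planck length squared
        print(A / (4 * l_p2))                        # ~3e122, i.e. ~10^122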

  • http://brahms.phy.vanderbilt.edu/~rknop/blog/ Rob Knop

    Well, let’s keep our eggs in the right baskets.

    Let me defend a little bit the bringing in of experiment. The point is that Computer Science doesn’t have an experimental branch. The closest thing to it is computer engineering.

    With physics or cosmology — the whole field doesn’t exist without experiment, even if you only want to talk about theoretical discoveries.

    But as for profound insights into the nature of our Universe — I like your trump cards, and agree with you completely on those. Part of my reason for hammering away is that the progress of science only rarely includes deep, profound, paradigm-shifting things, but constantly involves beating away at the problems and pulling away bits of the veil to see hints of what’s beyond. And it’s all real, all about our physical universe. But, yeah, that’s me whining that I don’t think the terms of the challenge are right… although I might get into a long semantic argument about “basic picture of the Universe,” but I’ve already done enough of that :)

    (Oh, and by the way, I’m fully a clown — fringe science! Not just island, but Kea too.)

    -Rob

  • http://countiblis.blogspot.com Count Iblis

    Computer science will make important contributions to physics in the future. Questions like why we find ourselves in “this world” and not in any other possible world can only be answered by considering the ensemble of all possible worlds.

  • http://catdynamics.blogspot.com Steinn Sigurdsson

    Enough here: dark matter is almost certainly not black holes – the bounds on the contribution of quiescent black holes to the cosmological density are very strict, and far below the observed dark matter density for pretty much any plausible black hole mass.
    Bernie Carr reviews this issue every few years and plops down the new constraints, which get tighter each time.

    Here is a recent discussion of primordial black holes, which includes some useful limits on their density: http://arxiv.org/pdf/astro-ph/0511743

    The only plausible mass range is ~ 10^16 kg – which is sub-lunar – and it requires some amazing fine-tuning to produce black holes in such numbers. I don’t know of any plausible physics that would make a cosmological density of black holes at that mass.

    A more comprehensive review is found in Carr’s 1994 ARAA paper.
    http://articles.adsabs.harvard.edu/cgi-bin/nph-iarticle_query?1994ARA%26A..32..531C&data_type=PDF_HIGH&type=PRINTER&filetype=.pdf

  • Cynthia

    Sean, like so many things, I’m so with you on this one!

    Computer science – unlike physics – can freely cross over into the realm of fantasy and still remain intact. Stated differently, physics – in comparison to computer science – must always stay hinged upon Reality; otherwise, physics falls apart. In other words, when physics – as opposed to computers – wanders outside the bounds of Nature, physics morphs into nonsense. Simply put, this is what truly distinguishes the (unadulterated) natural science of physics from the (adulterated) artificial science of computers.

  • http://brahms.phy.vanderbilt.edu/~rknop/blog/ Rob Knop

    Computer science – unlike physics – can freely cross over into the realm of fantasy and still remain intact.

    Have you been playing too much World of Warcraft? :)

    -Rob

  • http://countiblis.blogspot.com Count Iblis

    Cynthia, what you call fantasy may be reality for creatures living in that “fantasy world”. They may regard our universe as a “fantasy world”.

  • Cynthia

    Rob, thanks, I couldn’t have said it any better. ;)

    Count Iblis, if you think that creatures elsewhere regard our universe as mere fantasy it’s probably because these “elsewhere creatures” exist within another pocket universe outside our particular Hubble Bubble of the Landscape.

  • http://quthoughts.blogspot.com Joe

    Hi Sean,

    You asked if anyone would defend the claim that “our ability to factor large numbers using a quantum computer contributes as much to our basic picture of the universe as dark energy, or emergent spacetime?”

    Let me start out by saying that I do not agree with Scott Aaronson’s claim.

    I can’t defend that claim, impressive as Shor’s algorithm is. However, I think it would be misguided to write off QIP as a mechanism for understanding the universe. Certainly Bell’s inequality is a massive achievement, and while QIP did not really exist at the time, the result can certainly be considered a quantum information result.

    Many results of quantum information theory are simply different ways of looking at physical laws. The no-signalling condition, for example, is a somewhat more general rule than special relativity on its own implies.

    To me, at least, it seems that information theory can be viewed as another formulation of physics. There has certainly been a huge amount of progress in this area over the last 30 years, starting with the Holevo bound.

    That said, I think Scott was specifically referring to computation and not information theory, but I may be wrong.

  • http://quthoughts.blogspot.com Joe

    Oops, missed Scott’s post.

  • http://brahms.phy.vanderbilt.edu/~rknop/blog/ Rob Knop

    The “extreme Max Tegmark” viewpoint suggests that any mathematical structure that could represent something like a “universe”, does.

    If there is a formalism to describe a Universe that has things in it that do something that could be called thought, then, effectively, those things exist just as assuredly as I do.

    All of that is way too philosophical for me — but that’s sort of, in a sense, the idea that other fantasy creatures may “exist”.

    I’m not sure much of that really has a lot to do with Physics as I understand it, though. I’ll stick with run-of-the-mill distant-Hubble-volume-in-our-own-Universe style “parallel Universes,” which is already pretty freaky to the common brain. (Never mind other nucleated Universes in the inflating bulk of Eternal Inflation or Landscape or whatever you want to call it.) I’m not a convinced subscriber to the “extreme Max Tegmark” viewpoint.

    All of which has drifted way off topic. Perhaps. Unless you’re suggesting that computer science is making things similar to what was in James P. Hogan’s Entoverse (a cool SF concept whose execution I found disappointing).

    -Rob

  • Aaron Bergman

    The idea that all ‘possible worlds’ actually exist predates Tegmark and, I think, is due to a particular philosopher who’s somewhat infamous for it, although I’m forgetting his name.

    I don’t want to seem too rude, but most of Scott’s statements 1–7 above seem silly to me. Entanglement just isn’t the be-all, end-all of quantum mechanics, and two-dimensional vector spaces aren’t either (ok — that one is a cheap shot). Quantum information theory is cute and all that, but it doesn’t solve any of the conceptual foundational problems of quantum mechanics. It also doesn’t particularly help you compute the energy states of, say, helium, much less something like benzene.

    It’s also not true that the path integral is captured by the Wiener measure if you ever want to do quantum field theory.

  • http://blogs.discovermagazine.com/cosmicvariance/sean/ Sean

    I’ve deleted some of less sensible off-topic posts, and responses thereto, so as to preserve some chance of having a useful conversation. And because I am a Communist, afraid of new ideas, &c.

  • http://countiblis.blogspot.com Count Iblis

    Cynthia, yes that may be the case. You can actually also regard the creatures as universes in their own right. What I mean is that you can consider the algorithm that a brain is running using a neural network as describing a virtual world. In that virtual world things like pain, taste etc. objectively exist.

    Then the question is why we find ourselves embedded in this particular universe? I think that in the future scientists will ponder such questions and then physics and computer science will have become the same thing.

  • http://www.pieterkok.com/index.html PK

    As soon as you realise that any carrier of information must be a physical system (and therefore obeys the laws of physics), you can ask how information processing differs in different physical theories. This, to some extent, is what theoretical computer scientists do, never mind that the toy models may be unphysical. Theoretical physicists study plenty of unphysical models in order to understand more about the world (1+1 dimensions, anyone?). So TCS is an honourable branch of theoretical physics. :-)

  • http://cosmicwatercooler.blogspot.com beajerry

    These questions must be forwarded to Slartibartfast.

  • George Ellis

    Sean said way back,

    “What physics teaches us about, on the other hand, are empirical features of the contingent world in which we find ourselves — features that didn’t have to be true a priori”.

    So how on earth, rather I mean how in the universe, does the holographic principle – the second supposed major achievement of physics – relate to this criterion? What is the *empirical data* supporting this proposal?

  • Cynthia

    PK, if I so dare to understand you correctly, the following is my paraphrase of your comment #33. Physics – being the true study of Nature – can extend beyond a 1+1 dimensional Universe. By contrast, though, computer science – being only a holographic study of Nature – is confined to a 1+1 dimensional universe. Hence, theoretical computing is merely a subheading under the greater theoretical physics.

  • http://quthoughts.blogspot.com Joe Fitzsimons

    Cynthia, I think the 1+1 dimensions PK mentioned was a reference to a quantum gravity model, not to computer science.

    The main point, as far as I can tell, is that there is a mapping between physical laws and information theoretic laws, and so information theory could reasonably be considered physics.

  • http://countiblis.blogspot.com Count Iblis

    Classical mechanics is non-computable, see here.

  • http://www.anthropic-principle.ORG island

    John Baez said:
    …in 1986, right around when fundamental theoretical physics stalled out. Since then most of the progress in fundamental physics has come from observations in astronomy

    Can I quote you as having said in the past that GR with a cosmological constant is still the most conservative mainstream approach to explaining our expanding universe? And are the assumptions being taken for granted here (about the nature of the observed dark matter and dark energy) what you were referring to in yet another past statement of concern, when you said you were afraid this very thing was going to happen?

    Fundamental theoretical physics may have stalled out 30 years ago, but the assumptions that are being taken for granted seem to have accelerated exponentially like a runaway universe… ;)

  • http://eskesthai.blogspot.com/2005/04/holographical-mapping-of-standard.html Plato

    Gerard ’t Hooft:

    No ‘Quantum Computer’ will ever be able to outperform a ‘scaled-up classical computer.’

    Examples for consideration are “Beyond Einstein” (LIGO) and SETI.

  • http://www.scottaaronson.com Scott Aaronson

    Thanks, Aaron! I’m glad to have someone vehemently disagree with my statements — what I was worried about is that they were too obvious.

    Entanglement just isn’t the be all, end all of quantum mechanics, and two dimensional vector spaces aren’t either (ok — that one is a cheap shot).

    Indeed; that’s why we study 2^n-dimensional vector spaces for large values of n.

    Quantum information theory is cute and all that, but it doesn’t solve any of the conceptual foundational problems of quantum mechanics. It also doesn’t particularly help you compute the energy states of, say, helium, much less something like benzene.

    Computing the energy states of helium is cute and all that, but how does it help us find an efficient quantum algorithm for graph isomorphism? :)

    Seriously, look at Guifre Vidal’s papers — he’s already used quantum information techniques to get several-order-of-magnitude efficiency improvements in ground state computations for 1D spin chains. Of course, if we could build a quantum computer, then we’d get enormous improvements for such problems.

  • http://quthoughts.blogspot.com Joe Fitzsimons

    Quantum information theory is cute and all that, but it doesn’t solve any of the conceptual foundational problems of quantum mechanics. It also doesn’t particularly help you compute the energy states of, say, helium, much less something like benzene.

    Two points:
    1) Quantum computers can simulate quantum systems efficiently.
    2) The Gottesman-Knill theorem is just one example of a quantum information result which directly affects classical simulation.

    QIP has had a major effect on how we look at simulating quantum systems. So the examples you gave are actually very poor if you intend to list things QIP hasn’t helped us with.
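
    To give a flavour of point 2, here is a toy sketch in the spirit of Gottesman-Knill: a stabilizer circuit is simulated by tracking n Pauli strings rather than 2^n amplitudes. (Phases are ignored below, which happens to be harmless for this GHZ example; a real tableau simulation would track them too.)

        def pauli_mul(a, b):
            # Product of single-qubit Paulis, up to phase.
            if a == 'I': return b
            if b == 'I': return a
            if a == b: return 'I'
            return ({'X', 'Y', 'Z'} - {a, b}).pop()

        def apply_h(stab, q):
            # Conjugation by a Hadamard swaps X and Z on qubit q.
            stab[q] = {'I': 'I', 'X': 'Z', 'Z': 'X', 'Y': 'Y'}[stab[q]]

        def apply_cnot(stab, c, t):
            # Conjugation by CNOT: X_c -> X_c X_t and Z_t -> Z_c Z_t.
            pc, pt = stab[c], stab[t]
            if pc in ('X', 'Y'):
                stab[t] = pauli_mul(pt, 'X')
            if pt in ('Z', 'Y'):
                stab[c] = pauli_mul(pc, 'Z')

        n = 3
        stabs = [['I'] * n for _ in range(n)]
        for i in range(n):
            stabs[i][i] = 'Z'            # |000> is stabilized by Z on each qubit

        for s in stabs: apply_h(s, 0)    # the usual GHZ circuit
        for s in stabs: apply_cnot(s, 0, 1)
        for s in stabs: apply_cnot(s, 0, 2)

        print([''.join(s) for s in stabs])   # ['XXX', 'ZZI', 'ZIZ']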

  • Cynthia

    Joe Fitzsimons, thanks for kindly reminding me of the huge difference between classical computing and quantum computing!

  • A former student

    Guifre Vidal’s work is cute and interesting, *but* it applies mostly to 1D spin chains (and 1D systems). This is a pretty serious limitation, given that physics is more than just about 1D systems.
    Furthermore, the extension of DMRG-based techniques to higher dimensions is non-trivial, even from a computational viewpoint. In this sense, his work (and the work of S. R. White, which his methods are based on) does not make significant progress in answering problems of the sort Aaron Bergman has raised. The state of the art in these questions remains older and less fashionable methods like density functional theory, which are not a consequence of quantum information theory.

  • http://brahms.phy.vanderbilt.edu/~rknop/blog/ Rob Knop

    Fundamental theoretical physics may have stalled out 30 years ago, but the assumptions that are being taken for granted seem to have accelerated exponentially like a runaway universe…

    Both of these statements are unsupportable.

    Yeah, it’s true, the Standard Model of Particle Physics has been in place and has been working for 30 years, which makes it seem like no big huge discoveries are being made.

    However, there is a wide swath of space between “stalled out” and “making fundamental paradigm-changing discoveries.” Inflation, for instance, is something that’s come in the last 30 years. The holographic principle that Sean mentions above. There is work going on about understanding better the more complicated behavior of Standard Model particles (e.g. when you can get statistical and do RHIC stuff). Lots of people spend lots of time thinking about how neutrinos work. There’s stuff going on in fundamental theoretical physics; it’s hardly stalled. (And, heck, let’s not forget that the concrete idea of Grand Unification comes out of the 70’s.)

    And as for the exponential increase in the assumptions being taken for granted: I fail to see that at all. Sure, we’ve got postulates and axioms and so forth, but given that the whole thing is working, it’s hardly unreasonable. And, there’s not a huge increase at all. There aren’t “epicycles” constantly being added or any such. Yeah, people take the cosmological constant seriously in a way that they didn’t 10 years ago … but that is because of data, not because of assumptions. The CC was already long there. The data fit it well. To throw it out and assume something else would be the increase of assumption, not believing the CC.

    I don’t appreciate your leaving Rob’s unsupported shot at me on the board, while removing my factual statement back to him.

    I’m not sure what you’re talking about; the “good luck Steinn” post sure looks deleted to me.

    -Rob

  • http://blogs.discovermagazine.com/cosmicvariance/sean/ Sean

    George (#35) — the empirical data are the existence of gauge theories and gravity. We now know (from theoretical work, admittedly) that these two things are not entirely separate — a theory can look like gauge fields in flat spacetime in one limit, and like gravity in another. The laws of black hole mechanics speak strongly to the idea that something like this is happening in our real world. I don’t know exactly how it will play out, and am happy to admit that it’s far from proven, but I think it’s a major achievement of our last 30 years to discover that spacetime can be emergent in this way.

  • http://blogs.discovermagazine.com/cosmicvariance/sean/ Sean

    And will our more sensible commenters please remember that replying to crackpots is as crucial a part of the Crackpot Dynamic as the original crackpottery itself? Resist the temptation!

  • http://brahms.phy.vanderbilt.edu/~rknop/blog/ Rob Knop

    We now know (from theoretical work, admittedly) that these two things are not entirely separate — a theory can look like gauge fields in flat spacetime in one limit, and like gravity in another.

    That theoretical work is pretty solid, though, isn’t it?

    I have to admit that it’s all beyond my understanding… but then, I don’t even really fully understand the Higgs Mechanism, I’m embarrassed to admit. (I’m an observer; I know how to take data and make pretty pictures.) One of these days I’ll have to sit down and seriously think about it.

    -Rob

  • onymous

    George wrote:

    “So how on earth, rather I mean how in the universe, does the holographic principle – the second supposed major achievement of physics – relate to this criterion? What is the *empirical data* supporting this proposal?”

    Sean has given the zero-order answer: we know there is gravity, and we have good reasons to suspect that any gravitational theory is holographic. Unfortunately like most things in quantum gravity, it’s hard to find experiments that can test it.

    I like to think there’s one other bit of very suggestive empirical data, which I hope will eventually point us in the direction of a real theory of cosmology. Namely, we know that there was an inflationary epoch in the past: spacetime then was, approximately, de Sitter space with a large cosmological constant. We know that there is dark energy now: spacetime in the future will be, approximately, de Sitter space with a small cosmological constant. The decrease of the c.c. between these two epochs is consistent with the idea of dS/CFT (originally due to Strominger in hep-th/0106113 and hep-th/0110087), which is that time evolution in our universe corresponds to running up the RG flow of some boundary theory.

    Unfortunately the technical details are difficult — there are good reasons to think any theory of quantum gravity on de Sitter space is drastically different from usual theories of quantum gravity, and perhaps that any de Sitter space is unstable — but I think it’s still a compelling picture. The empirical data here is limited, essentially just to two cosmological constants (past and future), but suggestive. Maybe someday this will be understood as the first hint of the real dynamics driving cosmology.

  • Jack

    Sean said, “I don’t know exactly how it will play out, and am happy to admit that it’s far from proven, but I think it’s a major achievement of our last 30 years to discover that spacetime can be emergent in this way.”

    I think it’s misleading to say that this is an example of “emergent spacetime”. You could equally well argue that what has been shown is that spacetime is fundamental and that gauge theory is “emergent” from it. I’m not saying that this is sensible, just that it is *as* sensible! What we are seeing here, I’m afraid, is that old physics-sociology issue: people trained in particle theory who are uncomfortable with GR and want to be able to dismiss it as “emergent”, which, to their minds, is equivalent to “not really important”. But yes, of course, something really important has been discovered here. But it doesn’t have anything to do with “emergence”.

  • A former Student

    Joe Fitzsimons,
    “1) Quantum computers can simulate quantum systems efficiently.”

    True. How does this differ from actually performing an experimental observation on the quantum system in question? I am truly ignorant about this, so would be glad to know if this is a dumb question or not.

    I wonder how this qualifies as quantum information theory (or computer science, per Scott Aaronson), in any case, especially since such things do not currently exist.

  • http://www.anthropic-principle.ORG island

    Rob, I just don’t get why you think for example that inflationary theory is unchallengeable when you have many reasonable physicists arguing just the opposite.

    The standard interpretation is that the thermodynamic arrow of time necessarily requires low-entropy initial conditions, which, as Don Page pointed out, would be extremely improbable. Rather than solving this problem, inflation theory further aggravates it, because the reheating or thermalization at the end of the inflationary era necessarily increases entropy, meaning that the initial state of the universe had to be even more orderly than in other Big Bang theories that don’t have an inflationary phase.

    Lawrence Krauss pointed out that the amplitude of the quadrupole moment of the CMBR is unexpectedly low, and the other low multipoles are observed to be preferentially aligned with the ecliptic plane. This is a signature of what is known as “non-Gaussianity”, which contradicts the simplest models of inflation, requiring more bandaids and cream.

    If the microwave background at the multipoles is correlated with the geometry and direction of motion of the solar system, and the incoherence manifests via octopole and quadrupole components in a bound universe, then there should be a center of gravity at the center of the visible universe that correlates to the ecliptic.

    Call me what you want when I can’t be reasoned with, but the physicists who taught me physics explained the flaws in your understanding with hard physics and facts, without prejudicial preference for any given cosmological model or theory before anything has been definitively decided, and especially without labeling and dismissing you out of hand. I appreciate that Rob hasn’t done that.

    Why are quantum fields in curved space limited to creating particles only during rapid inflation, since particle creation works just fine to explain expansion without inflation?

    You know where I’m going and I don’t get shot down… so I won’t repeat myself…

  • http://blogs.discovermagazine.com/cosmicvariance/sean/ Sean

    Jack, gauge theories have spacetimes just like gravity theories do. From the point of view of one side of the duality, the spacetime on the other side looks “emergent.” Which nobody thinks means “not important.”

    And the idea that I am uncomfortable with GR is somewhat falsified by the fact that I wrote a book about it.

  • Chris W.

    Sean,

    “Emergent” in what sense? Could you elaborate on this? Given the context I get the sense that some people (at least) mean that, in light of the relevant dualities, curved/dynamic spacetime can be regarded as “emergent” from a more conventional gauge theory formulated in flat spacetime. (Of course, I’m not referring here to the old perturbative analysis of a spin-2 field in Minkowski spacetime.)

    This is to be distinguished from another sense of “emergent”, namely that spacetime as a continuous manifold may emerge as an approximation to an underlying discrete dynamics, in somewhat the same sense that the apparent continuity of condensed matter emerges from the dynamics of its atomic or molecular constituents in spacetime. This point of view bears a somewhat closer similarity to certain alternative approaches to quantum gravity, although the analogy breaks down for the obvious reason that the discrete “constituents” in these approaches cannot be referred to a background; there is no background structure.

  • http://quthoughts.blogspot.com Joe Fitzsimons

    A former Student,

    Well, essentially, a universal quantum computer can synthesise any Hamiltonian. You don’t need to have the Hamiltonian naturally occurring in your system.

    You can construct a universal quantum computer out of a chain of spin-1/2 particles interacting via an Ising interaction, but can happily simulate, say, an RKKY interaction, something the system does not have.

    It is not the same as doing the experiment, since they are very different systems.

    You use a quantum algorithm to perform the simulation, just as you use a classical algorithm to do simulations on a classical computer.

    Whether the devices exist or not does not bother theoretical computer scientists. You seem to be confusing them with programmers, etc. They are much more closely related to mathematicians (some would say they are mathematicians).

    Also, who said quantum computers don’t exist? We’re just small scale at the moment.

  • http://golem.ph.utexas.edu/~distler/blog/ Jacques Distler

    “Emergent” in what sense?

    In the sense, in our current understanding, that the CFT on the boundary is taken as the definition of quantum gravity in the “bulk” AdS space. In an appropriate limit, this definition is supposed to reduce to semiclassical supergravity. (And many calculations corroborate that it does.)

    It didn’t have to work out this way. A priori, 4-dimensional N=4 Super Yang-Mills would seem to have little to do with General Relativity on 10-dimensional spacetime, AdS5 × S5. And, yet, the latter “emerges” in an appropriate limit.

  • http://blogs.discovermagazine.com/cosmicvariance/sean/ Sean

    To what Jacques just said, let me add that you shouldn’t think of “emergent” as necessarily referring to some underlying discrete structure. More generally, it refers to collective phenomena characteristic of the behavior of some apparently-different set of degrees of freedom. In particular, a holographic duality can’t be something so straightforward as a Planck-scale discretization of spacetime; it’s a different number of dimensions! In AdS/CFT, there isn’t any discretization — instead, the continuous local degrees of freedom in one description look quite non-local in the other.

    This gets to the heart (at the risk of derailing the discussion yet again) of one reason why so many people are fond of string theory. Saying “maybe spacetime is discrete at the Planck scale” is easy to do, and people have been doing it for years, with various degrees of promise. The holographic duality of AdS/CFT seems like something much more profound, involving non-locality and different numbers of dimensions and UV/IR correspondences in a deep-rooted way. Nobody would just be sitting around in their armchair, thinking deep thoughts about the nature of spacetime, and say “Hey, maybe if we look at quantum gravity with anti-de Sitter boundary conditions, it will be dual to a large-N conformal field theory in Minkowski space.” You had to be led there, bit by bit, by struggling to understand the individual puzzles presented by different pieces of the theory along the way. And it paid off big-time.

  • Chris W.

    [Sean]

    … let me add that you shouldn’t think of “emergent” as necessarily referring to some underlying discrete structure.

    Right. The distinct uses of the word have been quite evident to me for some time. String theorists seem generally anxious to distance themselves from the “spacetime probably has a discrete basis” outlook. Their use of the word “emergent” seems more closely allied to that of condensed matter field theorists, where “emergent phenomena” seem to be associated with somewhat similar correspondences between what are ostensibly very different (often field theoretic) descriptions of a system. Of course this has inspired people like Robert Laughlin to go rather far afield (no pun intended) from condensed matter, but let’s not get into that.

  • Richard

    Count Iblis said:

    Then the question is why we find ourselves embedded in this particular universe?

    I’m always puzzled how muddled expressions like this and its many variations arise, and they always lead to absurdities if carried to conclusion with the same lack of rigor. A concept of embedding always involves two objects, and a map from one into another preserving some kind of structure. In this case, then, we have on the one hand, an implied state of otherness (“we” are not of this universe) mediated by that embedding or map, but on the other hand, the structure that we are faithfully mapped onto already exists as a subset of that universe, so we have a kind of house-of-mirrors identity crisis. Ack! I think we definitely need to be speaking in a more rigorous framework.

    On a lighter note, one of my favorite cartoons of all time was a single panel showing a woman removing laundry from a dryer. The caption on the panel said something like “Somewhere in a parallel universe” and the woman is shown exclaiming: “Well what do you know, extra socks again!”

  • George Ellis

    When I ask how the holographic principle relates to empirical data, the response from Sean is

    “the empirical data are the existence of gauge theories and gravity. We now know (from theoretical work, admittedly) that these two things are not entirely separate — a theory can look like gauge fields in flat spacetime in one limit, and like gravity in another. The laws of black hole mechanics speak strongly to the idea that something like this is happening in our real world”,

    and onymous says

    “Sean has given the zero-order answer: we know there is gravity, and we have good reasons to suspect that any gravitational theory is holographic. Unfortunately like most things in quantum gravity, it’s hard to find experiments that can test it.”

    These responses illustrate how, under the way physics is now being done by the theoretical physics community, the old concept of proof by theoretical prediction and experimental confirmation is being eroded and replaced by something much weaker. Here ‘experimental proof’ is just a statement that a new theory implies there is a link between two already well-tested, experimentally confirmed topics. So what is the new observation that will confirm this theoretical link, for example the prediction of a new particle? Or how about a prediction of the mass of some known particle? Where is the hard data that such a link exists?

    Just for the record, while the existence of astrophysical black holes is well established, the laws of black hole mechanics have not been experimentally confirmed. They remain well-based but unproven theoretical predictions, and so do not provide the needed empirical link.

    If the holographic principle is one of the two major achievements of physics in recent decades, then physics is no longer a solidly empirically based subject.

  • onymous

    George wrote:

    “These responses illustrate how, under the way physics is now being done by the theoretical physics community, the old concept of proof by theoretical prediction and experimental confirmation is being eroded and replaced by something much weaker. Here ‘experimental proof’ is just a statement that a new theory implies there is a link between two already well-tested, experimentally confirmed topics.”

    It’s not that the old concept of proof is being eroded, it’s that the experimental confirmation is difficult. Quantum gravity is just inherently hard to test, no matter what. If you don’t like the lack of experimental support, the options are (a) tell everyone to stop working on quantum gravity, or (b) be patient and hope that eventually a better theoretical understanding will lead to new ways of testing things. I won’t say choice (a) is inherently unreasonable, but we do know that we need quantum gravity in order to really make sense of the universe. So I think choice (b) is preferable: maybe someday someone will come up with a clever experimental test. It’s not that no one tries, it’s that quantum gravitational effects are inherently extremely suppressed in every situation we can probe. Cosmology offers hints, but for obvious reasons it’s not the ideal laboratory.

    (There are other highly indirect ways to try to get experimental confirmation that theory is on the right track, like the viscosity bound from AdS/CFT that seems to be borne out by RHIC data. At the moment such things are fairly crude, but still nontrivial.)

    If you think quantum gravity research should be abandoned because there’s no obvious hope of testing it in the near future, I can’t really argue, but I think it would be a mistake to give up so soon. We didn’t even know about the cosmological constant until quite recently. Maybe we’ll get more surprises.
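
    For concreteness, the bound mentioned above, due to Kovtun, Son, and Starinets, says eta/s >= hbar/(4 pi k_B), and the RHIC data seem to put the quark-gluon plasma remarkably close to it. In SI units it is a tiny number:

        import math
        hbar, k_B = 1.0546e-34, 1.3807e-23   # J*s, J/K
        print(hbar / (4 * math.pi * k_B))    # ~6.1e-13 K*s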

  • http://golem.ph.utexas.edu/~distler/blog/ Jacques Distler

    So what is the new observation that will confirm this theoretical link…?

    To answer that question, we need to step back for a moment and ask what it means to confirm this conjectured link.

    As I said, in a certain limit, quantum gravity in AdS becomes semiclassical. While we don’t currently have an independent definition of the full quantum gravity theory, we do understand the semiclassical theory.

    So one thing we could do to check the conjecture is to compute something in the gauge theory, compute the same observable in the supergravity approximation and compare.

    Whoa, there! says George, I asked for an empirical check on the conjecture, not one of your fancy-pants computations.

    Well, OK. We can’t do quantum gravity experiments in AdS. But we can do experiments involving gauge theories. So, instead of comparing a calculation on the supergravity side with another calculation on the gauge theory side, we could compare it with a measurement on the gauge theory side.

    For extra bonus points, we should make the observable something that no one knows how to calculate on the gauge theory side. If we succeed, we then not only add evidence that the conjecture is true, but also show that it is useful, in the sense that it allows us to calculate interesting quantities in the gauge theory that we otherwise would not be able to compute.

    One such application is the computation of finite-temperature transport coefficients of strongly-coupled gauge theory plasmas. The relevant experiments are being done at RHIC.

    As “onymous” says, those checks are rather crude, at present.

    On the experimental side, the error bars are large, and the interpretation of the results is still open to question (some argue that we don’t even have definitive evidence that we’ve seen the quark gluon plasma). On the theoretical side, one needs to argue that the transport coefficients (or appropriate ratios) are relatively “universal,” and don’t much depend on the details of the gauge theory. (This, at least, can be checked, by studying the same quantity in different AdS/CFT backgrounds.)

    I’m sorry if this state of affairs doesn’t quite fit your schema for the way “good science” is supposed to operate. But it seems to be the best we can do, at present. We really are trying to do the best we can …

  • Paul Valletta

    Can someone clarify the concepts of “emergence” and “evolving”:

    http://en.wikipedia.org/wiki/Emergence

    A specific situation in a specific model would be great, i.e. if some models use the same terminology WRT the same processes?

  • Jack

    Sean: *of course* I wasn’t suggesting that you are uncomfortable with GR! OK, let’s forget the sociology stuff and get back to “emergence”. I take it that when people say that something is “emergent” they mean that it arises in some non-trivial way from something more basic. Color is an emergent property of things around us, in the sense that the color of an object is not a fundamental property. We *understand* the more fundamental things from which it “emerges”. I don’t think AdS/CFT is like this [yet]. Sure, it has given us a tremendously powerful *alternative way* of thinking about gravity [in certain cases]. That’s great. But has it really given us an understanding of some more fundamental something-or-other from which spacetime “emerges” at the Big Bang? If so, what is that? Somebody mentioned dS/CFT, which would be far more relevant if it could be made to work. But even there, you have an *equivalence* between a cosmological spacetime and some weird gauge theory on S^3, which would be great, but where is anything emerging? Nobody thinks that the theory on S^3 is more fundamental than the de Sitter dynamics, do they?

    For the skeptics: all this is on-topic! We are trying to assess the importance of AdS/CFT. Is it a really important technical advance, or is it something even more important, a real advance in our understanding of the basic nature of spacetime? I say yes to the first, not yet to the second. [But I would like to be convinced.] In particular I’m not convinced that giving a *definition* of something allows us to claim that it is emergent. It’s great, it’s good, Juan well deserves his 4000 cites. But emergent spacetime? Where?

  • A former student

    Joe Fitzsimons,
    I am aware of the distinction between computer scientists and programmers; however, the point is that proofs about algorithms and other mathematical statements remain just that until it is possible to implement them on an actual quantum computer. So describing these algorithms as having explicated fundamental physical concepts is rather a stretch, given that it is unclear whether quantum computers can be scaled beyond their present size, due to things like decoherence. In short, as a physicist, it does not help me much if a hypothetical device can solve my problem in polynomial time when such a device does not exist or cannot be built.

    Secondly:
    “Well, essentially, a universal quantum computer can synthesise any Hamiltonian. You don’t need to have the Hamiltonian naturally occurring in your system.”

    If I am not wrong, this is mathematically equivalent to stating that one can write the Hamiltonian in a basis of qubits of the appropriate dimension. However, I am not sure that general quantum systems can be simulated efficiently (e.g., with a polynomial number of quantum gates) by quantum computing algorithms (as you assert earlier). This may well be true, but to my knowledge there is no general demonstration of the fact, and I think the distinction is important. I may be wrong about this, since I don’t have expertise in this area, and would be glad to know if such proofs exist.

  • http://www.mpe.mpg.de/~erwin/ Peter Erwin

    onymous said:
    If you think quantum gravity research should be abandoned because there’s no obvious hope of testing it in the near future, I can’t really argue, but I think it would be a mistake to give up so soon. We didn’t even know about the cosmological constant until quite recently. Maybe we’ll get more surprises.

    I don’t think anyone is suggesting that quantum gravity research should be abandoned; of course it shouldn’t. It’s vital that it continue. But I agree with George Ellis that it doesn’t make sense to claim that exciting conjectures like the holographic principle should be elevated to the status of “most important empirical discoveries about the universe,” when there is as yet no experimental evidence for them — or even clear experimental predictions. Particularly when there are so many other examples of empirical discoveries, such as most of the things listed by Rob Knop.

    (I’ll mention in passing that, given Scott Aaronson’s “past thirty years” cutoff, the discovery of the existence of dark matter — or whatever the hell is causing its effects — would certainly qualify, since really convincing evidence only started showing up in the mid/late-1970s.)

  • http://www.scottaaronson.com Scott Aaronson

    Former student: It’s perfectly conceivable that a fundamental reason will be discovered why quantum computers can never be built. It’s also perfectly conceivable that a physical system will be discovered whose Hamiltonian can’t be simulated by a quantum computer with polynomial overhead.

    I desperately hope that one or the other of these things will happen, since it would be the biggest scientific thrill of my life! The former discovery would imply either that quantum mechanics is false, or else that there’s some fundamental process (maybe a gravitational decoherence process?) layered on top of quantum mechanics that’s unlike anything we can currently imagine. The latter discovery would imply a “failure of reductionism”: the existence of a physical system whose Hamiltonian can’t be decomposed into a reasonable number of “local” (e.g. 2- or 3-particle) interactions. (If a Hamiltonian is a sum of polynomially many local terms, then certainly one can simulate it efficiently on a quantum computer — see here for some pointers to the literature.)

    Thus, either discovery would contribute much more to “explicating fundamental physical concepts” than the mere confirmation of our current belief: namely, that the class of functions that are feasibly computable in physical reality coincides with Bounded-Error Quantum Polynomial-Time.
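
    To unpack the parenthetical claim above with a toy example: if H is a sum of local terms H_j, evolving under each term in short alternating steps reproduces the full evolution. Here is a minimal classical sketch of the first-order Trotter product formula for a hypothetical two-qubit Hamiltonian (the particular terms and coefficients are illustrative assumptions, not taken from the linked literature):

    ```python
    import numpy as np
    from scipy.linalg import expm

    # Pauli matrices
    I2 = np.eye(2, dtype=complex)
    X = np.array([[0, 1], [1, 0]], dtype=complex)
    Z = np.array([[1, 0], [0, -1]], dtype=complex)

    # A toy 2-qubit Hamiltonian built from 1- and 2-particle ("local") terms
    terms = [np.kron(X, I2), np.kron(I2, Z), 0.5 * np.kron(Z, Z)]
    H = sum(terms)

    t, n = 1.0, 100  # total evolution time, number of Trotter steps

    # Exact evolution: U = exp(-iHt)
    U_exact = expm(-1j * H * t)

    # First-order Trotter: U ~ (prod_j exp(-i H_j t/n))^n, error O(t^2 / n)
    step = np.eye(4, dtype=complex)
    for Hj in terms:
        step = step @ expm(-1j * Hj * t / n)
    U_trotter = np.linalg.matrix_power(step, n)

    # Operator-norm error shrinks like 1/n as the step count grows
    print(np.linalg.norm(U_exact - U_trotter, ord=2))
    ```

    Since the number of exponentials is (number of terms) × n, a Hamiltonian made of polynomially many local terms costs only polynomially many gates on a quantum computer, which is the content of the parenthetical.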

  • http://quthoughts.blogspot.com Joe

    If I am not wrong, this is mathematically equivalent to stating that one can write the Hamiltonian in a basis of qubits of the appropriate dimension. However, I am not sure that general quantum systems can be simulated efficiently (e.g., with a polynomial number of quantum gates) by quantum computing algorithms (as you assert earlier)

    Actually I didn’t assert this. Hamiltonians with only two-particle interactions can be simulated efficiently (as can those with 3-, 4-, 5-particle interactions, etc.). An arbitrary Hamiltonian has an exponential number of free parameters, and so needs an exponential number of gates to simulate.
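
    To put rough numbers on that counting argument, here is a quick sketch (the function and the example qubit counts are just illustrative) of how many real parameters an n-qubit Hamiltonian has in the Pauli basis, with and without a locality restriction:

    ```python
    from math import comb

    def pauli_params(n, k=None):
        """Real parameters of an n-qubit Hermitian operator in the Pauli basis.

        With no restriction there are 4**n Pauli strings; if each term may act
        on at most k qubits, we choose which qubits it touches and then one of
        three non-identity Paulis (X, Y, Z) on each.
        """
        if k is None:
            return 4 ** n
        return sum(comb(n, j) * 3 ** j for j in range(k + 1))

    for n in (2, 10, 50):
        print(n, pauli_params(n), pauli_params(n, k=2))
    ```

    For 50 qubits that is about 10^30 parameters in general, versus roughly 1.1 × 10^4 for at-most-2-particle interactions: the gap between needing exponentially many gates and being efficiently simulable.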

  • http://brahms.phy.vanderbilt.edu/~rknop/blog/ Rob Knop

    Scott, re: post #67: if you keep saying stuff like that, I’m gonna be forced to re-evaluate my idea that computer science is a subdiscipline of math, and start wondering if it’s physics instead. And you know that the last thing a good red-blooded American ever wants is to be forced to re-evaluate his preconceptions.

    Of course, “quantum computing” has that quantum in its name already.

    Re: Hamiltonians with arbitrary numbers of free parameters: is that really something realistic to worry about? In math and physics, when we get to large numbers of particles we go to continuum treatments. Thus calculus; thus field theory.

    -Rob

  • W. Pauli

    Yeah, who cares about neutrinos? What are they good for anyway? Higher topoi are the real thing.

  • http://countiblis.blogspot.com Count Iblis

    Richard,

    We are in fact part of this universe. The question is why. I mean, the brain is a formally describable system, and therefore defines (in the Tegmark ensemble) a universe in its own right.

    This is why “computer science” is more fundamental than physics. In physics you just postulate a universe and try to find the fundamental laws. If, on the other hand, you assume that an ensemble of all possible worlds exists, then many more questions can be raised that don’t make sense in traditional physics.

  • W. Pauli

    This is why “computer science” is more fundamental than physics.

    Not to mention the dotcom revolution…

  • http://countiblis.blogspot.com Count Iblis

    Pauli,

    The dotcom revolution is nothing compared to what is coming :)

  • A former student

    Scott,
    Thanks for the link. I agree with you that either situation would be tremendously exciting and would have fundamental physical implications. Personally, I am hoping for the first; I rather like the idea of decoherence as a limiting process. But neither falls within your 30-year timeline :) . BTW, I am not sure that limitations due to decoherence would imply that quantum mechanics is false, or that there is a fundamental source of decoherence. It might be a statistical consequence, as with entropy in classical statistical systems. I think this alone would be of great importance.

  • admin1

    An inventory of what the universe is made of.

    So far there have been 76 comments about the religious statement quoted above, made in the name of physics, which is the religion of our time. I bet that none of the commentators can tell, or care to distinguish, why this is a religious statement. To me it is not surprising that Doctors of Philosophy, who are the direct descendants of pre-Newtonian scholastic Doctors, comment on one of the oldest scholastic/religious concepts of all time. This is also proof of how Doctors are doctoring the internet; but why are others, who are not Doctors of Philosophy, contributing to religious speculation in the name of science? I understand that this is physics, the way reading the mind of god is physics, but it is not science.

  • Pingback: Scott Aaronson on the String Wars | Cosmic Variance
