In the last post I suggested that nobody should come to these parts looking for insight into the kind of work that was just rewarded with the 2012 Nobel Prize in Physics. How wrong I was! True, you shouldn’t look to me for such things, but we were able to borrow an expert from a neighboring blog to help us out. John Preskill is the Richard P. Feynman Professor of Theoretical Physics (not a bad title) here at Caltech. He was a leader in quantum field theory for a long time, before getting interested in quantum information theory and becoming a leader in that. He is part of Caltech’s Institute for Quantum Information and Matter, which has started a fantastic new blog called Quantum Frontiers. This is a cross-post between that blog and ours, but you should certainly be checking out Quantum Frontiers on a regular basis.

When I went to school in the 20th century, “quantum measurements” in the laboratory were typically performed on ensembles of similarly prepared systems. In the 21st century, it is becoming increasingly routine to perform quantum measurements on single atoms, photons, electrons, or phonons. The 2012 Nobel Prize in Physics recognizes two of the heroes who led these revolutionary advances, Serge Haroche and Dave Wineland. Good summaries of their outstanding achievements can be found at the Nobel Prize site, and at Physics Today.

Serge Haroche developed cavity quantum electrodynamics in the microwave regime. Among other impressive accomplishments, his group has performed “nondemolition” measurements of the number of photons stored in a cavity (that is, the photons are counted without being absorbed). The measurement is done by preparing a rubidium atom in a superposition of two quantum states. As the Rb atom traverses the cavity, the energy splitting of these two states is slightly perturbed by the cavity’s quantized electromagnetic field, resulting in a detectable phase shift that depends on the number of photons present. (Caltech’s Jeff Kimble, the Director of IQIM, has pioneered the development of analogous capabilities for optical photons.)
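The logic of such a measurement can be caricatured in a few lines of code. The sketch below is a toy model with invented parameters (the per-photon phase shift `PHI0`, the number of probe atoms), not the actual experimental numbers: each probe atom's detection probability depends on the photon number through the accumulated phase, and Bayesian updating over the candidate photon numbers pins down the count without absorbing a single photon.

```python
import math
import random

# Toy model of a quantum nondemolition photon count (all parameters
# illustrative).  Each probe atom crosses the cavity in a superposition of
# two levels; n photons imprint a phase n*PHI0 on that superposition.
# Detecting the atom in state "e" then happens with probability
# cos^2((n*PHI0 + theta)/2), where theta is a Ramsey phase chosen per atom.

PHI0 = math.pi / 4          # assumed per-photon phase shift
N_MAX = 7                   # candidate photon numbers: 0..N_MAX
TRUE_N = 3                  # the (unknown) photon number to be inferred

def p_detect_e(n, theta):
    return math.cos((n * PHI0 + theta) / 2.0) ** 2

random.seed(0)
prior = [1.0 / (N_MAX + 1)] * (N_MAX + 1)   # flat prior over photon number
for _ in range(200):                        # send 200 probe atoms
    theta = random.uniform(0, 2 * math.pi)  # vary the Ramsey phase
    click_e = random.random() < p_detect_e(TRUE_N, theta)
    # Bayes update: likelihood of this atom's outcome for each candidate n
    post = [p * (p_detect_e(n, theta) if click_e else 1 - p_detect_e(n, theta))
            for n, p in enumerate(prior)]
    total = sum(post)
    prior = [p / total for p in post]

best = max(range(N_MAX + 1), key=lambda n: prior[n])
print(best)   # the posterior concentrates on TRUE_N = 3
```

The photons are never destroyed in this scheme; the atoms only sample the phase they imprint, which is the essence of "nondemolition."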

Dave Wineland developed the technology for trapping individual atomic ions or small groups of ions using electromagnetic fields, and controlling the ions with laser light. His group performed the first demonstration of a coherent quantum logic gate, and they have remained at the forefront of quantum information processing ever since. They pioneered and mastered the trick of manipulating the internal quantum states of the ions by exploiting the coupling between these states and the quantized vibrational modes (phonons) of the trapped ions. They have also used quantum logic to realize the world’s most accurate clock (17 decimal places of accuracy), which exploits the frequency stability of an aluminum ion by transferring its quantum state to a magnesium ion that can be more easily detected with lasers. This clock is sensitive enough to detect the slowing of time due to the gravitational red shift when lowered by 30 cm in the earth’s gravitational field.
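That last number is easy to check: to first order, raising a clock by a height h in a gravitational field changes its rate by the fraction g·h/c². A quick back-of-envelope sketch (rounded constants, not the experiment's actual analysis):

```python
# Fractional frequency shift from raising a clock by h in Earth's gravity,
# to first order: g*h/c^2.
g = 9.81          # m/s^2
h = 0.30          # m, the height change mentioned above
c = 2.998e8       # m/s

shift = g * h / c**2
print(f"{shift:.1e}")   # ~3.3e-17, resolvable by a clock good to 17 digits
```

A part in 3 × 10¹⁷ over a 30 cm lift is exactly the regime where 17 decimal places of accuracy start to matter.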

With his signature mustache and self-effacing manner, Dave Wineland is not only one of the world’s greatest experimental physicists, but also one of the nicest. His brilliant experiments and crystal clear talks have inspired countless physicists working in quantum science, not just ion trappers but also those using a wide variety of other experimental platforms.

Dave has spent most of his career at the National Institute of Standards and Technology (NIST) in Boulder, Colorado. I once heard Dave say that he liked working at NIST because “in 30 years nobody told me what to do.” I don’t know whether that is literally true, but if it is even partially true it may help to explain why Dave joins three other NIST-affiliated physicists who have received Nobel Prizes: Bill Phillips, Eric Cornell, and “Jan” Hall.

I don’t know Serge Haroche very well, but I once spent a delightful evening sitting next to him at dinner in an excellent French restaurant in Leiden. The occasion, almost exactly 10 years ago, was a Symposium to celebrate the 100th anniversary of H. A. Lorentz’s Nobel Prize in Physics, and the dinner guests (there were about 20 of us) included the head of the Royal Dutch Academy of Sciences and the Rector Magnificus of the University of Leiden (which I suppose is what we in the US would call the “President”). I was invited because I happened to be a visiting professor in Leiden at the time, but I had not anticipated such a classy gathering, so had not brought a jacket or tie. When I realized what I had gotten myself into I rushed to a nearby store and picked up a tie and a black V-neck sweater to pull over my Levi’s, but I was underdressed, to put it mildly. Looking back, I don’t understand why I was not more embarrassed.

Anyway, among other things we discussed, Serge filled me in on the responsibilities of a Professor at the Collège de France. It’s a great honor, but also a challenge, because each year one must lecture on fresh material, without repeating any topic from lectures in previous years. In 2001 he had taught quantum computing using my online lecture notes, so I was pleased to hear that I had eased his burden, at least for one year.

On another memorable occasion, Serge and I both appeared in a panel discussion at a conference on quantum computing in 1996, at the Institute for Theoretical Physics (now the KITP) in Santa Barbara. Serge and a colleague had published a pessimistic article in Physics Today: *Quantum computing: dream or nightmare?* In his remarks for the panel, he repeated this theme, warning that overcoming the damaging effects of decoherence (uncontrolled interactions with the environment which make quantum systems behave classically, and which Serge had studied experimentally in great detail) is a far more daunting task than theorists imagined. I struck a more optimistic note, hoping that the (then) recently discovered principles of quantum error correction might be the sword that could slay the dragon. I’m not sure how Haroche feels about this issue now. Wineland, too, has often cautioned that the quest for large-scale quantum computers will be a long and difficult struggle.

This exchange provided me with an opportunity to engage in some cringe-worthy rhetorical excess when I wrote up a version of my remarks. Having (apparently) not learned my lesson, I’ll quote the concluding paragraph, which somehow seems appropriate as we celebrate Haroche’s and Wineland’s well earned prizes:

“Serge Haroche, while a leader at the frontier of experimental quantum computing, continues to deride the vision of practical quantum computers as an impossible dream that can come to fruition only in the wake of some as yet unglimpsed revolution in physics. As everyone at this meeting knows well, building a quantum computer will be an enormous technical challenge, and perhaps the naysayers will be vindicated in the end. Surely, their skepticism is reasonable. But to me, quantum computing is not an impossible dream; it is a possible dream. It is a dream that can be held without flouting the laws of physics as currently understood. It is a dream that can stimulate an enormously productive collaboration of experimenters and theorists seeking deep insights into the nature of decoherence. It is a dream that can be pursued by responsible scientists determined to explore, without prejudice, the potential of a fascinating and powerful new idea. It is a dream that could change the world. So let us dream.”

If you happen to have been following developments in quantum gravity/string theory this year, you know that quite a bit of excitement sprang up over the summer, centered around the idea of “firewalls.” The idea is that an observer falling into a black hole, contrary to everything you would read in a general relativity textbook, really would notice something when they crossed the event horizon. In fact, they would notice that they are being incinerated by a blast of Hawking radiation: the firewall.

This claim is a daring one, which is currently very much up in the air within the community. It stems not from general relativity itself, or even quantum field theory in a curved spacetime, but from attempts to simultaneously satisfy the demands of quantum mechanics and the aspiration that black holes don’t destroy information. Given the controversial (and extremely important) nature of the debate, we’re thrilled to have Joe Polchinski provide a guest post that helps explain what’s going on. Joe has guest-blogged for us before, of course, and he was a co-author with Ahmed Almheiri, Donald Marolf, and James Sully on the paper that started the new controversy. The dust hasn’t yet settled, but this is an important issue that will hopefully teach us something new about quantum gravity.

**Introduction**

Thought experiments have played a large role in figuring out the laws of physics. Even for electromagnetism, where most of the laws were found experimentally, Maxwell needed a thought experiment to complete the equations. For the unification of quantum mechanics and gravity, where the phenomena take place in extreme regimes, they are even more crucial. Addressing this need, Stephen Hawking’s 1976 paper “Breakdown of Predictability in Gravitational Collapse” presented one of the great thought experiments in the history of physics.

When it comes to microwaves from the sky, the primordial cosmic background radiation gets most of the publicity, while everything that originates nearby is lumped into the category of “foregrounds.” But those foregrounds are interesting in their own right; they tell us about important objects in the universe, like our own galaxy. For nearly a decade, astronomers have puzzled over a mysterious hazy glow of microwaves emanating from the central region of the Milky Way. More recently, gamma-ray observations have revealed a related set of structures known as “Fermi Bubbles.” We’re very happy to host this guest post by Douglas Finkbeiner from Harvard, who has played a crucial role in unraveling the mystery.

**Planck, Gamma-ray Bubbles, and the Microwave Haze**

“Error often is to be preferred to indecision” — *Aaron Burr, Jr.*

Among the many quotes that greet a visitor to the Frist Campus Center at Princeton University, this one is perhaps the most jarring. These are bold words from the third Vice President of the United States, the man who shot Alexander Hamilton in a duel. Yet they were on my mind as a postdoc in 2003 as I considered whether to publish a controversial claim: that the microwave excess called the “haze” might originate from annihilating dark matter particles. That idea turned out to be wrong, but pursuing it was one of the best decisions of my career.

In 2002, I was studying the microwave emission from tiny, rapidly rotating grains of interstellar dust. This dust spans a range of sizes from microscopic flecks of silicate and graphite, all the way down to hydrocarbon molecules with perhaps 50 atoms. In general these objects are asymmetrical and have an electric dipole, and a rotating dipole emits radiation. Bruce Draine and Alex Lazarian worked through this problem at Princeton in the late 1990s and found that the smallest dust grains can rotate about 20 billion times a second. This means the radiation comes out at about 20 GHz, making them a potential nuisance for observations of the cosmic microwave background. However, by 2003 there was still no convincing detection of this “spinning dust” and many doubted the signal would be strong enough to be observed.
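As a sanity check on those numbers (mine, purely illustrative): a rotating electric dipole radiates at its rotation frequency, so 20 billion rotations per second puts the emission near 20 GHz, i.e. at centimeter wavelengths, right in the bands used for microwave background observations.

```python
# A dipole rotating nu times per second radiates at frequency nu.
c = 2.998e8           # m/s, speed of light
nu = 2.0e10           # Hz, rotation (and hence emission) frequency

wavelength = c / nu
print(f"{wavelength * 100:.1f} cm")   # ~1.5 cm
```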

**The haze**

In February 2003, the Wilkinson Microwave Anisotropy Probe (WMAP) team released their first results.

Everyone always wants to know whether the wave function of quantum mechanics is “a real thing” or whether it’s just a tool we use to calculate the probability of measuring a certain outcome. Here at CV, we even hosted a give-and-take on the issue between instrumentalist Tom Banks and realist David Wallace. In the latter post, I linked to a recent preprint on the issue that proved a very interesting theorem, seemingly boosting the “wave functions are real” side of the debate.

That preprint was submitted to *Nature*, but never made it in (although it did ultimately get published in *Nature Physics*). The story of why such an important result was shunted away from the journal to which it was first submitted (just like Peter Higgs’s paper where he first mentioned the Higgs boson!) is interesting in its own right. Here is that story, as told by Terry Rudolph, an author of the original paper. Terry is a theoretical physicist at Imperial College London, who “will work on anything that has the word ‘quantum’ in front of it.”

————————

There has long been a tension between the academic publishing process, which is slow but which is still the method by which we certify research quality, and the ability to instantaneously make one’s research available on a preprint server such as the arXiv, which carries essentially no such certification whatsoever. It is a curious (though purely empirical) observation that the more theoretical and abstract the field, the more likely it is that the all-important question of priority – when the research is deemed to have been time-stamped, as it were – will be determined by when the paper first appeared on the internet, and not by when it was first submitted to, or accepted by, a journal. There are no rules about this; it’s simply a matter of community acceptance.

At the high-end of academic publishing, where papers are accepted from extremely diverse scientific communities, prestigious journals need to filter by more than simply the technical quality of the research – they also want high impact papers of such broad and general interest that they will capture attention across ranges of scientific endeavour and often the more general public as well. For this reason it is necessary they exercise considerably more editorial discretion in what they publish.

Topics such as hurdling editors, and whether posting one’s paper in preprint form negatively impacts its chances of being accepted at a high-end journal, are therefore grist for the mill of conversation at most conference dinners. In fact the policies at Nature about preprints have evolved considerably over the last 10 years, and officially they now say posting preprints is fine. But is it? And is there more to editorial discretion than the most obvious first hurdle – namely, getting the editor to send the paper to referees at all? If you’re a young scientist without experience of publishing in such journals (I am unfortunately only one of the two!) perhaps the following case study will give you some pause for thought.

Last November my co-authors and I bowed to some pressure from colleagues to put our paper, then titled *The quantum state cannot be interpreted statistically*, on the arXiv. We had already submitted it to *Nature*, because new theorems in the foundations of quantum theory are very rare, and because the quantum state is an object that cuts across physics, chemistry and biology – so it seemed appropriate for a broad readership. Because I had heard stories about the dangers of posting preprints so many times, I wrote the editor to verify that it really was OK. We were told to go ahead, but not to actively participate in or solicit pre-publication promotion or media coverage; discussing the work with our peers, presenting at conferences, and so on was fine.

Based on the preprint *Nature* themselves published a somewhat overhyped pop-sci article shortly thereafter; to no avail I asked the journalist concerned to hold off until the status of the paper was known. We tried to stay out of the ensuing fracas – is discussing your paper on blogs a discussion between your peers or public promotion of the work?

The price of university textbooks (not to mention scholarly journals) is like the weather: everyone complains about it, but nobody does anything about it. My own graduate textbook in GR hovers around $100, but I’d be happier if it were half that price or less. But the real scam is not with niche-market graduate textbooks, which move small volumes and therefore have at least some justification for their prices (and which often serve as useful references for years down the road) — it’s with the large-volume introductory textbooks that students are forced to buy.

But that might be about to change. We’re very happy to have Marc Sher, a particle theorist at William and Mary, explain an interesting new initiative that hopes to provide a much lower-cost alternative to the mainstream publishers.

(**Update:** I changed the title from “Open Textbook” to “Nonprofit Textbook,” since “Open” has certain technical connotations that might not apply here. The confusion is mine, not Marc’s.)

——————————————————

**The textbook publishers’ price-gouging monopoly may be ending.**

For decades, college students have been exploited by publishers of introductory textbooks. The publishers charge about $200 for a textbook, and then every 3-4 years they make some minor cosmetic changes, reorder some of the problems, add a few new problems, and call it a “new edition”. They then take the previous edition out of print. The purpose, of course, is to destroy the used book market and to continue charging students exorbitant amounts of money.

The Gates and Hewlett Foundations have apparently decided to help provide an alternative to this monopoly. The course I teach is “Physics for Life-Scientists”, which typically uses algebra-based textbooks, often entitled “College Physics.” For much of the late 1990s, I used a book by Peter Urone. It was an excellent book with many biological applications. Unfortunately, after the second edition, it went out of print. Urone obtained the rights to the textbook from the publisher and has given it to a nonprofit group called OpenStax College, which, working with collaborators across the country, has significantly revised the work and produced a third edition. They have just begun putting this edition online (ePub for mobile and PDF), **completely free of charge**. The entire 1200-page book will be online within a month. People can access it without charge, or the company will print it for the cost of printing (approximately $40/book). Several online homework companies, such as Sapling Learning and WebAssign, will include this book in their coverage.

OpenStax College Physics’ textbook is terrific, and with this free book available online, there will be enormous pressure on faculty to use it rather than a $200 textbook. OpenStax College plans to produce many other introductory textbooks, including sociology and biology textbooks. As a nonprofit, they are sustained by philanthropy, partnerships, and print sales, though the price for the print book is also very low.

Many of the details are at a website that has been set up at http://openstaxcollege.org/, and the book can be downloaded at http://openstaxcollege.org/textbooks/college-physics/download?type=pdf. As of the end of last week, 11 of the first 16 chapters had been uploaded, and the rest will follow shortly. If you teach an algebra-based physics course, please look at this textbook; it isn’t too late to use it for the fall semester. An instructor can just give the students the URL in the syllabus. If you don’t teach such a course, please show this announcement to someone who does. Of course, students will find out about the book as well, and will certainly inform their instructors. The monopoly may be ending, and students could save billions of dollars. For decades, the outrageous practices of textbook publishers have not been challenged by serious competition. This is serious competition. OpenStax College as a nonprofit and foundation supported entity does not have a sales force, so word of mouth is the way to go: Tell everyone!

*Perhaps best known in the field of particle physics as the co-author of the Higgs Hunter’s Guide, Jack Gunion has been in the theoretical trenches of the search for the Higgs boson for several decades now. He is a senior professor and leader of the theoretical particle physics group at UC Davis, where he has been a member of the faculty for over 25 years. Here is a guest post from him on today’s big news from CERN.*

Tuesday December 13 has been a very exciting day for particle physics. The ATLAS and CMS experiments at the Large Hadron Collider (LHC) announced today that they are both seeing hints of a Higgs boson with properties that are close to those expected for the Standard Model (SM) Higgs boson as originally proposed by Peter Higgs and others. While the “significance” of the signals has not yet reached “discovery level” (5 sigma in technical language), the two experiments both see signals that exceed 2 sigma, so that there is less than a 5% chance that they are simply statistical fluctuations. Most persuasively, the signals in the channels with excellent mass determination (the photon-photon final decay state and the 4-lepton final state) are all consistent with a Higgs boson mass of around 125 GeV **in both experiments**. This coincidence in mass between two totally independent experiments (as well as independent final states) is persuasive evidence that the photon-photon and 4-lepton excesses seen near 125 GeV are not mere statistical fluctuations.
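For readers wondering how "sigma" translates into a chance of a statistical fluctuation: for a one-sided Gaussian tail (a simplification of the experiments' actual statistical treatment), the conversion is a one-liner.

```python
import math

# One-sided tail probability of a normal distribution: the chance that a
# pure statistical fluctuation mimics a signal of the given significance.
def fluctuation_probability(n_sigma):
    return 0.5 * math.erfc(n_sigma / math.sqrt(2))

print(f"2 sigma: {fluctuation_probability(2):.3f}")   # ~0.023, under 5%
print(f"5 sigma: {fluctuation_probability(5):.1e}")   # ~2.9e-07, "discovery"
```

The jump from roughly 1-in-40 at 2 sigma to about 1-in-3.5-million at 5 sigma is why the field sets the discovery bar so high.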

Observation of the Higgs with approximately the SM-like rate suggests that to first approximation the Higgs is being produced as expected in the SM and that it also decays as predicted in the SM. Many theorists, including myself, have suggested that a Higgs might be produced as in the SM but might have extra decays that would have decreased the photon-photon and 4-lepton decay frequencies to an unobservable level, making the Higgs boson much harder to detect at the LHC. The level of the observed excesses argues against such extra decays being very important. The photon-photon and 4-lepton detection modes were originally proposed and shown to be viable for a SM-like Higgs boson by myself and collaborators (in particular, Gordy Kane and Jose Wudka) way back in 1986-1987. It has taken a long time (25 years) for the technology and funding to reach the point where these detection modes could be examined. I often joked that I was personally responsible for forcing each of the LHC collaborations to spend the 30 million dollars or so needed to build a photon detector with the energy resolution required. Fortunately, it seems that the money was well-spent and the ATLAS and CMS detectors both found ways to build the needed detectors, a real triumph of international collaboration and technical expertise. Also key is the very successful operation of the LHC that has produced the enormously large number of collision events needed to dig out the Higgs signal from uninteresting ‘background’ events. Until this summer produced the first very weak signs of the Higgs, I was beginning to wonder if the Higgs would be discovered during my lifetime. Fortunately, simplicity (i.e. a very conventional SM-like Higgs boson) seems to have prevailed and ended my wait.

Perhaps you’ve heard of the Higgs boson. Perhaps you’ve heard the phrase “desperately seeking” in this context. We need it, but so far we can’t find it. This all might change soon — there are seminars scheduled at CERN by both of the big LHC collaborations, to update us on their progress in looking for the Higgs, and there are rumors they might even bring us good news. You know what they say about rumors: sometimes they’re true, and sometimes they’re false.

So we’re very happy to welcome a guest post by Matt Strassler, who is an expert particle theorist, to help explain what’s at stake and where the search for the Higgs might lead. Matt has made numerous important contributions, from phenomenology to string theory, and has recently launched the website Of Particular Significance, aimed at making modern particle physics accessible to a wide audience. Go there for a treasure trove of explanatory articles, growing at an impressive pace.

———————–

After this year’s very successful run of the Large Hadron Collider (LHC), the world’s most powerful particle accelerator, a sense of great excitement is beginning to pervade the high-energy particle physics community. The search for the Higgs particle… or particles… or whatever appears in its place… has entered a crucial stage.

We’re now deep into Phase 1 of this search, in which the LHC experiments ATLAS and CMS are looking for the **simplest possible** Higgs particle. This unadorned version of the Higgs particle is usually called the Standard Model Higgs, or “SM Higgs” for short. The end of Phase 1 looks to be at most a year away, and possibly much sooner. Within that time, either the SM Higgs will show up, or it will be ruled out once and for all, forcing an experimental search for more exotic types of Higgs particles. Either way, it’s a turning point in the history of our efforts to understand nature’s elementary laws.

This moment has been a long time coming. I’ve been working as a scientist for over twenty years, and for a third decade before that I was reading layperson’s articles about particle physics, and attending public lectures by my predecessors. Even then, the Higgs particle was a profound mystery. Within the Standard Model (the equations used at the LHC to describe all the particles and forces of nature we know about so far, along with the SM Higgs field and particle) it stood out as a bit different, a bit *ad hoc*, something not quite like the others. It has always been widely suspected that the full story might be more complicated. Already in the 1970s and 1980s there were speculative variants of the Standard Model’s equations containing several types of Higgs particles, and other versions with a more complicated Higgs field and **no** Higgs particle — with the key role of the Higgs particle being played by other new particles and forces.

But everyone also knew this: you could not simply take the equations of the Standard Model, strip the Higgs particle out, and put nothing back in its place. The resulting equations would not form a complete theory; they would be self-inconsistent.

The question of the day seems to be, “Is the wave function real/physical, or is it merely a way to calculate probabilities?” This issue plays a big role in Tom Banks’s guest post (he’s on the “useful but not real” side), and there is an interesting new paper by Pusey, Barrett, and Rudolph that claims to demonstrate that you *can’t* simply treat the quantum state as a probability calculator. I haven’t gone through the paper yet, but it’s getting positive reviews. I’m a “realist” myself, as I think the best definition of “real” is “plays a crucial role in a successful model of reality,” and the quantum wave function certainly qualifies.

To help understand the lay of the land, we’re very happy to host this guest post by David Wallace, a philosopher of science at Oxford. David has been one of the leaders in trying to make sense of the many-worlds interpretation of quantum mechanics, in particular the knotty problem of how to get the Born rule (“the wave function squared is the probability”) out of this formalism. He was also a participant at our recent time conference, and the co-star of one of the videos I posted. He’s a very clear writer, and I think interested parties will get a lot out of reading this.

———————————-

**Why the quantum state isn’t (straightforwardly) probabilistic**

In quantum mechanics, we routinely talk about so-called “superposition states” – both at the microscopic level (“the state of the electron is a superposition of spin-up and spin-down”) and, at least in foundations of physics, at the macroscopic level (“the state of Schrodinger’s cat is a superposition of alive and dead”). Rather a large fraction of the “problem of measurement” is the problem of making sense of these superposition states, and there are basically two views. On the first (“state as physical”), the state of a physical system tells us what that system is actually, physically, like, and from that point of view, Schrodinger’s cat is seriously weird. What does it even mean to say that the cat is both alive and dead? And, if cats can be alive and dead at the same time, how come when we look at them we only see definitely-alive cats or definitely-dead cats? We can try to answer the second question by invoking some mysterious new dynamical process – a “collapse of the wave function” whereby the act of looking at half-alive, half-dead cats magically causes them to jump into alive-cat or dead-cat states – but a physical process which depends for its action on “observations”, “measurements”, even “consciousness”, doesn’t seem scientifically reputable. So people who accept the “state-as-physical” view are generally led either to try to make sense of quantum theory without collapses (that leads you to something like Everett’s many-worlds theory), or to modify or augment quantum theory so as to replace it with something scientifically less problematic.

On the second view, (“state as probability”), Schrodinger’s cat is totally unmysterious. When we say “the state of the cat is half alive, half dead”, on this view we just mean “it has a 50% probability of being alive and a 50% probability of being dead”. And the so-called collapse of the wavefunction just corresponds to us looking and finding out which it is. From this point of view, to say that the cat is in a superposition of alive and dead is no more mysterious than to say that Sean is 50% likely to be in his office and 50% likely to be at a conference.
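One way to make the contrast between the two views concrete: a superposition and a genuine 50/50 mixture give identical statistics in one measurement basis but different statistics in another. A minimal sketch, with a single qubit standing in for the cat (purely illustrative, real amplitudes for simplicity):

```python
import math

# Compare a superposition (|0> + |1>)/sqrt(2) with a classical 50/50
# mixture of |0> and |1>.  In the "alive or dead?" (Z) basis they are
# indistinguishable; in the interference (X) basis they are not.

s = 1 / math.sqrt(2)
plus = [s, s]                       # superposition amplitudes

def density(psi):                   # |psi><psi| as a 2x2 real matrix
    return [[psi[i] * psi[j] for j in range(2)] for i in range(2)]

rho_super = density(plus)
rho_mix = [[0.5, 0.0], [0.0, 0.5]]  # 50% |0><0| + 50% |1><1|

def prob(rho, phi):                 # outcome probability <phi|rho|phi>
    return sum(phi[i] * rho[i][j] * phi[j]
               for i in range(2) for j in range(2))

for name, rho in [("superposition", rho_super), ("mixture", rho_mix)]:
    pz = prob(rho, [1.0, 0.0])      # Z basis: both give 0.50
    px = prob(rho, plus)            # X basis: 1.00 vs 0.50
    print(f"{name}: Z-basis {pz:.2f}, X-basis {px:.2f}")
```

If the state were merely a probability distribution, the two rows would agree everywhere; the X-basis disagreement is the interference that makes the "state as probability" reading hard to sustain.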

Now, to be sure, probability is a bit philosophically mysterious.

The lure of blogging is strong. Having guest-posted about problems with eternal inflation, Tom Banks couldn’t resist coming back for more punishment. Here he tackles a venerable problem: the interpretation of quantum mechanics. Tom argues that the measurement problem in QM becomes a lot easier to understand once we appreciate that even classical mechanics allows for non-commuting observables. In that sense, quantum mechanics is “inevitable”; it’s actually classical physics that is somewhat unusual. If we just take QM seriously as a theory that predicts the probability of different measurement outcomes, all is well.

Tom’s last post was “technical” in the sense that it dug deeply into speculative ideas at the cutting edge of research. This one is technical in a different sense: the concepts are presented at a level that second-year undergraduate physics majors should have no trouble following, but there are explicit equations that might make it rough going for anyone without at least that much background. The translation from LaTeX to WordPress is a bit kludgy; here is a more elegant-looking pdf version if you’d prefer to read that.

—————————————-

Rabbi Eliezer ben Yaakov of Nahariya said in the 6th century, “He who has not said three things to his students, has not conveyed the true essence of quantum mechanics. And these are Probability, Intrinsic Probability, and Peculiar Probability”.

Probability first entered the teachings of men through the work of that dissolute gambler Pascal, who was willing to make a bet on his salvation. It was a way of quantifying our risk of uncertainty. Implicit in Pascal’s thinking, and in that of all who came after him, was the idea that there was a certainty, even a predictability, but that we fallible humans may not always have enough data to make the correct predictions. This implicit assumption is completely unnecessary, and the mathematical theory of probability makes use of it only through one crucial assumption, which turns out to be wrong in principle but right in practice for many actual events in the real world.

For simplicity, and to avoid too much math, assume that there are only a finite number of things that one can measure. List the possible measurements as a sequence

$latex A = \left( \begin{array}{ccc} a_1 & \ldots & a_N \end{array} \right). $

The $latex a_i$ are the quantities being measured, and each can take a finite number of values. A *probability distribution* then assigns a number P(A) between zero and one to each possible outcome, and these numbers have to add up to one. The so-called *frequentist* interpretation of these numbers is that if we did the same measurement a large number of times, then the fraction of times, or frequency, with which we’d find a particular result would approach the probability of that result in the limit of an infinite number of trials. This is mathematically rigorous, but only a fantasy in the real world, where we have no idea whether we have an infinite amount of time to do the experiments. The other interpretation, often called Bayesian, is that probability gives a best guess at what the answer will be in any given trial. It tells you how to bet. This is how the concept is used by most working scientists. You do a few experiments, see how the finite distribution of results compares to the probabilities, and then assign a confidence level to the conclusion that a particular theory of the data is correct. Even in flipping a completely fair coin, it’s possible to get a million heads in a row. If that happens, you’re pretty sure the coin is weighted, but you can’t know for sure.
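As an aside, the frequentist picture is easy to play with numerically. Here is a minimal sketch (the function name `head_frequency` and the parameter choices are mine, purely for illustration) that flips a simulated coin and watches the observed frequency wander toward, but never provably reach, the underlying probability:

```python
import random

def head_frequency(n_flips, p_heads=0.5, rng=random):
    """Fraction of heads observed in n_flips tosses of a coin
    that lands heads with probability p_heads."""
    heads = sum(rng.random() < p_heads for _ in range(n_flips))
    return heads / n_flips

# The frequency approaches the probability only in the limit of
# infinitely many trials; any finite run can deviate, and even a
# fair coin can in principle produce a long streak of heads.
random.seed(0)
for n in (10, 1000, 100000):
    print(n, head_frequency(n))
```

No finite run of this loop ever certifies that the coin is fair; it only licenses a bet, which is the Bayesian point.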

Physical theories are often couched in the form of equations for the time evolution of the probability distribution, even in classical physics. One introduces “random forces” into Newton’s equations to “approximate the effect of the deterministic motion of parts of the system we don’t observe”. The classic example is the Brownian motion of particles we see under the microscope, where we think of the random forces in the equations as coming from collisions with the atoms of the fluid in which the particles are suspended. However, there’s no *a priori* reason why such equations couldn’t be the fundamental laws of nature. Determinism is a philosophical stance, a hypothesis about the way the world works, which has to be subjected to experiment just like anything else. Anyone who’s listened to a Geiger counter will recognize that the microscopic process of decay of radioactive nuclei doesn’t seem very deterministic.
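That kind of stochastic evolution law can be sketched directly. The toy integrator below (the name `brownian_path` and all parameter values are illustrative assumptions, not taken from the post) treats the “random force” in an overdamped Langevin equation as a Gaussian kick at each time step, standing in for the unobserved collisions:

```python
import math
import random

def brownian_path(n_steps, dt=1e-3, gamma=1.0, kT=1.0, rng=random):
    """Overdamped Langevin dynamics: dx = sqrt(2 D dt) * xi, where
    D = kT / gamma is the diffusion constant and xi is a standard
    Gaussian. The noise models collisions with unobserved fluid atoms."""
    sigma = math.sqrt(2.0 * kT * dt / gamma)
    x, path = 0.0, [0.0]
    for _ in range(n_steps):
        x += sigma * rng.gauss(0.0, 1.0)
        path.append(x)
    return path

# The statistical signature of Brownian motion: averaged over many
# such paths, the mean squared displacement grows linearly in time,
# <x^2> = 2 D t.
random.seed(1)
endpoint = brownian_path(1000)[-1]
```

Whether the Gaussian kicks are an approximation to hidden deterministic dynamics or a fundamental law is exactly the question the paragraph above leaves open.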

Following the guest post from Tom Banks on challenges to eternal inflation, we’re happy to post a follow-up to this discussion by Don Page. Don was a graduate student of Stephen Hawking’s, and is now a professor at the University of Alberta. We have even collaborated in the past, but don’t hold that against him.

Don’s reply focuses less on details of eternal inflation and more on the general issue of how we should think about quantum gravity in a cosmological context, especially when it comes to counting the number of states. Don is (as he mentions below) an Evangelical Christian, but by no means a Young Earth Creationist!

Same rules apply as before: this is a technical discussion, which you are welcome to skip if it’s not your cup of tea.

———————-

I tend to agree with Tom’s point that “it is extremely plausible, given the Bekenstein-Hawking entropy formula for black holes, that the quantum theory of a space-time, which is dS in both the remote past and remote future, has a finite dimensional Hilbert space,” at least for four-dimensional spacetimes (excluding issues raised by Raphael Bousso, Oliver DeWolfe, and Robert Myers for higher dimensions in Unbounded entropy in space-times with positive cosmological constant) if the cosmological constant has a fixed finite value, or if there are a finite number of possible values that are all positive. The “conceptual error … that de Sitter (dS) space is a system with an ever increasing number of quantum degrees of freedom” seems to me to arise from considering perturbations of de Sitter when it is large (on a large compact Cauchy surface) that would evolve to a big bang or big crunch when the Cauchy surface gets small and hence would prevent the spacetime from having both a remote past and a remote future. As Tom nicely puts it, “In the remote past or future we can look at small amplitude wave packets. However, as we approach the neck of dS space, the wave packets are pushed together. If we put too much information into the space in the remote past, then the packets will collide and form a black hole whose horizon is larger than the neck. The actual solution is singular and does not resemble dS space in the future.”

So it seems to me that, for fixed positive cosmological constant, we can have an arbitrarily large number of quantum states if we allow big bangs or big crunches, but if we restrict to nonsingular spacetimes that expand forever in both the past and future, then the number of states may be limited by the value of the cosmological constant.
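The estimate behind this finiteness can be made explicit. As a rough sketch using the standard Gibbons-Hawking formulas (in four dimensions, units with $latex \hbar = c = 1$), a fixed cosmological constant $latex \Lambda > 0$ determines the de Sitter horizon radius, hence the horizon area and entropy, and hence a finite effective Hilbert-space dimension:

$latex \ell = \sqrt{3/\Lambda}, \qquad A = 4\pi \ell^2 = \frac{12\pi}{\Lambda}, \qquad S = \frac{A}{4G} = \frac{3\pi}{\Lambda G}, \qquad \dim \mathcal{H} \sim e^{S}. $

A smaller cosmological constant thus means a larger, but still finite, number of states for the nonsingular spacetimes described above.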

This reminds me of the 1995 paper by Gary Horowitz and Robert Myers, The value of singularities, which argued that the timelike naked singularity of the negative-mass Schwarzschild solution must be excluded in order to eliminate states that would lead to energy unbounded below, and to instabilities from the presumably possible production (conserving energy) of arbitrarily many combinations of positive and negative energy. Perhaps in a similar way, big bang and big crunch singularities should be excluded, as they too would seem to allow infinitely many states with positive cosmological constant.

Now presumably we would want quantum gravity states to include the formation and evaporation of black holes (or of what phenomenologically appear similar to black holes, whether or not they actually have the causal structure of classical black holes), which in a classical approximation have singularities inside them, so presumably such “singularities” should be allowed, even if timelike naked singularities and, I would suggest, big bang and big crunch singularities should be excluded.
