New Physics at LHC? An Anomaly in CP Violation

By Sean Carroll | November 14, 2011 3:12 pm

Here in the Era of 3-Sigma Results, we tend to get excited about hints of new physics that eventually end up going away. That’s okay — excitement is cheap, and eventually one of these results is going to stick and end up changing physics in a dramatic way. Remember that “3 sigma” is the minimum standard required for physicists to take a new result at all seriously; if you want to get really excited, you should wait for 5 sigma significance. What we have here is a 3.5 sigma result, indicating CP violation in the decay of D mesons. Not quite as exciting as superluminal neutrinos, but if it holds up it’s big stuff. You can read about it at Résonaances or Quantum Diaries, or look at the talk recently given at the Hadronic Collider Physics Symposium 2011 in Paris. Here’s my attempt at an explanation.

The latest hint of a new result comes from the Large Hadron Collider, in particular the LHCb experiment. Unlike the general-purpose CMS and ATLAS experiments, LHCb is specialized: it looks at the decays of heavy mesons (particles consisting of one quark and one antiquark) to search for CP violation. “C” is for “charge” and “P” is for “parity”; so “CP violation” means you measure something happening with some particles, and then you measure the analogous thing happening when you switch particles with antiparticles and take the mirror image. (Parity reverses directions in space.) We know that CP is a pretty good symmetry in nature, but not a perfect one — Cronin and Fitch won the Nobel Prize in 1980 for discovering CP violation experimentally.

While the existence of CP violation is long established, it remains a target of experimental particle physicists because it’s a great window onto new physics. What we’re generally looking for in these big accelerators are new particles that are just too heavy and short-lived to be easily noticed in our everyday low-energy world. One way to do that is to just make the new particles directly and see them decaying into something. But another way is more indirect — measure the tiny effect of heavy virtual particles on the interactions of known particles. That’s what’s going on here.

More specifically, we’re looking at the decay of D mesons in two different ways, into kaons and pions. If you like thinking in terms of quarks, here are the dramatis personae:

  • D0 meson: charm quark + anti-up quark
  • anti-D0: anti-charm quark + up quark
  • K-: strange quark + anti-up quark
  • K+: anti-strange quark + up quark
  • π-: down quark + anti-up quark
  • π+: anti-down quark + up quark

Let’s look at the D0 meson. What happens is the charm quark (much heavier than the anti-up) decays into three lighter quarks: either up + strange + anti-strange, or up + down + anti-down. If it’s the former, we get a K- and a K+; if it’s the latter, we get a π- and a π+. Here’s one example, where D0 goes to K- and K+.

Of course the anti-D0 can also decay, and the anti-charm will go to either anti-up plus strange plus anti-strange, or anti-up plus down plus anti-down (just the antiparticles of what the D0 could go to). But if you match up the quarks, you see that the decay products are exactly the same as they were in the case of the original D0: either K- and K+, or π- and a π+.
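For the programmatically inclined, the quark bookkeeping above can be sketched in a few lines. This is an illustrative toy (the meson table and the `cp_conjugate` helper are my own, not from any physics library), treating CP at the level of quark content — just the particle/antiparticle swap, since parity doesn’t change the quark labels:

```python
# Quark content of each meson; a leading "~" marks an antiquark.
MESONS = {
    "D0":      {"c", "~u"},
    "anti-D0": {"~c", "u"},
    "K-":      {"s", "~u"},
    "K+":      {"~s", "u"},
    "pi-":     {"d", "~u"},
    "pi+":     {"~d", "u"},
}

def cp_conjugate(meson):
    """Swap each quark for its antiquark and find the matching meson."""
    flipped = {q[1:] if q.startswith("~") else "~" + q for q in MESONS[meson]}
    return next(m for m, content in MESONS.items() if content == flipped)

# CP turns a D0 into an anti-D0...
assert cp_conjugate("D0") == "anti-D0"
# ...but maps the K-K+ and pi-pi+ final states onto themselves, which is
# why the D0 and the anti-D0 share exactly the same decay products.
assert {cp_conjugate("K-"), cp_conjugate("K+")} == {"K-", "K+"}
assert {cp_conjugate("pi-"), cp_conjugate("pi+")} == {"pi-", "pi+"}
```

The point of the last two assertions is exactly the matching-up described above: the final states are their own CP mirror images, so both parents feed the same channels.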

Here’s where the search for CP violation comes in. If you take a D0 meson and “do a CP transformation to it,” you get an anti-D0, and vice-versa. So we can test for CP violation by comparing the rate at which D0’s decay to the rate of anti-D0’s. That’s basically the way Cronin and Fitch discovered CP violation, except that they started with neutral kaons and anti-kaons and watched them decay.

One problem is that the LHC itself doesn’t treat particles and anti-particles equally. It collides protons with protons, not protons with anti-protons. (It’s easier to make protons, so you get a higher luminosity [more events] if you stick with just protons.) So you end up making a lot more D0’s than anti-D0’s. In principle you can correct for that if you understand everything there is to understand about particle physics and your detector, but in practice we don’t. So the LHCb experimentalists did a clever thing: rather than just measuring the decay of D0’s and anti-D0’s into either kaons or pions, they measured them both, and then took the difference. This procedure is meant to cancel out all of the annoying experimental features, leaving only the pristine physics underneath. (If there is a nonzero difference in the CP violation rates between decays into kaons and decays into pions, at least one of those decays must itself violate CP.)
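The logic of that cancellation can be sketched numerically. This is a toy model with made-up numbers (the bias and asymmetry values below are purely illustrative, not LHCb’s): to first order, each raw asymmetry is the true CP asymmetry plus a common instrumental offset, and the offset drops out of the difference.

```python
def raw_asymmetry(physics_asym, common_bias):
    # Measured asymmetry = true CP asymmetry + bias from unequal
    # D0 / anti-D0 production and detection (to first order).
    return physics_asym + common_bias

bias = 0.010        # unknown instrumental offset, same for both channels
a_cp_kk = -0.004    # assumed true asymmetry in D0 -> K- K+
a_cp_pipi = 0.004   # assumed true asymmetry in D0 -> pi- pi+

raw_kk = raw_asymmetry(a_cp_kk, bias)
raw_pipi = raw_asymmetry(a_cp_pipi, bias)

# The bias cancels in the difference, leaving only the physics:
delta_a_cp = raw_kk - raw_pipi
assert abs(delta_a_cp - (a_cp_kk - a_cp_pipi)) < 1e-12
```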

And the answer is: there is a noticeable difference! It’s -0.82%, plus or minus 0.24%, for a total of 3.5 sigma. (82 divided by 24 is about 3.5.) And the prediction from the Standard Model is that we should get almost zero for this quantity — maybe 0.01% or thereabouts.
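Redoing that arithmetic explicitly (a quick sketch using only the standard library):

```python
import math

# The quoted measurement: -0.82% with a 0.24% uncertainty.
delta_a_cp = -0.0082
sigma = 0.0024

significance = abs(delta_a_cp) / sigma   # 82/24 ~ 3.4, "about 3.5"
# Two-sided Gaussian probability of a fluctuation at least this large,
# if the true asymmetry difference were exactly zero:
p_value = math.erfc(significance / math.sqrt(2))
print(f"{significance:.2f} sigma (p = {p_value:.1e})")
```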

So what could be going on? As Jester says, this is a surprising result — there aren’t a lot of models on the market that predict this level of CP violation in D0 decays but not in any of the other experiments we’ve already done. But the general idea, if you wanted to come up with such a model, would be to add new heavy particles that gently interfere with the process by which the charm quark in the above diagram decays into lighter quarks.

If I were to guess, I’d put my money on this result going away. But it stands a fighting chance! If it does hold up, to be honest it would be a bit frustrating — we would know that something new was going on, but not have too much of an idea what exactly it would be. But at least we’d know something about where to look, which is a huge advantage.

Truth in advertising notice: folks who write articles or press releases about CP violation are contractually obligated to say that this will help explain the matter-antimatter asymmetry in the universe. That might be true, or … it might not. My strong feeling is that we should be excited by discovering new particles of nature, and not rely on the crutch of relating everything to cosmology.

  • beagle197

    I was reading it as DO, but probably is D – zero

  • Lemuel Pitkin

    So I’m curious about this 3-sigma thing. In a normal distribution, you get a result that far from the mean in 0.3% of trials, right? But (1) is 0.3% a big or small number, i.e. how many trials are there (per year, say) from which a result could potentially be reported? And (2) how much certainty is there that the underlying data generating process is well-approximated by a normal distribution? I’m sure it’s a better assumption than in the social sciences, but still, isn’t there a possibility that there’s some fat tails that make high-sigma events more common? Or does the very large number of independent observations (something that, again, is just not possible in the social sciences) mean that the normal distribution must be an extremely close approximation?

  • Michael Pierce

    Madame Wu strikes again!

    Few things in nature frustrate me like trying to understand the underpinnings of P and CP violation. I’ve never gotten a better understanding than basic electro-weak phenomenology and a “Nature has some kind of weird version of chirality, that’s just the way it is.” Congratulations on confusing me yet again. sigh… haha!

    I look forward to the day when some clever folks further complicate things by finding CPT violations.

  • AI

    I doubt CP symmetry is really violated in Nature.

    I think what we are seeing is due to CP being locally broken by some asymmetrical background field we don’t yet understand and which is influencing certain decays.

  • Phillip Helbig

    @2: Think of it this way: For every 300 experiments, one will give a completely spurious 3-sigma result. This doesn’t have to be explained any more than it has to be explained why someone wins the lottery every week, even though it is unlikely. :-)

    Imagine an athlete who practices every day and makes a note of “personal best” records. With time, the personal best will get better. However, unless the daily average also increases, this is probably due not to improvement but simply to more trials turning up the occasional outlier.
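The “1 in 300” above is the two-sided Gaussian tail probability at 3 sigma in round numbers; here is a quick stdlib-Python check of that figure and of the 5-sigma one:

```python
import math

def one_in(n_sigma):
    # Two-sided Gaussian tail: probability that pure chance produces a
    # fluctuation of at least n_sigma, expressed as "1 in N" experiments.
    return 1.0 / math.erfc(n_sigma / math.sqrt(2))

print(f"3 sigma: about 1 in {one_in(3):.0f} experiments")     # ~1 in 370
print(f"5 sigma: about 1 in {one_in(5):,.0f} experiments")    # ~1 in 1.7 million
```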

  • James

    Could it not decay to a kaon and pion (indeed, isn’t that more likely, since an ud vertex has a larger amplitude than a us)?

  • Pingback: LHC reveals hints of ‘new physics’ in particle decays « physics4me()

  • Bashir Bomai

    I find particle physics entirely stimulating. However, I’d like to know about the quirkiness of a large model. That model is Saturn. Why does Saturn deflect micro dust away from it? Why can’t you egg heads look at this phenomenon? Could what is happening on this giant be a model? I’ve read the explanation (one side of it); I still believe that the behaviour of this body, its moons and rings, are a large laboratory of matter/anti-matter interactions. The beautiful decay anomaly concepts are ok but a little bit exotic. The answer is out there.

  • Shantanu

    I am still confused why this result is interesting. If this result is confirmed, does this mean that CP violation occurs in strong interactions also?

  • CP

    James @6: Yes, it could indeed. However KPi is not a CP eigenstate and cannot be used to measure A_{CP}. On the other hand, decays of D0 and D0bar to KPi (both the Cabibbo-favoured and DCS final states) are very interesting in their own right and are under active study at LHCb (and have been previously investigated by CDF and others).

  • James

    OK, with regard to my previous comment, I think I was just misinterpreting Sean’s description to imply that only these decays occurred, rather than that only these decays were studied. Thanks.

  • Pingback: Heeft LHCb toch een CP schending waargenomen? [Dutch: “Has LHCb observed CP violation after all?”]()

  • Sean

    Sorry to give the misinterpretation about the decay channels. For future reference, you should always head to the Particle Data Group:

    Clicking on “Charmed Mesons” and reading through a long pdf file, we find that D0 decays to a kaon + a pion about 5% of the time.

    Shantanu, it doesn’t (necessarily) mean there is CP violation in the strong interactions; indeed, it would be very hard to put strong CP violation in this interaction and not have seen it elsewhere. What it means is that weak CP violation (in the CKM matrix) is not enough to account for this result, so we need new physics somehow — probably new interactions altogether.

  • bt

    Time to bust out the LHC Rap song again:

  • Pingback: Ενδείξεις για “νέα Φυσική” στο LHC « physicsgg [Greek: “Hints of ‘new physics’ at the LHC”]()

  • Pingback: A Hint of Physics Beyond the Standard Model: CP Violation Possibly Detected in Charmed Meson Decays « Whiskey…Tango…Foxtrot?()

  • Gizelle Janine

    …Just valid in many ways I guess. I REALLY want to say “I told you so!!!! We needed an improvement in the laws of physics!!!!” to a lot of people, but not yet. 😀

    Great post. Keeeeeeep going!

  • ben-hqet

    Can the SM account for this? According to a paper in Phys. Lett. B 222 (1989) 501 (look at the date, this is called “making a prediction” in science) YES. It says that in the absence of large SU(3) breaking in these decays one should expect order 1 CP asymmetries. The authors then retract a bit and state that “This is of course very unlikely; the preferred explanation … is that SU(3) violating effects are large in this decay.” (This discussion is in the next to last paragraph in that paper.) So now we know, the preferred explanation is half-way: there is some SU(3) breaking and some enhancement of the amplitude that leads to large CP violation (the one the authors call 2F+G).

  • Lemuel Pitkin

    For every 300 experiments, one will give a completely spurious 3-sigma result.

    I understand this, but it’s true only *if* the data-generating process has a normal (Gaussian) distribution. What if the underlying phenomenon followed, say, a power-law distribution instead?

  • Tom Ames

    @19: It’s true regardless of the distribution underlying the process.

    In each experiment you’re estimating the mean of some distribution. Even if this underlying distribution is not normal, the distribution of the means WILL be (for large enough samples).

    See the Wikipedia entry on the Central Limit Theorem.

  • Pingback: Gran sorpresa en el LHC gracias a LHCb: La asimetría CP en el modelo estándar se oculta en las partículas con encanto « Francis (th)E mule Science's News [Spanish: “Big surprise at the LHC thanks to LHCb: the CP asymmetry of the standard model hides in charmed particles”]()

  • Pingback: Ο επιταχυντής LHC δίνει υπόνοιες για μια «νέα φυσικ()

  • Shantanu

    Thanks Sean for the clear explanation.

  • Lemuel Pitkin

    In each experiment you’re estimating the mean of some distribution. Even if this underlying distribution is not normal, the distribution of the means WILL be (for large enough samples).

    Sorry, but this is not true. It’s true for any distribution with a finite variance, but not all natural processes follow such distributions. If the underlying process follows a power-law distribution, for instance, then you will see an excess of high-sigma events no matter how large your sample gets.
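    This point can be checked numerically. Below is a quick sketch using only Python’s standard library, with the Cauchy distribution as the textbook infinite-variance case: means of finite-variance samples concentrate as the CLT promises, while means of Cauchy samples never do.

```python
import math
import random
import statistics

random.seed(42)

def cauchy():
    # Standard Cauchy deviate via the inverse CDF; its variance is
    # undefined, so the central limit theorem does not apply.
    return math.tan(math.pi * (random.random() - 0.5))

def spread_of_means(sampler, n_means=500, sample_size=500):
    # Standard deviation across many sample means of size `sample_size`.
    means = [statistics.fmean(sampler() for _ in range(sample_size))
             for _ in range(n_means)]
    return statistics.stdev(means)

# Uniform(0,1) has finite variance, so its sample means cluster tightly;
# the mean of Cauchy draws is itself Cauchy-distributed and never settles.
uniform_spread = spread_of_means(random.random)
cauchy_spread = spread_of_means(cauchy)
assert cauchy_spread > 10 * uniform_spread
```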

  • Pingback: A Notable Discrepancy at the LHCb Experiment | Of Particular Significance()

  • Pingback: Der Mond-Schatten vor der Kosmischen Strahlung « Skyweek Zwei Punkt Null [German: “The Moon’s shadow in cosmic rays”]()

  • jimthompson

    This comment isn’t about the possible new physics, it’s about the extremely annoying repeating of the “5 sigma to really believe it” myth. Follow that, and very little of astronomy over the past 3 or 4 decades survives! (case to point: there may be NO (or very, very few) extrasolar planet detections that can pass this test). The discovery of the accelrating expansion rate of the universe wouldn’t have passed this test when it was first announced (to great acclaim and damn few doubters) either, as another obvious example. Yes, you particle physicists (and related theorists) may actually need this rule of thumb but many other branches of physics just laugh at it (atmospheric physics comes to mind also). And you have several real live working astronomers who are members of this blog also…..

  • PhysicsDude

    @27 This may be true but the 5 sigma rule is an “industry standard” in particle physics and this will always be a requirement for a discovery. The main reason being that preliminary results can often lead to 3 or 4 sigma measurements that after further study on larger datasets can be washed out to 1 or 2 sigma. Hopefully this won’t be the case here 😉 but the 5 sigma limit is a failsafe. With regard to this application in atmospheric and astronomy: The datasets used in PP are VAST and this is frequently not the case in the other areas you mention. Hence, in the high statistics world of particle physics, a small effect can be “blown up” artificially due to systematic errors and the artificial training of analyses. Implementing the 5 sigma limit on a discovery ensures that, for the most part, the result is due to a physics effect rather than anything else.

  • mfb

    Particle physicists search for deviations from theory predictions in many channels. Just from the number of measurements, you expect some to get a statistical deviation of the order of 3 sigma, without any new physics (or errors in the theory predictions). 5 sigma, however, is safe. That is new physics or an error somewhere, but not a statistical fluctuation.



Cosmic Variance

Random samplings from a universe of ideas.

About Sean Carroll

Sean Carroll is a Senior Research Associate in the Department of Physics at the California Institute of Technology. His research interests include theoretical aspects of cosmology, field theory, and gravitation. His most recent book is The Particle at the End of the Universe, about the Large Hadron Collider and the search for the Higgs boson. Here are some of his favorite blog posts, home page, and email: carroll [at] .

