Alex Stone is the author of Fooling Houdini: Magicians, Mentalists, Math Geeks and the Hidden Powers of the Mind. His writing has appeared in DISCOVER, Harper’s, Science, The New York Times, and The Wall Street Journal.
There was a time when people thought of playing cards as cosmic instruments. Fortunes were told, fortunes were lost, and the secrets of the universe unveiled themselves at the turn of a card. These days we know better. And yet, a look at the mathematics of card shuffling reveals some startling insights.
Consider, for instance, the perfect, or “faro” shuffle—whereby the cards are divided exactly in half (top and bottom) and then interleaved so that they alternate exactly. Most people think shuffling tends to mix up a deck of cards, and usually that’s true, because a typical shuffle is sloppy. But a perfect shuffle isn’t random at all. Eight consecutive perfect shuffles will bring a 52-card deck back to its original order, with every card in the pack having cycled through a series of predictable permutations back to its starting place. This holds true for any deck, regardless of its size, although eight isn’t always the magic number. If you have 25 cards, it takes 20 shuffles, whereas for 32 cards it only takes 5; for 53 cards, 52 shuffles are needed. You can derive a formula for the relationship between the number of cards in the deck and the number of faro shuffles in one full cycle.
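One way to check these cycle lengths is by direct simulation. The sketch below (an illustration, not code from the article; the function names are my own) performs repeated perfect "out-shuffles", in which the top card stays on top, on an even-sized deck and counts how many shuffles restore the original order:

```python
def out_shuffle(deck):
    # Split the deck exactly in half, then interleave so the cards
    # alternate perfectly; the top card stays on top (an "out-shuffle").
    half = len(deck) // 2
    top, bottom = deck[:half], deck[half:]
    mixed = []
    for t, b in zip(top, bottom):
        mixed += [t, b]
    return mixed

def shuffles_to_restore(n):
    # Count perfect shuffles needed to return an n-card deck
    # (n even) to its starting order.
    original = list(range(n))
    deck = out_shuffle(original)
    count = 1
    while deck != original:
        deck = out_shuffle(deck)
        count += 1
    return count

print(shuffles_to_restore(52))  # 8, as the text states
print(shuffles_to_restore(32))  # 5, matching the 32-card figure
```

For an even deck of N cards shuffled this way, the count works out to the multiplicative order of 2 modulo N - 1, which is one form of the formula the paragraph alludes to.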
Amir D. Aczel has been closely associated with CERN and particle physics for a number of years and often consults on statistical issues relating to physics. He is also the author of 18 popular books on mathematics and science.
By now you’ve heard the news-non-news about the Higgs: there are hints of a Higgs—even “strong hints”—but no cigar (and no Nobel Prizes) yet. So what is the story about the missing particle that everyone is so anxiously waiting for?
Back in the summer, there was a particle physics conference in Mumbai, India, at which results of the search for the Higgs in the high-energy part of the spectrum, from 145 GeV (giga electron volts) to 466 GeV, were reported, and nothing was found. At the low end of the energy spectrum, at around 120 GeV (a region of energy that attracted less attention because it had been well within the reach of Fermilab’s now-defunct Tevatron accelerator), there was a slight “bump” in the data, barely breaching the two-sigma (two standard deviations) bounds—something that happens by chance alone about once in twenty times (two-sigma bounds correspond to 95% probability, so a one-in-twenty event is allowable as a fluke in the data). But since the summer the data have doubled: twice as many collision events have now been recorded as had been by the time of the Mumbai conference. And, lo and behold: the bump remained!
This gave the CERN physicists the idea that the original bump was not a one-in-twenty fluke after all, but perhaps something far more significant. Two additional factors came into play as well: the new anomaly in the data at roughly 120 GeV was found by both competing groups at CERN, the CMS detector and the ATLAS detector; and—equally important—when the range of energy is pre-specified, the statistical significance of the finding jumps from two-sigma to three-and-a-half-sigma!
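The sigma thresholds quoted here translate directly into probabilities under a normal (Gaussian) distribution. As a quick sketch (my own illustration, assuming the usual two-sided convention), Python's standard library is enough to compute the corresponding fluke probabilities:

```python
from math import erf, sqrt

def two_sided_tail(sigma):
    # Probability that a standard normal variate falls more than
    # `sigma` standard deviations from the mean, in either direction.
    return 1 - erf(sigma / sqrt(2))

print(round(two_sided_tail(2), 4))    # 0.0455, roughly 1 in 20
print(round(two_sided_tail(3.5), 6))  # 0.000465, roughly 1 in 2000
```

This is why pre-specifying the energy range matters: a one-in-twenty wiggle can easily appear somewhere by chance when many energy bins are scanned, but a three-and-a-half-sigma excess at a predicted location is a far rarer accident.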
Fermilab’s Tevatron, the largest particle accelerator in the United States, was shut down on September 30 after a celebrated career of 28 years that provided us with some of the greatest discoveries in particle physics. This leaves the European lab CERN to lead the way into future discoveries with its Large Hadron Collider. This landmark in experimental physics is an opportunity to reexamine the theoretical model physicists have constructed and relied on in their search to understand the workings of the universe: the standard model of particle physics. The standard model is a comprehensive theory of nature’s elementary particles and the forces that govern their behavior, built up over half a century of intensive work by many theoretical physicists and experimentalists. The model has worked amazingly well, harmoniously combining theory and experiment and producing extremely accurate predictions about the behavior of particles and forces. But could the model now be beginning to show some cracks?
It all started on a wintry evening in 1928. While staring at the flames in the fireplace at St. John’s College, Cambridge, Paul Dirac made one of the most important discoveries in the history of science when he saw how to combine the Schrödinger equation of quantum mechanics with Einstein’s special (but not general) theory of relativity. This achievement launched relativistic quantum field theory—which forms the theoretical basis for the standard model—and produced two immediate consequences: an explanation of the spin of the electron, and Dirac’s stunning prediction of the existence of antimatter (confirmed a few years later with the discovery of the positron).
In the late 1940s, Richard Feynman, Julian Schwinger, and Sin-Itiro Tomonaga, all working independently, presented the first quantum field theory, called quantum electrodynamics, which explained the electromagnetic interactions of electrons and photons. It forms the first part of the standard model by handling interactions that are controlled by the electromagnetic field. The theory’s success inspired other theoretical physicists to construct similar quantum field theories for addressing the actions of the weak and strong nuclear forces—thus together accounting for everything in particle physics except for the action of gravity, the subject of Einstein’s general theory of relativity. By the 1970s, the result, the standard model, was ready: a quantum field theory of all elementary particles—leptons and quarks and their interactions through the actions of particles (such as the photon) called bosons.