Amir D. Aczel has been closely associated with CERN and particle physics for a number of years and often consults on statistical issues relating to physics. He is also the author of 18 popular books on mathematics and science, and has been awarded both Guggenheim Foundation and Sloan Foundation fellowships. Many thanks to Steven Weinberg of the University of Texas at Austin and to Barton Zwiebach of MIT for their helpful comments.
Readers of this blog have probably heard the standard fare about how the Higgs boson “gives mass” to everything in the universe, usually accompanied by some kind of analogy—say, a famous person walking through a crowded room, pulled every which way by admiring crowds, whose interactions “make the person massive,” just as the Higgs field does with particles. Now that we finally seem to have pinned down the elusive particle, I want to explain where the Higgs came from and what it does. While our understanding of the particle comes from some complicated math, the formulas actually tell a fascinating story, which I’ll recount in this post. All you need to keep in mind is that in the modern understanding of physics, categories aren’t as starkly separate as you might think: a particle can be represented as a wave or a field, and a force can also be viewed as a particle or a field.
So, a fraction of a second after the Big Bang, the universe had four kinds of “photons” floating around—the usual photon of light, and three other massless particles that “look” and act just like the photon. We label them: W+, W-, and Z. They are bosons, meaning carriers of force, as is the usual photon.
At the Big Bang, the universe also had one, unified, mighty force called the Superforce ruling it. But a tiny fraction of a second before the era I am talking about, the Superforce began to break down, successively “shedding off” part of itself to make the force of gravity, and another part of itself to make the strong nuclear force, which later would be active inside the nuclei of all matter, holding quarks inside protons and neutrons once these composite particles came into being. The two forces, gravity and the strong force—important as they are—do not enter our main story today.
The remnant we have of the Superforce at the time we are talking about, a tiny fraction of a second after the Big Bang, has three forces of nature held together inside it: electricity, magnetism, and something called the weak nuclear force, which later would be responsible for beta decay, a form of radioactivity. You may remember from a physics course that “electromagnetism” unifies electricity and magnetism, as Maxwell taught us over a century ago. But, during the era I am talking about, there are really three linked forces: electro-magnetic-weak; all three are held together as the electroweak force that remained from the Superforce after it had shed off gravity and the strong force.*
Mark Anderson has an M.S. in astrophysics, is a contributor to Discover, and has written about science and history for many other publications. His new book The Day the World Discovered the Sun: An Extraordinary Story of Scientific Adventure and the Race to Track the Transit of Venus has just been published by Da Capo.
Also see Paul Raeburn’s explanation of what investigating Venus can teach us about our own planet.
The 2004 Venus transit at sunrise
On Tuesday afternoon—for those in North, Central and parts of South America—the planet Venus will pass directly in front of the sun for nearly seven hours. This rare spectacle, called a transit of Venus, occurs in pairs eight years apart, and then not again for more than a century. But as fleeting as they are, transits of the past provided invaluable information about our place in the solar system—and, astronomers hope, this transit could help us glean more information about planets elsewhere in the galaxy.
In the 1760s, some of the age’s top explorers and scientists collaborated on dozens of expeditions across the planet to observe the Venus transit. These voyages launched the legendary careers of Captain Cook and the surveyors Mason and Dixon. The expeditions also represented the world’s first big science project—forefather to today’s Large Hadron Collider and Human Genome Project, in which an international community of hundreds or thousands collaborates on a single fundamental scientific problem at the frontier of human knowledge.
In the balance hung two of the greatest scientific and technological puzzles of the 18th century: discovering the Sun’s distance from the Earth and finding one’s longitude at sea.
By now you’ve heard the news-non-news about the Higgs: there are hints of a Higgs—even “strong hints”—but no cigar (and no Nobel Prizes) yet. So what is the story about the missing particle that everyone is so anxiously waiting for?
Back in the summer, there was a particle physics conference in Mumbai, India, at which results of the search for the Higgs in the high-energy part of the spectrum, from 145 GeV (giga-electron-volts) to 466 GeV, were reported, and nothing was found. At the low end of the energy spectrum, at around 120 GeV (a region that had attracted less attention because it had been well within the reach of Fermilab’s now-defunct Tevatron accelerator), there was a slight “bump” in the data, barely breaching the two-sigma (two standard deviations) bounds—something that happens by chance alone about once in twenty times (two-sigma bounds go with 95% probability, so a one-in-twenty event is allowable as a fluke in the data). But since the summer, the data have doubled: twice as many collision events have now been recorded as had been by the time of the Mumbai conference. And, lo and behold: the bump still remained!
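For readers who want to check the arithmetic, the sigma-to-fluke-rate conversion comes straight from the tails of the standard normal distribution; here is a minimal Python sketch (the “once in twenty times” figure is the two-sided tail probability at two sigma):

```python
from math import erfc, sqrt

def two_sided_p(sigma):
    """Chance that a standard normal variate strays more than
    `sigma` standard deviations from the mean, in either direction."""
    return erfc(sigma / sqrt(2))

# A two-sigma excursion happens by chance roughly once in twenty trials:
print(1 / two_sided_p(2))   # about 22 -- the "once in twenty times" fluke rate
```

The same conversion is why particle physicists conventionally wait for five-sigma before claiming a discovery: at that level, chance flukes become exceedingly rare.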
This suggested to the CERN physicists that the original bump was perhaps not a one-in-twenty fluke after all, but something far more significant. Two additional factors came into play as well: the new anomaly in the data at roughly 120 GeV was found by both competing groups at CERN, the CMS detector and the ATLAS detector; and—equally important—when the range of energy is specified in advance, so that no statistical penalty is paid for scanning the entire spectrum in search of a bump, the significance of the finding jumps from two-sigma to three-and-a-half-sigma!
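That jump in significance reflects what statisticians call the look-elsewhere effect: a two-sigma bump somewhere in a wide scan is expected, while the same bump at a pre-specified energy is not. A toy Python calculation makes the point (the choice of 40 independent mass bins is purely illustrative, not the actual analysis):

```python
from math import erfc, sqrt

BINS = 40  # illustrative number of independent mass bins in the scanned range

# Chance of a fluctuation beyond two sigma in one pre-specified bin (two-sided):
p_local = erfc(2 / sqrt(2))            # about 0.046 -- the familiar one-in-twenty

# Chance that at least one of the 40 bins fluctuates that far by chance alone:
p_global = 1 - (1 - p_local) ** BINS   # about 0.84 -- a "bump" somewhere is likely

print(p_local, p_global)
```

So a modest bump found anywhere in the spectrum proves little, but the same bump recurring at the same pre-specified energy in two independent detectors is a different matter.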
In the early 1960s, Princeton physicist Robert Dicke invoked the anthropic principle to explain the age of the universe. He argued that this age must be compatible with the evolution of life, and, for that matter, with sentient, conscious beings who wonder about the age of the universe. In a universe that is too young for life to have evolved, there are no such beings. Over the decades, this argument has been extended to other parameters of the universe we observe around us, and thus to questions such as: Why is the proton 1,836.153 times more massive than the electron? Why are the electric charges of the up and down quarks exactly 2/3 and -1/3, respectively, on a scale in which the electron’s charge is -1? Why is Newton’s gravitational constant, G, equal to 6.67384 × 10⁻¹¹ m³ kg⁻¹ s⁻²? And, the question that has deeply puzzled so many physicists for a century (since its introduction in 1916): Why is the fine structure constant, which measures the strength of electromagnetic interactions, so tantalizingly close to 1/137—the inverse of a prime number? (We now know it to far greater accuracy: about 1/137.035999.) Richard Feynman wrote: “It’s one of the greatest damn mysteries of physics: a magic number that comes to us with no understanding by man. You might say the ‘hand of God’ wrote that number, and ‘we don’t know how he pushed his pencil’” (QED: The Strange Theory of Light and Matter, page 131, Princeton, 1985).
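The famous 1/137 is not free-floating trivia: it is fixed by other measured constants. A few lines of Python reproduce it from the CODATA values of the electron’s charge, the reduced Planck constant, the speed of light, and the permittivity of the vacuum:

```python
from math import pi

# CODATA values, SI units
e    = 1.602176634e-19     # elementary charge, C
hbar = 1.054571817e-34     # reduced Planck constant, J*s
c    = 299792458.0         # speed of light, m/s
eps0 = 8.8541878128e-12    # vacuum permittivity, F/m

# Fine structure constant: alpha = e^2 / (4 * pi * eps0 * hbar * c)
alpha = e**2 / (4 * pi * eps0 * hbar * c)
print(1 / alpha)           # about 137.036
```

The mystery, of course, is not the arithmetic but why the ingredients conspire to give this particular dimensionless number.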
The great British astronomer Arthur Eddington (who in 1919 confirmed Einstein’s prediction that spacetime curves around massive objects by observing starlight grazing the Sun during a total solar eclipse) built entire numerological theories around this number; and there is even a joke that when the Austrian physicist and quantum pioneer Wolfgang Pauli—who was obsessed with the number 137 throughout his life—died (in fact, in a hospital room numbered 137) and went up to heaven, he asked God about it; God handed him a thick packet and said: “Read my preprint, I explain it all here.” But if the constants of nature are simply what they are, nothing more can be said about them, right?
Well, our viewpoint may suddenly change if a startling new finding is confirmed through independent research by other scientists. Recently, astrophysicist John Webb of the University of New South Wales in Sydney, Australia, and colleagues published findings indicating that the fine structure constant may not be a constant after all—it may vary through space or time. By comparing observations of galaxies lying roughly 12 billion light-years away to the north with those at the same distance to the south, the team found variations in the fine structure constant of about 1 part in 100,000. It is not clear whether quantum effects would change drastically when a fundamental constant such as the fine structure constant varies by such minute amounts. But if they do, and the change in the constant is significant, it could mean that there are universes—or distant parts of our own universe—where matter as we know it, and hence life, could not exist. Such a conclusion would greatly amplify the weight of the anthropic principle as an argument for why we observe and measure the physical parameters we do. It is important to note that there is still skepticism about the finding, expressed for example in this post from Sean Carroll last year. But the possibility that the result is real cannot be discounted.
In a previous post, I reported on the baffling new finding that neutrinos appear to travel faster than light. The stuff of science fiction…travel to the past…weird science…Einstein rolling in his grave. (Except that faster-than-light doesn’t necessarily imply the possibility of time travel, and superluminal neutrinos might not violate relativity if they were the hypothetical tachyons.) The result was met with widespread skepticism in the world of physics, and the skepticism still continues. But just as the furor was beginning to die down, the OPERA (Oscillation Project with Emulsion tRacking Apparatus) consortium that runs the neutrino experiment at the Gran Sasso laboratory deep in the Apennine Mountains of central Italy, using neutrinos created 732 kilometers away at CERN, near Geneva, reported an experimental confirmation of their own earth-shattering results.
Originally, the pulses of protons that CERN used to generate the neutrinos through collisions with a stationary target were relatively long, and some critics claimed that these long pulses, lasting 10.5 microseconds, could have introduced uncertainty into the process. The OPERA scientists therefore asked CERN to shorten the pulses, and the new pulses have been only three nanoseconds (billionths of a second) long. The original result was that the neutrinos traveled from CERN to Gran Sasso 60 nanoseconds faster than light would have taken for the same 732 kilometers, with a statistical standard error one-sixth as large (hence the result was statistically significant at “six-sigma,” which is extremely significant: its probability of being a fluke was roughly one in a billion). With pulses this short, the pulse length falls well within the standard error and can no longer contribute to a possible false finding. Significantly, the new results, based on 20 detected neutrinos from the ultrashort pulses, replicated the earlier OPERA finding: the neutrinos still appear to travel faster than light.
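To put the 60-nanosecond figure in perspective, a quick back-of-the-envelope Python calculation (a sketch using the rounded 732-km baseline quoted above) shows just how small the claimed effect is relative to the total flight time:

```python
c = 299_792_458.0    # speed of light, m/s
baseline = 732e3     # CERN -> Gran Sasso distance, m (rounded figure quoted above)
early = 60e-9        # reported early arrival of the neutrinos, s

light_time = baseline / c        # light's travel time: about 2.44 milliseconds
excess = early / light_time      # fractional speed excess (v - c)/c, to first order

print(light_time, excess)
```

The neutrinos would beat light by only a few parts in a hundred thousand, which is why nanosecond-level timing and meter-level geodesy dominate the error budget.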
Fermilab’s Tevatron, the largest particle accelerator in the United States, was shut down on September 30 after a celebrated 28-year career that provided us with some of the greatest discoveries in particle physics. This leaves the European lab CERN to lead the way into future discoveries with its Large Hadron Collider. This landmark in experimental physics is an opportunity to reexamine the theoretical model physicists have constructed and relied on in their search to understand the workings of the universe: the standard model of particle physics. The standard model is a comprehensive theory of nature’s elementary particles and the forces that govern their behavior, constructed over a half-century of intensive work by many theoretical physicists and experimentalists. The model has worked amazingly well, harmoniously combining theory and experiment and producing extremely accurate predictions about the behavior of particles and forces. But could the model now be beginning to show some cracks?
It all started on a wintry evening in 1928. While staring at the flames in the fireplace at St. John’s College, Cambridge, Paul Dirac made one of the most important discoveries in the history of science when he saw how to combine the Schrödinger equation of quantum mechanics with Einstein’s special (but not general) theory of relativity. This achievement launched relativistic quantum field theory—which forms the theoretical basis for the standard model—and produced two immediate consequences: an explanation of the spin of the electron, and Dirac’s stunning prediction of the existence of antimatter (confirmed a few years later with the discovery of the positron).
In the late 1940s, Richard Feynman, Julian Schwinger, and Sin-Itiro Tomonaga, all working independently, completed the first successful quantum field theory, called quantum electrodynamics, which explains the electromagnetic interactions of electrons and photons. It forms the first part of the standard model, handling interactions controlled by the electromagnetic field. The theory’s success inspired other theoretical physicists to construct similar quantum field theories for the weak and strong nuclear forces—together accounting for everything in particle physics except gravity, the subject of Einstein’s general theory of relativity. By the 1970s, the result, the standard model, was ready: a quantum field theory of all elementary particles—leptons and quarks—and their interactions through the action of particles (such as the photon) called bosons.
Once more we are going through the annual ritual of the Nobel Prize announcements. The early morning phone calls, the expressions of shock, the gnashing of teeth in the betting pools. In the midst of the hoopla, I got an annoyed email on Tuesday from an acquaintance of mine, an immunology grad student named Kevin Bonham. Bonham thought there was something wrong with this year’s Prize for Medicine or Physiology. It should have gone to someone else.
Kevin lays out the story in a new post on his blog, We Beasties. The prize, he writes, “was given to a scientist that many feel is undeserving of the honor, while at the same time sullying the legacy of my scientific great-grandfather.” Read the rest of the post to see why he feels this way.
Kevin emailed me while he was writing up the blog post. He wondered if I would be interested in writing about this controversy myself, to give it more prominence. I passed. Even if I weren’t trying to carry several deadlines on my head at once, I would still pass. As I explained to Kevin, I tend to steer clear of Nobel controversies, because I think the prize is, by definition, a lousy way to recognize important science. All the rules about having to be alive to win it, about how there can be no more than three winners–along with the lack of prizes for huge swaths of important scientific disciplines–make these kinds of disputes both inevitable and tedious.
By now you’ve probably heard the widely reported news about the possible discovery of neutrinos that allegedly travel faster than light. The OPERA (Oscillation Project with Emulsion tRacking Apparatus) collaboration of almost 200 scientists working at the Gran Sasso underground laboratory in central Italy has discovered a phenomenon the physicists could simply not explain. For over three years, the scientists have been collecting data on the flight of neutrinos—those mysterious, nearly massless particles that can travel through anything at immense speed—originating in the SPS accelerator at CERN, near Geneva, and traveling underground all the way to Gran Sasso, 731 kilometers (about 450 miles) away. The experiment showed that the 16,000 neutrinos measured at Gran Sasso had traveled there through Earth’s crust at faster than light speed.
Facing a crowded lecture hall at CERN last Friday, Dario Autiero of the OPERA group explained how the researchers went to great lengths to remove any sources of error in their measurements: they measured distances using an extremely high-precision GPS called PolarX, measured time at the two locations to an accuracy of one nanosecond using cesium clocks, and accounted for the tides, Earth’s rotation, variations between day and night and spring and fall, etc. The statistical significance of the finding was six-sigma—meaning that the probability that the experimental result was a random fluke was only one in a billion. For a full hour after the presentation, Dr. Autiero was grilled by a roomful of physicists, and seemed to be able to account for all of the many potential errors brought up by the audience.
But physicists remain very skeptical. They want to see a confirmation of the findings from another experiment in a separate laboratory before they accept such a bizarre finding. After all, this result, if true, would appear to run against the spirit of Einstein’s special theory of relativity. When I showed the Gran Sasso paper to Nobel Laureate Steven Weinberg, he told me: “It looks pretty impressive, but I still think that this will go away.” The sentiment was echoed by almost every physicist I have spoken with since. The results seem mind-boggling. After all, nothing can go faster than light, right?