# Bump Hunting (Part 2)

Was it real?

I stared up at the atrium of Building 40, two things becoming apparent. First, a result with a little bump like this meant there was going to be a lot more work to do. Second, I had better get myself back to Fermilab soon rather than remain at CERN.

I changed my ticket to the next Monday as there was nothing available the next day. This meant I had the weekend at CERN to work. I sent email to my still-asleep colleagues in the US, and got busy.

When you see a spectrum like the one we had, with a few bins showing a higher rate than normal, it could be a number of things. It’s always possible that you forgot some background, that the simulation of the detector and the analysis is not perfect, or that it is simply a statistical fluctuation.

High energy particle collisions are governed by quantum mechanics, and are therefore intrinsically random processes. If you expect to see a certain number of events with certain characteristics after running for some time, you don’t get exactly that number when you run the experiment. If you expect 10, you might observe 8, or 12, or, more rarely, 20. That was the case here: in the spectrum of what we called “visible mass,” a few of the bins had more events than expected. So was this a statistical fluctuation, a foul-up in the simulation, or the beginning of a real signal?
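To put numbers on that kind of fluctuation: counting experiments like this follow the Poisson distribution. A quick sketch (the expectation of 10 events is just the illustrative number from above, not anything from the analysis) shows how often each outcome occurs:

```python
import math

def poisson_pmf(k, mu):
    # Probability of observing exactly k events when mu are expected
    return mu**k * math.exp(-mu) / math.factorial(k)

mu = 10.0
print(f"P(8)   = {poisson_pmf(8, mu):.3f}")   # about 0.11
print(f"P(12)  = {poisson_pmf(12, mu):.3f}")  # about 0.09
# Probability of 20 or more: 1 minus the cumulative sum up to 19
p_ge_20 = 1.0 - sum(poisson_pmf(k, mu) for k in range(20))
print(f"P(>=20) = {p_ge_20:.4f}")             # rare: well under 1%
```

So observing 8 or 12 when you expect 10 is entirely routine, while 20 or more happens only a few times in a thousand tries.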

As the apocryphal student lab report said, “our experiment didn’t have enough data, so we had to use statistics.” In our field we use statistics all the time. We want to make quantitative statements about the things that make the hair on the back of your neck stand up, or perhaps make you yawn. Being quantitative means that we can be as objective as possible, and offer to our colleagues and to the world a rational interpretation of the data. Doing these sorts of analyses is my particular specialty in particle physics. Now it was time to do it here.

As a common language in our field we try to relate all our statistical questions back to the good old “normal” distribution, the familiar bell-shaped Gaussian. A Gaussian is peaked in the middle with long tails on the high and low side. It is quite proper to think of it as an indication of how often you expect some measurement to give you a certain result: most often in the middle, and less often out in the tails. It’s characterized by a “standard deviation” or, since we use the Greek letter sigma to represent that, “1 sigma.” If your measurement is described by a Gaussian, about 2/3 of the time it will lie within plus or minus one sigma of the peak. In 1/6 of the cases, then, it will be more than one sigma on the high side. The more sigma away from the peak, the less likely the result.
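Those fractions fall straight out of the Gaussian error function; here is a quick check (pure textbook math, nothing experiment-specific):

```python
import math

def within_n_sigma(n):
    # Fraction of a Gaussian lying within +/- n standard deviations of the peak
    return math.erf(n / math.sqrt(2))

for n in [1, 2, 3, 5]:
    inside = within_n_sigma(n)
    # One-sided tail: chance of fluctuating more than n sigma on the high side
    high_tail = (1.0 - inside) / 2.0
    print(f"{n} sigma: {inside:.5f} inside, {high_tail:.2e} high-side tail")
```

Running it confirms the numbers in the text: about 68% within one sigma, about 1/6 more than one sigma high, and a high-side tail of only about 3 in 10 million at five sigma.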

In particle physics we see statistical fluctuations all the time, and so to make a discovery of something really new, we have to demand that the probability of a random fluctuation causing the excess is very, very small. Usually we demand five sigma, that is five standard deviations from the peak, to claim “discovery.” With less statistical significance, we hedge and use words like “evidence” or “indication.”

So how significant was this? I quickly fired up my program to fit the spectrum including a Higgs signal, and, sure enough, the data preferred a SUSY Higgs with a mass of about 160 GeV, or about 170 times the mass of the proton. The peak rate was quite consistent with our sensitivity. And, to boot, it appeared that the statistical significance for that mass was about 2.5 standard deviations. Below that level people are not very interested, but at this level it starts to get intriguing.

The problem is that this is a bump hunt, and we didn’t know in advance where the bump might show up, since we don’t know what the Higgs mass is! That means that the real question is this: could a random fluctuation give us a bump like this anywhere in the spectrum, not just at 160 GeV?

This calculation was going to take a while. I needed to run a random simulation that generated lots of possible experimental outcomes with no Higgs signal present, and then fit the spectrum for all possible Higgs masses, and see how often I got a 2.5-sigma result. I set about coding this up.
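A stripped-down sketch of that kind of pseudo-experiment loop: everything here is invented for illustration (a flat 30-bin background of 50 events per bin, and a crude per-bin excess significance standing in for the real mass fit), but the logic is the same: generate background-only spectra, look for the biggest bump anywhere, and count how often it reaches 2.5 sigma.

```python
import math
import random

random.seed(42)

def poisson_sample(mu):
    # Knuth's algorithm: draw a Poisson-distributed count with mean mu
    L, k, p = math.exp(-mu), 0, 1.0
    while True:
        p *= random.random()
        if p <= L:
            return k
        k += 1

# Hypothetical background expectation per visible-mass bin (illustrative only)
background = [50.0] * 30

def max_local_significance(observed, expected):
    # Crude per-bin significance: (obs - exp) / sqrt(exp); the real analysis
    # fits a floating Higgs signal shape at each candidate mass instead
    return max((o - e) / math.sqrt(e) for o, e in zip(observed, expected))

n_trials = 2000
n_exceed = sum(
    1 for _ in range(n_trials)
    if max_local_significance([poisson_sample(b) for b in background],
                              background) >= 2.5
)
print(f"Fraction of background-only experiments with a >=2.5 sigma bump "
      f"somewhere: {n_exceed / n_trials:.3f}")
```

The key point the toy makes: a 2.5-sigma bump at one fixed mass is rare, but with many places for it to appear, some pseudo-experiment or other produces one far more often. This is the "look-elsewhere" penalty the real job was computing.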

Fortunately I had all the pieces I needed to perform the calculation, and I stitched them together, testing as I went. Then it was time to launch the job. I moved the code to our computer cluster back in California and set up scripts to split the calculation across multiple computers. With a single command, I set 20 CPUs going, generating random spectra and looking for the Higgs where there was no Higgs. It looked like it would be a number of days to get the answer.

It is a source of continual amazement to me that a mere 15 years ago the state of the art in computing at CERN was our big VAX 9000, really the last of the mainframes. Windows desktop PCs were useless to us since we couldn’t program them, and Unix workstations were coming into vogue but still not that fast. So now I had just launched a job which was utilizing about 2000 times more computing power than the old VAX. And this was only 20 CPUs! We now have a global grid with tens of thousands of nodes accessible, and more every day.

So in the meantime, it was time to assess the damage the bump had done to our limits. When I had opened the box, I had expected that we’d see no sign of a Higgs signal, and that we would then proceed to rule out certain regions of supersymmetric parameter space where, if nature had chosen to live there, we would have seen a Higgs signal. Our usual statistical standard is somewhat loose in this regard: we require “95% confidence” to rule out a certain parameter set. In fact that means that there is less than a 5% chance that, if that parameter set were true, we would have seen as little sign of a Higgs as we did. You kind of have to think about that for a good while to see why such inside-out logic is needed. The problem is that there is a crucial thing we don’t know: the probability that the Higgs exists in the first place!
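One simple version of that inside-out construction, with entirely made-up event counts (these are not the analysis numbers): a parameter set is excluded at 95% confidence if, were its predicted signal real, there would have been less than a 5% chance of observing so few events.

```python
import math

def poisson_cdf(k, mu):
    # P(N <= k) for a Poisson-distributed count with mean mu
    return sum(mu**i * math.exp(-mu) / math.factorial(i) for i in range(k + 1))

# Hypothetical numbers: 52 events observed, 50 expected from background,
# and a candidate SUSY parameter set predicting 20 signal events on top
observed, bkg, sig = 52, 50.0, 20.0

# p-value for that parameter set: the chance of seeing this few events
# or fewer if the signal were really there
p = poisson_cdf(observed, bkg + sig)
excluded = p < 0.05
print(f"p = {p:.3f}; excluded at 95% CL: {excluded}")
```

Note what it does not do: it never assigns a probability that the Higgs exists. It only says how surprising the data would be if this particular signal were present, which is why the logic feels inside-out.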

With this excess, we’d be able to rule out a lot less territory than we had hoped. I ran the short jobs to calculate the rate limits, and sent them off to Amit to convert into the final SUSY parameter limits, and started writing it all up.

In our collaboration we have a very formal internal review process for getting out results. We need to document everything in advance of two presentations to the appropriate physics analysis meeting. The first presentation is called a “pre-blessing” and is where the real knives come out. The presenter is peppered with deep, probing questions about every aspect of the analysis, usually for an hour or more. Though it can seem like a blood sport at times, this is an absolutely essential part of the scientific process: if we aren’t our own worst skeptics then someone else will do it for us.

Anton was awake now, had gotten the message about the excess, and started asking all the right questions. Amit chimed in and we quickly formed a plan: we had until Sunday at midnight to get the note posted, and we divided up the work. Amit shot back parameter limits to me and I turned them into a nice plot with Illustrator:

The dark purple region in the plot shows the parameter values we exclude with 95% confidence. The region labeled “expected” is what we thought we’d be able to exclude. Clearly we had not reached our expected sensitivity due to the fluctuation. As we get more and more data, we expect the excluded region to cover more and more of the plot, to lower and lower “tan beta,” which is one of the main SUSY parameters. We illustrated the somewhat mild dependence of our result on input assumptions, and compared with the limits obtained years ago at the LEP 2 accelerator at CERN. We’d made progress, but the little bump was hurting us.

I returned to Fermilab on Monday, and met with Anton and Cris to brainstorm about possible problems, and questions that might arise. Anton was going to make the presentation on Thursday, and we needed to armor-plate the result, which was sure to elicit lots of tough questions. He had done an amazing job at documenting everything – our note ran to over 70 pages, much of it plot after plot comparing the data and simulation in our control samples and in other subsamples of the data. Everything checked out – there was no reason to believe that the little bump was anything other than either a statistical fluctuation or something new.

My statistical significance jobs came back with an answer: there was about a 2% chance that a random fluctuation could have given us a bump of the magnitude we saw. That’s about a 2 sigma effect, and those come and go every day in our field. Still, we are all human and have to wonder whether this is real or not.
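Translating that global probability back into the common language of sigmas takes nothing but Gaussian tail arithmetic; a small bisection on the one-sided tail (no SciPy needed) does it:

```python
import math

def p_to_sigma(p):
    # One-sided p-value -> equivalent number of Gaussian standard deviations,
    # found by bisecting the monotonically falling tail probability
    lo, hi = 0.0, 10.0
    for _ in range(100):
        mid = (lo + hi) / 2
        tail = (1.0 - math.erf(mid / math.sqrt(2))) / 2.0
        if tail > p:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

print(f"A 2% global chance corresponds to about {p_to_sigma(0.02):.1f} sigma")
```

A 2% tail probability sits at just over two sigma, which is exactly the "comes and goes every day" territory described above, and a very long way from the five sigma needed for discovery.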

The presentation went fine – we were “preblessed” and scheduled for final blessing the first week of the new year, the week before my talk at Aspen. We really did not receive any difficult questions, as we’d done our homework well. Over the holidays we could expect questions and comments from our collaborators, and had some more time to do cross checks.

But by the blessing talk three weeks later, a number of people who had not been at the preblessing presentation had caught wind of the excess, and showed up at the meeting to ask tough questions. In the end, though, we were blessed and I had a green light to show the result at Aspen the following Tuesday. I worked all weekend on the talk, in which I now focused a lot of attention on the new result, which would be the thing of most interest to the crowd.

It’s pretty rare these days to have a high profile talk like one at Aspen and also your own brand new result to show – sort of a perfect storm. My attitude was to show the facts and draw the obvious conclusion: we need more data! As anticipated, the part of my talk that drew the most interest and questions was our new result. It was about as fun as it gets in this field!

Afterward, on the ski slopes in fact, a colleague from our competitor experiment Dzero at the Tevatron, Greg Landsberg, a professor at Brown, intimated to me that their experiment had a result nearing completion. This occupied my thoughts the rest of the afternoon. What had they seen? Did they have an excess too? Last year there had been a small hint of one, but nothing exciting.

At the evening session Greg let the cat out of the bag: Dzero had a deficit where we had an excess! I’d rather he had waited a day or two to let me know, but eventually the world would find out anyway. Their method was quite similar to ours, so it was hard to escape the conclusion that if they really had a deficit, our excess was more likely to be a statistical fluctuation. We’re still waiting to see the answer from them, which should happen soon. The nice thing is that we will soon have another factor of two more data to analyze. By early summer we should have a much better idea what it is we are seeing.

A week from today Anton will present the result at a Friday Fermilab “Wine and Cheese” seminar which, despite the genteel name, is one of the toughest audiences I know of. A single question there, with 150 people in the audience, can quickly erupt into a barrage, with the speaker barely able to catch his or her breath. But Anton is a real pro, and it will be very interesting to be there and see how it all goes.

We’ve caused quite a stir considering how insignificant the bump really is. I’ve gotten requests from all over to see my Aspen slides and to answer questions. When I was back at CERN last week to deliver new pixel detectors, I got stopped by lots of people who had heard about the result and wanted to know more. To me it shows that there is intense interest in finally nailing the Higgs, whatever its nature, and also that the Tevatron has a real shot at making big discoveries before the LHC turns on. Real physics data will come from the LHC by the middle of next year!

In the end, some day we are going to have something new right there in our data, and we cannot shrink from it. We’ve gone a very long time with no truly new discovery in particle physics, no observation that truly changes the paradigm. We’ve gotten used to fluctuations coming and going, and are justly skeptical of any new ones that come along. But I think I got a glimpse that Saturday morning of what it will feel like when we do have something new, and real, and it’s a sensation that I hope I’ll have again some day soon.
