Bump Hunting (Redux)

By John Conway | October 22, 2007 8:44 pm

Last January, in my blogger-virginity-losing post to CV, and a follow-up post, I wrote about the experience of “opening the box” on a never-before-seen sample of data from the CDF experiment at Fermilab, and of being perhaps the first human to see what nature had to tell CDF, with the then-current data sample, in the search for the Higgs boson predicted by supersymmetry. What we’d seen was a small excess that might in fact have been the first experimental glimpse of a Higgs boson with a mass of around 160 GeV, or 170 times the mass of a proton. Or it could have been a statistical fluctuation, or an artifact of the detector or analysis. It was exciting, but as scientists we had to keep our heads on straight.

There is basically nothing we can do about a statistical fluctuation – you get what you get. What keeps us awake at night is the prospect that we had made a mistake, or overlooked some detail. And so for months now, we (and when I say “we” I mostly mean Anton Anastassov, a postdoc at Rutgers, and my student Cris Cuenca) worked very hard to make sure that we hadn’t missed anything.

As far as we could tell, we hadn’t missed any problems, and so by late summer we decided to “open the box” again on a sample with 1.8 times as much data (and containing the original sample). So it was not totally new data, but a sample with 80% more statistics. Was the bump still there? Would we see an even more significant excess?

We already kind of knew, given that the D0 experiment had not seen a similar excess, that we might not see the bump. So finally, after we had dotted the i’s and crossed the t’s, we took a look and there it was:

[Figure: Higgs to tau pairs in CDF with 1.8 fb⁻¹]

Gone! The data points all fell within an error bar or so of the expected background in every bin… no excess, no bump, no Higgs… I am sure you are thinking “no tickets to Stockholm” too. Were we surprised? No. Once you’ve been working in this field awhile, you realize that this is what happens with two-standard-deviation effects very often: they go away with more data. If you want all the gory details you can find them here.

If you do go back and read the original posts, you’ll find that we flagged a statistical fluctuation as one very possible explanation. Unless we had auxiliary information saying that there should be a Higgs at that mass, with that production rate, and so on, it was much more likely than not to have been a statistical fluctuation. And in the end that is what it was… even with a probability of only 1 in 50 or so. It happens.
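
(A quick aside on the arithmetic, for anyone who wants it: a “1 in 50” tail probability is roughly a two-sigma effect on the usual one-sided Gaussian scale. The little Python sketch below, using SciPy, is just an illustration of that conversion; it has nothing to do with the actual CDF analysis.)

    # Convert the quoted "1 in 50" tail probability to an equivalent
    # one-sided Gaussian significance, and go the other way for 2 sigma.
    from scipy.stats import norm

    p_value = 1.0 / 50.0
    z = norm.isf(p_value)                            # inverse survival function
    print(f"p = {p_value:.3f}  ->  {z:.2f} sigma")   # about 2.1 sigma

    p_two_sigma = norm.sf(2.0)                       # one-sided tail beyond 2 sigma
    print(f"2 sigma  ->  p = {p_two_sigma:.3f}")     # about 0.023, i.e. roughly 1 in 44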

So the quest for this beast continues. Mother Nature is a big fat tease!

Now, gentle readers, one thing we’ve learned is that among you are many science journalists who use blogs as a means to catch wind of breaking news. You all have to have an angle, and a story to tell (sell). In the past, our field has been treated to stories of the ilk “300 Physicists Fail to Find Supersymmetry” with the subtitle “Study Illustrates the Risks of Big Science”. (New York Times, 1993). I sincerely hope that’s not your angle here, science writers! Our favorite put-down of that is to ask whether there should have been an 1888 story titled “Physicists Fail to Find Ether in Vacuum” about the Michelson and Morley null result. But okay, we’re nerds.

Of all the stories that appeared this past year about the quest for the Higgs I think that the one that got it right was Dennis Overbye’s in the New York Times. He captured the true spirit of this hunt without hitting false notes about blogging and science, or trying to make it look like some sort of last-chance desperate ploy by an accelerator nearing the end of its useful life, or trying to foment some non-existent controversy. I challenge you journalists out there to tell it like it is: this is a great human adventure, with all the twists and turns any good adventure has. And someday, maybe soon, if not at the Tevatron then at the LHC, there it will be…but will it be what we expect?

CATEGORIZED UNDER: Science, Science and the Media
  • Seth

    The original Oops-Leon paper also gave “less than one chance in fifty” of their 6 GeV di-lepton peak being a statistical fluctuation. Luckily, the field has learned its lesson since then.

    My vague observation is that statistically-improbable bumps appear and disappear even more often than they should. Am I underestimating the number of plots that people are looking for bumps in? Or are the plot-makers underestimating their systematic errors?

  • michael s pierce

    That’s sad news indeed. I’m sorry your bump vanished! However, if I’m reading the graph correctly (and that’s assuming a bit), it appears that it’s now even a valley. That raises the question of what would happen if you looked at only the latter data set, without the first. Would the points shift down even further?

    Or, to ask the question a different way: does the fluctuation scale with the number of processed data points in a way that makes sense with the expected errors, or is there variation beyond what you expect?

    All in all, eventually we will find the Higgs, or many Higgs, or even nothing at all. Any of which would be very interesting. In fact, Nothing with a capital N might be the most interesting of all…

  • http://blogs.discovermagazine.com/cosmicvariance/sean/ Sean

    I almost once wrote a post to the effect that “nobody should ever get interested in any two-sigma result, ever, except for the experimenters themselves, who should do their best to follow up.” Is this sensible advice?

  • Jason Dick

    The really cool thing about this whole endeavor is that we find out something interesting whether or not we find the Higgs.

    As for the two sigma thing, I always like Andy Albrecht’s line which I will paraphrase as, “I won’t get out of bed for less than four sigma.”

  • Solipsist

    two sigma, four sigma
    why does everyone always assume every measurement entails a normal distribution?

  • fh

    “why does everyone always assume every measurement entails a normal distribution?”

    The central limit theorem?

  • Solipsist

    thanks fh, i am aware of the central limit theorem (since i had the ‘pleasure’ of reproducing its proof at an exam a long time ago).
    I rather had this in mind:
    “Because they occur so frequently, there is an unfortunate tendency to invoke normal distributions in situations where they may not be applicable. As Lippmann stated, “Everybody believes in the exponential law of errors: the experimenters, because they think it can be proved by mathematics; and the mathematicians, because they believe it has been established by observation” (Whittaker and Robinson 1967, p. 179).”

    as quoted from http://mathworld.wolfram.com/NormalDistribution.html

  • Seth

    Solipsist,

    I believe most current analyses don’t treat things as normal distributions. Experimental physicists’ use of “sigma” is a traditional shorthand for more mathematically rigorous confidence intervals and things of that ilk.

  • Solipsist

    thanks Seth, i read their presentation (or at least made a gallant effort). Apparently their “P” in PDF stands for “parton” and not for “probability,” as i was always taught. Are those Feynman’s partons? Has anyone warned Murray Gell-Mann about this? :-)

  • Pingback: Fall is here! « blueollie

  • http://blogs.discovermagazine.com/cosmicvariance/john John

    Solipsist, we only use the “two sigma” or “four sigma” terminology to connect with the familiar normal distribution. We are actually doing a state-of-the-art binned likelihood treatment with systematic errors represented as nuisance parameters, eliminated by Bayesian marginalization. ;) (A toy sketch of what that means in practice appears just below.)

    Sean, if the Higgs *had* been there it would have shown up first as a two sigma excess, then three, then… So, no, you really should pay attention to these things, just keep your head on straight. And Andy can stay in bed, we’ll get him up when it’s time.

    We use “PDF” or “pdf” to refer to both parton distribution functions and probability density functions. You have to be careful of the context when reading.
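
    Here is a deliberately tiny toy version, in Python, of that marginalized binned-likelihood treatment. Every number in it (the bin counts, the signal and background templates, the 10% background-normalization uncertainty) is invented purely for illustration; the real analysis is far more elaborate.

        # Toy binned Poisson likelihood for signal + background, with the
        # background normalization treated as a nuisance parameter and
        # marginalized (integrated out) against a Gaussian constraint.
        import numpy as np
        from scipy.stats import norm, poisson

        observed   = np.array([12, 15,  9,  7,  5])   # hypothetical counts per mass bin
        background = np.array([10, 13, 10,  6,  4])   # nominal background prediction
        signal     = np.array([ 0,  1,  3,  2,  0])   # signal template at some Higgs mass
        bkg_uncert = 0.10                              # assumed 10% normalization uncertainty

        def marginal_likelihood(mu, n_grid=2001):
            """Likelihood of signal strength mu, with the background scale integrated out."""
            scales = np.linspace(0.5, 1.5, n_grid)               # nuisance-parameter grid
            prior  = norm.pdf(scales, loc=1.0, scale=bkg_uncert) # Gaussian constraint
            like   = np.array([
                np.prod(poisson.pmf(observed, s * background + mu * signal))
                for s in scales
            ])
            # simple Riemann-sum marginalization over the background scale
            return np.sum(like * prior) * (scales[1] - scales[0])

        for mu in (0.0, 0.5, 1.0, 2.0):
            print(f"mu = {mu}: marginal likelihood = {marginal_likelihood(mu):.3e}")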

  • http://tyrannogenius.blogspot.com Neil B.

    From someone, please: a middle-brow “cocktail party” explanation of why there needs to be something (of which a “particle” is a manifestation of some field) to give a fundamental property like “mass” to other particles. Really, how can we dig down to that foundational level to say, particles “wouldn’t have mass” unless this special particle gave it to them? How low “off the ground” is all this coming from, and with what authority? tx

    PS – since I am hearing lots of talk about errors in the measurement, pls. entertain a segue into a related but more momentous issue in quantum measurement: how do the reports of unreliable detectors (UDs) affect a wave function? I just don’t hear about this (the closest is the “Renninger negative result,” which redistributes (!) a wave function to places other than where it can no longer be – but even that assumes a reliable detector just saying “no.”) If a UD says “hit” then maybe there’s a 20% chance the photon etc. will actually hit somewhere else – so, what does that do to a wave function? I don’t see a machinery for showing “20% worth of a wave function” (not to be confused with a 20% chance to be in one place and 80% to be elsewhere, which is already representable). Even if you said, OK, we’re going to represent that somehow, the reliability itself could be ill-defined (the detector just starts to “act up” for a while and then goes back to apparently good accuracy, etc.). Thoughts?

  • http://blogs.discovermagazine.com/cosmicvariance/john John

    Neil – I will try to do a post in the near future about the reason we think that there is probably a Higgs boson out there, and why it might just be the tip of the iceberg for a whole new layer of structure of matter…

    As for your other comment about the quantum limitations of measurement, rest assured that we are nowhere near those limits with our particle detectors.

    What we do is to measure as well as we can the energies and momenta of particles coming out of our collisions, and from that infer as well as we can the initial state from which they came. We are actually pretty good at that in some cases, and not so good in others (high backgrounds from uninteresting processes).

    The detector indeed “acts up” occasionally, and we monitor that with a sample of data independent of that which we use to do our Higgs search. If the detector is dodgy, we simply ignore those collision events (typically a “run” of a few hours).

  • http://carlbrannen.wordpress.com/ Carl Brannen

    Far better for theorists is to discover that Higgs doesn’t exist. Lots of cool theories don’t use it.

  • http://tyrannogenius.blogspot.com Neil B.

    John – Thanks, but what I mean is: unreliable detectors in cases such as when a photon is split by a beamsplitter, and then a hit in one leg shows the photon can’t be found in the other one, etc. What if the detector saying “hit” is sometimes wrong, what happens to the wave function of the photon?

  • Jason Dick

    Neil B.,

    The short answer is that if we attempt to add mass directly to particles, field theory doesn’t work, at least in the standard model formalism. So, if field theory as we understand it is correct, then all particles must, on a fundamental level, have zero mass, and there must be some physical mechanism that gives them apparent mass.

  • http://blogs.discovermagazine.com/cosmicvariance/john John

    Again, Neil, the types of quantum-level fluctuations in the detector which you are talking about don’t really affect us, at least not in the process of detecting these high-energy particles. It is certainly true that quantum mechanics reigns supreme in the “initial state” or “hard collision process” that we are looking for. And we do quite detailed calculations of those effects, which can include the spin polarization states of the outgoing particles. But once the particles leave the interaction region, and leave their signatures in our detector, we really don’t need to account for quantum correlations between them in terms of the detector efficiency. The photons hitting our lead-based calorimeter have 20 billion times more energy than visible light photons…no wave-diffraction or quantum-uncertainty effects here! They hit the lead nuclei and basically shatter into a huge number of pieces (an “electromagnetic shower”). Our uncertainty on their position and momentum is many orders of magnitude away from the quantum measurement limit.

    All this having been said there have been some collider tests of Bell’s Inequality proposed using the final state polarization correlations of the outgoing particles. But these effects play no role in detecting (or not) high energy hadrons, electrons, muons, photons, etc. Our inefficiencies are more of the nature that the electronics was not working properly…

  • http://tyrannogenius.blogspot.com Neil B.

    John, Jason, thanks. Now, what I think is really ironic: that you say,
    “if we attempt to add mass directly to particles, field theory doesn’t work, … then all particles must, on a fundamental level, have zero mass,…” OK, but then the problem that required “renormalization” was apparently just the opposite: the field/interaction energies around a particle (at least, a charged one like an electron) were infinite (and energy is equivalent to mass), which is just weird. BTW, I assume you all mean Higgs must be needed to give “rest mass”, for otherwise the Higgs-free “massless” particles wouldn’t even have energy (the way photons do) – ?

  • http://tyrannogenius.blogspot.com Neil B.

    Jason, if you aren’t feeling too condescended to in my “God” post (sorry), I am still looking for a good answer to my question about the effect of unreliable detectors on wave functions in general. John took it too directly to his own experiments, and I want to know the general implications, tx.

  • Jason Dick

    Well, I’m really not sure what you mean. I’m into cosmology, and basically only have the one graduate course series in field theory to draw on here, so I’ll just leave it at saying that I don’t even understand the problem you are alluding to.

    As for renormalization, though, from what I remember of renormalization, it really isn’t a problem with respect to energies blowing up location-wise. It has to do with certain components in Feynman diagrams giving infinite results that come from taking the integrals to infinite energy (loop diagrams, specifically). This is solved by suggesting that there is some physics beyond a certain energy that we just don’t understand, so we should cut off our integrals, use a dummy parameter to represent the value of the integral out to infinity, and use experiment to fix the value of this dummy parameter. Then, as long as we can show that our prescription of renormalization is independent of the cutoff energy we choose, this should be a valid thing to do. All that remains after this is to measure whatever parameters are required in renormalization theory using one set of experiments, and see whether they agree with the same parameters measured in a different experiment using different interactions.

    And no, I don’t think it has anything to do with them not being able to have energy. I’m honestly not clear on what the theoretical problems with giving particles mass are. But I do know that they would, in terms of energy/momentum, act exactly like photons at a “fundamental” level, before adding interactions to the theory. It could, of course, simply be that we just don’t understand the nature of mass, which is why it’d be at least as interesting to find no Higgs as it would be to find one.

  • Jason Dick

    P.S.
    The problem in post #20 I was alluding to was in response to post #19. The rest of the post was in response to post #18.

  • jlm

    Hi John,

    Thanks for the update. If I read the prediction for jets faking a tau in the region 120–150 GeV, it looks like it went from about 10 in the 1/fb analysis to ~35 in the 1.8/fb analysis. What is the reason this prediction didn’t scale?

  • Hiranya

    I have really enjoyed this series of posts! Please keep it coming :)

  • Nonnormalizable

    Hey Neil B., this thread doesn’t seem like the best venue for your question, but anyway: I think (in nonrelativistic QM anyway) that such a process as you describe, with a detector that has probability P of each detection being true, could easily be analyzed as collapsing the wave function from Psi to an incoherent mixture of (1-P)*Psi + P*delta function. I’m not aware of such a thing giving anything interesting, but I’m not an expert.
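
    In density-matrix language, one minimal way to write that (assuming a false “hit” means no interaction with the photon at all, i.e. a dark count, and writing $\Pi$ for the projector onto the states the detector is supposed to register and $P$ for the probability that the reported hit was genuine) is

        \[
          \rho \;\longrightarrow\; \rho' \;=\; P\,\frac{\Pi\,\rho\,\Pi}{\operatorname{Tr}(\Pi\rho)} \;+\; (1-P)\,\rho .
        \]

    A fuller treatment would also update $P$ itself with Bayes’ theorem, using the detector’s false-alarm rate and the prior probability $\operatorname{Tr}(\Pi\rho)$ of a genuine hit.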

  • Eric

    The bump vanished. Surprise!…NOT.

    This is why experimentalists overeager for acclaim shouldn’t hype up their small and insignificant fluctuations in the public domain, especially to the general public (that includes theorists), lest the unsuspecting public (again, that includes theorists) should get unreasonably excited. Any experimentalist with half an ounce of skepticism would know that 2 sigma fluctuations are just that and shouldn’t be publicized as an impending discovery.

    Well, at least I get to say “I told you so” to all my friends.

  • http://tyrannogenius.blogspot.com Neil B.

    OK… Eric (or anyone): have we advanced in our ability to figure out whether a given marginal signal is likely real or just noise, etc.? The basic simple math has been known and likely not changed for a while, but I figure there have been advances (whether in the math or the computational power). I don’t hear much about it.

  • Eric

    Neil, there’s no obvious way to know, assuming all other checks have been made.

    A 3-sigma effect is decent, but any good particle experimentalist will tell you that 5 sigma is the gold standard. Many people (especially theorists) ask: a 3-sigma fluctuation is so statistically unlikely already, so why require 5? The answer: the probability of a 5-sigma fluctuation is exponentially suppressed compared to 3 sigma. But more saliently, it also offers robustness against mistakes in error estimation. For example, if you accidentally underestimate your error by a factor of 2, which happens quite frequently, your great 3-sigma effect becomes a 1.5-sigma effect. Similar reasoning applies to error estimation from Monte Carlo, which may not properly model new physics, etc.

    The point of having 5 sigma is that claims of discovery are then robust not only against statistical fluctuations, but also against experimenter-related mistakes. Until one is near that, I think it’s misleading to imply a near-discovery (as it seemed to me was done here at CV, New Scientist, etc.). There was no legitimate reason to hype up the result so much.
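
    To put rough numbers on both points, here is a generic Gaussian illustration in Python (nothing specific to the CDF analysis):

        # One-sided Gaussian tail probabilities at 3 and 5 sigma, and the
        # effect of underestimating the error by a factor of 2.
        from scipy.stats import norm

        for n_sigma in (3.0, 5.0):
            print(f"{n_sigma} sigma: one-sided p = {norm.sf(n_sigma):.1e}")
        # 3 sigma -> ~1.3e-03, 5 sigma -> ~2.9e-07: several thousand times smaller.

        # A claimed "3 sigma" effect whose error was underestimated by a factor of 2:
        true_sig = 3.0 / 2.0
        print(f"really {true_sig} sigma, one-sided p = {norm.sf(true_sig):.2f}")   # ~0.07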

  • Eric

    Furthermore, there is an experimental bias because one selectively pays attention to fluctuations that occur in quantities of interest and ignores fluctuations elsewhere. An experimenter should really consider the probability of getting a 2-sigma fluctuation *anywhere* (not just here), which is pretty high due to combinatorics. J. Conway is reputed to be a statistics guru, so maybe that was done, I’m not sure.
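
    As a crude illustration of that trials factor, assuming (unrealistically) independent bins:

        # Chance of at least one >= 2 sigma upward fluctuation somewhere
        # among N independent bins (a crude stand-in for "anywhere").
        from scipy.stats import norm

        p_single = norm.sf(2.0)   # one-sided probability in a single bin, ~0.023
        for n_bins in (1, 10, 20, 50):
            p_anywhere = 1.0 - (1.0 - p_single) ** n_bins
            print(f"{n_bins:3d} bins: P(at least one 2-sigma excess) = {p_anywhere:.2f}")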

  • http://blogs.discovermagazine.com/cosmicvariance/john John

    Hi jlm (#22): the reason the background did not scale is that we loosened our selection a bit to get better sensitivity (more background but more signal too…)

    Eric, #25, apparently you didn’t read my original posts. I think if you go back and do your homework, you can tone down the sarcasm a bit. I never hyped anything… the media did. It was a typical cycle with this sort of fluctuation. The main point is that if the Higgs *had* been there, this is *exactly* how it would have appeared. My first post, and this one, were all about how important it is to remain skeptical, while being human. This is exciting! Are you saying you never expect to see a Higgs anywhere?

    In response to your comment #28, our estimate of the probability of a fluctuation *of course* took into account that a fluctuation could have been anywhere. This was discussed in the first posts, which, as I say, you really should take the time to read. I stand by every word I wrote.

  • Brian Drell

    Glad to know LHC hasn’t lost usefulness before they even collide beam.
    ;)

  • Eric

    John, you’re right–after rereading your posts I agree that you were more fair than I gave you credit for. Guess I succumbed to the usual irresistible temptation of over-sarcasm (it’s quite enjoyable). ;)

  • jlm

    Hi John,

    I’m confused. If I look at the paper you linked to describing this analysis, I see that the combined efficiency listed for a 90 (250) GeV Higgs is 1.0% (3.1%). And the CDF Higgs webpage has a paper describing the 1.0/fb analysis which lists the combined efficiency for the same masses as 1.1% (3.3%). Also, the ratio of the total number of predicted jets faking taus in the two analyses is 1.6 (slightly less than the 1.8 that comes just from luminosity scaling).

    So it doesn’t appear that a looser event selection has led to increased predictions for the signal or the fake background. However, as I wrote in my previous post, it looks like the prediction for the jet-faking-tau background in the mass region where the excess was observed in the 1.0/fb analysis has increased by a factor of 3.5 instead of 1.8. Did I misunderstand your answer?

  • loses to monkeys

    what’s plotted on the y axis of those graphs? people never label axes these days!

  • Thomas D

    I’m not sure I like the smell of the phrase ‘to catch wind of breaking news’…

  • http://dorigo.wordpress.com tommaso dorigo

    Hi John,

    I let a few weeks pass before pitching in on this, now that fewer eyes are looking. What I really would like to know is the details of the modeling of the QCD background, the fat red histogram that is responsible for the shape of the falling spectrum at high reconstructed mass.

    Because, if I am not mistaken, the fact that you now see no excess is due to a remodeling of the QCD background (which used to be quite a bit leaner in the 1/fb analysis). So I really wonder, was it really a statistical fluctuation or a systematic underestimate of the background?

    Sorry for being quite direct… I decided not to post on this issue in my blog and just ask you here. If the new QCD model is better than the old one it is entirely to your credit, not the other way round. I think you, Anton and the others did a terrific job and I cannot see a way to improve the analysis.

    Cheers,
    T.
