Science for the masses

By Daniel Holz | December 9, 2010 11:35 am

Observational science is hard. And it seems to be getting harder. Nowadays, analyzing the latest and greatest data set might mean finding a minute-long evolving oscillatory gravitational-wave signal buried in months and mountains of noise. Or picking out that one Higgs event among 600 million events. Per second. Or looking for tiny correlations in the images of tens of millions of galaxies.

The interesting effects are subtle, and it’s easy to fool oneself in the data analysis. How can we be sure we’re doing things right? One popular method is to fake ourselves out. A group gets together and creates a fake data set (keeping the underlying parameters secret), and then independent groups can analyze the data to their heart’s content. Once the analysis groups publicly announce their results, the “true” parameters underlying the data can be revealed, and the analysis techniques can be directly evaluated. There is a correct result. You either get it or you don’t. You’re either right or wrong.
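
To make the idea concrete, here is a minimal sketch of such a blind challenge in Python (everything here, from the sine-wave “signal” to the variable names, is invented for illustration, not taken from any actual challenge):

```python
import numpy as np

rng = np.random.default_rng(42)

# --- The challenge organizers: build mock data from a secret parameter ---
secret_amplitude = rng.uniform(0.5, 2.0)   # the hidden "truth"
t = np.linspace(0, 10, 1000)
data = secret_amplitude * np.sin(t) + rng.normal(0.0, 1.0, t.size)  # signal + noise

# --- Independent analysts: estimate the parameter, truth unknown to them ---
# Least-squares fit of the known template sin(t) to the noisy data.
template = np.sin(t)
estimated_amplitude = (data @ template) / (template @ template)

# --- Only after the analyses are announced is the truth revealed ---
print(f"estimate: {estimated_amplitude:.3f}   truth: {secret_amplitude:.3f}")
```

In the real challenges the “secret” is far richer (waveform parameters, shear fields, decay channels), but the logic is the same: the generators know the answer, the analysts don’t, and the comparison at the end is unambiguous.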

[Image: dark matter inferred from gravitational lensing]

This approach has been developed for particle physics and gravitational waves and all sorts of other data sets. The latest incarnation is the GREAT10 data challenge, for weak gravitational lensing data analysis. As we’ve discussed before (here, here, here), gravitational lensing is one of the most powerful tools in cosmology (Joanne Cohn has a brief introduction, with lots of links). In short: the gravity from intervening matter bends the light coming from distant objects. This causes the images of distant objects to change in brightness and to be distorted (“shear” is the preferred term of art). By looking at the correlated effects on (literally) millions of distant galaxies, it is possible to infer the intervening matter distribution. What is particularly powerful about gravitational lensing is that it is sensitive to everything in the Universe. There are no prejudices: the lensing object can be dark or luminous, it can be a black hole or a cluster of galaxies or something we haven’t even thought of yet. As long as the object in question interacts via gravity, it will leave an imprint on images of distant sources of light.
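
In the weak-lensing limit, this boils down to one approximate relation (a standard statement of the field, sketched here rather than quoted from the post): each galaxy’s observed ellipticity is its intrinsic ellipticity plus the lensing shear, and because intrinsic orientations are random, averaging over many galaxies isolates the shear:

```latex
\epsilon_{\mathrm{obs}} \approx \epsilon_{\mathrm{int}} + \gamma,
\qquad
\langle \epsilon_{\mathrm{int}} \rangle = 0
\quad\Longrightarrow\quad
\langle \epsilon_{\mathrm{obs}} \rangle \approx \gamma .
```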

Measuring the Universe with gravitational lensing would be simple if only all galaxies were perfectly round, the atmosphere weren’t there, and telescopes were perfect. Sadly, that’s not the situation we’re in. We’re looking for an additional percent-level squashing of a galaxy that is already intrinsically squashed at the 30% level. The only way to see this is to notice correlations among many, many galaxies, so you can average away the intrinsic effects. (And there might be intrinsic correlations in the shapes of adjacent galaxies, which is a pernicious source of systematic noise.) And if some combination of the telescope and the atmosphere produces a blurring (so that stars, for example, don’t appear as perfect points), this could easily make you think you have tons of dark matter where there isn’t any.

How do you know you’re doing it right? You produce a fake sky, with as many of the complications of the real sky as possible. Then you ask other people to separate out the effects of the atmosphere and the telescope (encapsulated in the point spread function) and the effects of dark matter (via gravitational lensing). The GREAT10 team has done exactly this (see discussions here, here, here). They have released a bunch of images to the public. They know exactly what went into making the images. Your task is to figure out the PSF and the gravitational lensing in the images. Everyone is welcome to give it a shot! The images, and lots of explanatory documentation, are available here. The group that does the best job of finding the dark matter gets a free trip to the Jet Propulsion Laboratory. And, most importantly, an iPad. What more incentive could you want? Start working on your gravitational-lensing algorithms!
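
To see why you need so many galaxies, here is a toy sketch (made-up numbers, a single ellipticity component, nothing like the actual GREAT10 pipeline): each galaxy gets a random intrinsic ellipticity with 30% scatter, a 1% shear is added, and we watch how large a sample is needed before the average reveals the shear:

```python
import numpy as np

rng = np.random.default_rng(0)

true_shear = 0.01         # the percent-level lensing signal we want
intrinsic_scatter = 0.30  # typical intrinsic galaxy ellipticity

for n_galaxies in (100, 10_000, 1_000_000):
    # Observed ellipticity = intrinsic shape noise + lensing shear.
    e_obs = rng.normal(0.0, intrinsic_scatter, n_galaxies) + true_shear
    # Averaging cancels the random intrinsic shapes, leaving the shear;
    # the statistical error on the mean shrinks like 1/sqrt(N).
    error = intrinsic_scatter / np.sqrt(n_galaxies)
    print(f"N = {n_galaxies:>9,}: mean shear = {e_obs.mean():+.4f} +/- {error:.4f}")
```

With a hundred galaxies the shear is hopelessly buried in shape noise; with a million it stands out clearly. This is also why an uncorrected PSF ellipticity, coherent across the field at even a fraction of a percent, would masquerade as dark matter: unlike the intrinsic shapes, it doesn’t average away.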

This is truly science by the masses, for the masses.

  • http://coraifeartaigh.wordpress.com Cormac

Excellent – it’s a pity this technique doesn’t have a catchy name. It could be very helpful in convincing philosophers and sociologists of science that modelling is not quite the arbitrary process that some seem to think it is.

  • Eugene

I wish they had a similar thing for CMB non-Gaussianities!

  • http://www.savory.de/blog.htm Eunoia

    @Cormac,
    in computer science we call this method ‘be-bugging’: deliberately seed bugs in the code, let people hunt for bugs (some of the ones they find will be real, beyond the inserted ones), and then do some simple Bayesian statistics to infer the number of remaining real bugs. (A back-of-the-envelope version of the estimate is sketched after the comments.)

  • Shantanu

    Daniel,
    a slightly off-topic question, since you are a relativist: what do you think of the paper
    http://arxiv.org/pdf/1012.1194
    by Blanchet and his group, and the previous papers on this issue? Do you think Steven Chu et al. are right, or the CNRS/IAP group? I’m surprised there is no discussion of this in the blogosphere.

  • Trevor

    Science by the masses, for the masses… to measure masses!

  • http://overcomingbias.com Robin Hanson

    I don’t understand how there is sufficient incentive to compete in this competition. An iPad and a trip to JPL just don’t seem sufficient.
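
Eunoia’s “be-bugging” idea above can be made quantitative with a simple capture-recapture estimate (a back-of-the-envelope Python sketch, standing in for the fuller Bayesian treatment the comment alludes to; all names and numbers are illustrative):

```python
def estimate_remaining_bugs(seeded: int, seeded_found: int, real_found: int) -> float:
    """Capture-recapture ('be-bugging') estimate of real bugs still lurking.

    Assume reviewers catch real bugs at the same rate as seeded ones:
        real_found / total_real  ~=  seeded_found / seeded
    """
    if seeded_found == 0:
        raise ValueError("No seeded bugs were found: the detection rate is unknown.")
    detection_rate = seeded_found / seeded
    estimated_total_real = real_found / detection_rate
    return estimated_total_real - real_found

# Example: seed 20 bugs; reviewers find 15 of them, plus 9 real bugs.
# Detection rate is 0.75, so ~12 real bugs in total, ~3 still unfound.
print(estimate_remaining_bugs(seeded=20, seeded_found=15, real_found=9))
```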
