Observational science is hard. And it seems to be getting harder. Nowadays, analyzing the latest and greatest data set might mean finding a minute-long evolving oscillatory gravitational-wave signal buried in months and mountains of noise. Or it might mean picking out that one Higgs event among 600 million events. Per second. Or it might mean looking for tiny correlations in the images of tens of millions of galaxies.
The interesting effects are subtle, and it’s easy to fool oneself in the data analysis. How can we be sure we’re doing things right? One popular method is to fake ourselves out. A group gets together and creates a fake data set (keeping the underlying parameters secret), and then independent groups can analyze the data to their heart’s content. Once the analysis groups publicly announce their results, the “true” parameters underlying the data can be revealed, and the analysis techniques can be directly evaluated. There is a correct result. You either get it or you don’t. You’re either right or wrong.
This approach has been developed for particle physics and gravitational waves and all sorts of other data sets. The latest version of this is the GREAT10 data challenge, for weak gravitational lensing data analysis. As we’ve discussed before (here, here, here), gravitational lensing is one of the most powerful tools in cosmology (Joanne Cohn has a brief introduction, with lots of links). In short: the gravity from intervening matter bends the light coming from distant objects. This causes the images of distant objects to change in brightness, and to be distorted in shape (“shear” is the preferred term of art). By looking at the correlated effects on (literally) millions of distant galaxies, it is possible to infer the intervening matter distribution. What is particularly powerful about gravitational lensing is that it is sensitive to everything in the Universe. There are no prejudices: the lensing object can be dark or luminous, it can be a black hole or a cluster of galaxies or something we haven’t even thought of yet. As long as the object in question interacts via gravity, it will leave an imprint on images of distant sources of light.
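For readers who like to see the bookkeeping, here is the standard weak-lensing description (nothing specific to GREAT10; the notation, with convergence κ and shear components γ₁ and γ₂, is the conventional one):

```latex
% A source-plane position \beta maps to an image-plane position \theta
% through the Jacobian A, built from the convergence \kappa (isotropic
% magnification) and the shear components \gamma_1, \gamma_2
% (anisotropic stretching):
\beta = A\,\theta ,
\qquad
A =
\begin{pmatrix}
  1 - \kappa - \gamma_1 & -\gamma_2 \\
  -\gamma_2             & 1 - \kappa + \gamma_1
\end{pmatrix} .
% In the weak regime (\kappa, |\gamma| \ll 1), a galaxy's observed
% (complex) ellipticity is approximately its intrinsic ellipticity
% plus the shear:
\epsilon_{\rm obs} \simeq \epsilon_{\rm int} + \gamma ,
\qquad
\gamma = \gamma_1 + i\gamma_2 .
```

Roughly speaking, the convergence changes how bright and how big a galaxy appears, while the shear is the small extra squashing discussed below.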
Measuring the Universe with gravitational lensing would be simple if only all galaxies were perfectly round, and the atmosphere weren’t there, and telescopes were perfect. Sadly, that’s not the situation we’re in. We’re looking for an additional percent-level squashing of a galaxy that is already intrinsically squashed at the 30% level. The only way to see this is to notice correlations among many, many galaxies, so you can average away the intrinsic effects; there’s a toy illustration of this at the end of the post. (And there might be intrinsic correlations in the shapes of adjacent galaxies, which is a pernicious source of systematic error.) And if some combination of the telescope and the atmosphere blurs the images (so that stars, for example, don’t appear perfectly round), this could easily make you think you have tons of dark matter where there isn’t any.

How do you know you’re doing it right? You produce a fake sky, with as many of the complications of the real sky as possible. Then you ask other people to separate out the effects of the atmosphere and the telescope (encapsulated in the point spread function, or PSF) from the effects of dark matter (via gravitational lensing).

The GREAT10 team has done exactly this (see discussions here, here, here). They have released a bunch of images to the public, and they know exactly what went into making them. Your task is to figure out the PSF and the gravitational lensing in the images. Everyone is welcome to give it a shot! The images, and lots of explanatory documentation, are available here. The group that does the best job of finding the dark matter gets a free trip to the Jet Propulsion Laboratory. And, most importantly, an iPad. What more incentive could you want? Start working on your gravitational-lensing algorithms!
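To get a feel for why you need millions of galaxies, here is the promised toy sketch. It is my own made-up setup, not anything drawn from the GREAT10 data: it assumes a constant 1% shear, 30% Gaussian shape noise, and ignores the PSF, pixel noise, and intrinsic alignments entirely.

```python
import numpy as np

rng = np.random.default_rng(42)

# Toy numbers (assumptions, not GREAT10 values): a constant 1% shear
# applied to galaxies whose intrinsic shapes scatter at the ~30% level
# mentioned above.
true_shear = 0.01          # the signal we are trying to recover
intrinsic_scatter = 0.3    # per-galaxy "shape noise"
n_galaxies = 1_000_000

# Observed ellipticity ~ intrinsic ellipticity + shear (the weak-lensing
# approximation; real pipelines also have to model the PSF, pixel noise,
# measurement biases, ...).
intrinsic = rng.normal(0.0, intrinsic_scatter, size=n_galaxies)
observed = intrinsic + true_shear

# Averaging over many galaxies washes out the intrinsic shapes and
# leaves the shear, with statistical error scatter / sqrt(N).
estimate = observed.mean()
uncertainty = intrinsic_scatter / np.sqrt(n_galaxies)  # ~0.0003 here

print(f"recovered shear: {estimate:.4f} +/- {uncertainty:.4f}")
# With only ~1,000 galaxies the uncertainty (~0.01) would be as large
# as the signal itself -- hence the appetite for millions of galaxies.
```

The point of the toy is just the square-root-of-N scaling: the percent-level signal only climbs out of the 30% shape noise once enormous numbers of galaxies are averaged, which is why surveys (and the GREAT10 images) contain so many of them.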
This is truly science by the masses, for the masses.