[astro-ph/0506112] The Shear TEsting Programme 1: Weak lensing analysis of simulated ground-based observations

Authors:  Catherine Heymans, Ludovic Van Waerbeke, David Bacon, Joel Berge, Gary Bernstein, Emmanuel Bertin, Sarah Bridle, Michael L. Brown, Douglas Clowe, Haakon Dahle, Thomas Erben, Meghan Gray, Marco Hetterscheidt, Henk Hoekstra, Patrick Hudelot, et al.
Abstract:  The Shear TEsting Programme, STEP, is a collaborative project to improve the accuracy and reliability of all weak lensing measurements in preparation for the next generation of wide-field surveys. In this first STEP paper we present the results of a blind analysis of simulated ground-based observations of relatively simple galaxy morphologies. The most successful methods are shown to achieve percent level accuracy. From the cosmic shear pipelines that have been used to constrain cosmology, we find weak lensing shear measured to an accuracy that is within the statistical errors of current weak lensing analyses, with shear measurements accurate to better than 7%. The dominant source of measurement error is shown to arise from calibration uncertainties where the measured shear is over or under-estimated by a constant multiplicative factor. This is of concern as calibration errors cannot be detected through standard diagnostic tests. The measured calibration errors appear to result from stellar contamination, false object detection, the shear measurement method itself, selection bias and/or the use of biased weights. Additive systematics (false detections of shear) resulting from residual point-spread function anisotropy are, in most cases, reduced to below an equivalent shear of 0.001, an order of magnitude below cosmic shear distortions on the scales probed by current surveys.
Our results provide a snapshot view of the accuracy of current ground-based weak lensing methods and a benchmark upon which we can improve. To this end we provide descriptions of each method tested and include details of the eight different implementations of the commonly used Kaiser, Squires and Broadhurst (1995) method (KSB+) to aid the improvement of future KSB+ analyses.

Antony Lewis
Posts: 1941
Joined: September 23 2004
Affiliation: University of Sussex

Post by Antony Lewis » June 16 2005

This is an interesting (though rather technical) paper that compares various methods of estimating averaged galaxy cosmic shear.

The errors on the recovered shear depend on the weighting scheme used for averaging over galaxies. The ellipticity-dependent weighting scheme used by BJ02 was found to bias the results for larger shears (because the weights depend on the shear). They recommend using a shape-independent weighting to avoid the problem. Perhaps an alternative would be to iterate on the shear and the weights: i.e. estimate the shear with shape-independent weighting, then re-estimate it using un-sheared shape weights (where the un-shearing is done using the previous shear estimate), and iterate until the weighted shear estimate converges. I'd have thought this would remove most of the bias by making the weighting almost independent of shear (i.e. weighting only on the PSF-rounded intrinsic ellipticity, which is perfectly valid and reduces the shape noise).
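
A minimal sketch of that iteration, assuming complex ellipticities epsilon = (a-b)/(a+b) e^{2i phi} for which <epsilon> ~ g to first order, and a placeholder weight function (both assumptions are mine, not anything from the paper or BJ02):

    import numpy as np

    def iterate_shear(e_obs, weight_fn, n_iter=20, tol=1e-8):
        # e_obs: complex ellipticities, <e_obs> ~ g to first order in shear.
        g = np.average(e_obs)                  # shape-independent first pass
        for _ in range(n_iter):
            e_int = e_obs - g                  # first-order "un-shearing"
            w = weight_fn(np.abs(e_int))       # weights on un-sheared shapes
            g_new = np.average(e_obs, weights=w)
            if abs(g_new - g) < tol:
                break
            g = g_new
        return g

    # e.g. a hypothetical inverse-variance-style weight:
    # shear = iterate_shear(e_obs, lambda e: 1.0 / (0.3**2 + e**2))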

Comment: I'm not very convinced by the tests of PSF removal in this study: since the simulation has a constant PSF, methods which are more realistic (allowing rapid spatial variations) are bound to do worse in the test compared to simpler schemes, whereas in reality they might do much better. Wouldn't it be better to allow people to use the fact that the PSF is constant? (This would test the already-hard problem of smeared shape estimation without confusion from the also hard, but here irrelevant, problem of spatial modelling.)

One result is that Shapelet methods need to use very high order (12th) to get sensible answers. This presumably reflects a relatively poor match between the fundamental low order Shapelet sizes/shapes and the actual PSF/galaxy shapes, and sounds like very bad news for doing reliable spatial interpolation of the PSF. Perhaps one could instead do a PCA on Shapelet coefficients to identify which combinations are relevant, then define a new smaller basis of PCA functions which are much better matched to the images? One could test for spatial dependence of the PSF components, then only interpolate those which are found to vary spatially (e.g. those from focus/atmosphere, but not components describing diffraction spikes/pixelization/CCD artefacts).
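
As a rough sketch of that two-step idea (the function names and variance threshold are mine; coeffs would be the matrix of shapelet, or indeed pixel, coefficients of the stars):

    import numpy as np

    def pca_basis(coeffs, var_frac=0.99):
        # coeffs: (n_stars, n_coeffs), one decomposition per star at fixed beta.
        mean = coeffs.mean(axis=0)
        u, s, vt = np.linalg.svd(coeffs - mean, full_matrices=False)
        frac = np.cumsum(s**2) / np.sum(s**2)
        k = int(np.searchsorted(frac, var_frac)) + 1
        return mean, vt[:k]                    # mean model + k principal modes

    def amplitudes(coeffs, mean, modes):
        # Each star is then summarised by k << n_coeffs amplitudes; only the
        # amplitudes found to vary across the field would be interpolated.
        return (coeffs - mean) @ modes.T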

Christopher M. Hirata
Posts: 3
Joined: January 18 2005
Affiliation: Princeton University

Post by Christopher M. Hirata » June 19 2005

I'd be curious to see how well this PCA strategy works. My suspicion is that it would do very well for the PSF (although if one is using PCA coefficients it is not at all clear to me whether one gains anything by using the shapelet basis -- why not just directly use the pixel basis and let the PCA tell you which basis modes actually exist in the data?)

For the galaxies, though, I'm a bit nervous, because PCA works by identifying which "modes" appear in its training set. Some galaxies, especially spirals/irregulars, tend to have lots of HII regions spread around at essentially random azimuths, and so many high-order shapelet coefficients will get excited. On the other hand, issues involving repeatable features that are always in the same place -- like the shapelet basis not fitting central cusps well -- might be nicely solved by PCA. So it might be worth a try.

Chris

Richard Massey
Posts: 3
Joined: June 22 2005
Affiliation: CalTech

Post by Richard Massey » June 22 2005

Firstly, shapelets can get good results at low truncation order (n_max) if the scale size (beta) is chosen well. After all, some shapelet methods reduce to KSB if only the first two coefficients are used and beta=r_g. There seem to be two camps among the weak lensing methods: either measure observed moments plus the observed shear susceptibility (like KSB), or shear an initially circular model until it matches the observation. Konrad took the second approach. This makes the shear susceptibility trivial, but makes the choice of beta and n_max harder: since the model will never match the data perfectly (e.g. spiral arms), it's difficult to know when to stop truncating/enlarging. I think all of the STEP methods fit into one of these categories, so it'll be interesting to see which works best.
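
For orientation, the first camp's estimator has the schematic KSB form (standard notation; the precise polarisability definitions differ between the paper's eight KSB+ implementations):

    e^{obs}_\alpha = e^{int}_\alpha + P^{sm}_{\alpha\beta}\, p_\beta + P^\gamma_{\alpha\beta}\, \gamma_\beta ,
    \qquad
    \hat{\gamma}_\alpha = (P^\gamma)^{-1}_{\alpha\beta} \left( e^{obs}_\beta - P^{sm}_{\beta\delta}\, p_\delta \right),

where p is the stellar anisotropy kernel measured from stars, P^{sm} the smear polarisability and P^\gamma the shear susceptibility. In the second camp the circular model is sheared explicitly, so the susceptibility is trivial by construction.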

Secondly, I have tried exactly your idea for PSF interpolation on some Subaru images. Interpolating only the top few principal components of the stars' shapes does improve the PSF modelling a little, but not as much as I'd hoped. The problem is that although a shapelet decomposition is linear in the shapelet coefficients, it is not linear in the scale size of the Gaussian (beta). So one is faced with a choice: either use the same beta for all of the stars, or vary beta to fit each star, but then each shapelet coefficient represents a different aspect of the PSF depending upon its size.
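
The linearity point is easiest to see from the decomposition itself (1D version, in the notation of Refregier 2003):

    f(x) = \sum_{n=0}^{n_{max}} c_n\, B_n(x;\beta),
    \qquad
    B_n(x;\beta) = \beta^{-1/2} \left[ 2^n \sqrt{\pi}\, n! \right]^{-1/2} H_n(x/\beta)\, e^{-x^2/2\beta^2} ,

so f is linear in the coefficients c_n, but beta sits inside the Hermite functions: averaging or interpolating the c_n only corresponds to averaging images when beta is common to all the stars.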

There is a similar issue with PCA of galaxy shapes. Kelly & McKay used SDSS redshifts to fix beta at 100 kpc in physical units. But that's not ideal either, as some galaxies are physically larger than others.

The long and the short of it is that PCA improves things a little, but I haven't got it perfect yet. Interpolating a full shapelet model of the PSF is clearly going to be more of a challenge than interpolating just the few KSB moments... but it's probably easier than interpolating double-Gaussian im2shape PSFs, where the models aren't linear in any of the coefficients! :) Anyone have any ideas on that one?

PS: Hmm, maybe we should allow people to assume that STEP PSFs do not vary in the next round. Good point!

Sarah Bridle
Posts: 144
Joined: September 24 2004
Affiliation: University College London (UCL)

Post by Sarah Bridle » June 23 2005

since the simulation has a constant PSF, methods which are more realistic (allowing rapid spatial variations) are bound to do worse in the test compared to simpler schemes
I would think that using a non-constant PSF model only introduces noise to the shear measurement (and no bias). (The PSF model would vary across the image due to the noise on the stellar shape measurements, so e.g. in some places the PSF would be assumed to be a bit bigger and the shears overestimated, and vice versa.) The enormous number of image simulations used for this trial would then average out this effect. So I would bet that the results would change by a negligible amount on assuming a constant PSF anyway. Anyone want to redo their analysis and prove me wrong?!
One result is that Shapelet methods need to use very high order (12th) to get sensible answers.
I would be interested to know whether in practice it is as important to use high orders for the stars as it is for the galaxies: e.g. could you equally well get away with 12th order for the stars and 4th order for the (pre-PSF) galaxies? And is it actually feasible to use a different number of shapelet orders for stars and galaxies?
but it's probably easier than interpolating double-Gaussian im2shape PSFs, where the models aren't linear in any of the coefficients!
Trying to list the advantages of being linear:

* If you fix beta to be the same for two stars, and you want to know what the PSF is halfway between the stars, then you can average the shapelet coefficients. Because shapelets are a linear transformation of the pixel values, the resulting PSF will be the average of the pixel values of the two images (after normalisation); a toy check of this is sketched after this list. It is true that if you average the coefficients of a double-Gaussian fit to the two stars then the resulting PSF will not be the same as an average of the pixel values of the two images. However, it is not obvious to me that this, in itself, is a bad thing (e.g. if the PSF really were a single Gaussian of variable width, then it would be a good thing).

* The likelihood function is a Gaussian in shapelet parameter space, whereas it can in principle be banana-shaped in n-Gaussian parameter space. Therefore, if the PSF is constant across the image, then due to noise the stellar n-Gaussian coefficients could be distributed along a banana. A problem then arises on interpolating between two stars which, due to random noise, end up at opposite ends of the banana: the average of these two sets of coefficients is no longer on the banana, and therefore is not a good fit to the PSF. I had expected this to be a problem for STEP1, but I looked for bananas and couldn't see them, so I did straight averaging.
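
Here is the toy check promised above, assuming only that the decomposition at fixed beta is some linear map M from pixel values to coefficients (a random matrix stands in for the real shapelet transform):

    import numpy as np

    rng = np.random.default_rng(0)
    M = rng.normal(size=(28, 64))      # n_coeffs x n_pixels, stand-in transform
    star_a = rng.normal(size=64)       # two flattened postage stamps
    star_b = rng.normal(size=64)

    avg_of_coeffs = 0.5 * (M @ star_a + M @ star_b)
    coeffs_of_avg = M @ (0.5 * (star_a + star_b))
    assert np.allclose(avg_of_coeffs, coeffs_of_avg)   # linearity: identical

    # No such identity holds for a double-Gaussian fit, whose parameters
    # (widths, positions) enter the model nonlinearly.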

Any other advantages I've forgotten?

Richard Massey
Posts: 3
Joined: June 22 2005
Affiliation: CalTech

Post by Richard Massey » June 24 2005

I would think that using a non-constant PSF model only introduces noise to the shear measurement (and no bias). (The PSF model would vary across the image due to the noise on the stellar shape measurements, so e.g. in some places the PSF would be assumed to be a bit bigger and the shears overestimated, and vice versa.) The enormous number of image simulations used for this trial would then average out this effect. So I would bet that the results would change by a negligible amount on assuming a constant PSF anyway. Anyone want to redo their analysis and prove me wrong?!
Since the individual images are finite in size, there will inevitably be edge effects if a high-order polynomial is used to fit the PSF. Noise variations in the centres of images will make the fitted shapes blow up near any less well constrained edges. This'll reduce the sensitivity to shear on average, and may even be biased. I can't say for certain, but I'd bet that it's not completely negligible. If there is any trace of this, it will affect methods using high-order fits more than methods using adaptive or low-order fits. So to make things fair, we should ask everyone to use the same order; and why not a constant PSF (or even supply the PSF)? PSF interpolation is a whole other challenge, and it'd be nice to separate it from shear measurement in future STEPs. Maybe we need a PIP.
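
A quick toy illustration of the edge effect, under my own assumptions (a 1D field, stars confined to the central region, a gentle PSF ellipticity gradient plus noise): the scatter of the fitted PSF model at the field edge grows rapidly with polynomial order.

    import numpy as np
    from numpy.polynomial import polynomial as P

    rng = np.random.default_rng(1)
    x_stars = rng.uniform(0.2, 0.8, size=40)     # stars avoid the field edges
    for deg in (1, 5, 9):
        edge = []
        for _ in range(500):
            e = 0.05 * x_stars + rng.normal(0.0, 0.01, size=40)
            c = P.polyfit(x_stars, e, deg)       # fit PSF ellipticity vs position
            edge.append(P.polyval(1.0, c))       # extrapolate to the edge x=1
        print(deg, np.std(edge))                 # scatter grows with the order
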
I would be interested to know whether in practice it is as important to use high orders for the stars as it is for the galaxies: e.g. could you equally well get away with 12th order for the stars and 4th order for the (pre-PSF) galaxies? And is it actually feasible to use a different number of shapelet orders for stars and galaxies?
It is indeed feasible: the truncation orders of the star and galaxy models don't need to be identical. Konrad doesn't specify the order of his stellar model, but there is no reason why he should use 12th order just because that's what he's using for the galaxies. In my implementation, I even change the truncation order from one galaxy to the next, depending roughly upon their S/N.

(The only thing to bear in mind is that you need to convolve the galaxy model in these methods: fitting convolved basis functions achieves the same end result as inverting the convolution matrix, but appears more stable. This makes the convolved model bigger than either the input galaxy or the input star, so you have to be careful to use an n_max and beta for the convolved model that are bigger than both of the inputs. In your example, you might need 14th order to represent the convolution of a 12th- and a 4th-order shapelet. I have worked out the necessary increase (it'll probably make the next shapelets paper), and the output order can be chosen arbitrarily during the convolution process, so everyone's happy.)

Altering n_max is also useful: faint galaxies typically need far fewer shapelet coefficients to be well represented than stars in space-based data, because galaxy profiles are closer to the shapelet Gaussian. I suppose the reverse is probably true from the ground. All you require is a close reconstruction, and being frugal with n_max speeds things up considerably. I think I have improved the latest version of the shapelets code so that the time it takes per object is proportional to n_max or n_max^2 (it was proportional to n_max^4), and that gain is still helpful if you're sat writing CosmoCoffee posts while you're waiting...
It is true that if you average the coefficients of a double-Gaussian fit to the two stars then the resulting PSF will not be the same as an average of the pixel values of the two images.
Good to hear that the bananas aren't slipping you up. But which components do you average together? Is one Gaussian always high and narrow, while the other is low and wide? If not, there doesn't seem to be a unique way of deciding whether Gaussian 1 from star A matches Gaussian 1 or Gaussian 2 from star B. Does it matter which way round you do it?

Another advantage of being linear? You don't need MCMC! (Or is saying that considered heretical here?)

Chris Roat
Posts: 1
Joined: June 22 2005
Affiliation: UC Davis


Post by Chris Roat » July 06 2005

Hi,
Since the individual images are finite in size, there will inevitably be edge effects if a high-order polynomial is used to fit the PSF. Noise variations in the centres of images will make the fitted shapes blow up near any less well constrained edges.
This statement applies when using a single image to do your work. However, if one is simultaneously fitting many exposures, or if one circularizes exposures prior to stacking, then it applies only to an individual exposure. Thus one has the ability to throw out pixels for galaxies that appear near the edge of CCD chips, but keep pixels for that same galaxy in other exposures where the PSF is well determined. If the exposures were adequately dithered, this would slightly lower the signal-to-noise, but circumvent any edge-fitting problems. For this reason, and to avoid forcing everyone to adopt a particular noise model from the stacking, I think it is useful to provide the data in the form of flat-fielded exposures.

As far as picking the correct basis for interpolating the PSF, I think both the double-Gaussian and the Shapelet approaches have merit. What really matters is capturing the physics that occurs, while simultaneously extracting as much information as possible that is relevant for lensing.

If the PSF has wide tails and a small core, the double-Gaussian approach might be best. If the shape variation of the tails is caused by something different from what causes the shape variation in the core, then interpolating the parameters between stars will work nicely. The double-Gaussian (multiple-Gaussian) approach can capture information on multiple, disparate length scales quite well.

If the PSF has an intricate shape, it can be captured by shapelets. For a simple example, if the PSF has a highly erratic shape but its variation is due only to focus, then one may only need to vary the length-scale ("beta") parameter across the image. And if only certain coefficients are thought to vary across the image (say, via a PCA method, or via physical knowledge of the telescope/camera), then obviously shapelets would be efficient at extracting information.

I don't advocate either way. I try to keep my software flexible enough to choose either method (and even a pixel-based PSF method). I imagine that every experiment may need to make an independent decision on what to choose. The nice thing about the STEP simulations is that they provide several common PSF features, allowing us to see which methods work well in which situations.

Cheers,
Chris
