[0802.1528] Bayesian Galaxy Shape Measurement for Weak Lensing Surveys -II. Application to Simulations

Authors:  T. D. Kitching, L. Miller, C. E. Heymans, L. van Waerbeke, A. F. Heavens
Abstract:  We extend the Bayesian model-fitting shape measurement method presented in Miller et al. (2007) and use the method to estimate the shear from the Shear TEsting Programme (STEP) simulations. The method uses a fast model-fitting algorithm with realistic galaxy profiles that analytically marginalises over the position and amplitude of the model by performing the model fitting in Fourier space. This is used to find the full posterior probability in ellipticity, so that the shear can be estimated in a fully Bayesian way. The Bayesian shear estimation allows measurement bias arising from the presence of random noise to be removed. In this paper we introduce an iterative algorithm that can be used to estimate the intrinsic ellipticity prior, and show that this is accurate and stable. By using the method to estimate the shear from the STEP1 simulations we find the method to have a shear bias of m ~ 0.005 and a variation in shear offset with PSF type of sigma_c ~ 0.0002. These values are smaller than for any method presented in the STEP1 publication that behaves linearly with shear. Using the method to estimate the shear from the STEP2 simulations we find that the shear bias and offset are m ~ 0.002 and c ~ -0.0007 respectively. In addition we find that the bias and offset are stable to changes in the magnitude and size of the galaxies. Such small biases should make cosmological constraints from future weak lensing surveys robust to systematic effects in shape measurement.

Discussion related to specific recent arXiv papers
Douglas Applegate
Posts: 9
Joined: October 19 2007
Affiliation: Kavli Institute for Particle Astrophysics & Cosmology (KIPAC)

[0802.1528] Bayesian Galaxy Shape Measurement for Weak Lens

Post by Douglas Applegate » February 15 2008

Lensfit seems to be an exciting methodology breakthrough for weak lensing. Even if the improvements on the STEP simulations reported here turn out to be incorrect, I think this Bayesian approach opens the door to a more robust reporting of errors in WL methods.

That being said, I do have some questions/comments that I hope the community (or the authors) could shed some light on.

1.) Given a catalog of galaxies, this method seems to be able to fit only a constant shear. With the use of expectation values to fit g, how would this method be expanded to measure a varying shear field over an image?

2.) Fitting the prior from the data seems reasonable. I think in the stats world this is known as a 'hierarchical model.' However, the A-D variables in the paper's prior would need some defined 'hyper-prior' to be fully Bayesian; hyper-priors tend to be simple distributions. Since the prior parameters are nuisance parameters that are irrelevant to the measurement, they would need to be marginalized over (a schematic of what I mean follows below). That doesn't seem to be done in this paper.
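To spell out what I mean (this is just the generic hierarchical-model bookkeeping, with theta standing for the A-D parameters of the prior's functional form, not anything specific to lensfit):

P(theta | data) \propto P(theta) \prod_i \int P(y_i | e_i) P(e_i | theta) de_i
P(e | data) = \int P(e | theta) P(theta | data) dtheta

i.e. the hyper-parameters are integrated out rather than fixed at a single best-fit value.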

3.) The method seems to depend on a training sample to extract the prior for the shape measurement (as opposed to a hierarchical model). If this method really does need a training set, can it not be applied to current datasets?
The paper also mentions that future surveys would have a 'medium depth' auxiliary survey outside the main cosmic shear survey to allow for this training. How would this work? Don't we expect cosmic shear in this training sample? Also, wouldn't galaxy shape evolution, such as the change in the fraction of barred spirals, be neglected in a shallower sample?

4.) Is there an advantage to working with the expectations of the individual galaxy posterior probability functions? A more Bayesian approach is to carry along the posteriors. Then <e> = \int e \prod_i P(e|y_i) de, and g would be fit as an explicit parameter in the statistical model (see the sketch below).
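Schematically, with g as an explicit parameter and P(e | g) the intrinsic ellipticity distribution transformed by the shear, I have in mind something like

P(g | data) \propto P(g) \prod_i \int P(y_i | e_i) P(e_i | g) de_i

(again, just a sketch of what I mean by carrying the posteriors along, not a claim about what is computationally practical).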

5.) Is it fair to compare this method's m & c values with those of the other methods reported in STEP1 & STEP2? The authors used extra information (the identity of the unsheared images) that was unavailable during the STEP competitions. If the prior were inferred from the data as described above in point (2), then I would expect the results to be degraded to some degree, or at the very least to have larger error bars.

Good paper, and I look forward to any responses.

Cheers,
Douglas Applegate

Thomas Kitching
Posts: 9
Joined: September 14 2005
Affiliation: University of Edinburgh

[0802.1528]

Post by Thomas Kitching » February 16 2008

Thanks for the post on our paper!

We believe that this is a genuinely new and promising way forward in the shape measurement problem, and that by using a technique such as lensfit the concern that shear bias may limit future weak lensing surveys could be eliminated.

For more information please visit http://www.physics.ox.ac.uk/lensfit, where publicly available code will soon be posted for download.

We do not expect that the results presented will turn out to be incorrect. We have extensively tested the lensfit pipeline and found it to be robust and accurate.

To answer your specific points:

1) lensfit can measure the shear across any image in the same way as any other shape measurement method. The shear can be estimated on a galaxy-by-galaxy basis from the sensitivity-weighted <e>, so, exactly as with every other shape measurement method, one could choose apertures, measure the shear in each aperture, vary the aperture scale, and so on (a schematic example is given at the end of this answer).

Furthermore, since the entire posterior in ellipticity is determined, the probability distribution of the shear, p(g), could be estimated fully and directly by convolving the ellipticity distributions from the individual galaxies.
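As a purely schematic illustration of the per-aperture estimate mentioned above (this is not the lensfit pipeline; it simply assumes each galaxy has already been assigned a posterior-mean ellipticity (e1, e2) and a shear sensitivity s = d<e>/dg by the shape-measurement stage):

import numpy as np

def aperture_shear(x, y, e1, e2, s, centre, radius):
    """Sensitivity-corrected mean shear from galaxies inside one aperture."""
    x, y, e1, e2, s = map(np.asarray, (x, y, e1, e2, s))
    inside = np.hypot(x - centre[0], y - centre[1]) < radius
    norm = np.sum(s[inside])          # sum of shear sensitivities
    g1 = np.sum(e1[inside]) / norm    # sensitivity-corrected mean e1
    g2 = np.sum(e2[inside]) / norm
    return g1, g2

Sliding the aperture centre across the image (and varying its radius) then gives a crude map of the varying shear field.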

2) Yes, we could have used 'hyper-priors' on the functional fit to the prior, but in the case that the intrinsic ellipticity distribution is entirely unknown the only reasonable hyper-prior is a flat one, which is what we have implicitly assumed in the analysis presented in this paper.

Also, the shape of the ellipticity distribution is extremely well constrained by a large sample (i.e. the hyper-likelihoods for the parameters describing the prior are extremely sharply peaked), so we don't expect this marginalisation to make much difference.

3) This is an important point which we wish to clarify. The method does not depend on a training set: the prior can be derived from the data itself, so the method can be used on any data set. The only case in which this would break down is for *extremely* small data sets (of order a few hundred galaxies), e.g. snapshots of clusters; in that case the prior could be estimated from an external data set.

A medium-deep survey would be of use if, say, the main survey were not complete to a certain magnitude (though this is unlikely for future weak lensing experiments). In this case the deeper and more complete survey could be used to correctly estimate the prior as a function of magnitude, which could then be used for the wider survey.

In this process we are estimating the prior distribution of ellipticity, not the prior distribution of the shear. Over any one survey the shear is expected to average to zero (except in particular regions, such as near a galaxy cluster).

If one then uses the ellipticities in some way to measure a shear statistic (mean shear, shear variance, shear correlation function, shear power spectrum, map statistic) then one could/should apply priors on gamma at that point, but lensfit is a step removed from that stage of analysis.

4) There are a number of ways in which using the individual galaxy posteriors is an advantage. Firstly, any "bad" likelihood surfaces from individual galaxies can be identified and excluded from the ensemble. Secondly, in the case of multiple exposures, the likelihoods for the same galaxy from each exposure can be combined in an optimal way.

When using the individual posteriors the equation mentioned could be trivially computed, but the expression quoted for <e> isn't what one would want to calculate. We could calculate the probability distribution of the shear, p(g), from the convolution of the individual posterior probability distributions if we wished. But in the most straightforward analysis, if we don't need the full distribution of g but just its mean, we can calculate the expectation value <e> in a computationally efficient way, because the expectation value of a sum is the sum of the expectation values.
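In symbols, for N galaxies with individual posteriors p_i(e),

< (1/N) \sum_i e_i > = (1/N) \sum_i \int e p_i(e) de = (1/N) \sum_i <e_i> ,

so only the per-galaxy expectation values are needed for the mean, whereas the full p(g) would require the N-fold convolution of the individual p_i(e) (suitably rescaled).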

5) We have a number of points on this issue. Firstly in STEP2 the zero-shear images were known. To quote the README given to the original STEP2 participants:

'In each case, image #00 contains zero shear, and can be used if your shear measurement method requires an ensemble galaxy population without it.'

So for STEP2 the comparison is fair and accurate.

For STEP1 we did use this information; however, (a) the majority of other authors didn't use a prior or ensemble average, so that information was irrelevant for them, and (b) for our method in particular, the fact that the STEP shear is constant over the images is actually more difficult to deal with than the situation in real data. This is because the constant high shear values might lead to an additional slight bias in the ellipticity prior (i.e. it would not be centred on zero), which would not arise in real data (where the shear varies on small scales). This is why we chose not to use the sheared images when estimating the ellipticity distribution prior.

Cheers
Tom Kitching

Douglas Applegate
Posts: 9
Joined: October 19 2007
Affiliation: Kavli Institute for Particle Astrophysics & Cosmology (KIPAC)

[0802.1528] Bayesian Galaxy Shape Measurement for Weak Len

Post by Douglas Applegate » February 19 2008

Thanks Tom for helping me out with my questions!

Could you post here when the code is available for public consumption? It'd be a convenient way to let everyone know it's ready. I know I'd like to try it out.

If you'd indulge me, I have a few more questions about how your method works.

In the paper you mention that you convolve the galaxy model with the recovered PSF model to create an observational model for the galaxies. A PSF that varies across the field would then require that the galaxy model be convolved for every galaxy position in the field, correct? Then the prior would need to be fit with the PSF removed first. Would that be fit as additional parameters in the statistical model or would the PSF be removed explicitly?

If I'm thinking straight on this, it'd then be possible to release a measurement of the prior derived from something like the COSMOS field. Releasing that information would then allow the method to be used on cluster snapshots, since, as you said above, you expect the method to break down on small samples. Are there plans for something like this?

Cheers,
Doug Applegate

Thomas Kitching
Posts: 9
Joined: September 14 2005
Affiliation: University of Edinburgh

[0802.1528]

Post by Thomas Kitching » February 19 2008

We will certainly make a post here when the publicly available code is ready.
> In the paper you mention that you convolve the galaxy model with the recovered PSF model to create an observational model for the galaxies. A PSF that varies across the field would then require that the galaxy model be convolved for every galaxy position in the field, correct?
This is correct. We have an implementation in the code which can characterise a spatially varying PSF; this will be explained in detail in the documentation that will accompany the code. However, one could use another method to model the PSF and use it in conjunction with lensfit if that were more desirable, e.g. using shapelet coefficients, polynomials, etc.
> Then the prior would need to be fit with the PSF removed first. Would that be fit as additional parameters in the statistical model or would the PSF be removed explicitly?
This is not correct. The model is convolved with the PSF, and this PSF-convolved model is then fitted to the galaxy image (the true galaxy convolved with the PSF) to produce a likelihood for the true galaxy ellipticity. The PSF does not need to be 'removed' at any point in this process. The intrinsic ellipticity prior that is built up is then the probability distribution of the underlying galaxy ellipticities (which is independent of the PSF, provided the PSF is correctly characterised). A toy illustration of this forward-modelling idea is given below.
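To make that concrete, here is a deliberately minimal toy sketch of forward-model fitting in ellipticity. It is not the lensfit implementation (which works in Fourier space and marginalises analytically over position and amplitude); the galaxy profile, PSF and noise level are invented purely for the example.

# Toy forward-model fit: the model is PSF-convolved before comparison with the
# data, so the PSF is never "removed" from the observed image.
import numpy as np
from scipy.signal import fftconvolve

def elliptical_exponential(e1, e2, r0=3.0, n=32):
    """Toy exponential-profile galaxy image with ellipticity (e1, e2)."""
    y, x = np.mgrid[-n:n + 1, -n:n + 1].astype(float)
    xs = (1.0 - e1) * x - e2 * y          # shear the coordinate grid
    ys = -e2 * x + (1.0 + e1) * y
    img = np.exp(-np.hypot(xs, ys) / r0)
    return img / img.sum()

def gaussian_psf(sigma=2.0, n=8):
    y, x = np.mgrid[-n:n + 1, -n:n + 1].astype(float)
    psf = np.exp(-(x**2 + y**2) / (2.0 * sigma**2))
    return psf / psf.sum()

def ellipticity_likelihood(data, psf, noise_sigma, e_grid):
    """Normalised likelihood surface L(e1, e2) from PSF-convolved models."""
    logL = np.empty((len(e_grid), len(e_grid)))
    for i, e1 in enumerate(e_grid):
        for j, e2 in enumerate(e_grid):
            model = fftconvolve(elliptical_exponential(e1, e2), psf, mode="same")
            # Crude best-fit amplitude; lensfit marginalises this analytically.
            amp = np.sum(data * model) / np.sum(model**2)
            logL[i, j] = -0.5 * np.sum((data - amp * model)**2) / noise_sigma**2
    L = np.exp(logL - logL.max())
    return L / L.sum()

# Simulate one noisy, PSF-convolved galaxy and recover its ellipticity surface.
rng = np.random.default_rng(1)
psf = gaussian_psf()
truth = fftconvolve(elliptical_exponential(0.2, -0.1), psf, mode="same")
data = truth + rng.normal(0.0, 1e-4, truth.shape)
likelihood = ellipticity_likelihood(data, psf, 1e-4, np.linspace(-0.5, 0.5, 21))

The intrinsic ellipticity prior would then be built from many such likelihood (or posterior) surfaces, with no PSF deconvolution anywhere in the chain.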

For small samples one could certainly use a prior derived from a wider-field survey. To second order, one might imagine that the prior could vary as a function of magnitude, size and colour; in that case the wider survey would have to cover the same magnitude limits and colours as the smaller sample.

Cheers
Tom
