CosmoMC supernova data

Use of Healpix, camb, CLASS, cosmomc, compilers, etc.
Rahul Biswas
Posts: 5
Joined: September 25 2004
Affiliation: University of Illinois at Urbana Champaign

CosmoMC supernova data

Post by Rahul Biswas » March 20 2006

Hi,

The file supernovae.f90 can be replaced by supernovae_riess.f90. But is there a reason not to run them simultaneously? That would seem to be what one would want to do if they are likelihood codes for two different data sets (SCP and High-Z).

Thanks,
Rahul

Alex Conley
Posts: 11
Joined: February 08 2005

CosmoMC supernova data

Post by Alex Conley » March 25 2006

They aren't independent, as they share much of the same low redshift SN data set,
which acts to 'normalize' the data. So, don't run both simultaneously, or
your results will be incorrect.
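Alex's warning can be illustrated with a toy sketch (all numbers hypothetical): if the same low-redshift SNe enter both likelihoods, their contribution is counted twice, and the combined constraint looks artificially tight.

```python
import numpy as np

# Toy counts (hypothetical): 20 shared low-z SNe, plus 10 SNe unique to
# each of the two data sets, all unit-variance measurements of one quantity.
n_shared, n_a, n_b = 20, 10, 10

def err_on_mean(n):
    # 1-sigma error on the mean of n unit-variance measurements
    return 1.0 / np.sqrt(n)

# Correct: each SN counted exactly once
correct = err_on_mean(n_shared + n_a + n_b)

# Wrong: summing both likelihoods counts the shared SNe twice
wrong = err_on_mean(2 * n_shared + n_a + n_b)

print(f"correct error: {correct:.3f}, double-counted error: {wrong:.3f}")
```

In this toy the double-counted error bar shrinks by roughly 20% with no extra information; since the shared low-z SNe also anchor the absolute normalization, the effect on the fitted cosmology can be worse than just an underestimated error bar.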

Sarah Bridle
Posts: 144
Joined: September 24 2004
Affiliation: University College London (UCL)

CosmoMC supernova data

Post by Sarah Bridle » March 25 2006

Yes, good point.
I guess it should in principle be possible to produce a new combined supernova file that uses the low-z data only once.
I am not sure how easy this would be in practice, though, given the information provided by the teams.

Rahul Biswas
Posts: 5
Joined: September 25 2004
Affiliation: University of Illinois at Urbana Champaign

CosmoMC supernova data

Post by Rahul Biswas » March 27 2006

Hi,

Thank you for the two replies ... I was worried about something like this. I would
be interested in the suggestion of combining the two data sets while counting the low-redshift data only once.

However, it is not clear to me how to go about that for two reasons:

If we could identify which data points are common, we could simply erase them from one of the data sets and add the log likelihoods from the two sets (though this could reduce the efficiency of the analytic marginalization procedure). So I am wondering whether it would be better to combine the data sets into one, and if so, how to convert from distance moduli to magnitudes or vice versa.

Identification of the common data points is the other problem.
There are 10 points in the two data sets where both the redshifts and the deviations match (in the redshift range 0.026-0.495). Is it reasonable to assume that these are the same data points? (In that case it would seem that the values of M used to calculate the moduli in the Riess data set vary from -18.93 or -19.34 to -19.68.) Obviously, it would be better if one could simply read off which points are common.
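The procedure described above — flag points whose redshift and modulus both match within a tolerance, drop them from one set, then add the log likelihoods — could be sketched roughly like this (the data values, tolerances, and the placeholder model are all made up for illustration):

```python
import numpy as np

# Hypothetical (z, mu, sigma_mu) triples for two overlapping data sets
set_a = np.array([[0.026, 35.20, 0.20],
                  [0.300, 41.00, 0.25],
                  [0.495, 42.30, 0.30]])
set_b = np.array([[0.026, 35.20, 0.20],
                  [0.700, 43.10, 0.30]])

def common_mask(a, b, z_tol=1e-3, mu_tol=0.05):
    # Flag rows of b whose redshift AND modulus both match some row of a
    mask = np.zeros(len(b), dtype=bool)
    for i, (z, mu, _) in enumerate(b):
        close = (np.abs(a[:, 0] - z) < z_tol) & (np.abs(a[:, 1] - mu) < mu_tol)
        mask[i] = close.any()
    return mask

dup = common_mask(set_a, set_b)
set_b_unique = set_b[~dup]          # keep each common SN only once

def chi2(data, mu_model):
    z, mu, sig = data.T
    return np.sum(((mu - mu_model(z)) / sig) ** 2)

# Placeholder distance-modulus model, NOT a real cosmological fit
mu_model = lambda z: 5.0 * np.log10(z) + 43.0

# Combined -2 ln L (up to a constant): sum the two chi-squares
total_chi2 = chi2(set_a, mu_model) + chi2(set_b_unique, mu_model)
```

As noted in the post, pruning points this way may reduce the efficiency of the analytic marginalization over the absolute magnitude, and it assumes the two sets are already on a common magnitude zero point.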

Could someone please suggest the way to proceed?

Thanks,
Rahul

Alex Conley
Posts: 11
Joined: February 08 2005

CosmoMC supernova data

Post by Alex Conley » March 29 2006

There is no easy way. Things are much more complicated than you probably think.
The different groups use different SN Ia light-curve fitting methodologies, so
fits to the same SNe give slightly different results. This washes out in the
end as long as you stick to one data set: as long as the effect of any differences
is the same for the low- and high-redshift SNe, it won't affect the cosmological results.
But combining data from different light-curve fitting methodologies is a harder
problem. There are many reasons to expect things not to be as simple as an
offset in magnitudes between the two fits.

The only way to really do this and have a reliable result would be to
refit the SN lightcurves yourself. This is possible for some of the Knop '03
data (the HST SNe) and most of the Riess '04 data, but not all of it.
I have to imagine this is a lot more than you really want to do.

If you really feel you must combine, what you really have to do is go
back to the original sources and identify the information in the data
files with the names of the SN. Then you can try to fit some sort of
offset between the fitting methods.
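For matched SNe, "some sort of offset" could in the simplest case be a single magnitude shift, fit as an inverse-variance weighted mean of the per-object differences (all numbers hypothetical; as cautioned above, a constant offset is unlikely to be adequate in practice):

```python
import numpy as np

# Hypothetical magnitudes for the SAME four SNe from two different
# light-curve fitters, with per-object uncertainties
m_fit1 = np.array([16.10, 17.30, 18.00, 19.20])
m_fit2 = np.array([16.30, 17.40, 18.30, 19.40])
sig1 = np.array([0.10, 0.12, 0.15, 0.20])
sig2 = np.array([0.11, 0.10, 0.14, 0.18])

# Per-object difference and its uncertainty (assuming independent errors)
diff = m_fit2 - m_fit1
sig = np.sqrt(sig1**2 + sig2**2)

# Inverse-variance weighted mean offset and its 1-sigma error
w = 1.0 / sig**2
offset = np.sum(w * diff) / np.sum(w)
offset_err = 1.0 / np.sqrt(np.sum(w))
```

Anything beyond this — e.g. an offset that depends on SN color or stretch — would require exactly the kind of thorough study of the fitter differences described in the next paragraph.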

However, if I were a referee handed a paper based on a technique
like that, I would probably reject it unless the author had done a
very thorough job studying the differences between the light-curve
fits, which would be a comparable amount of work to refitting them
yourself. Note that this was done in Riess '04 (he includes
SNe from Knop '03 and Perlmutter '99 without refitting them,
mostly due to the lack of published light curves), but he probably
shouldn't have done so, except possibly as a sidebar analysis
just showing the level of consistency.

In other words, I would strongly recommend not trying to combine
the data sets.

Incidentally, you should probably consider Astier '05 over Knop '03.

Rahul Biswas
Posts: 5
Joined: September 25 2004
Affiliation: University of Illinois at Urbana Champaign

CosmoMC supernova data

Post by Rahul Biswas » March 31 2006

Thank you for the clarifications. You are correct in guessing that I am not keen to get into light-curve fitting. However, I am somewhat surprised that, given the present interest in supernova Ia measurements, people well versed in light-curve fitting and the associated technology have not already done this (in the rigorous way that you describe). In fact, if I remember (and understood) correctly, I have heard in various talks that a future goal in supernova cosmology is to get data for supernovae between z = 0.3 and 0.7, as the previous data were relatively sparse in this range. Would such a plan not require this kind of combining of data sets?

Or does this simply mean that, given the relative sizes of the data sets, one would not gain substantially in cosmological information by combining them? If that is the case for these two data sets, does the situation change with the SNLS set, which is already out?

Rahul

Alex Conley
Posts: 11
Joined: February 08 2005

CosmoMC supernova data

Post by Alex Conley » March 31 2006

Not all of the lightcurve data has been published, which makes it a bit hard.
Most of the High-z team data has been, most of the SCP data hasn't.
The SNLS lightcurves have not been published, although we
(I am a member of SNLS) are hoping to change that soon.
There is also some unpublished nearby data.

One of the changes going on in the SN field is that people are trying
to move to large, homogeneous sets of data. The current samples are from
a wide variety of instruments and telescopes, and how well they can be
combined is becoming a concern for the next generation of results.
There will never be one data set from one telescope for all redshift ranges,
but most parties are hoping to converge on just a few data sets --
maybe one or two at low redshift, one or two at intermediate redshift,
and one or two at very high redshift. Each of these sets will be from
a small number of telescopes/instruments, and so combining the disparate
sets will hopefully be an easier task. So, with an eye to the future,
combining the old data sets is really not that exciting. That is, even
if all the old data were published, it seems likely that within a year or so
it would be safest to just ignore the old data and work with the new stuff,
which hopefully will all be published.

The redshift range 0.3 to 0.7 is where most of the current data sample
is -- you are probably thinking of the SDSS supernova survey, which is
trying to get SN in the range 0.1-0.3. The state of play by redshift for
current projects and their expected yield is something like
(number of SN, redshift):

KAIT: ?, nearby
SNFactory: 100+, nearby
CSP: 150?, nearby

SDSS supernova survey: ~300, 0.05-0.3

ESSENCE: ~150, 0.2-0.8
SNLS: ~500, 0.3-1.0

PANS (Hi-z team): several tens, 1.0-1.5
SCP Cluster: 5-10, 1.0-1.5

These are my reads on the likely numbers, not the
official goals of each project.

Alex
