We just had a Cosmo Lunch on this paper along with the related Visser gr-qc/0309109 and Alam et al. astro-ph/0303009, so many of these comments aren't really mine, but I'll take credit and blame equally.
The "kinetics" push isn't quite justifiable. Blandford's big push seems to be that these parameters are the "right" ones to use for testing dark energy's deviation from a basic lambda universe. The general reaction from the theorists' side was that one basis of parameters was (almost) as good as another and all reveal some_
kind of bias; from the observer's side it was pointed out that the data will in some sense tell you what parameters are good or bad.
We discussed that if the quality of the data is good enough, your priors won't matter. Right now, Lambda CDM is good enough for the data and Occam plays a bit of a role (think about Liddle's spartan approach in astro-ph/0401198 to the number of parameters you need, and apply it à la Bassett et al. astro-ph/0407364); when the data gets better, theorists can parametrize all they like; the data will sort it all out.
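For the Occam point, here's a toy sketch of the information-criterion bookkeeping Liddle advocates (the chi-squared numbers and sample size below are invented for illustration, not taken from any of these papers): each extra dark-energy parameter has to buy enough improvement in fit to beat its penalty.

```python
# A toy sketch of Liddle-style model selection. The BIC charges ln(N) per
# free parameter, so extra dark-energy parameters must earn their keep in
# goodness of fit. All numbers below are hypothetical.
import math

def bic(chi2, n_params, n_data):
    """Bayesian Information Criterion: chi^2 + k ln(N); lower is better."""
    return chi2 + n_params * math.log(n_data)

n_sn = 157  # hypothetical SNIa sample size

print(bic(chi2=178.0, n_params=1, n_data=n_sn))  # Lambda CDM (Omega_m only)
print(bic(chi2=176.5, n_params=3, n_data=n_sn))  # w0-wa model: better fit,
                                                 # but worse BIC overall
```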
So I want to hear a convincing argument that expanding around a(t) is the right thing to do, and I have yet to hear one. All the hoopla about avoiding a theoretical bias seems dodgy: this approach avoided _some_ biases about quintessence and the like, but brings other biases with it. I think Alam et al. did one better by showing, from a SNAPpish simulation of Lambda CDM SNIa's, that you could potentially rule out classes of theories using jerk (and something they call "s", which is _not_ the fourth derivative, snap). This is also probably the right way to ask the question.
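To see why the pair (j, s) is useful, here's a minimal sympy sketch (mine, not Alam et al.'s code) using the standard definitions of the deceleration and jerk parameters: for flat Lambda CDM the jerk is identically 1 and s is identically 0, so any measured departure from (1, 0) cuts away whole classes of models.

```python
# A minimal sympy sketch of the jerk/statefinder diagnostic for flat
# Lambda CDM. Uses q = -1 - (1/2) dln(H^2)/dln(a), j = q + 2q^2 - dq/dln(a),
# and the statefinder s = (j - 1) / (3(q - 1/2)).
import sympy as sp

a, Om = sp.symbols('a Omega_m', positive=True)

# Flat Lambda CDM expansion history: E^2 = H^2/H0^2 = Om/a^3 + (1 - Om)
E2 = Om / a**3 + (1 - Om)

dlnE2 = sp.diff(E2, a) * a / E2              # dln(H^2)/dln(a)
q = -1 - sp.Rational(1, 2) * dlnE2           # deceleration parameter q(a)
j = q + 2*q**2 - sp.diff(q, a) * a           # jerk parameter j(a)
s = (j - 1) / (3 * (q - sp.Rational(1, 2)))  # Alam et al.'s statefinder s

print(sp.simplify(j))  # -> 1, identically, for any a and Omega_m
print(sp.simplify(s))  # -> 0, identically
```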
Visser's point was that you can't linearize w(t) without considering the jerk. He doesn't quite phrase it this way, but this gets to the point: w(t) = (2q(t) − 1)/3. Linearize w and you have to linearize q (the deceleration parameter, built from the second derivative of a(t))... hence you pick up j.
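Spelling that chain of derivatives out (my reconstruction in standard notation, assuming flatness as in Visser's setup; not a quote from his paper):

```latex
% Standard definitions: H = \dot{a}/a, deceleration q = -\ddot{a}\,a/\dot{a}^2,
% jerk j = \dddot{a}\,a^2/\dot{a}^3. For a flat FLRW universe with total
% equation of state w:
\begin{align}
  w(t) &= \frac{2q(t) - 1}{3}, \\
  \dot{q} &= H\left(q + 2q^2 - j\right), \\
  \dot{w} &= \frac{2}{3}\,\dot{q} = \frac{2H}{3}\left(q + 2q^2 - j\right).
\end{align}
% So any linear (or higher) term in w(t) drags the jerk along with it.
```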
He then points out (though he may have just chosen a really bad parametrization for w) that, since the (now very slightly dusty) SNIa data don't constrain j very well, you don't get much of a constraint on w'. I don't think this should have been a huge surprise: all we have is a set of fairly discrete data points, and the more derivatives you take, the noisier things get... This is a theorist finding out what Bassett and friends knew: that even linearizing w is tricky with current data.
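As a toy illustration of that noise amplification (entirely my own, not from any of these papers): finite-difference a smooth curve sampled with 1% noise and watch each successive derivative drown.

```python
# Why higher derivatives of discretely sampled data get noisy: each
# finite-difference pass amplifies the noise by roughly 1/dt.
import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 200)
dt = t[1] - t[0]

truth = np.sin(2 * np.pi * t)                 # a smooth "a(t)"-like curve
data = truth + rng.normal(0.0, 0.01, t.size)  # 1% measurement noise

d1 = np.gradient(data, dt)  # 1st derivative ("H"-like)
d2 = np.gradient(d1, dt)    # 2nd derivative ("q"-like)
d3 = np.gradient(d2, dt)    # 3rd derivative ("jerk"-like)

w = 2 * np.pi
for order, (d, exact) in enumerate([(d1, w * np.cos(w * t)),
                                    (d2, -w**2 * np.sin(w * t)),
                                    (d3, -w**3 * np.cos(w * t))], start=1):
    rms = np.sqrt(np.mean((d - exact)**2))
    print(f"derivative {order}: rms error {rms:.2g} vs amplitude {w**order:.2g}")
```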
Any other comments?
P.S. Using jerk as a parameter predates these papers (Harrison, Nature v. 260, 1976), though his motivation was different: he was trying to avoid using the energy densities, since they were so difficult to measure precisely. Then again, as one of us pointed out, Zeldovich probably suggested it in passing during an undergraduate lecture in the '60s.