However unsatisfactory this might sound, it is 'physicist's intuition' that tells me that a 2 sigma deviation of a measured parameter from the standard model is not exciting.

First, there are quite a lot of parameters we could measure, and over a few years, the probability that at least one will be 2 sigma out for a non-negligible period of time is rather large.
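Just to put a number on it (illustrative only: take the two-sided Gaussian tail probability beyond 2 sigma, about 0.0455, and assume N independently measured parameters):

```python
# Sketch of the look-elsewhere effect: chance that at least one of
# N independent parameters fluctuates beyond 2 sigma.
p = 0.0455  # two-sided Gaussian tail probability beyond 2 sigma
for n in (1, 10, 20, 50):
    print(f"N = {n:2d}: P(at least one 2-sigma deviation) = {1 - (1 - p)**n:.2f}")
```

With a few dozen parameters being tracked, a 2 sigma outlier somewhere is more likely than not, before even worrying about how long it persists.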

Second, there is a large but unquantifiable probability that the parameter estimate and its error are affected by as yet undiscovered systematics or flaws in the analysis, which, when discovered, almost always erase most of the discrepancy. Small screw-ups are much more likely than significant new physics. I don't know of any good way of incorporating this fact of life into a systematic analysis... cosmologists just need to learn that 2 sigma deviations are as common as mud.

Physicists are well used to dealing with 'n sigma' or likelihood results and know from experience what they probably really mean, whereas most don't have much tested experience with Bayesian inference. Bayes should take care of the first point (the many possible deviations from a standard model) but, at least at first sight, looks less transparent to the effects of screw-ups. Sure, you can do a new calculation to consider what would happen if this or that systematic changed the data in such and such a way, but the effect isn't immediately visible. (Of course, in complicated parameter spaces nothing is immediately visible...)

After thinking a bit about my 'bootstrap' suggestion I find that it doesn't work, in that one still needs a zeroth prior (as Kate says) before any data at all is applied - and if that zeroth prior is nonsense then so is the result.

What that says to me is that Bayesian inference about a 'model' which has no physically justifiable prior is meaningless. Sounds obvious, but you can't get physics out without putting physics in.

One obvious example: there is no inflationary physics that gives an exactly Harrison-Zel'dovich (HZ) spectrum. So there is no physical justification for using a 'model' that fixes n = 1 as a point of comparison. If people were to use it just on the basis of looking simple, they would be fooling themselves to claim any physical significance for the result.

Perhaps one could reformulate it as comparing an inflationary model or class of models which predicts n-1 to be extremely small, with another (class of) model(s) in which it's distributed over a few percent... a rather more complicated question.
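To sketch what that comparison might look like (all numbers hypothetical): let model A pin n = 1 exactly, let model B spread n around 1 with a Gaussian prior of width a few percent, and take a Gaussian likelihood for a measured n_hat. The evidences then have closed forms, since a Gaussian prior convolved with a Gaussian likelihood is again Gaussian:

```python
import math

def gauss(x, mu, s):
    """Normalized Gaussian density."""
    return math.exp(-0.5 * ((x - mu) / s) ** 2) / (s * math.sqrt(2 * math.pi))

# Hypothetical numbers, for illustration only:
n_hat, sigma_obs = 0.96, 0.02   # a ~2 sigma measurement away from n = 1
sigma_prior = 0.03              # model spreading n - 1 over a few percent

# Evidence for the point model: likelihood evaluated at n = 1.
Z_point = gauss(n_hat, 1.0, sigma_obs)
# Evidence for the spread model: likelihood convolved with the prior.
Z_spread = gauss(n_hat, 1.0, math.sqrt(sigma_obs**2 + sigma_prior**2))
print("Bayes factor (spread / point):", Z_spread / Z_point)
```

The answer depends directly on sigma_prior, i.e. on what the class of models actually predicts for the spread of n - 1, which is exactly where the physics input has to come from.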

Or in astro-ph/0701338, the authors choose top-hat priors on their non-LCDM models, with quite arbitrary boundaries. 'We let w vary, assuming that it is small enough to lead to acceleration.' Model II has a flat prior between -1/3 and -1; Model III has a flat prior between -1/3 and -2. Well, why stop at -2? And why impose acceleration in the first place, which sounds suspiciously like dressing up data in the guise of a prior? The whole exercise has no useful relation to physics models of dark energy (e.g. axions) that produce sensible, non-top-hat distributions for w.
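A toy calculation shows how much the arbitrary boundaries matter (numbers hypothetical: a Gaussian likelihood for w centered well inside both top-hats). When the data constrain w tightly, each model's evidence is just the likelihood integral diluted by the prior width, so the Bayes factor between the Model II and Model III priors is set by the choice of boundary, not by physics:

```python
import math

def gauss(x, mu, s):
    return math.exp(-0.5 * ((x - mu) / s) ** 2) / (s * math.sqrt(2 * math.pi))

# Hypothetical measurement of w, for illustration only:
w_hat, sigma = -0.7, 0.05

def evidence_tophat(lo, hi, npts=2000):
    """Evidence under a flat prior on [lo, hi]: average likelihood
    over the interval (trapezoid rule)."""
    h = (hi - lo) / npts
    like = [gauss(w_hat, lo + i * h, sigma) for i in range(npts + 1)]
    integral = h * (sum(like) - 0.5 * (like[0] + like[-1]))
    return integral / (hi - lo)

Z2 = evidence_tophat(-1.0, -1/3)   # 'Model II'-style prior
Z3 = evidence_tophat(-2.0, -1/3)   # 'Model III'-style prior
# The ratio is roughly the inverse ratio of prior widths, (2/3)/(5/3) = 0.4.
print("Z_III / Z_II =", Z3 / Z2)
```

Stretching the lower boundary from -1 to -2 changes the evidence by the width ratio alone, which is the Occam penalty responding to an arbitrary choice rather than to anything a dark-energy model predicts.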

If you *have* physics models (e.g. fitting stellar spectra, supernovae...) you can compare them. If not, you shouldn't fabricate physics-free prior distributions for the purpose of making a comparison. Without meaningful models, I would argue that the best one can do is to measure numbers.

Thomas