 I see how it works for the background, but does it work for perturbations as well if you assume just w=w(\rho), or do you actually need w=w(\rho,z)?
 Secondly, there are surely fine-tuning arguments: a two-fluid model with w=0 and w=-1 is surely more realistic than a single-fluid model with some complicated equation of state?
[astro-ph/0702615] The dark degeneracy: On the number and nature of dark components
Authors:  Martin Kunz (University of Geneva) 
Abstract:  We use the fact that gravity probes only the total energy-momentum tensor to show how this leads to a degeneracy for generalised dark energy models. Because of this degeneracy, Omega_m cannot be measured. We demonstrate this explicitly by showing that the CMB and supernova data are compatible with very large and very small values of Omega_m for a specific family of dark energy models. We also show that, for the same reason, interacting dark energy is always equivalent to a family of non-interacting models. We argue that it is better to face this degeneracy and to parametrise the actual observables. 

 Posts: 183
 Joined: September 24 2004
 Affiliation: Brookhaven National Laboratory
[astro-ph/0702615] The dark degeneracy: On the number and nature of dark components
This is a quite interesting paper. Basically, it claims that since gravity probes just the total energy-momentum tensor, you will never be able to see its subcomponents, and so the split into DM and DE is arbitrary: you could always replace it by a suitable variation of w.
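To make the claim concrete, here is a minimal numerical sketch (my own illustration, not code from the paper; H0 and the density fractions are placeholder values): take a total dark sector that behaves exactly like CDM plus a cosmological constant, split off an arbitrary assumed matter fraction, and assign the leftover "dark energy" the compensating equation of state.

```python
import numpy as np

# Fiducial total dark sector: CDM + Lambda (densities in units of the
# critical density today). Gravity only ever sees this total.
OM_TRUE, OL_TRUE = 0.25, 0.75
H0 = 70.0  # km/s/Mpc, illustrative

def rho_dark_total(z):
    """Total dark-sector density of the fiducial LCDM universe."""
    return OM_TRUE * (1 + z) ** 3 + OL_TRUE

def rho_x(z, om_assumed):
    """Leftover 'dark energy' after splitting off an assumed matter part."""
    return rho_dark_total(z) - om_assumed * (1 + z) ** 3

def w_x(z, om_assumed):
    """Compensating equation of state of the leftover component:
    the total pressure is that of Lambda alone, p = -rho_Lambda."""
    return -OL_TRUE / rho_x(z, om_assumed)

def hubble(z, om_assumed):
    """H(z) is identical for ANY assumed Omega_m, by construction."""
    return H0 * np.sqrt(om_assumed * (1 + z) ** 3 + rho_x(z, om_assumed))

z = np.linspace(0.0, 2.0, 50)
# Two very different assumed matter fractions, same expansion history:
assert np.allclose(hubble(z, 0.05), hubble(z, 0.45))
print(w_x(0.0, 0.05))  # w_x(0) > -1 for a small assumed Omega_m
print(w_x(0.0, 0.45))  # w_x(0) < -1 ("phantom") for a large one
```

Both splits fit any purely gravitational background observable equally well; only the labelling of the dark sector differs.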

 Posts: 44
 Joined: October 26 2004
 Affiliation: Santa Fe Institute
[astro-ph/0702615] The dark degeneracy: On the number and nature of dark components
First of all, I only kind of skimmed the paper, but I am going to say I have lots of opinions on it anyway.
I'm not sure I follow their argument for why perturbations don't help you break this degeneracy.
While it is obvious that a DM+DE universe in the homogeneous case can be "unified" as a single funny-w component, isn't there a distinct difference between a universe with two components, one with c_s=1 and one with c_s=0, and a universe with one component with c_s = something else?
Now their comment that we could let dark energy cluster and still be OK with the CMB and supernovae is well taken. This makes sense because there is essentially no dark energy at the CMB epoch (the dark sector behaves as w=0), and supernovae only probe the homogeneous cosmology.
My structure-formation intuition, however, breaks down when I consider a w=-0.5 (say, chosen to fit the H(z) observations) dark energy with c_s~0 to allow for structure formation. How does that cluster? The problem for me is that I am used to being allowed to make the Newtonian approximation below the horizon scale, but you can't do that here because the fluid's pressure is relativistic.
Can you *really* get a top-hat to collapse if it's made out of negative-w matter? I just don't see how the standard pictures get off the ground. An Omega_Lambda=100 universe is closed, but it does not collapse!
I guess this is Anze's point above: OK, you kind of want to flip the w when the density reaches a certain point, i.e. put in w=w(\rho) so that when a lump forms somewhere, w goes to zero (say). But I'm not sure that is actually guaranteed to work in all cases.
So let's see: early in the universe rho is large, so w=0. Later, the homogeneous evolution makes rho drop and w starts going negative. However, as this happens, slight overdensities have been growing, and by the time w goes negative in the background they are dense enough that they don't participate in this.
In this model, structure formation is turned off sort of "all at once". As soon as w drops past -1/3 in the background, any overdensity still (say) in the linear regime is *never* going to turn around. This is in contrast to the standard LCDM model, where structure formation is slowed but not stopped during dark-energy domination.
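To spell out the threshold invoked here (a standard FRW result, not from the paper): a top-hat overdensity evolves like a closed sub-universe with

[tex]\frac{\ddot{a}}{a} = -\frac{4\pi G}{3}\,(\rho + 3p) = -\frac{4\pi G}{3}\,\rho\,(1+3w),[/tex]

so the deceleration changes sign at [tex]w = -1/3[/tex]: once the background fluid is past that, a region still in the linear regime is pushed apart rather than decelerated, and never turns around.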
I guess what I'm saying is that I see how w(\rho) can give you qualitatively the same results as LCDM for the growth function, but maybe not so much how you could get them to line up perfectly.
In the end, I don't doubt that you could come up with a sufficiently complicated "single fluid" to do all the work of DE+DM. He is right: you only measure w, \phi, \psi, \Pi, etc. on different scales, and you can always produce a model that links them all together. The standard Occam's-razor objection applies (and this, I think, is related to Anze's fine-tuning argument): you may end up with a fluid with such complicated relationships between those quantities that you give up and go to two fluids.
I wonder, though, if there is a "knock-down" argument. For example, if you believe in the DE+DM distinction, you believe that if you took a galaxy and blew it up, it would still behave like w=0 even when dispersed to background densities. For a "single fluid" adherent to account for this, he would have to say that his single fluid has some sort of hysteresis! At some point in the thought experiment of "can you ever see two fluids purely gravitationally", you can break the degeneracy through just a minimal application of common sense/scientific taste.
(But I guess that bomb would have to interact with the DM non-gravitationally!)

 Posts: 2
 Joined: February 27 2007
 Affiliation: University of Geneva
[astroph/0702615] The dark degeneracy: On the number and na
Hi Anze and Simon,
thanks for the interest! Just a few comments from my side.
The main motivation for me (and the reason why I started thinking
about it) was that we are trying to build the "most general dark
energy model", in order to eventually measure its properties somehow,
and I was not sure whether we need to include interactions between DE
and DM. It turns out we don't need to. Of course this is not the same
as saying that there are no such interactions! But the former is about
measuring the properties of the dark stuff, and the latter is rather
about model comparison.
So about Anze's first point: When you go beyond the background, you
need to specify more than [tex]w[/tex]. For the scalar perturbations, you need
to specify e.g. [tex]\delta p[/tex] and [tex]\Pi[/tex]. However, the point is that
gravity only constrains the total [tex]\delta p[/tex] and the total [tex]\Pi[/tex].
Secondly, yes, LCDM works amazingly well, apart from the tiny problem
with the value of [tex]\Lambda[/tex]. I mean, the big problem with the tiny
value. :) Instead (see Fig. 2), you could also fit the WMAP+SNLS data with
only baryonic matter and a dark energy with [tex]c_s^2=0[/tex] and a [tex]w(z)[/tex] that
tends to [tex]0[/tex] at high redshift (partially solving the fine-tuning
problem) and then goes negative (I haven't computed it, but I guess
to about [tex]w_0\approx -0.8[/tex]), maybe towards a de Sitter attractor in the
far future. Is this really a "worse model"? Also, e.g. modified
gravity models lead to quite complicated "effective" dark energies,
even though as a fundamental model they are conceptually quite simple.
So, yes, model comparison we can (and should!) do. It may be the only
way to understand the dark energy. What I wanted to point out is that
we cannot {\em measure} the properties of the dark energy without a
model, and that quite some surprises may still be waiting for us. I
also wanted to point out that we have to be quite careful with the
assumptions we make in data analysis. Using LCDM, or even using scalar
field dark energy, is a very strong assumption. (I must say, I'm
rather surprised that the latter suffices to break the degeneracy
completely; that is something I have yet to understand.)
To Simon's comment: Fundamentally, I can always divide the full
[tex]T_{\mu\nu}[/tex] into several components, or add them together. Redefining
the quantities in it then necessarily leads to the same evolution; it
is, after all, the same energy-momentum tensor, and a solution to the
full Einstein equations. However, I have absolutely no intuition about
how this would actually look in the fully nonlinear case. I think I
agree with Simon that we would expect it to look ridiculously
fine-tuned if we get the wrong split. But we just don't know. Maybe
instead the nonlinearities lead to generic behaviour for many models?
Still, the nonlinear evolution may be the best place to look in order
to rule out models; a simple model that gives the correct nonlinear
evolution is certainly a good candidate. Anyone interested in
thinking about how to do "general relativistic N-body simulations"?
(I would be!)
One also has to appreciate that already at first-order perturbation
theory one has a lot of freedom: although [tex]w[/tex] controls the coupling
to gravity, the gravitational collapse of the fluid is controlled
by [tex]\delta p[/tex]. If [tex]w\neq -1[/tex] you actually have to stabilise the dark
energy perturbations with internal pressure perturbations to evade a
catastrophic collapse due to [tex]c_s^2\sim w<0[/tex].
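A toy numerical sketch of that last point (my own illustration, not from the paper): drop the Hubble friction and gravitational source terms and keep only the pressure term of the sub-horizon mode equation, [tex]\delta'' + c_s^2 k^2 \delta = 0[/tex]. A negative [tex]c_s^2[/tex] turns acoustic oscillation into exponential blow-up.

```python
def evolve(cs2, k=1.0, steps=2000, dt=0.01):
    """Symplectic-Euler integration of delta'' = -cs2 * k^2 * delta,
    a stripped-down sub-horizon mode equation (no Hubble friction,
    no gravitational source term -- a toy model only)."""
    delta, v = 1.0, 0.0
    for _ in range(steps):
        v += -cs2 * k ** 2 * delta * dt
        delta += v * dt
    return abs(delta)

# cs^2 > 0: pressure support, the mode just oscillates (stays bounded)
assert evolve(+1.0) < 10.0
# cs^2 < 0 (the c_s^2 ~ w < 0 case): catastrophic exponential growth
assert evolve(-1.0) > 1e6
```

Internal (non-adiabatic) pressure perturbations amount to choosing an effective [tex]c_s^2 \geq 0[/tex] in this equation, which is exactly the stabilisation described above.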

 Posts: 183
 Joined: September 24 2004
 Affiliation: Brookhaven National Laboratory
Re: [astro-ph/0702615] The dark degeneracy: On the number and nature of dark components
Martin Kunz wrote: So about Anze's first point: When you go beyond the background, you
need to specify more than [tex]w[/tex]. For the scalar perturbations, you need
to specify e.g. [tex]\delta p[/tex] and [tex]\Pi[/tex]. However, the point is that
gravity only constrains the total [tex]\delta p[/tex] and the total [tex]\Pi[/tex].
Yes, sure, but can you get away without specifying time dependence? I.e. have a super-magic fluid that can be as strange as you want, but doesn't know about the cosmic clock, and still fit everything? (I always found it very strange to parametrise w as a function of z rather than \rho, anyway.)
Martin Kunz wrote: Anyone interested in thinking about how to do "general relativistic N-body simulations"? (I would be!)
I talked to several people about that. Apparently it is out of the question; it is extremely unstable, etc. People can just about manage half an orbit of one black hole around another.

 Posts: 49
 Joined: December 17 2004
 Affiliation: Perimeter Institute/ University of Waterloo
[astro-ph/0702615] The dark degeneracy: On the number and nature of dark components
The dark energy model discussed is identical to the quadratic Cuscuton model that we talked about in astro-ph/0702002, at least in terms of its background evolution. The point is that if, in addition to a cosmological constant, you have an incompressible dark component (c_s = \infty) with a quadratic potential, its density follows H^2, amounting to an effective renormalization of the Planck mass in the Friedmann equation. However, the expansion history remains unchanged, and so any geometric measurement (such as SNe or BAO) is blind to the presence of this component (i.e. [tex]H^2(z) = A + B(1+z)^3[/tex]).
In our case, this component is incompressible, and so does not cluster on small scales, similar to quintessence, leading to strong constraints (through the ISW effect, similar to Martin's Fig. 2, as well as its effect on the Lyman-alpha forest). However, if you assume c_s=0 for this component, as Martin posits, it is indeed identical to CDM + a cosmological constant. There is no need to carry out any simulation; the two are indistinguishable!
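To spell out the tracking statement above (my own summary; [tex]\alpha[/tex] is just a placeholder for the proportionality constant): a component whose density tracks [tex]H^2[/tex] can be absorbed into an effective Newton's constant,

[tex]H^2 = \frac{8\pi G}{3}\left(\rho_m + \rho_\Lambda + \rho_Q\right),\qquad \rho_Q = \alpha H^2 \;\Longrightarrow\; H^2 = \frac{8\pi G_{\rm eff}}{3}\left(\rho_m + \rho_\Lambda\right),\quad G_{\rm eff} = \frac{G}{1 - 8\pi G\alpha/3},[/tex]

so the expansion history keeps the form [tex]H^2(z) = A + B(1+z)^3[/tex] and purely geometric probes cannot see [tex]\rho_Q[/tex] at all.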

 Posts: 2
 Joined: February 27 2007
 Affiliation: University of Geneva
[astro-ph/0702615] The dark degeneracy: On the number and nature of dark components
I will have a look at the Cuscuton model, thanks for pointing this out!
Re [tex]w(z)[/tex] vs [tex]w(\rho)[/tex]: If you do model building, then one can certainly argue for demanding [tex]w(\rho)[/tex] (although I am sure one can think of different models). When trying to pin down the "dark energy" through measurements, I think it is too restrictive: one could, just as an example, have a dark energy composed of two fluids with constant [tex]w[/tex] larger and smaller than [tex]-1[/tex] (the Quintom model). A constant [tex]w[/tex] is a somewhat boring form of [tex]w(\rho)[/tex], but the resulting effective [tex]w[/tex] would cross [tex]-1[/tex], which is very hard to model with [tex]w(\rho)[/tex]. Also, if modified gravity is responsible for the late-time acceleration and you deal with a non-standard Friedmann equation (but of course you don't know that), then the resulting effective [tex]w[/tex] may also not look like a [tex]w(\rho)[/tex].
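A quick numerical check of the crossing claim (my own toy example; the 50/50 split and the values [tex]w=-0.9, -1.1[/tex] are arbitrary): two non-interacting constant-[tex]w[/tex] fluids produce an effective total [tex]w[/tex] that crosses [tex]-1[/tex], even though neither ingredient's [tex]w[/tex] depends on [tex]\rho[/tex] or on anything else.

```python
# Toy Quintom sketch: two fluids with constant w on either side of -1.
W1, W2 = -0.9, -1.1   # arbitrary illustrative equations of state
F1, F2 = 0.5, 0.5     # arbitrary present-day density fractions

def w_eff(z):
    """Effective equation of state of the combined dark energy:
    density-weighted average of the two constant-w components."""
    r1 = F1 * (1 + z) ** (3 * (1 + W1))  # dilutes slowly into the past
    r2 = F2 * (1 + z) ** (3 * (1 + W2))  # grows towards the future
    return (W1 * r1 + W2 * r2) / (r1 + r2)

assert abs(w_eff(0.0) + 1.0) < 1e-12   # crosses exactly -1 today
assert w_eff(1.0) > -1.0               # quintessence-like in the past
assert w_eff(-0.5) < -1.0              # phantom-like in the future
```

No single-fluid w(\rho) can reproduce this, since each component's density is monotonic in redshift while the effective w moves through -1.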