[1002.3966] Why all these prejudices against a constant?

Authors:  Eugenio Bianchi, Carlo Rovelli
Abstract:  The expansion of the observed universe appears to be accelerating. A simple explanation of this phenomenon is provided by the non-vanishing of the cosmological constant in the Einstein equations. Arguments are commonly presented to the effect that this simple explanation is not viable or not sufficient, and therefore we are facing the "great mystery" of the "nature of a dark energy". We argue that these arguments are unconvincing, or ill-founded.

Niayesh Afshordi
Posts: 49
Joined: December 17 2004
Affiliation: Perimeter Institute/ University of Waterloo
Contact:

[1002.3966] Why all these prejudices against a constant?

Post by Niayesh Afshordi » March 01 2010

This is a provocative paper that questions the motivation behind much of the research in cosmology over the past decade, which has focused on dark energy.
In fact, it argues that there is no mystery behind having a cosmological constant as small as observations indicate.

It starts by recounting “Einstein’s biggest blunder” which, contrary to popular belief, was not the introduction of [tex]\Lambda[/tex], but rather overlooking the instability of Einstein’s static universe. It then points out that [tex]\Lambda[/tex] is a natural part of Einstein’s gravity, and is nothing to be afraid of!

The paper then goes on to the “coincidence problem”, i.e. why the dark energy and matter densities are so close today. The authors point out that the statement of the problem is vague, and eventually invoke an anthropic argument to dismiss it.

The most questionable part of the paper, however, is its dismissal of the old cosmological constant problem. While they recount the argument for why the QFT vacuum energy is 55 orders of magnitude bigger than the observed [tex]\Lambda[/tex], they dismiss it as a problem of QFT, unrelated to the dark energy problem: “It is just one of the numerous open problems in high-energy physics.”

Now, there are several reasons to take issue with this conclusion. Here is one:

All the physical theories that we know of are non-linear. Therefore, high-energy problems can and do directly affect low-energy phenomena. In fact, particle cosmology is a vibrant field that has emerged over the past thirty years, and it includes high-energy effects such as BBN, inflation, baryogenesis, etc. that affect cosmology and low-energy physics. The fact that we can now measure [tex]\Lambda[/tex] doesn’t dismiss the embarrassing predictions of high-energy models for dark energy.

The paper says:
“To trust flat-space QFT telling us something about the origin or the nature of a term in Einstein equations which implies that spacetime cannot be flat, is a delicate and possibly misleading step.”
This neglects the fact that the same reasoning would rule out almost any application of GR. For example, all the CMB anisotropy calculations would be wrong, because they use the Thomson cross-section derived from flat-space QFT.

In the end, let me just quote the last paragraph of the paper, which, in my opinion, is the most disturbing of them all:
“Why then all the hype about the mystery of the dark energy? Maybe because great mysteries help getting attention and funding. But offering a sober and scientifically sound account of what we understand and what we do not understand is preferable for science, on the long run.”

Igor Khavkine
Posts: 3
Joined: March 01 2010
Affiliation: ITF, Utrecht

Re: [1002.3966] Why all these prejudices against a constant?

Post by Igor Khavkine » March 03 2010

Niayesh Afshordi wrote: The most questionable part of the paper, however, is its dismissal of the old cosmological constant problem. While they recount the argument for why the QFT vacuum energy is 55 orders of magnitude bigger than the observed [tex]\Lambda[/tex], they dismiss it as a problem of QFT, unrelated to the dark energy problem: “It is just one of the numerous open problems in high-energy physics.”

Now, there are several reasons to take issue with this conclusion. Here is one:

All the physical theories that we know of are non-linear. Therefore, high-energy problems can and do directly affect low-energy phenomena. In fact, particle cosmology is a vibrant field that has emerged over the past thirty years, and it includes high-energy effects such as BBN, inflation, baryogenesis, etc. that affect cosmology and low-energy physics. The fact that we can now measure [tex]\Lambda[/tex] doesn’t dismiss the embarrassing predictions of high-energy models for dark energy.
Unless I'm mistaken, you are talking about the QFT prediction for the vacuum energy density obtained by summing the zero-point energies, [tex]\hbar\omega(k)/2[/tex], for all modes [tex]k[/tex] up to the inverse Planck scale. I've often wondered why this calculation is called a failed prediction, or, more importantly, why it is called a prediction at all.
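
For concreteness, here is a back-of-the-envelope numerical sketch of that mismatch. The O(1) prefactors are dropped and the observed density is only rough, so the numbers are purely illustrative; the point is simply how strongly the ratio depends on where the cutoff is placed.

[code]
# Naive zero-point estimate of the vacuum energy density with a hard momentum
# cutoff, compared to the observed dark-energy density.  Dimensional analysis
# only: O(1) prefactors (e.g. 1/16 pi^2) are dropped, values are approximate.
import math

RHO_OBS = 2.5e-47  # observed vacuum energy density, ~0.7 * critical density, GeV^4 (rough)

def rho_vac(cutoff_GeV):
    """Zero-point energy density ~ cutoff^4, in GeV^4 (schematic)."""
    return cutoff_GeV**4

for name, cutoff in [("electroweak-scale (~100 GeV)", 1.0e2),
                     ("Planck-scale (~1.2e19 GeV)", 1.22e19)]:
    ratio = rho_vac(cutoff) / RHO_OBS
    print(f"{name:28} cutoff: rho_vac / rho_obs ~ 10^{math.log10(ratio):.0f}")

# Roughly 10^55 for an electroweak-scale cutoff (the figure quoted above)
# and 10^123 (the familiar "~120 orders of magnitude") for a Planck-scale cutoff.
[/code]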

Consider the analogous calculation for the electron mass. Fix an experiment whose outcome is a single number dubbed "the electron mass". There is also a bare electron mass, which enters the QED Lagrangian. Then "the electron mass" can be theoretically calculated as the bare mass plus corrections from Feynman diagrams with loops. If the calculations are done with a momentum space cutoff, then the loop corrections are huge, tending to infinity as the cutoff is taken to infinity: depending on where the cutoff is chosen, the ratio of the loop corrections to "the electron mass" can be arbitrarily large. However, I see no one running around claiming that there is an "electron mass problem".

For the cosmological constant one obtains a divergence already at tree level, but the principle is the same. The bare cosmological constant must be chosen to depend on the cutoff in such a way that this dependence is cancelled by all the quantum corrections, in the limit of the cutoff being sent to infinity, with the result equal to the experimentally measured value. QFT does not specify what the measured value must be, just as it does not specify what the electron mass must be. Both are fixed from experimental input.
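
Schematically, with a momentum cutoff [tex]\Lambda_c[/tex] (not to be confused with the cosmological constant) and keeping only the leading cutoff dependence, the two renormalization conditions look completely parallel:

[tex]m_e^{\rm obs} = m_e^{\rm bare}(\Lambda_c) + \delta m(\Lambda_c), \qquad \delta m \sim \frac{3\alpha}{2\pi}\, m_e \ln\frac{\Lambda_c}{m_e},[/tex]

[tex]\rho_\Lambda^{\rm obs} = \rho_\Lambda^{\rm bare}(\Lambda_c) + \delta\rho(\Lambda_c), \qquad \delta\rho \sim \frac{\Lambda_c^4}{16\pi^2}.[/tex]

In both cases the bare parameter is adjusted, as a function of the cutoff, so that the left-hand side matches experiment.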

I believe that the above argument is a restatement of the last section of Rovelli's paper. So, what's so provocative about it?
Niayesh Afshordi wrote: The paper says:
“To trust flat-space QFT telling us something about the origin or the nature of a term in Einstein equations which implies that spacetime cannot be flat, is a delicate and possibly misleading step.”
Neglecting the fact that this also rules out almost any application of GR. For example, all the CMB anisotropy calculations would be wrong because they use Thomson cross-sections derived from flat-space QFT.
In the above quote, the paper probably overstates its case. It is not controversial that on particle physics scales the space-time can be approximated sufficiently well by a chunk of Minkowski space. The argument I outlined above is more to the point, and is what I think is most relevant.

Niayesh Afshordi
Posts: 49
Joined: December 17 2004
Affiliation: Perimeter Institute/ University of Waterloo
Contact:

[1002.3966] Why all these prejudices against a constant?

Post by Niayesh Afshordi » March 09 2010

I'm sorry for the late response. You're bringing up a very good point.

The difference between the electron mass and the cosmological constant (cc) is that the former depends only logarithmically on the cut-off, while the latter depends on it quartically. Therefore, the cc is much more sensitive to the cut-off, and is NOT "technically natural". In other words, if you manage to make the cc small by canceling it against a bare cc, but then change the cut-off a little, it becomes large again.

The Higgs mass in the standard model has a similar problem: it depends quadratically on the cut-off. That's why people invented supersymmetry, in which the fermion and boson contributions cancel and give the Higgs a finite mass.

Note that for both the Higgs and the cc, the problem is not the consistency of the theory, but rather that it requires extreme fine-tuning. An approximate symmetry often provides a solution in field theories, but for the case of the cc, we don't know of one.
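
To put rough numbers on this, here is a toy comparison of how much the bare parameter has to be re-tuned when the cut-off shifts by just 1%. The prefactors are schematic, the cut-off is put at the Planck scale only for definiteness, and the ~100 GeV Higgs-like mass is purely illustrative:

[code]
# Toy illustration of "technical naturalness": by how much (relative to the
# measured value) must the bare parameter be re-tuned if the cutoff changes
# by 1%?  All prefactors are schematic and the numbers are rough.
import math

ALPHA   = 1/137.0
M_E     = 5.11e-4     # electron mass, GeV
M_H     = 1.0e2       # illustrative ~100 GeV Higgs-like mass, GeV
RHO_OBS = 2.5e-47     # observed vacuum energy density, GeV^4 (rough)
CUT     = 1.22e19     # Planck-scale cutoff, GeV
EPS     = 0.01        # 1% shift of the cutoff

def dm_electron(L):   # log divergence: delta_m ~ (3 alpha / 2 pi) m ln(L/m)
    return 3*ALPHA/(2*math.pi) * M_E * math.log(L/M_E)

def dm2_higgs(L):     # quadratic divergence: delta_m^2 ~ L^2 / (16 pi^2)
    return L**2 / (16*math.pi**2)

def drho_cc(L):       # quartic divergence: delta_rho ~ L^4 / (16 pi^2)
    return L**4 / (16*math.pi**2)

for name, corr, measured in [("electron mass", dm_electron, M_E),
                             ("Higgs mass^2",  dm2_higgs,   M_H**2),
                             ("vacuum energy", drho_cc,     RHO_OBS)]:
    retune = abs(corr(CUT*(1+EPS)) - corr(CUT)) / measured
    print(f"{name:13}: bare value must shift by ~{retune:.1e} x measured value")

# Roughly 3e-5 for the electron, 2e30 for the Higgs, and 2e119 for the cc:
# the log-divergent case barely notices the cutoff, the others are hypersensitive.
[/code]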

Igor Khavkine
Posts: 3
Joined: March 01 2010
Affiliation: ITF, Utrecht

[1002.3966] Why all these prejudices against a constant?

Post by Igor Khavkine » March 13 2010

And I am in turn sorry for the late followup. :-)

I've heard this kind of argument about the problem of fine-tuning the cosmological constant and the Higgs mass. However, I'm still failing to appreciate the logic behind it, even though I'd really like to.

The way I see it, fine tuning is only a problem if you are forced to fine tune the input parameters of your model of particle physics. If I interpret what you are saying correctly, your model is a QFT with a momentum space cutoff and your input parameters are (cutoff, bare CC, bare masses, ...). To reproduce all experimental measurements while allowing for variations in the cutoff, the bare parameters must depend on the cutoff. The bare electron mass depends on log(cutoff), while the bare CC and Higgs mass depend on powers of the cutoff ((cutoff)^4 and (cutoff)^2, respectively). In this case, you are right to say that the bare CC and Higgs mass require more fine tuning than the bare electron mass.

However, here's my problem with the above approach. A QFT model cannot have all of the following properties at the same time: Poincare invariance, a space-time continuum (a manifold, not a lattice), and a finite momentum cutoff. If you introduce a cutoff, you lose the space-time continuum and Poincare symmetry.

On the other hand, since we have yet to uncover any evidence of non-continuity of space-time or of violations of Poincare symmetry (ignoring space-time curvature for the moment), I am free to build my model of particle physics as a continuum, Poincare-invariant, renormalized QFT. My input parameters are (measured CC, measured masses, ...). Since none of the input parameters can be set arbitrarily (the way the cutoff could be in the previously described model), there is no fine tuning to be done. Hence, there is no fine tuning problem.

I believe that the last paragraph describes a way to model particle physics without even an apparent fine tuning problem. On the other hand, even if you are faced with a model that does have an apparent fine tuning problem, such as QFT with a cutoff, I fail to see why this apparent problem has the magnitude that is usually ascribed to it.

I can see only one way this apparent problem could become a real one. Suppose one day we discover a way to directly, experimentally measure the quantities that enter as bare parameters into our cutoff QFT model of particle physics. If these quantities turned out to be significantly different from their theoretical values, that would be a problem: the requirement of fine tuning would not allow the theoretical values to be fudged enough to correspond to the measured ones. Then theoretical predictions would be at variance with observations, and we can all agree that's bad. But there is a major flaw in that logic: there is no evidence that we will ever have direct, experimental access to these bare parameters, or that they even exist as finite numbers!

So, have I correctly captured the argument supporting the naturalness or fine-tuning problem? If so, do you agree that there is a logical flaw in it? If we still disagree, then where do we differ?

Willard Mittelman
Posts: 2
Joined: March 14 2010
Affiliation: University of Georgia

[1002.3966] Why all these prejudices against a constant?

Post by Willard Mittelman » March 15 2010

One problem here is that there are different possible ways of avoiding fine-tuning, not all of which are compatible with each other. For example, in G.E. Volovik's "Vacuum energy: myths and reality" gr-qc/0604062v4, it is argued that the "natural" value of the cc, at least under certain ideal conditions, is zero; it is then claimed that the non-ideal conditions of the physical world lead to a "dynamics" for the cc, so that the cc is nonzero but also not a true constant. From this standpoint, determining the cc's value is just a matter of figuring out the relevant dynamics, and so there's no (clear) need for fine-tuning. In that case, though, the strategy of avoiding fine-tuning by obtaining a single value for the cc from observation is ruled out. So, the question then becomes: which strategy for avoiding fine-tuning should we adopt (and of course, there are other possible strategies besides the two just mentioned)? The problem of finding the best or correct strategy is not quite the same as "the cc problem" in its usual form, but it is a significant and challenging problem nonetheless. Bianchi and Rovelli, it seems to me, simply opt dogmatically for one particular strategy, without explaining why it is to be preferred over other approaches; I find this rather unhelpful, to say the least.

Igor Khavkine
Posts: 3
Joined: March 01 2010
Affiliation: ITF, Utrecht

[1002.3966] Why all these prejudices against a constant?

Post by Igor Khavkine » March 16 2010

The CC problem, fine tuning, naturalness: I find it difficult to distinguish qualitatively between these apparent problems. It appears to me that all proposed solutions need to rely on some new and as yet unknown physics. On the other hand, one can turn the question around and ask why these apparent problems should be problems at all. That is what I'm trying to understand. A standard argument seems to be the kind that Niayesh put forward. But, as I've already written, I don't buy it. So, if these problems are not just apparent but real, what am I missing?

Bianchi and Rovelli's paper does not propose any strategy to solve naturalness or fine tuning problems; from their point of view, these are not problems at all. Simply put, if we use continuum, relativistic QFT to model particle physics, then this theory is capable of modeling any particular value of the CC, electron mass, Higgs mass, etc. These values are fixed by comparison with scattering data, or perhaps even cosmological data. Aside from this data, we had no a priori estimates for these observables. Without a priori estimates, these values cannot suffer from fine tuning or naturalness problems.

Willard Mittelman
Posts: 2
Joined: March 14 2010
Affiliation: University of Georgia

[1002.3966] Why all these prejudices against a constant?

Post by Willard Mittelman » March 16 2010

The Volovik article that I cited does not use new physics to reach the conclusion that the cc, or vacuum energy, is zero under ("ideal") equilibrium conditions; it simply uses the known thermodynamics of macroscopic systems. Since observations indicate that the cc is nonzero, there is already a problem here. This problem can be dealt with by showing how non-ideal, real-world conditions lead to a nonzero cc; this may require some new physics, but the existence of the problem itself provides sufficient motivation for such physics. Alternatively, one could question the idea that the cc is vacuum energy, but then one is faced with the problem of explaining what the cc is. There is nothing wrong with ignoring this problem if one has other interests, of course; but I don't think one should deny the problem altogether, or deny that those who pursue the problem are doing legitimate and important work. Finally, even if one rejects Volovik's argument for the "naturalness" of a zero cc, one is still faced with the problem of explaining why the cc is exempt from thermodynamic considerations that apply in other cases.
