A Model for the Landscape
Arkani-Hamed, Dimopoulos and Kachru have a very nice paper in which they provide a simple field-theoretic model for Landscape-ish questions. In the context of this field theory, with a large number, $2^N$, of vacua, one can actually make quantitative statements about the statistical distribution of effective couplings.
They want to argue that, whereas the cosmological constant and electroweak scale can be effectively tuned for sufficiently large $N$, the other couplings do not vary significantly from their central values as one samples the large number of vacua. The point is important because any attempt to obtain an anthropic bound on one coupling (say, the cosmological constant) usually assumes that one can freeze the values of the other couplings. If they vary, too, as one samples the ensemble of values of the cosmological constant, one obtains a much weaker bound (or no bound at all).
The idea is to take $N$ real scalar fields, $\phi_i$, with a decoupled potential

$$ V(\phi_1,\dots,\phi_N) = \sum_{i=1}^N V_i(\phi_i) $$

which is the sum of general quartic potentials,

$$ V_i(\phi_i) = M^4 \left( a_i \frac{\phi_i^4}{M^4} + b_i \frac{\phi_i^3}{M^3} + c_i \frac{\phi_i^2}{M^2} + d_i \frac{\phi_i}{M} \right) $$
Here, $M$ is an ultraviolet cutoff, above which supersymmetry (or similar) kicks in. The dimensionless couplings, $a_i, b_i, c_i, d_i$, in these potentials are assumed to be chosen randomly from some probability distribution, with the obvious provisos that
- $a_i > 0$, so that the potential is bounded from below.
- the cubic $V_i'(\phi_i)$ has three distinct real roots (positive discriminant), sufficient to ensure that there are two minima.
- $|a_i|, |b_i|, |c_i|, |d_i| \lesssim 1$ (maybe we should say $\lesssim 16\pi^2$), so that we can trust the effective field theory below $M$.
Generically, each such potential will have a pair of nondegenerate minima, located at $\phi_i = \phi_i^{\pm}$, with vacuum energies

$$ \lambda_i^{\pm} = V_i(\phi_i^{\pm}) $$

It is easy to arrange that the tunnelling rate is sufficiently suppressed so that the higher of the two vacua is stable on cosmological timescales. Thus we obtain a theory with $2^N$ vacua, labelled by $\eta = (\eta_1,\dots,\eta_N)$, $\eta_i = \pm$, with randomly-chosen vacuum energies,

$$ \Lambda_\eta = \sum_{i=1}^N \lambda_i^{\eta_i} $$
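Here is a little numerical sketch of the setup (my own toy version, not from the paper — the ranges for the random couplings are illustrative assumptions): draw a random quartic, locate its two minima from the real roots of $V_i'$, and read off the pair of vacuum energies.

```python
# Toy sketch: sample a random quartic
#   V(x) = M^4 * (a x^4 + b x^3 + c x^2 + d x),   x = phi/M,
# and locate its two minima via the real roots of V'.
# Coupling ranges below are illustrative, not from the paper.
import numpy as np

rng = np.random.default_rng(0)

def random_double_well(rng):
    """Draw O(1) couplings until the quartic has two distinct minima."""
    while True:
        a = rng.uniform(0.1, 1.0)           # a > 0: bounded from below
        b, d = rng.uniform(-1, 1, size=2)
        c = rng.uniform(-1.0, -0.1)         # negative mass^2 favors a double well
        # V'(x)/M^3 = 4a x^3 + 3b x^2 + 2c x + d
        roots = np.roots([4*a, 3*b, 2*c, d])
        real = np.sort(roots[np.abs(roots.imag) < 1e-9].real)
        if len(real) == 3:                  # min, max, min
            V = lambda x: a*x**4 + b*x**3 + c*x**2 + d*x
            # the two vacuum energies, in units of M^4
            return V(real[0]), V(real[2])

pairs = [random_double_well(rng) for _ in range(5)]
for vL, vR in pairs:
    print(f"vacuum energies: {vL:+.3f}, {vR:+.3f}")
```

Generically the two energies come out nondegenerate, as advertised; with $N$ such fields one gets $2^N$ random sums of these pairs.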
Note that we’ve assumed no cross-couplings between the $\phi_i$. Even if they aren’t coupled to anything else, the $\phi_i$ couple to gravity, and 1-loop diagrams will induce cross-couplings of the form

$$ \delta V \sim \frac{V_i(\phi_i)\, V_j(\phi_j)}{M_{\text{pl}}^4} $$

However, we need to take the cutoff, $M$, sufficiently low so that the corrections to Newton’s constant are under control, and, with that proviso, the cross-coupling terms are parametrically small compared to the $V_i$.
Similarly, when we couple the $\phi_i$ to Standard Model fields, we assume that the couplings are a sum of independent (randomly-chosen) couplings, $g_i$, to the $\phi_i$, with no cross-couplings. Again, Standard Model loops induce cross-couplings which are parametrically small compared to the $g_i$.
Why are we emphasizing the absence of cross-terms? Because we want to apply the Central Limit Theorem to study the distribution of values for $g = \sum_i g_i$. In the absence of cross-couplings, the large-$N$ limit gives us a Gaussian distribution with mean

$$ \bar{g} = N \langle g_i \rangle $$

and variance

$$ \sigma^2 = N \left( \langle g_i^2 \rangle - \langle g_i \rangle^2 \right) $$

where $\langle g_i \rangle$, and $\langle g_i^2 \rangle$, indicate the ensemble average of these parameters over the probability distribution for the couplings of the $\phi_i$.
If, as is typically the case, $\langle g_i^2 \rangle \sim \langle g_i \rangle^2$ in order of magnitude, then

$$ \frac{\sigma}{\bar{g}} \sim \frac{1}{\sqrt{N}} $$

(or even much smaller if $\langle g_i \rangle^2$ dominates over $\langle g_i^2 \rangle - \langle g_i \rangle^2$). For large $N$, one doesn’t scan a large range of coupling-constant space.
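One can check this $1/\sqrt{N}$ shrinkage numerically in a few lines (my own toy ensemble — each field contributes one of two O(1) values, drawn uniformly, which is an illustrative assumption rather than anything in the paper):

```python
# Monte Carlo check that a coupling built as a sum of N independent O(1)
# contributions has relative spread sigma/mean ~ 1/sqrt(N) across vacua.
import numpy as np

rng = np.random.default_rng(1)
ratios = []
for N in (10, 100, 1000):
    # each field contributes g_i^+ or g_i^-, both O(1) (illustrative ranges)
    g_plus  = rng.uniform(0.5, 1.5, size=N)
    g_minus = rng.uniform(0.5, 1.5, size=N)
    signs = rng.integers(0, 2, size=(5000, N))   # 5000 sampled vacua
    g = np.where(signs == 1, g_plus, g_minus).sum(axis=1)
    ratios.append(g.std() / g.mean())
    print(f"N={N:5d}  sigma/mean = {ratios[-1]:.4f}  1/sqrt(N) = {N**-0.5:.4f}")
```

The relative spread tracks the $1/\sqrt{N}$ scaling (with an O(1) prefactor set by the distribution of the $g_i$), which is the sense in which most couplings are "frozen" at large $N$.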
On the other hand, for some couplings, one can sometimes arrange for $\langle g_i \rangle = 0$. For instance, consider a supersymmetric theory with superpotentials

$$ W_i = \alpha_i M^2 \phi_i + \frac{\beta_i}{3} \phi_i^3 $$

(The absence of a quadratic term can be assured by a discrete R-symmetry.) Such a theory has $\langle W_i \rangle = 0$: since $W_i$ is odd, its values at the two critical points are equal and opposite. So the range of values for the cosmological constant effectively scanned by this model is

$$ \Delta\Lambda \sim \frac{N M^6}{M_{\text{pl}}^2} $$

For sufficiently large $N$, one can find vacua which cancel the SUSY-breaking contribution to the cosmological constant, to an accuracy $\sim 2^{-N}\, N M^6/M_{\text{pl}}^2$.
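The mechanism behind that $2^{-N}$ accuracy is just that the $2^N$ sign choices $\sum_i \eta_i w_i$ densely fill out a range of width $\sim \sqrt{N}$, so the value closest to any target shrinks exponentially with $N$. A brute-force sketch (toy units, $w_i \sim O(1)$ — an illustrative assumption):

```python
# With W_eta = sum_i eta_i * w_i, enumerate all 2^N sign choices and see how
# finely the values near zero are spaced. (Brute force: small N only.)
import itertools
import numpy as np

rng = np.random.default_rng(2)
closest = {}
for N in (8, 12, 16, 18):
    w = rng.uniform(0.5, 1.5, size=N)     # w_i ~ O(1), toy units of M^3
    signs = np.array(list(itertools.product((-1, 1), repeat=N)))
    W = signs @ w                          # all 2^N vacuum values
    closest[N] = np.abs(W).min()           # best cancellation achieved
    print(f"N={N:2d}  2^N={2**N:6d}  min |W_eta| = {closest[N]:.2e}")
```

The best cancellation improves roughly like $\sqrt{N}/2^N$, the typical level spacing near the center of the Gaussian.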
They devote the latter half of the paper to discussing various models in which “most” couplings are effectively frozen, but a few (the cosmological constant, the Higgs mass, the $\theta$-parameter, …) can be fine-tuned.
I have a few quibbles with some of the details. Most revolve around taking $N \sim 10^2$–$10^3$, as is typically considered in string constructions, and as they do here.
- While we argued that the cross-couplings are parametrically small, they are not numerically much smaller than the diagonal terms for $N$ in this range. So we’re not really quite able to justify using the Central Limit Theorem.
- It’s not really quite true that $\Lambda_{\text{QCD}}$ (and other parameters related to dimensionless couplings by dimensional transmutation) are effectively frozen. For this range of $N$, the gauge coupling may vary only by a few percent, but the variation in $\Lambda_{\text{QCD}}$ is substantial. This has a big impact when one tries to argue for the tuning of the cosmological constant and the electroweak scale (respectively, from Weinberg’s argument about structure formation and from the atomic principle).
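The exponential sensitivity in that last point is just one-loop running, $\Lambda_{\text{QCD}} = M\, e^{-2\pi/b_0 \alpha_s(M)}$. A quick illustration (the cutoff and reference coupling below are made-up round numbers, not values from the paper):

```python
# Dimensional transmutation amplifies small coupling variations:
#   Lambda_QCD = M * exp(-2*pi / (b0 * alpha_s(M)))
import math

b0 = 7.0          # one-loop coefficient, 11 - (2/3)*6 for SU(3) with 6 flavors
M = 1.0e16        # cutoff scale in GeV (illustrative round number)
alpha0 = 0.025    # reference alpha_s(M) (illustrative)

Lams = []
for shift in (-0.03, 0.0, +0.03):          # a +-3% spread in the coupling
    alpha = alpha0 * (1 + shift)
    Lam = M * math.exp(-2 * math.pi / (b0 * alpha))
    Lams.append(Lam)
    print(f"alpha_s(M) = {alpha:.5f}  ->  Lambda_QCD ~ {Lam:.3e} GeV")
```

A few-percent spread in $\alpha_s(M)$ comes out amplified to nearly an order of magnitude in $\Lambda_{\text{QCD}}$, which is why "the couplings barely vary" does not imply "the transmuted scales barely vary".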
I could list some more, but I don’t want to spoil the fun of reading a really thought-provoking paper. I also encourage you to read Nima’s comment on Luboš’s blog.
Update (1/20/2005)
I led a very lively discussion, in our brown bag seminar today, on this subject. In it, Willy Fischler pointed out that there’s a radiative correction, much larger than the one mentioned above, which leads to cross-couplings in the scalar potential. One-loop diagrams (with scalars running around the loop) lead to a correction to $M_{\text{pl}}^2$ proportional to $\sum_i \phi_i^2$. When you do the Weyl rescaling so that the Einstein-Hilbert term has its canonical coefficient, this introduces a correction to the scalar potential of the form

$$ \delta V \sim \frac{\sum_j \phi_j^2}{M_{\text{pl}}^2} \sum_i V_i(\phi_i) $$
which is not parametrically small (down only by a factor of $\sum_j \phi_j^2/M_{\text{pl}}^2 \sim N M^2/M_{\text{pl}}^2$) compared to $\sum_i V_i(\phi_i)$. This really does seem to cast a cloud over the radiative stability of the decoupled form of the scalar potential, and hence on the applicability of the Central Limit Theorem.
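The form of this correction follows from the standard Weyl-rescaling algebra (the O(1) coefficient $c$ of the induced $\phi^2 R$ term is schematic):

```latex
% One-loop scalar diagrams induce
%   \mathcal{L} \supset \tfrac{1}{2}\bigl(M_{\rm pl}^2 + c\,{\textstyle\sum_i}\phi_i^2\bigr) R \;-\; V(\phi)\,.
% Rescaling g_{\mu\nu} = \Omega^{-2}\hat{g}_{\mu\nu}, with
%   \Omega^2 = 1 + c\,{\textstyle\sum_i}\phi_i^2/M_{\rm pl}^2\,,
% restores the canonical Einstein--Hilbert coefficient, while the potential term becomes
\sqrt{-g}\; V \;=\; \Omega^{-4}\sqrt{-\hat{g}}\; V
 \;\simeq\; \sqrt{-\hat{g}}\left( V \;-\; \frac{2c}{M_{\rm pl}^2}\sum_j \phi_j^2 \,\sum_i V_i(\phi_i) \right)
```

so the cross-coupling is suppressed only by $\phi^2/M_{\text{pl}}^2$, with no extra powers of $M^2/M_{\text{pl}}^2$ or loop factors beyond the one already present in $c$.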