In the previous post I tried to explain what the cosmological constant (abbreviated as CC from now on) is, why we think we need it to explain the faintness of distant supernovae, and why galaxies and other structures would not have been able to form if the CC were substantially larger than the value required to explain the observations. I also indicated how straightforward attempts at calculating the value of the CC from first principles give nonsensical answers.
The existence of galaxies depends crucially on the value of the CC, but we have no theoretical understanding of why the CC isn't much larger than its observed value. If we accept that we need some kind of material structures in the Universe for complex life to exist, our existence then seems to balance on the edge of a knife. It is almost as if the value of the CC had been selected by a supernatural intelligence who created the Universe for us (and from there I suppose it is easy to prove that his name is Yahweh). If we think syllogisms make bad arguments look impressive, and especially if our initials are WLC, we might feel tempted to put it as follows:
1. The fine-tuning of the CC is due to either physical necessity, chance, or design.
2. It is not due to physical necessity or chance.
3. Therefore, it is due to design.
A few comments are in order here. First of all I would like to point out the emotional background of the argument. I can imagine a situation where our models of the Universe contained a parameter X whose value was completely irrelevant to the existence of life, and which had a particular, unexplained value in our Universe. If the value of X were important for some other feature of the universe, would we then feel that it made sense to invoke Yahweh as an explanation? I don't think so. So the fine-tuning argument assumes that complex life needs an explanation over and above other features of the Universe.
The second comment I want to make is that there are more parameters which, like the CC, have values that cannot be derived from theory, and where a small change from their measured values would make life impossible. I will deal with those later, but for a while I will stick to the one-parameter version of the argument, just to keep things simple. Much of what I will say will apply equally to the general case.
Finally, the list of alternatives in the premises is not exhaustive. For example, our Universe might be a result of naturalistic design (as opposed to supernatural design). That would be the case if we were part of a simulation run by some other advanced life-form on their computers. Proponents of supernatural fine-tuning will have to show that all alternatives are less probable than goddidit.
I will discuss how theists defend the second premise in later posts. What I want to do in closing here is to voice one of the concerns I have about the meaning of fine-tuning.
In the previous post we saw that in our universe the CC is equal to 1 (in suitably defined units), and I claimed that if it were larger than about 100, galaxies wouldn't have formed. I also gave the "best" theoretical estimate of the CC as 10 to the power of 120. This must be why apologists like William Lane Craig say that the CC is fine-tuned to a precision of (roughly) one in 10 to the 120. But if we accept that the CC is a tunable parameter, it could in principle take on any value between negative and positive infinity. So it is in fact tuned to infinite precision. At this point WLC's grin may widen so as to light up the entire solar system, but I tend to see this as a problem with the whole argument. Since the range of possible values is infinite, the life-permitting range of values of the CC will always be insignificant in comparison. It will always be possible to claim that the CC is fine-tuned.
A further problem is the unstated assumption that all possible values of the CC are a priori equally probable, technically referred to as a uniform prior. It could be the case that a more complete physical theory, while not allowing us to calculate the CC, predicts a probability distribution for it which is peaked around the observed value. I am not saying that this situation is likely to arise, but the theist needs to give some justification for why a uniform prior should be preferred.
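To see how much the choice of prior matters, here is a toy calculation. It is entirely my own construction: the life-permitting range [0, 100] follows the numbers used in this post, but the log-normal prior peaked near the observed value is a made-up example of the kind of distribution a more complete theory might deliver.

```python
# Toy illustration (my own construction, not from the post) of why the
# prior matters: the probability that the CC lands in the "life-permitting"
# range [0, 100] depends entirely on the assumed distribution.
import math
from statistics import NormalDist

# Uniform prior over [0, 1e120]: the life-permitting fraction is tiny.
p_uniform = 100 / 1e120
print(p_uniform)   # ~1e-118

# A hypothetical log-normal prior peaked near the observed value makes
# the same range overwhelmingly probable: P(ln CC <= ln 100) for
# ln CC ~ Normal(mean=0, sd=2).
p_peaked = NormalDist(0, 2).cdf(math.log(100))
print(p_peaked)    # ~0.99
```

Same range, same observation, wildly different verdicts on how "improbable" our Universe is. That is the work the uniform-prior assumption is quietly doing.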
So, in order to make the notion of fine-tuning precise, both a range and a probability distribution for the CC need to be specified and argued for. To me that seems to entail having some knowledge about the mechanism that got the Universe started, and that is knowledge we don't have.
Advanced Norwegian
Thursday, September 1, 2011
Sunday, August 28, 2011
The cosmological constant
To have something specific to work with I will start off with a typical example of (alleged) fine-tuning: the cosmological constant.
The Universe is expanding. Or, perhaps more accurately, a wealth of observations, including the redshifts of spectral lines from distant galaxies, are simply and elegantly explained by a model where the Universe expands.
General relativity (GR for short) is not magic. Massive objects still exert attractive gravitational forces on each other in Einstein's theory. This means that all the mass between us and a distant galaxy acts on it with an attractive force which will slow it down. The Universe expands now because it began to expand at some time in the distant past, but ever since the initial kick the expansion has been slowing down. Or so we used to think.
In 1998 two independent groups of astronomers discovered that supernovae going off in very distant galaxies looked much fainter than they were supposed to in the model where the expansion slows down. Even after correcting for possible sources of error like light absorption by dust, they still found that the supernovae were too faint. This could only mean that they were more distant than expected. The large distances could be explained if the expansion started to accelerate a few billion years ago.
But how can we have acceleration if gravity is always attractive? Well, there is a possibility for repulsive gravity in GR. Actually, you can have a repulsive gravitational force in Newtonian theory as well, and I will try to explain how.
Newton's law of gravity may be familiar to you. It states that the gravitational force between two point masses is attractive, proportional to the product of their masses, and inversely proportional to the square of the distance between them. How did Newton derive it? Well, he made an educated guess.
One thing Newton had to explain was Kepler's laws of planetary motion, including the first one which states that the planets move in elliptical orbits with the Sun at one of the foci. A force between the Sun and the planets falling off as 1 over the distance squared turns out to give exactly this result.
An important thing to bear in mind is that point masses, masses without spatial extension, do not exist in the real world. So what is the gravitational force between two real, solid objects? In general this question is hard to answer. We can divide the objects into pieces small enough to be considered point masses, use Newton's law of gravity, and sum up all their contributions. Newton worried a lot about this problem, and it was one of the reasons why he had to invent calculus. He found that there is one case where the solution is simple and elegant. When the two bodies are spherical, which planets to a good approximation are, the gravitational force between them is the same as if they were point masses with all the mass concentrated in their respective centres. So the force will fall off as 1 over the distance squared. In turn this means that Newton's law of gravity explains Kepler's first law for real planets, not just point-like ones.
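This "divide into pieces and sum" procedure can actually be checked numerically. The sketch below is my own illustration, not something from the post: it fills a unit sphere with mass elements by Monte Carlo sampling, adds up their inverse-square pulls on an outside point, and compares the total to the pull of an equal point mass at the centre, in units where G and the total mass are 1.

```python
# Numerical check of Newton's shell-theorem result (a sketch, my own
# illustration): the pull of a solid sphere on an outside point equals
# that of a point mass at its centre. Units with G = M = 1 assumed.
import random

def sphere_force(d, n=100_000, seed=1):
    """Axial force at distance d from the centre of a unit-mass,
    unit-radius solid sphere, by Monte Carlo over its volume."""
    rng = random.Random(seed)
    total = 0.0
    count = 0
    while count < n:
        # sample a point uniformly in the unit ball by rejection
        x, y, z = (rng.uniform(-1, 1) for _ in range(3))
        if x*x + y*y + z*z > 1:
            continue
        count += 1
        # vector from this mass element to the test point at (d, 0, 0)
        rx, ry, rz = d - x, -y, -z
        r2 = rx*rx + ry*ry + rz*rz
        # axial component of the inverse-square attraction
        total += rx / r2**1.5
    return total / n  # each element carries mass 1/n

d = 3.0
print(sphere_force(d), 1 / d**2)  # the two numbers nearly agree
```

The summed force matches 1 over the distance squared to within the Monte Carlo noise, which is exactly Newton's simple and elegant result.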
So the 1 over distance squared law has two important properties: it has the same form for spherical masses, and it gives elliptical orbits. Is it unique in this respect? It turns out that there is one, and only one, other possibility: a force increasing proportionally with distance. And this force can be either attractive or repulsive. Newton was aware of this, but didn't worry too much about it. And with good reason. First of all, the 1 over distance squared law could explain everything on its own. Secondly, a significant contribution from the force proportional to distance would screw up Kepler's third law which says that the square of a planet's orbital period is proportional to the cube of its mean distance from the Sun. However, a force that increases in proportion with distance could be made insignificant on Solar system scales but important on, say, extragalactic scales by choosing the constant of proportionality appropriately. So while it is not required in Newtonian theory, it cannot be excluded. There can be a repulsive contribution to gravity.
Perhaps not surprisingly we have the same possibility in GR. The repulsive force is here known as the cosmological constant. Einstein introduced it originally as a sort of intrinsic curvature of spacetime. Later on it was realised that it could also be interpreted as the energy density of empty space caused by quantum effects (zero point fluctuations) taking place even in a perfect vacuum. The vacuum energy can provide the repulsive force we need to explain the faintness of distant supernovae.
From the observations we can infer the value of the cosmological constant. For simplicity let's say that this value is equal to 1. What does theory predict? The precise answer is that there is no prediction since nobody knows how to calculate the energy of the quantum vacuum. A straightforward, but naive way of calculating it is to sum up the contributions from all fluctuations. There are fluctuations of different frequencies, and their energy is proportional to the frequency. The frequency can in principle be arbitrarily large, and then the total vacuum energy is formally infinite, which is obviously nonsense.
One can try to improve on the naive estimate (which, given that it gives a ridiculous result, isn't that hard to do) by assuming that there is a smallest length scale in nature. If space is not infinitely divisible, there will be a lower limit to the wavelength of the fluctuations, and therefore an upper limit on their frequency. Physicists have long speculated that there is indeed a smallest length scale, the so-called Planck length of about 10 to the power of -33 cm. If we cut the fluctuations off at the frequency corresponding to this length, we end up with the much more "reasonable" answer of 10 to the power of 120.
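The cutoff estimate can be reproduced as an order-of-magnitude exercise. The sketch below uses my own numbers, not figures from the post: it compares the Planck density, the natural scale of a Planck-frequency cutoff, with an assumed present-day dark-energy density, and lands in the same ballpark as the 10 to the power of 120 quoted above.

```python
# Order-of-magnitude sketch (my own numbers) of the vacuum energy
# mismatch: Planck density versus observed dark-energy density.
import math

hbar = 1.055e-34   # J s
G = 6.674e-11      # m^3 kg^-1 s^-2
c = 2.998e8        # m/s

rho_planck = c**5 / (hbar * G**2)   # kg/m^3, the Planck density
rho_lambda = 6e-27                  # kg/m^3, observed value (assumed)

print(math.log10(rho_planck / rho_lambda))  # roughly 122-123
```

Depending on the conventions one gets 120-odd orders of magnitude; the exact exponent matters far less than the sheer size of the discrepancy.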
So theory tells us, if anything at all, 10 to the power of 120. Observations say 1. Quite a mismatch and a spectacular failure of theory. There are a number of ideas around about how the calculation can be improved, but we need not concern ourselves with them for the time being. Let us instead ask what the Universe would look like if the cosmological constant really were as large as the simple estimate suggests.
The answer is simple: a cosmological constant that large would make the Universe start to expand very rapidly very early, allowing no time for structures like galaxies and stars to form. The Universe would be empty and cold. In fact, this would be the case even if the cosmological constant were only as large as 100.
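The effect can be seen in a toy integration of the expansion equation. This is my own sketch in made-up units: the expansion rate H satisfies H squared equals a matter term diluting as 1 over the scale factor cubed plus a constant CC term, and once the CC term dominates, the scale factor blows up exponentially.

```python
# Toy Friedmann-equation sketch (my own model, arbitrary units): with
# matter alone the scale factor a(t) grows as a gentle power law; a
# large CC term takes over early and drives exponential expansion.
def expand(lam, a0=1e-3, t_end=10.0, dt=1e-4):
    """Euler-integrate H^2 = rho_m0 / a^3 + lam, with rho_m0 = 1."""
    a, t = a0, 0.0
    while t < t_end:
        H = (1.0 / a**3 + lam) ** 0.5
        a += a * H * dt   # da/dt = a * H
        t += dt
    return a

print(expand(0.0))    # matter only: modest power-law growth
print(expand(100.0))  # CC of 100: astronomically larger scale factor
```

In the second run the Universe has been stretched by dozens of orders of magnitude more than in the first by the same time, which is the quantitative content of "no time for structures to form": overdense regions get pulled apart faster than gravity can collapse them.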
So observations tell us that the cosmological constant is equal to 1, but we have no explanation for this. We know, however, that we could not live in a universe where it was much larger than 1. It is at this point that apologists say that the cosmological constant must have been fine-tuned by the creator of the Universe, and fans of anthropic arguments try to come up with a way to make sense of the vacuum energy in the context of the so-called multiverse.
I suspect that you have already thought of a number of questions and objections. So have I, but I will save them for later posts. All I wanted to do here was to provide a typical example of an allegedly fine-tuned physical parameter.
Thursday, August 25, 2011
Now, where to begin...
I'm not good at coming up with names. I think I have some evidence that my genes are to blame for this, since all my parents could think of when they had me was to name me after my dad. Finding a name for the blog has therefore been difficult for me. The ones I could think of that would actually reflect the main theme of the blog were not available. The one I ended up with only reflects the painfully obvious fact that English is my second language. It is the name students at Yale gave to Lars Onsager's course on statistical mechanics (another name they gave it was Sadistical Mechanics). I don't blame you if you don't know who Onsager was, but he is famous among physicists for finding the exact solution of the two-dimensional Ising model, a simplified model of a ferromagnet. He was awarded the Nobel prize in chemistry in 1968 for his theoretical work on the effect of temperature gradients on diffusion processes.
The main purpose of this blog is to examine the claim that the laws of physics are fine-tuned for life, and even more specifically human life. This is a claim you will find being made in religious apologetic writings and even in seemingly serious books and papers on physics and cosmology. In religious circles the fine-tuning is used to derive the existence of a deity. Physicists try to explain why it is so, or use so-called anthropic reasoning to predict certain properties of the Universe. My main contention is that both apologists and physicists make the mistake of not distinguishing between the laws of physics and the reality they are only an approximate description of. The laws of physics are models made up by humans. They are not arbitrary human constructions, since we judge their quality by how well they describe observations and experiments, but they are still, unlike the external world, things we have made up. If they appear to be fine-tuned for anything, it does not follow that the Universe is, too. And if we want to find the fine-tuner of these laws, we only have to look in the mirror.
In my posts I will try to explain my take on fine-tuning in more detail, but also review a selection of the literature on the topic. I hope they will be at least partly understandable and moderately interesting.