
The anthropic principle, also known as the "observation selection effect", is the hypothesis, first proposed in 1957 by Robert Dicke, that the range of possible observations that could be made about the universe is limited by the fact that observations could happen only in a universe capable of developing intelligent life. Proponents of the anthropic principle argue that it explains why the universe has the age and the fundamental physical constants necessary to accommodate conscious life, since if either had been different, no one would have been around to make observations. Anthropic reasoning is often used to deal with the idea that the universe seems to be finely tuned for the existence of life.

There are many different formulations of the anthropic principle. Philosopher Nick Bostrom counts them at thirty, but the underlying principles can be divided into "weak" and "strong" forms, depending on the types of cosmological claims they entail. The weak anthropic principle (WAP), as defined by Brandon Carter, states that the universe's ostensible fine tuning is the result of selection bias (specifically survivorship bias). Most such arguments draw upon some notion of the multiverse for there to be a statistical population of universes from which to select. However, a single vast universe is sufficient for most forms of the WAP that do not specifically deal with fine tuning. Carter distinguished the WAP from the strong anthropic principle (SAP), which considers the universe in some sense compelled to eventually have conscious and sapient life emerge within it. A form of the latter known as the participatory anthropic principle, articulated by John Archibald Wheeler, suggests on the basis of quantum mechanics that the universe, as a condition of its existence, must be observed, thus implying one or more observers. Stronger yet is the final anthropic principle (FAP), proposed by John D. Barrow and Frank Tipler, which views the universe's structure as expressible by bits of information in such a way that information processing is inevitable and eternal.

Source: Anthropic principle - Wikipedia

In other words, when theists assert that the extraordinary fine-tuning of the fundamental constants of the universe, facilitating life, demands a theistic explanation, proponents of the anthropic principle often counter that such fine-tuning is unsurprising—after all, we were bound to exist within a universe capable of sustaining life, otherwise we wouldn't have been here to contemplate it.

Is the Anthropic Principle's rebuttal to the fine-tuning argument sound?

I argue it is not. Allow me to elucidate through an analogy.

The Sniper Firing Squad Analogy

Imagine a scenario where a criminal, facing the death penalty, is placed in the center of a vast arena, surrounded by 10,000 skilled snipers, each armed with a high-quality rifle boasting a 99% accuracy rate. Just before firing, each sniper meticulously ensures their equipment is in optimal condition.

If we presume each sniper operates independently, the likelihood of all 10,000 missing their target is 0.01 ^ 10,000 = (1/100) ^ 10,000 = 1 / 10^20,000. Written out in decimal notation, that is a 1 preceded by 20,000 zeros.
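
Purely as a sanity check on that arithmetic, here is a minimal sketch in Python, assuming each shot is an independent 1% miss chance (the language choice is mine, for illustration only):

```python
from fractions import Fraction

# Each sniper has a 99% hit rate, so an individual miss has probability 1/100.
p_miss = Fraction(1, 100)

# With 10,000 independent shots, the probability that every one misses:
p_all_miss = p_miss ** 10_000

# This is exactly 1 / 10^20,000, far too small for floating point,
# so we keep it as an exact fraction and just inspect the exponent.
assert p_all_miss == Fraction(1, 10 ** 20_000)
print(len(str(p_all_miss.denominator)) - 1)  # prints 20000
```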

As all 10,000 snipers take aim and fire upon command, the criminal, anticipating his demise, is astounded to find himself unscathed, with all bullets narrowly missing their mark, hitting nearby points on the ground around him.

In disbelief, he exclaims, "How is this possible? I should be dead. This must have been by design. Someone must have intervened or planned this."

In response, an advocate of the Anthropic Principle in the audience interjects, "Why the astonishment? Why seek a deeper explanation? It's simply because you exist in the universe where the snipers happened to miss. Otherwise, you wouldn't be here to pose the question."

Is this line of reasoning valid? If not, does it not undermine the objection posed by the Anthropic Principle?

Julius Hamilton
Mark

2 Answers


The fact that the probability, P, of the criminal being missed was a one preceded by 20,000 zeros does not make it more likely that some other cause was at play. In particular, it does not mean that the probability of some other cause was 1-P, i.e. 0.999... with 20,000 nines. If you think otherwise, you are confusing the probability of something happening given that it happens randomly with the probability that the cause is random.

As an example, consider a lottery in which the chance of winning the jackpot (assuming it is won at random) is one in a million. If you win, the chances were 0.000001. That does not mean that the odds of you winning in another way (by cheating, for example) were 0.999999. The odds that you won by cheating are not 1-P, where P is the probability of winning if the lottery is won fairly, so decreasing P does not make cheating more likely.
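
To make that concrete, here is a minimal Bayesian sketch of the lottery case. The prior probability of a successful cheat used below is entirely made up for illustration; only the one-in-a-million fair-win chance comes from the example above.

```python
# Hypothetical numbers: P(win | fair play) = 1e-6 as in the example,
# and an assumed (made-up) prior of 1e-7 that someone successfully cheats.
p_win_given_fair = 1e-6
p_cheat = 1e-7                # assumption, purely for illustration
p_fair = 1 - p_cheat
p_win_given_cheat = 1.0       # assume a successful cheat guarantees a win

# Bayes' rule: P(cheat | win) = P(win | cheat) * P(cheat) / P(win)
p_win = p_win_given_cheat * p_cheat + p_win_given_fair * p_fair
p_cheat_given_win = p_win_given_cheat * p_cheat / p_win

print(p_cheat_given_win)      # ~0.09 with these made-up priors
print(1 - p_win_given_fair)   # 0.999999, the mistaken "1 - P" figure
```

With different priors the posterior moves around, but it is set by Bayes' rule, not by 1 - P.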

Marco Ocram
  • This is wrong. Winning the lottery does make it more likely that you were cheating (compared to if you lost the lottery), and the more unlikely the win, the greater the chances of cheating given that you won. It does not mean the chances of cheating were 1-P. The chances of successfully cheating a lottery are very small too, one in many millions. But we may suppose the chances of cheating a lottery are relatively independent of the chances of winning, so that if the chances of winning fall below that threshold, it becomes more likely that the winner cheated than that they won fairly. – causative Mar 31 '24 at 22:28
  • In the extreme case, if there is a chance of exactly 0 of winning the lottery fairly (because it's rigged), but someone appeared with a winning ticket, then the winner definitely cheated. – causative Mar 31 '24 at 22:32

You are correct that in the sniper analogy, the extreme improbability of all the snipers missing by chance does make some other explanation considerably more likely; practically certain, in fact, given how low that probability is.

Before I go on, let me mention that the chance of a deity existing with the properties described in any specific religious tradition is also an extreme case of fine-tuning, much more extreme than any scientific theory, due to the inherent complexity of a thinking, acting being. That said:

The Anthropic Principle doesn't really justify any particular hypothesis about the universe's creation; it is only post hoc. If a hypothesis is complex (has a long minimum description length), then that hypothesis is a priori unlikely, and the Anthropic Principle does not make it any more likely.

The prior probability of a hypothesis falls exponentially as its minimum description length grows.

If a physical theory requires certain constants to be fine-tuned to particular values to obtain the universe as we see it, then every bit in those values adds to the minimum description length of the theory. And it's not enough simply to get the bits in the right range to permit intelligent life; if a theory demands that a constant take a specific measurable value, then every single bit in that value, down to the finest precision we can measure, must be counted in the description length of the theory.
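
As a rough illustration of that bit counting (the precision figure below is arbitrary, not taken from any actual physical theory): pinning a constant down to d significant decimal digits costs about d * log2(10), roughly 3.32 bits per digit.

```python
import math

# Approximate description-length cost of specifying a constant to
# d significant decimal digits: log2(10^d) = d * log2(10).
d = 40  # arbitrary precision, purely for illustration
print(d * math.log2(10))  # about 132.9 bits added to the theory
```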

Solomonoff's theory of inductive inference is the ultimate word on how we should reason about hypotheses about the universe's creation. Roughly speaking (simplifying a bit), if M is the minimum description length of a hypothesis, the prior probability of that hypothesis is approximately A·k^(-M) for some base k and constant A. Then you check, for each of the infinitely many possible hypotheses, whether it exactly matches observations; if it doesn't match, you cross it out. Then you sum up the probability mass of all the hypotheses not crossed out to get a normalizing constant Z. The posterior probability of a surviving hypothesis is then A·k^(-M) / Z. And roughly speaking, in practice, the hypothesis with minimum M wins and gets a probability near 1, while all other hypotheses lose and get probabilities near 0.
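
Here is a toy version of that procedure. The hypothesis names, description lengths, and match flags are all invented for illustration, the enumeration is finite rather than infinite, and k is fixed at 2 with the constant A absorbed into the normalization:

```python
from fractions import Fraction

# Toy Solomonoff-style update: prior mass proportional to 2^(-M),
# kept only for hypotheses that exactly match observations.
hypotheses = {
    "T  (physics with fine-tuned constants)": (2_000, True),
    "T' (physics with the wrong constants)":  (1_800, False),
    "G  (deity specified in full detail)":    (10_000, True),
}

# Unnormalized prior mass 2^(-M) for each matching hypothesis,
# kept as exact fractions because 2^-2000 underflows ordinary floats.
unnormalized = {
    name: Fraction(1, 2 ** m)
    for name, (m, matches) in hypotheses.items()
    if matches
}
Z = sum(unnormalized.values())                  # normalizing constant
posterior = {name: w / Z for name, w in unnormalized.items()}

for name, p in posterior.items():
    print(name, float(p))
# T takes essentially all the posterior mass (prints 1.0): 2^-2000 dwarfs
# 2^-10000, so G's share is about 2^-8000 and prints as 0.0.
```

The shortest matching hypothesis dominates, which is the winner-takes-nearly-all behaviour described above.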

So, it all comes down to whether M_T, the minimum description length for a scientific theory T, is shorter or longer than M_G, the minimum description length for a thinking deity. If T requires fine-tuned constants, then that lengthens M_T and hurts it in comparison to M_G. But M_G is likely very, very long; how long would a computer program have to be to let you simulate a human being? And it's not enough to specify just any intelligent being; M_G has to be a specification of an intelligent being that would produce the exact universe we observe. So if M_T is still under a few kilobytes, then T probably still wins, by a landslide.

causative
  • @ScottRowe There isn't just one God Hypothesis - there are an infinite number of them. Each would be a computer program that exactly generates a complete universe, and that includes as part of its code a specification for a God entity that does the rest. Some of them (an infinite number, actually) do exactly match observations, because they fine-tuned the properties of the God entity so it does generate exactly what we observe. So it's not a question of whether the God hypothesis matches observations, but about how long the shortest God hypothesis that matches observations is. – causative Mar 31 '24 at 23:35