Author’s note: Theoretical physicist Sean Carroll was recently interviewed by podcaster Alex O’Connor and asked to defend his stance that one of the most thought-provoking scientific arguments for God’s existence, the argument from cosmological fine-tuning, “is the best argument for God, but it’s still a terrible argument.” I am responding to Carroll and other critics of the fine-tuning argument in a series of posts.
Find the full series so far here.
In my last post, we considered the shortcomings of the counterevidence that Carroll adduces to undermine the fine-tuning argument for theism. The purpose of the next few installments is to clarify the Bayesian methodology for reasoning about the fine-tuning evidence because many people, Carroll included, think about it the wrong way.
Before taking apart Sean Carroll’s specific objections to the fine-tuning argument (FTA) for theism, and thereby gaining a proper sense of the argument’s power, we need a good handle on what it means to reason probabilistically about evidence for hypotheses like theism and naturalism. Carroll’s “post hoc” and “likelihood reversal” objections, as well as his preferred alternative “explanation,” the multiverse, embody assumptions about what probabilities are and how Bayesian reasoning works. If we can see that the way he conceives of the problem is deeply inadequate or fundamentally incorrect, his objections lose considerable force even before we engage them directly.
Our task for the next few articles, then, is to dismantle the standard subjectivist approach1 to Bayesian reasoning about evidence as problematic in itself and completely inadequate where the evidence from cosmological fine-tuning is concerned. To do this, we’ll use Nevin Climenhaga’s (2024) well-argued defense of epistemic probabilities2 as objective degrees of support. This way of handling the matter avoids the difficulties plaguing subjectivism and puts the evidential significance of fine-tuning in the clearest light so we can see what is at stake. We are not dealing with the happenstance of prior beliefs in a quagmire of old evidence and post hoc fabrications;3 we are confronting what the evidence objectively supports.
The Bayesian Framework
Bayesian confirmation theory4 provides us with the standard framework for evaluating how evidence bears on the truth of hypotheses. Bayes’s Theorem states:

P(H|E) = [P(E|H) × P(H)] / P(E)
Here is what the symbols mean:
H is a proposition expressing the hypothesis whose plausibility we are evaluating (e.g., a scientific model, explanatory claim, causal story, etc.).
E is the evidence, that is, the observational data or information relevant to H.
P(H) is the prior probability of hypothesis H, namely, how probable it is apart from the evidence. It captures our rational background expectations: the theoretical virtues and simplicity of the hypothesis and its coherence with established facts, not our subjective whims.
P(E|H) is the likelihood, that is, how probable the evidence E would be if H were true. It gives us a measure of how well H predicts or explains E.
P(E) is the marginal probability or evidential probability of E, namely, how probable E is independent of which hypothesis is true. This is expressed by:

P(E) = Σᵢ P(E|Hᵢ) × P(Hᵢ)

where {H₁, H₂, …, Hₙ} is an exhaustive and mutually exclusive set of hypotheses. P(E) is thus a normalization factor that ensures the probabilities add to 1, but it is often suppressed in hypothesis comparisons.5
P(H|E) is the posterior probability, that is, how probable H is after taking E into account. It is the level of epistemic support the hypothesis has after the evidence for it is evaluated.
Evidence E confirms hypothesis H when P(H|E) > P(H) — that is, when the epistemic support provided by E raises the probability of H. By algebraic manipulation of Bayes’s Theorem, this occurs whenever P(E|H) > P(E) — that is, whenever the evidence is more probable given H than it is overall.
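The equivalence between the two confirmation conditions can be illustrated with a short Python sketch. The numbers here are purely illustrative, not drawn from the fine-tuning debate:

```python
# Illustrative sketch: evidence E confirms H exactly when P(E|H) > P(E),
# which is equivalent to P(H|E) > P(H). All values are hypothetical.

p_h = 0.3              # prior P(H)
p_e_given_h = 0.8      # likelihood P(E|H)
p_e_given_not_h = 0.2  # likelihood P(E|~H)

# Marginal probability of E via the law of total probability over {H, ~H}
p_e = p_e_given_h * p_h + p_e_given_not_h * (1 - p_h)  # ≈ 0.38

# Posterior via Bayes's Theorem
p_h_given_e = p_e_given_h * p_h / p_e  # ≈ 0.632

# The two confirmation conditions rise and fall together
print(p_h_given_e > p_h)      # posterior exceeds prior
print(p_e_given_h > p_e)      # likelihood exceeds marginal probability
```

Running this with any coherent assignment shows the two inequalities always agree, which is just the algebraic manipulation of Bayes’s Theorem mentioned above.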
We employ the odds form of Bayes’s Theorem when we want to compare hypotheses:

P(H₁|E)/P(H₂|E) = [P(H₁)/P(H₂)] × [P(E|H₁)/P(E|H₂)]
The ratio P(E|H₁)/P(E|H₂) is called the likelihood ratio or Bayes factor. It measures how much the evidence E favors H₁ over H₂, independent of prior probabilities. The “Likelihood Principle” tells us that when this ratio is greater than 1, the evidence favors H₁; when it’s less than 1, it favors H₂; and when equal to 1, the evidence is neutral. As Collins (2009: 205n2) counsels, this principle “allows one to give a precise statement of the degree to which evidence counts in favor of one hypothesis over another.”
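The odds-form update can be expressed as a one-line computation. This is a minimal sketch with hypothetical numbers, not values from the fine-tuning case:

```python
# Sketch of the odds form: posterior odds = prior odds × Bayes factor.
# The function name and all numbers are illustrative.

def posterior_odds(prior_odds, likelihood_h1, likelihood_h2):
    """Update the prior odds for H1 over H2 by the likelihood ratio."""
    bayes_factor = likelihood_h1 / likelihood_h2
    return prior_odds * bayes_factor

# Prior odds of 1:4 against H1, but evidence 10x more likely on H1:
odds = posterior_odds(0.25, 0.9, 0.09)
print(odds)  # ≈ 2.5, so the evidence flips the odds in favor of H1
```

Note that the Bayes factor acts independently of the priors: the same evidence multiplies whatever prior odds one starts with.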
The Fine-Tuning Argument in Bayesian Terms
Luke Barnes (2020) provides a suitably rigorous Bayesian formulation of the fine-tuning argument. Let:
T = Theism: there exists a transcendent, personal, intelligent creator of the universe, i.e., God exists.
N = Naturalism: there is no such being as God nor anything like him; the origin and function of the universe is the product of impersonal laws and forces alone.
FT = Fine-tuning: the laws, constants, and initial conditions of our universe fall within the extraordinarily narrow ranges required for the existence of embodied conscious agents.
B = Background information: our general knowledge about logic, philosophical arguments, history, world religions, life experiences, the intellectual milieu, the sociology of knowledge, mathematics, etc., and the structure of physical theories, excluding the specific fine-tuning evidence.
Bayes’s Theorem allows us to express the posterior probability of theism given the fine-tuning evidence and our background knowledge. Since theism and naturalism are treated as mutually exclusive and jointly exhaustive hypotheses for our purposes,6 we can normalize the evidential probability in the denominator and express the posterior probability of theism as (cf. Gordon, 2024):

P(T|FT·B) = [P(FT|T·B) × P(T|B)] / [P(FT|T·B) × P(T|B) + P(FT|N·B) × P(N|B)]
All the factors that determine the posterior probability of theism are made explicit here: (1) the prior probability of theism on background knowledge, P(T|B); (2) the prior probability of naturalism on background knowledge, P(N|B); (3) the likelihood of fine-tuning on theism and background knowledge, P(FT|T·B); and (4) the likelihood of fine-tuning on naturalism and background knowledge, P(FT|N·B). Theories that make the fine-tuning evidence more likely have more evidential support than those that do not.
Similarly, the posterior probability of naturalism is:

P(N|FT·B) = [P(FT|N·B) × P(N|B)] / [P(FT|T·B) × P(T|B) + P(FT|N·B) × P(N|B)]
The denominators of the posteriors are identical, so we can compare them directly using the likelihood ratio approach. This is quite useful because it insulates the evidential contribution of fine-tuning from controversies over prior probabilities.
The conclusion of the fine-tuning argument is that FT confirms T over N: P(T|FT·B) > P(T|B), or equivalently, the likelihood ratio P(FT|T·B)/P(FT|N·B) is significantly greater than 1. When the Bayes factor greatly exceeds 1, then as long as the question isn’t begged by assuming P(T|B) = 0, fine-tuning strongly confirms theism over naturalism, independent of prior probabilities.
In broad outline, the fine-tuning argument for theism has two key premises:
Premise 1: P(FT|T·B) isn’t extraordinarily low.
Given the existence of a transcendent intelligent agent who can create a universe, if this agent’s intentions include or even permit the existence of embodied conscious beings, then a universe fine-tuned for such beings is not surprising. We needn’t claim that God must create such a universe, only that the purposes traditionally attributed to God are consistent with it and make it unsurprising.
Premise 2: P(FT|N·B) is astoundingly low.
In the context of naturalism, on the other hand, the values of physical constants have to be brute facts or arise from deeper physical laws that are brute facts. There is no tendency in undirected nature toward life-permitting values, nor can there be. Despite this, the fraction of parameter space permitting life is extraordinarily small — estimates range from 1 in 10^15 to 1 in 10^120 for a variety of individual parameters, to a mind-boggling 1 in 10^(10^123) for the initial entropy conditions (Penrose 2004; Barnes 2012). Hitting a target this small without being directed would be a coincidence that is, quite literally, beyond rational belief.
Conclusion: With these relative values for P(FT|T·B) and P(FT|N·B), the likelihood ratio P(FT|T·B)/P(FT|N·B) >> 1. Fine-tuning thus confirms theism over naturalism.
There is no denying the intuitiveness and the power of this argument. Consider what happens, for example, if we have a strong prior atheistic bias and set the priors as: P(T|B) = 0.01 and P(N|B) = 0.99. Suppose, given theism, we also modestly set the likelihood of fine-tuning at P(FT|T·B) = 0.5 to reflect some uncertainty about God’s purposes. Even with these very conservative assumptions, and modestly (given its actual improbability) estimating the likelihood of fine-tuning on naturalism as P(FT|N·B) = 10^-50, the posterior probability of theism is:
P(T|FT·B) = [0.01 × 0.5] / [(0.01 × 0.5) + (0.99 × 10^-50)] ≈ 1.
Even with a heavily atheistic bias, the fine-tuning evidence overwhelms the prior probabilities and ambivalent likelihood. The likelihood ratio is the crucial quantity. When it is astronomically large, the posterior probability of theism approaches certainty regardless of whether the prior probability assignments are thoroughly biased. So, we’ll be focusing on the likelihood ratio in our discussion, but we won’t forget the full posterior probability calculation in the background.
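The calculation above can be checked numerically. The following sketch reproduces it with the stated illustrative values:

```python
# Reproduces the worked example above with its stated values.
p_t = 0.01      # prior P(T|B): heavily atheistic bias
p_n = 0.99      # prior P(N|B)
lik_t = 0.5     # P(FT|T·B): modest, reflecting uncertainty about God's purposes
lik_n = 1e-50   # P(FT|N·B): a conservative estimate of fine-tuning on naturalism

posterior_t = (p_t * lik_t) / (p_t * lik_t + p_n * lik_n)
bayes_factor = lik_t / lik_n

print(posterior_t)   # ≈ 1.0
print(bayes_factor)  # ≈ 5e49
```

The Bayes factor of roughly 5 × 10^49 swamps the 99-to-1 prior odds against theism, which is why the posterior lands at (effectively) 1.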
This is the basic structure of the FTA for theism. Nonetheless, its cogency depends on how we interpret the probabilities involved and it is here that many, including Carroll, really go astray.
The Problem with Subjective Bayesianism
Contemporary Bayesian analysis is dominated by subjectivism, or as it is sometimes known, “personalism.” This is how Carroll thinks about the matter (see Carroll (2016), chapters 14 and 18). From this standpoint, probabilities represent degrees of belief, that is, psychological states of confidence that rational agents have toward propositions (de Finetti, 1937; Savage, 1954). For example, when we write P(H|B) = 0.7, we are describing a mental state in which the agent believes H to degree 0.7 relative to his background beliefs.
This much said, subjective Bayesianism does have some virtue. It gives a clear semantics for probability talk (probabilities are mental states), a clear justification for the probability axioms (Dutch book arguments show that violating them leads to guaranteed losses in betting scenarios),7 and a clear updating rule (upon learning E, update P(H|B) to P(H|E·B)). But subjective Bayesianism still faces serious difficulties when applied to the kind of reasoning involved in the FTA.
The Problem of Permissive Priors
Strict subjectivism would tell us that any coherent probability assignment is rationally permissible (Joyce, 2011). So if Carroll were to assign P(T|B) ≈ 0 and we were to assign P(T|B) ≈ 1, neither of us would be making a mistake; we would simply have different priors. While Bayesian norms constrain how we update our beliefs, they do not constrain what beliefs we start with. This is deeply problematic. Some prior probability assignments are just unreasonable — not merely idiosyncratic, but objectively defective. Someone assigning P(T|B) = 0 for no reason but prejudice isn’t just expressing a preference; he has made an epistemic error. Subjective Bayesianism struggles to capture this.
The Old Evidence Problem
A more technical difficulty concerns evidence known before a hypothesis was formulated, usually referred to as the “problem of old evidence” (Glymour, 1980: 86). To use Clark Glymour’s example, Einstein’s general theory of relativity explained the anomalous precession of Mercury’s perihelion.8 This anomaly was already known and the theory elegantly accounted for it, so this explanatory success, intuitively, was a confirmation of general relativity.
On subjectivist Bayesianism, however, there’s a problem. Since we already knew about Mercury’s anomalous precession before considering general relativity, our credence in it is 1. But for evidence E to confirm hypothesis H, we need P(H|E) > P(H), which, by Bayes’s Theorem, requires P(E|H) > P(E). If P(E) = 1, then it is already at its maximum and P(E|H) cannot exceed it. This means that evidence we already know cannot confirm any hypothesis, which threatens to undermine retrospective theory evaluation, that is, the assessment of how well a hypothesis explains evidence already in hand. The evidential support that general relativity receives from explaining Mercury’s orbit should not depend on whether we learned about it before encountering the theory versus afterward. Yet this is what subjectivism implies.
Various remedies have been suggested: counterfactual credences, that is, what we would have believed had we not known the evidence (Garber, 1983); ur-priors, that is, hypothetical credences before any empirical learning (Monton, 2006); and more. But these “fixes” are awkward and contested, suggesting that something is wrong with the underlying framework.
The Problem of Interpersonal Disagreement
Finally, when Carroll says the fine-tuning argument fails and I say it succeeds, what exactly are we disagreeing about? On subjectivism, we might merely be reporting different psychological states: Carroll’s credences are such that fine-tuning doesn’t raise his confidence in theism, whereas mine are such that it does. We’re both being coherent Bayesians, but we started in different places.
This can’t be right. Each of us is making claims about what the evidence supports, not describing our mental states. Carroll thinks fine-tuning doesn’t support theism and that it does support naturalism; I think the opposite. These are contradictory claims about an objective matter — the evidential bearing of fine-tuning on theism — not merely expressions of different psychological conditions.
These problems with subjective Bayesianism are more than mere technical curiosities because they threaten to undermine evidential reasoning altogether. An alternative interpretation that takes epistemic probabilities to be objective degrees of support resolves these difficulties and clarifies the stakes in the fine-tuning debate.
Next up: “The Objective Probability of the Fine-Tuning Evidence.”
Notes
1. The subjectivist approach is that probabilities represent our personal degrees of confidence (a psychological attitude with respect to our belief) in a hypothesis rather than the strength of the evidence supporting it. On the subjectivist view, different people can assign different probabilities to the same hypothesis without being wrong as long as they update their beliefs consistently when new evidence is presented. By contrast, objectivism maintains that there is a fact of the matter regarding how well a hypothesis is supported by background knowledge and evidence.
2. Epistemic probabilities are characterized in terms of objective degrees of support: the probability of a hypothesis given certain evidence is an objective fact about how strongly the evidence supports it.
3. The problem of “old evidence” afflicts subjectivist interpretations of probability and was first identified by Clark Glymour. If you already know a piece of evidence, your credence (degree of belief) in it is already 1, so it cannot raise the probability of any hypothesis you come to recognize that it supports (see the discussion below regarding general relativity’s prediction of Mercury’s anomalous precession). Regarding post hoc fabrications, the concern is that a hypothesis is being tailored “after the fact” to fit evidence that is already known rather than genuinely predicting it.
4. Bayes’s Theorem, which is named after the 18th-century theologian and mathematician Thomas Bayes, who derived it, tells you how to update the probability of a hypothesis when relevant evidence is discovered. Bayesian confirmation theory is the broader framework that uses Bayes’s Theorem to evaluate whether, and by how much, a piece of evidence supports a hypothesis.
5. The formula says something straightforward: to find out how probable the evidence is overall, consider every way the evidence could arise. In the formula, each hypothesis Hᵢ represents one such way. The contribution of each hypothesis to the total probability for E is calculated from how likely E would be if Hᵢ were true (the likelihood P(E|Hᵢ)), weighted by how probable Hᵢ itself is (the prior probability P(Hᵢ)). The sum of these weighted contributions across all hypotheses is called the marginal probability, P(E), the probability of E averaged over every hypothesis in the space weighted by its prior probability. This quantity functions as a normalizing constant in the context of Bayes’s Theorem by ensuring that the posterior probabilities P(Hᵢ|E) sum to 1 across the full hypothesis space, which they must for the probability distribution to be legitimate.
6. Insofar as pantheism or panpsychism attributes conscious agency to the universe itself, these views share theism’s predictive advantages regarding fine-tuning and could function as “theism-like” hypotheses in our likelihood comparisons. Since these alternatives deny transcendence and metaphysical necessity, however, they inherit naturalism’s explanatory deficits in these respects. Either way, the theism-naturalism comparison captures the essential contrast, and these alternatives do not constitute a neglected third option that would materially alter our analysis.
7. A “Dutch book argument” shows that if your subjective degrees of belief violate the standard axioms of probability theory, a clever bookie can construct a “Dutch book,” that is, a set of bets you would accept individually that would guarantee you a net loss no matter what happens.
8. Mercury’s perihelion, that is, its closest approach to the Sun, shifts slightly with each orbit. Newtonian gravitational theory couldn’t account for the value of this shift, which was an anomaly. Einstein’s general theory of relativity, however, explained it exactly, and this was taken as powerful evidence for the theory, despite the fact that the anomaly (the evidence) was known before the theory predicted it.