Pascal's Wager in modern form goes something like this[1]:

A1 Either god exists or not, and we can wager on that and may be rewarded or punished for our wager if he does. The various rewards can be summed up in the following table (a "decision matrix"; r1, r2 and r3 are some finite numbers, perhaps negative to indicate punishment):
                       God Exists    God Doesn't Exist
Wager for God          R             r1
Wager against God      r2            r3
A2 Rationality requires that the probability you assign to the existence of god be finite and greater than zero.
A3 Rationality requires that you act to maximize the expected utility.

If one assumes that R is infinite, the conclusion that you should wager for god follows, regardless of how small a probability (p) you assign to it. This is because a small number times infinity is infinity, so the wager for god has the utility p*R + (1-p)*r1 = ∞, which is (infinitely) larger than the utility of wagering against god, p*r2 + (1-p)*r3.
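The comparison can be sketched numerically. The values below are purely illustrative assumptions (a tiny p, an astronomically large but finite R standing in for infinity, and arbitrary finite r1, r2, r3):

```python
# Illustrative sketch of the wager's expected-utility comparison.
# All values are assumptions for demonstration, not claims about real payoffs.
p = 1e-6                          # assumed probability that god exists
R = 1e100                         # huge finite stand-in for an "infinite" reward
r1, r2, r3 = 0.0, -10.0, 10.0     # hypothetical finite payoffs

eu_for = p * R + (1 - p) * r1     # expected utility of wagering for god
eu_against = p * r2 + (1 - p) * r3  # expected utility of wagering against god

# However small p is, a sufficiently large R makes wagering for god dominate.
print(eu_for > eu_against)
```

This is exactly why the argument seems to go through: the size of R swamps any finite difference in the other entries of the matrix.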

The counter-points above show that the argument is mistaken. The resulting gullibility to any claim is especially worrisome from the point of view of decision theory. Clearly, there is something very wrong with the argument, but it is actually not easy to see what it is. We shall argue here that the core problem is, of all things, (A3).

Infinities and Gods: Unsatisfactory Counterarguments

While all the above counterarguments are true, they are unsatisfactory in the sense that they only demonstrate that the argument is invalid; they don't show why it is so.

It is true that Problem 1 applied to the original formulation: the argument assumed a payoff of zero for not believing. But in the modern formulation stated above, the pay-off in this case is some finite quantity, so this objection doesn't cut to the heart of the problem. (If there is a god who rewards those who use their critical faculties and punishes blind faith, then not believing brings the infinite reward.)

It is also true that one cannot simply decide to believe (Problem 2). Indeed, as noted above, a rational person should believe what is most likely, not what is most lucrative. However, there are certain religions that do not require belief, merely action. Certain strands of Judaism follow this vein, for example. Should we all convert to Judaism? Of course not. So what is wrong with the argument, beyond this point?

Problem 3 raises the point that there is an infinity of possibilities neglected by the argument. For every god with the above rewards, there may be a god with an opposing one. This is the 'many gods' objection. This objection is problematic, since a theist might well argue that such gods are less likely, and we'd like the argument to falter even assuming - for the sake of argument - that he is right. (We find nearly all gods equally probable, but still.) Even more importantly, the core reason the argument fails is not that there are other unlikely possibilities; the analysis of this possibility alone should suffice to reject it, without any need to introduce other options.

A related issue is the "Greater God" problem - assuming infinities are banned, you should rationally choose some other god whose R is bigger than the current one and so on ad infinitum - so you're paralyzed without an ability to choose. This applies even to infinite R, using certain extensions such as surreal numbers[2].

Problem 4 highlights this point - there is just something fundamentally wrong with decision theory as it is applied here. If the argument were valid, it would lead to all sorts of gullibility, making the rational agent purchase indulgences and pay for Jason Wales's pension. Clearly, something is deeply wrong here.

Some maintain that, formally, decision theory cannot handle an infinite R[3]. But this objection is not too relevant, since the argument works just as well with a merely very large finite R. Likewise, maintaining that the probability of god's existence is infinitesimal may be valid for an atheist, but the argument should fail even for someone less extreme.

While they demolish the argument, these objections are not satisfying. There is still something very wrong at the heart of the argument.

Against Maximizing Utility (A3)

Why should a rational agent maximize expected utility? The idea is that on average he will then gain the most. But it turns out averages aren't meaningful here. We are usually faced with situations where the possibilities are fairly limited, so that the average is a good representation of what is likely to happen. But when the probability distribution is too broad, the typical result is no longer the average: most outcomes will turn out to be very far from it. We suggest that in such cases it is better to take the typical, rather than the average, as your guide to rational behavior.

Let us consider, for simplicity, the case where r1=0. In this case the average utility for wagering for god is p*R. But the standard deviation[4] of the utility is (p(1-p))^(1/2)*R, which is larger than the average whenever p<0.5. So the average loses its usual intuitive meaning when p<0.5; in these cases the typical result is that the agent gets 0 as his reward. This gets a bit more complicated with a finite r1, but the principle stands - for sufficiently small p, the typical case is that you will get r1 as the reward. In this case, wagering on God on the off-chance that you will get R is not rational; most rational agents pursuing this line of reasoning will fail. Assuming the other elements in the decision matrix are appropriate (e.g. r3 is far larger than any other element), choosing atheism becomes the rational choice. Even if they aren't, the choice at least returns to rationality - the rational agent isn't lured away by astronomically unlikely options.
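The gap between the average and the typical outcome can be illustrated with a small simulation. The numbers here (p, R, sample size) are arbitrary assumptions chosen to make the effect visible:

```python
import random

# Monte Carlo sketch of the r1 = 0 case: with small p, the *average* payoff
# of wagering for god is p*R, but the *typical* (median) payoff is zero.
random.seed(0)
p, R, n = 0.01, 1e6, 100_000

# Each trial: the agent gets R with probability p, otherwise 0.
payoffs = sorted(R if random.random() < p else 0.0 for _ in range(n))
mean = sum(payoffs) / n
median = payoffs[n // 2]

print(mean)    # close to p*R = 10_000
print(median)  # 0.0 - the typical agent gains nothing

# The standard deviation of one wager, sqrt(p*(1-p))*R, dwarfs the mean
# whenever p < 0.5, which is what makes the average a poor guide here.
print((p * (1 - p)) ** 0.5 * R > p * R)
```

Most simulated agents end up far below the average payoff, which is the sense in which the average stops being representative.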

There is also an argument to be made against following arbitrarily low probabilities in general. A probability of 10^-57, for example, is zero for all intents and purposes - no matter how often the event is checked for, it is extremely unlikely to be realized within the lifetime of the universe. Since it may still grossly affect the average, this is a further indication that it is just not rational to follow the average utility. Following the typical utility is far more sensible when the standard deviation of the utility is larger than its average.
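The arithmetic behind "zero for all intents and purposes" is straightforward. Assuming, generously, one check per second over the roughly 4.4e17 seconds since the Big Bang:

```python
# Rough arithmetic: expected number of occurrences of a 1e-57 event,
# checked once per second for the entire age of the universe.
p = 1e-57
checks = 4.4e17            # approximate age of the universe in seconds
expected_hits = p * checks

print(expected_hits)       # on the order of 1e-40: effectively never
```

Even under this absurdly generous checking rate, the event is overwhelmingly unlikely to occur even once, yet multiplied by an enormous R it can still dominate the average.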


  1. See the Stanford Encyclopedia of Philosophy [1]
  2. For a clear discussion of infinities in relation to Pascal's Wager, see Alan Hajek's Waging War on Pascal's Wager, The Philosophical Review, Vol. 112, No. 1 (January 2003). Hajek argues that all wager arguments are doomed to failure, due either to the greater god problem or to the ability to wager on god by non-committing strategies - a point not covered on this site.
  3. The objections are briefly, but well, considered in the Internet Encyclopedia of Philosophy [2]
  4. The standard deviation of a random variable is a measure of how much it varies around the average - how far typical results will be from the average. See Wikipedia, [3]