Tuesday, November 21, 2017

Perfect rationality and omniscience

  1. A perfectly rational agent who is not omniscient can find itself in lottery situations, i.e., situations where it is clear that there are many options, exactly one of which can be true, with each option having approximately the same epistemic probability as any other.

  2. A perfectly rational agent must believe anything there is overwhelming evidence for.

  3. A perfectly rational agent must have consistent beliefs.

  4. In lottery situations, there is overwhelming evidence for each member of an inconsistent set of claims: the claim that one of options 1, 2, 3, … is the case, together with, for each option, the claim that that option is not the case.

  5. So, in lottery situations, a perfectly rational agent has inconsistent beliefs. (2,4)

  6. So, a perfectly rational agent is never in a lottery situation. (3,5)

  7. So, a perfectly rational agent is omniscient. (1,6)
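The arithmetic behind premise 4 can be sketched numerically: in an N-ticket lottery, each claim "option i is not the case" has probability (N − 1)/N, which exceeds any evidence threshold short of 1 once N is large enough, and yet those claims are jointly inconsistent with "some option is the case". A minimal Python illustration (the threshold 0.9999 is an arbitrary stand-in for "overwhelming evidence"):

```python
# Lottery situation: N options, exactly one of which is the case,
# each with equal epistemic probability 1/N.
N = 1_000_000
threshold = 0.9999  # hypothetical cutoff for "overwhelming evidence"

# Probability that any particular option i is NOT the case:
p_not_i = (N - 1) / N
assert p_not_i > threshold  # each individual denial is overwhelmingly supported

# Yet the N denials, together with "some option is the case", are jointly
# inconsistent: if every denial were true, no option would be the case.
# Since exactly one option IS the case, the conjunction of all denials
# has probability exactly 0.
p_all_denials = 0.0
print(f"each denial: {p_not_i:.6f}; all denials together: {p_all_denials}")
```

So an agent who believes everything passing the threshold believes each denial, and believing "some option is the case" as well yields the inconsistent belief set premise 5 describes.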

The standard thing people like to say about arguments like this is that they are a reductio of the conjunction of premises 2 through 4. But I think it might be interesting to take it as a straightforward argument for conclusion 7. Maybe one cannot separate out procedural epistemic perfection (perfect rationality) from substantive epistemic perfection (omniscience).

That said, I am inclined to deny 3.

It’s worth noting that this yields another variant on an argument against open theism. For even though I am inclined to think that inconsistency in beliefs is not an imperfection of rationality, it is surely an imperfection simpliciter, and hence a perfect being will not have inconsistent beliefs.


Angra Mainyu said...


What about an agent that makes probabilistic assessments, but has no (further) beliefs on the matter?

For example, it seems to me it's not an imperfection for a rational agent A to assign probability 0.99999999999999999999999999 to P if that is what the information available to A warrants, and then go no further. So, I would not be inclined to accept 2.

Walter Van den Acker said...


That's what's unclear to me.
It seems obvious that an omniscient rational agent should assign P its exact probability.
So, it wouldn't be a matter of available information, because I think all information would be available to an omniscient agent.
But what about something that really has a probability of 0.99999999999999999999999999?

I think in that case, saying that P will occur would be an imperfection. So, in that case, I tend to agree with you that the only rational thing for such a being to do would be to assign a 0.99999999999999999999999999 probability and go no further. I don't think there would be anything inconsistent in that.