Wednesday, March 21, 2018

Bohmianism and God

Bohmian mechanics is a rather nice way of side-stepping the measurement problem by having a deterministic dynamics that generates the same experimental predictions as more orthodox interpretations of Quantum Mechanics.

Famously, however, Bohmian mechanics suffers from having to make the quantum equilibrium hypothesis (QEH) that the initial distribution of the particles matches the wavefunction, i.e., that the initial particle density is given by (at least approximately) |ψ|². In other words, Bohmian mechanics requires the initial conditions to be fine-tuned for the theory to work, and we can then think of Bohmian mechanics as deterministic Bohmian dynamics plus QEH.

Can we give a fine-tuning argument for the existence of God on the basis of the QEH, assuming Bohmian dynamics? I think so. Given the QEH, nature becomes predictable at the quantum level, and God would have good reason to provide such predictability. Thus if God were to opt for Bohmian dynamics, he would be likely to make QEH true. On the other hand, in a naturalistic setting, QEH seems to be no better than an exceedingly lucky coincidence. So, given Bohmian dynamics, QEH does support theism over naturalism.

Theism makes it possible to be an intellectually fulfilled Bohmian. But I don’t know that we have good reason to be Bohmian.

Tuesday, March 20, 2018

Pruss and Rasmussen, Necessary Existence

Josh Rasmussen's and my Necessary Existence (OUP) book is out, in Europe in hardcover and Kindle and in the US currently only on Kindle--in hardcover in a week or two. I wish the price was much lower. The authors don't have a say over that, I think.

The great cover was designed by Rachel Rasmussen (Josh's talented artist wife).

Monday, March 19, 2018

"Before I formed you in the womb I knew you" (Jeremiah 1:5)

  1. Always: If x (objectually) knows y, then y exists (simpliciter). (Premise)

  2. Before I came into existence, it was true that God (objectually) knows me. (Premise)

  3. Thus, before I came into existence, it was true that I exist (simpliciter). (1 and 2)

  4. If 3, then eternalism is true. (Premise)

  5. Thus, eternalism is true. (3 and 4)

A variant of this argument uses “has a rightly ordered love for” in place of “(objectually) knows”.

Thursday, March 15, 2018

Something that has no reasonable numerical epistemic probability

I think I can give an example of something that has no reasonable (numerical) epistemic probability.

Consider Goedel’s Axiom of Constructibility. Goedel proved that if the Zermelo-Fraenkel (ZF) axioms are consistent, they are also consistent with Constructibility (C). We don’t have any strong arguments against C.

Now, either we have a reasonable epistemic probability for C or we don’t.

If we don’t, here is my example of something that has no reasonable epistemic probability: C.

If we do, then note that Goedel showed that ZF + C implies the Axiom of Choice, and hence implies the existence of non-measurable sets. Moreover, C implies that there is a well-ordering W on the universe of all sets that is explicitly definable in the language of set theory.

Now consider some physical quantity Q where we know that Q lies in some interval [x − δ, x + δ], but we have no more precise knowledge. If C is true, let U be the W-smallest non-measurable subset of [x − δ, x + δ].

Assuming that we do have a reasonable epistemic probability for C, here is my example of something that has no reasonable epistemic probability: C is false or Q is a member of U.

Logical closure accounts of necessity

A family of views of necessity (e.g., Peacocke, Sider, Swinburne, and maybe Chalmers) identifies a family F of special true statements that get counted as necessary—say, statements giving the facts about the constitution of natural kinds, the axioms of mathematics, etc.—and then says that a statement is necessary if and only if it can be proved from F. Call these “logical closure accounts of necessity”. There are two importantly different variants: on one “F” is a definite description of the family and on the other “F” is a name for the family.

Here is a problem. Consider:

  1. Statement (1) cannot be proved from F.

If you are worried about the explicit self-reference in (1), I should be able to get rid of it by a technique similar to the diagonal lemma in Goedel’s incompleteness theorem. Now, either (1) is true or it’s false. If it’s false, then it can be proved from F. Since F is a family of truths, it follows that a falsehood can be proved from truths, and that would be the end of the world. So it’s true. Thus it cannot be proved from F. But if it cannot be proved from F, then it is contingently true.

Thus (1) is true but there is a possible world w where (1) is false. In that world, (1) can be proved from F, and hence in that world (1) is necessary. Hence, in w, (1) is false but possibly necessary, in violation of the Brouwer Axiom of modal logic (and hence of S5). Thus:

  2. Logical closure accounts of necessity require the denial of the Brouwer Axiom and S5.

But things get even worse for logical closure accounts. For an account of necessity had better itself not be a contingent truth. Thus, a logical closure account of necessity if true in the actual world will also be true in w. Now in w run the earlier argument showing that (1) is true. Thus, (1) is true in w. But (1) was false in w. Contradiction! So:

  3. Logical closure accounts of necessity can at best be contingently true.

Objection: This is basically the Liar Paradox.

Response: This is indeed my main worry about the argument. I am hoping, however, that it is more like Goedel’s Incompleteness Theorems than like the Liar Paradox.

Here's how I think the hope can be satisfied. The Liar Paradox and its relatives arise from unbounded application of semantic predicates like “is (not) true”. By “unbounded”, I mean that one is free to apply the semantic predicates to any sentence one wishes. Now, if F is a name for a family of statements, then it seems that (1) (or its definite description variant akin to that produced by the diagonal lemma) has no semantic vocabulary in it at all. If F is a description of a family of statements, there might be some semantic predicates there. For instance, it could be that F is explicitly said to include “all true mathematical claims” (Chalmers will do that). But then it seems that the semantic predicates are bounded—they need only be applied in the special kinds of cases that come up within F. It is a central feature of logical closure accounts of necessity that the statements in F be a limited class of statements.

Well, not quite. There is still a possible hitch. It may be that there is semantic vocabulary built into “proved”. Perhaps there are rules of proof that involve semantic vocabulary, such as Tarski’s T-schema, and perhaps these rules involve unbounded application of a semantic predicate. But if so, then the notion of “proof” involved in the account is a pretty problematic one and liable to license Liar Paradoxes.

One might also worry that my argument that (1) is true explicitly used semantic vocabulary. Yes: but that argument is in the metalanguage.

Tuesday, March 13, 2018

A third kind of moral argument

The most common kind of moral argument for theism is that theism better fits with there being moral truths (either moral truths in general, or some specific kind of moral truths, like that there are obligations) than alternative theories do. Often, though not always, this argument is coupled with a divine commmand theory.

A somewhat less common kind of argument is that theism better explains how we know moral truths. This argument is likely to be coupled with an evolutionary debunking argument to argue that if naturalism and evolution were true, our moral beliefs might be true, and might even be reliable, but wouldn’t be knowledge.

But there is a third kind of moral argument that one doesn’t meet much at all in philosophical circles—though I suspect it is not uncommon popularly—and it is that theism better explains why we have moral beliefs. The reason we don’t meet this argument much in philosophical circles is probably that there seem to be very plausible evolutionary explanations of moral beliefs in terms of kin selection and/or cultural selection. Social animals as clever as we are benefit as a group from moral beliefs to discourage secret anti-cooperative selfishness.

I want to try to rescue the third kind of moral argument in this post in two ways. First, note that moral beliefs are only one of several solutions to the problem of discouraging secret selfishness. Here are three others:

  • belief in karmic laws of nature on which uncooperative individuals get very undesirable reincarnatory outcomes

  • belief in an afterlife judgment by a deity on which uncooperative individuals get very unpleasant outcomes

  • a credence around 1/2 to an afterlife judgment by a deity on which uncooperative individuals get an infinitely bad outcome (cf. Pascal’s Wager).

These three options make one think that cooperativeness is prudent, but not that it is morally required. Moreover, they are arguably more robust drivers of cooperative behavior than beliefs about moral requirement. Admittedly, though, the first two of the above might lead to moral beliefs as part of a theory about the operation of the karmic laws or the afterlife judgment.

Let’s assume that there are important moral truths. Still, P(moral beliefs | naturalism) is not going to exceed 1/2. On the other hand, P(moral beliefs | God) is going to be high, because moral truths are exactly the sort of thing we would expect God to ensure our belief in (through evolutionary means, perhaps). So, the fact of moral belief will be evidence for theism over naturalism.

The second approach to rescuing the moral argument is deeper and I think more interesting. Moreover, it generalizes beyond the moral case. This approach says that a necessary condition for moral beliefs is being able to have moral concepts. But to have moral concepts requires semantic access to moral properties. And it is difficult to explain on contemporary naturalistic grounds how we have semantic access to moral properties. Our best naturalistic theories of reference are causal, but moral properties on contemporary naturalism (as opposed to, say, the views of a Plato or an Aristotle) are causally inert. Theism, however, can nicely accommodate our semantic access to moral properties. The two main theistic approaches to morality ground morality in God or in an Aristotelian teleology. Aristotelian teleology allows us to have a causal connection to moral properties—but then Aristotelian teleology itself calls for an explanation of our teleological properties that theism is best suited to give. And approaches that ground morality in God give God direct semantic access to moral properties, which semantic access God can extend to us.

This generalizes to other kinds of normativity, such as epistemic and aesthetic: theism is better suited to providing an explanation of how we have semantic access to the properties in question.

Conscious computers and reliability

Suppose the ACME AI company manufactures an intelligent, conscious and perfectly reliable computer, C0. (I assume that the computers in this post are mere computers, rather than objects endowed with soul.) But then a clone company manufactures a clone of it, C1, out of slightly less reliable components. And another clone company makes a slightly less reliable clone of C1, namely C2. And so on. At some point in the cloning sequence, say at C10000, we reach a point where the components produce completely random outputs.

Now, imagine that all the devices from C0 through C10000 happen to get the same inputs over a certain day, and that all their components do the same things. In the case of C10000, this is astronomically unlikely, as the super-unreliable components of the C10000 produce completely random outputs.

Now, C10000 is not computing. Its outputs are no more the results of intelligence than the copy of Hamlet typed by the monkeys is the result of intelligent authorship. By the same token, C10000 is not conscious on computational theories of consciousness.

On the other hand, C0’s outputs are the results of intelligence and C0 is conscious. The same is true for C1, since if intelligence or consciousness required complete reliability, we wouldn’t be intelligent and conscious. So somewhere in the sequence from C0 to C10000 there must be a transition from intelligence to lack thereof and somewhere (perhaps somewhere else) a transition from consciousness to lack thereof.

Now, intelligence could plausibly be a vague property. But it is not plausible that consciousness is a vague property. So, there must be some precise transition point in reliability needed for computation to yield consciousness, so that a slight decrease in reliability—even when the actual functioning is unchanged (remember that the Ci are all functioning in the same way)—will remove consciousness.

More generally, this means that given functionalism about mind, there must be a dividing line in measures of reliability between cases of consciousness and ones of unconsciousness.

I wonder if this is a problem. I suppose if the dividing line is somehow natural, it’s not a problem. I wonder if a natural dividing line of reliability can in fact be specified, though.

Monday, March 12, 2018

The usefulness of having two kinds of quantifiers

A central Aristotelian insight is that substances exist in a primary way and other things—say, accidents—in a derivative way. This insight implies that use of a single existential quantifier ∃x for both substances and forms does not cut nature at the joints as well as it can be cut.

Here are two pieces of terminology that together not only capture the above insight about existence, but do a lot of other (but closely related) ontological work:

  1. a fundamental quantifier ∃u over substances.

  2. for any y, a quantifier ∃yx over all the (immediate) modes (tropes) of y.

We can now define:

  • a is a substance iff ∃u(u = a)

  • b is a (immediate) mode of a iff ∃ax(x = b)

  • f is a substantial form of a substance a iff a is a substance and ∃ax(x = f): substantial forms are immediate modes of substances

  • b is a (first-level) accident of a substance a iff a is a substance and ∃ax∃xy(y = b & y ≠ x): first-level accidents are immediate modes of substantial forms, distinct from these forms (this qualifier is needed so that God wouldn’t count as having any accidents)

  • f is a substantial form iff ∃u∃ux(x = f)

  • b is a (first-level) accident iff ∃u∃ux∃xy(y = b).

This is a close variant on the suggestion here.
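To make the nested quantifiers a bit more concrete, here is a toy encoding (my own illustrative sketch, not part of the original proposal): the fundamental quantifier ranges over a set of substances, and each entity is mapped to its immediate modes.

```python
from dataclasses import dataclass

# Toy model (illustrative only): ∃u ranges over `substances`,
# and ∃yx ranges over immediate_modes[y].

@dataclass(frozen=True)
class Entity:
    name: str

substances: set[Entity] = set()                    # domain of the fundamental quantifier
immediate_modes: dict[Entity, set[Entity]] = {}    # y -> the immediate modes of y

def modes_of(y: Entity) -> set[Entity]:
    return immediate_modes.get(y, set())

def is_substance(a: Entity) -> bool:
    return a in substances

def is_mode_of(b: Entity, a: Entity) -> bool:
    return b in modes_of(a)

def is_substantial_form_of(f: Entity, a: Entity) -> bool:
    return is_substance(a) and is_mode_of(f, a)

def is_first_level_accident_of(b: Entity, a: Entity) -> bool:
    # an immediate mode of an immediate mode (substantial form) of a, distinct from that form
    return is_substance(a) and any(b in modes_of(x) and b != x for x in modes_of(a))

def is_substantial_form(f: Entity) -> bool:
    return any(is_mode_of(f, u) for u in substances)

def is_first_level_accident(b: Entity) -> bool:
    return any(is_first_level_accident_of(b, u) for u in substances)

# Example: Socrates, his substantial form, and an accident of that form.
socrates, form, pallor = Entity("Socrates"), Entity("form of Socrates"), Entity("pallor")
substances.add(socrates)
immediate_modes[socrates] = {form}
immediate_modes[form] = {pallor}

assert is_substantial_form_of(form, socrates)
assert is_first_level_accident_of(pallor, socrates)
assert not is_first_level_accident_of(form, socrates)
```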

Friday, March 9, 2018

A regress of qualitative difference

According to heavyweight Platonism, qualitative differences arise from differences between the universals being instantiated. There is a qualitative difference between my seeing yellow and your smelling a rose. This difference has to come from the difference between the universals seeing yellow (Y) and smelling a rose (R). But one doesn’t get a qualitative difference from being related in the same way to numerically but not qualitatively different things (compare: being taller than Alice is not qualitatively different from being taller than Bea if Alice and Bea are qualitatively the same—and in particular, of the same height). Thus, if the qualitative difference between my seeing yellow and your smelling a rose comes from being related by instantiation to different things, namely Y and R, then this presupposes that the two things are themselves qualitatively different. But this qualitative difference between Y and R depends on Y and R exemplifying different—and indeed qualitatively different—properties. And so on, in a regress!

Intrinsic attribution

  1. If heavyweight Platonism is true, all attribution of attributes to a subject is grounded in facts relating the subject to abstracta.

  2. Intrinsic attribution is never grounded in facts relating a subject to something distinct from itself.

  3. There are cases of intrinsic attribution with a non-abstract subject.

  4. If heavyweight Platonism is true, each case of intrinsic attribution to a non-abstract subject is grounded in facts relating that object to something other than itself. (By 1 and 2)

  5. So, if heavyweight Platonism is true, there are no cases of intrinsic attribution to a non-abstract subject. (2 and 4)

  6. So, heavyweight Platonism is not true. (By 3 and 5)

Here, however, is a problem with 3. All cases of attribution to a creature are grounded in the creature’s participation in God. Hence, no creature is a subject of intrinsic attribution. And God’s attributes are grounded in a relation between God and the Godhead. But by divine simplicity, God is the Godhead. Since the Godhead is abstract, God is abstract (as well as being concrete) and hence God does not provide an example of intrinsic attribution with a non-abstract subject.

I still feel that there is something to the above argument. Maybe the sense in which a creature’s attributes are grounded in the creature’s participation in God is different from the sense of grounding in 2.

Friday, March 2, 2018

Wishful thinking

Start with this observation:

  1. Commonly used forms of fallacious reasoning are typically distortions of good forms of reasoning.

For instance, affirming the consequent is a distortion of the probabilistic fact that if we are sure that if p then q, then learning q is some evidence for p (unless q already had probability 1 or p had probability 0 or 1). The ad hominem fallacy of appeal to irrelevant features in an arguer is a distortion of a reasonable questioning of a person’s reliability on the basis of relevant features. Begging the question is, I suspect, a distortion of an appeal to the obviousness of the conclusion: “Murder is wrong. Look: it’s clear that it is!”


  2. Wishful thinking is a commonly used form of fallacious reasoning.

  3. So, wishful thinking is probably a distortion of a good form of reasoning.

I suppose one could think that wishful thinking is one of the exceptions to rule (1). But to be honest, I am far from sure there are any exceptions to rule (1), despite my cautious use of “typically”. And we should avoid positing exceptions to generally correct rules unless we have to.

So, if wishful thinking is a distortion of a good form of reasoning, what is that good form of reasoning?

My best answer is that wishful thinking is a distortion of correct probabilistic reasoning on the basis of the true claim that:

  4. Typically, things go right.

The distortion consists in the fact that in the fallacy of wishful thinking one is reasoning poorly, likely because one is doing one or more of the following:

  5. confusing things going as one wishes them to go with things going right,

  6. ignoring defeaters to the particular case, or

  7. overestimating the typicality mentioned in (4).

Suppose I am right about (4) being true. Then the truth of (4) calls out for an explanation. I know of four potential explanations of (4):

  i. Theism: God creates a good world.

  ii. Optimalism: everything is for the best.

  iii. Aristotelianism: rightness is a matter of lining up with the telos, and causal powers normally succeed at getting at what they are aiming at.

  iv. Statisticalism: norms are defined by what is typically the case.

I think (iv) is untenable, so that leaves (i)-(iii).

Now, optimalism gives strong evidence for theism. First, theism would provide an excellent explanation for optimalism (Leibniz). Second, if optimalism is true, then there is a God, because that’s for the best (Rescher).

Aristotelianism also provides evidence for theism, because it is difficult to explain naturalistically where teleology comes from.

So, thinking through the fallacy of wishful thinking provides some evidence for theism.

Thursday, March 1, 2018

Superpositions of conscious states

Consider this thesis:

  1. Reality is never in a superposition of two states that differ with respect to what, if anything, observers are conscious of.

This is one of the motivators for collapse interpretations of quantum mechanics. Now, suppose that S is an observable that describes some facet of conscious experience. Then according to (1), reality is always in some eigenstate of S.

Suppose that at the beginning t0 of some interval I of times, reality is in eigenstate ψ0. Now, suppose that collapse does not occur during I. By continuity considerations, then, over I reality cannot evolve to a state orthogonal to ψ0 without passing through a state that is a superposition of ψ0 and something else. In other words, over a collapse-free interval of time, the conscious experience that is described by S cannot change if (1) is true.

What if collapse happens? That doesn’t seem to help. There are two plausible options. Either collapses are temporally discrete or temporally dense. If they are temporally dense, then by the quantum Zeno effect with probability one we have no change with respect to S. If they are temporally discrete, then suppose that t1 is the first time after t0 at which collapse causes the system to enter a state ψ1 orthogonal to ψ0. But for collapse to be able to do that, the state would have had to have assigned some weight to ψ1 prior to the collapse, while yet assigning some weight to ψ0, and that would violate (1).

(There might also be some messy story where there are some temporally dense and some temporally isolated collapse. I haven’t figured out exactly what to say about that, other than that it is in danger of being ad hoc.)

So, whether collapse happens or not, it seems that (1) implies that there is no change with respect to conscious experience. But clearly the universe changes with respect to conscious experience. So, it seems we need to reject (1). And this rejection seems to force us into some kind of weird many-worlds interpretation on which we have superpositions of incompatible experiences.

There are, however, at least two places where this argument can be attacked.

First, the thesis that conscious experience is described by observables understood (implicitly) as Hermitian operators can be questioned. Instead, one might think that conscious states correspond to subsets of the Hilbert space, subsets that may not even be linear subspaces.

Second, one might say that (1) is false, but nothing weird happens. We get weirdness from the denial of (1) if we think that a superposition of, say, seeing a square and seeing a circle is some weird state that has a seeing-a-square aspect and a seeing-a-circle aspect (this is weird in different ways depending on whether you take a multiverse interpretation). But we need not think that. We need not think that if a quantum state ψ1 corresponds to an experience E1 and a state ψ2 corresponds to an experience E2, then ψ = a1ψ1 + a2ψ2 corresponds to some weird mix of E1 and E2. Perhaps the correspondence between physical and mental states in this case goes like this:

  1. when |a1| ≫ |a2|, the state ψ still gives rise to E1

  2. when |a1| ≪ |a2|, the state ψ gives rise to E2

  3. when a1 and a2 are similar in magnitude, the state ψ gives rise to no conscious experience at all (or gives rise to some other experience, perhaps one related to E1 and E2, or perhaps one that is entirely unrelated).

After all, we know very little about which conscious states are correlated with which physical states. So, it could be that there is always a definite conscious state in the universe. I suppose, though, that this approach also ends up denying that we should think of conscious states as corresponding in the most natural way to the eigenvectors of a Hermitian operator.

Wednesday, February 28, 2018

More on pain and presentism

Imagine two worlds, in both of which I am presently in excruciating pain. In world w1, this pain began a nanosecond ago and will end in a nanosecond. In w2, the pain began an hour ago and will end in an hour.

In world w1, I am hardly harmed if I am harmed at all. Two nanoseconds of pain, no matter how bad, are just about harmless. It would be rational to accept two nanoseconds of excruciating pain in exchange for any non-trivial good. But in world w2, things are really bad for me.

An eternalist has a simple explanation of this: even if each of the two-nanosecond pains has only a tiny amount of badness, in w2 I really have 3.6 × 10¹² of them (two hours is 7.2 × 10¹² nanoseconds), and that’s really bad.

It seems hard, however, for a presentist to explain the difference between the two worlds. For of the 3.6 × 10¹² two-nanosecond pains I receive in w2, only one really exists. And there is one that really exists in w1. Where is the difference? Granted, in w2, I have received billions of these pains and will receive billions more. But right now only one pain exists. And throughout the two hours of pain, at any given time, only one of the pains exists—and that one pain is insignificant.

Here is my best way of trying to get the presentist out of this difficulty. Pain is like audible sound. You cannot attribute an audible sound to an object in virtue of how the object is at one moment of time, or even a very, very short interval of times. You need at least 50 microseconds to get an audible sound, since you need one complete period of air vibration (I am assuming that 50 microseconds doesn’t count as “very, very short”). When the presentist says that there is an audible sound at t, she must mean that there was air vibration going on some time before t and/or there will be air vibration going on for some time after t. Likewise, to be in pain at t requires a non-trivial period of time, much longer than two nanoseconds, during which some unpleasant mental activity is going on.

How long is that period? I don’t know. A tenth of a second, maybe? But maybe for an excruciating pain, that activity needs to go for longer, say half a second. Suppose so. Can I re-run the original argument, but using a half-second pulse of excruciating pain in place of the two-nanosecond excruciating pain? I am not sure. For a half-second of excruciating pain is not insignificant.

Collapse and the continuity of consciousness

One version of the quantum Zeno effect is that if you collapse a system’s wavefunction with respect to a measurement often enough, the measurement is not going to change.

Thus, if observation causes collapse, and you look at a pot of water on the stove often enough, it won’t boil. In particular, if you are continuously (or just at a dense set of times) observing the pot of water, then it won’t boil.
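For readers who want the numbers, here is a minimal sketch of the standard quantum Zeno calculation for a two-level system (my own illustration; the specific figures are not from the post). Without measurement the system fully “boils” over the interval; with n equally spaced collapses, the probability that it is found unchanged at the end goes to 1 as n grows.

```python
import math

# Two-level "pot": without measurement, the state rotates by pi/2 over the
# interval, so P(unchanged at the end) = cos(pi/2)^2 = 0 (it surely "boils").
# With n equally spaced projective measurements, each segment contributes a
# survival factor cos(theta/n)^2, so P(unchanged) = cos(theta/n)^(2n).

theta = math.pi / 2

for n in (1, 10, 100, 1000, 10000):
    survival = math.cos(theta / n) ** (2 * n)
    print(f"{n:6d} measurements: P(unchanged) = {survival:.5f}")
# 1 -> 0.0, 10 -> ~0.78, 100 -> ~0.976, 1000 -> ~0.9975, ...: frequent
# collapse freezes the evolution, which is the "watched pot" effect above.
```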

But of course watched pots do boil. Hence:

  • If observation causes collapse, consciousness is not temporally continuous (or temporally dense).

And the conclusion is what we would expect if causal finitism were true. :-)

Tuesday, February 27, 2018

A problem for Goedelian ontological arguments

Goedelian ontological arguments (e.g., mine) depend on axioms of positivity. Crucially to the argument, these axioms entail that any two positive properties are compatible (i.e., something can have both).

But I now worry whether it is true that any two positive properties are compatible. Let w0 be our world—where worlds encompass all contingent reality. Then, plausibly, actualizing w0 is a positive property that God actually has. But now consider another world, w1, which is no worse than ours. Then actualizing w1 is a positive property, albeit one that God does not actually have. But it is impossible that a being actualize both w0 and w1, since worlds encompass all contingent reality and hence it is impossible for two of them to be actual. (Of course, God can create two or more universes, but then a universe won’t encompass all contingent reality.) Thus, we have two positive properties that are incompatible.

Another example. Let E be the ovum and S1 the sperm from which Socrates originated. There is another possible world, w2, at which E and a different sperm, S2, result in Kassandra, a philosopher every bit as good and virtuous as Socrates. Plausibly, being friends with Socrates is a positive property. And being friends with Kassandra is a positive property. But also plausibly there is no possible world where both Socrates and Kassandra exist, and you can’t be friends with someone who doesn’t exist (we can make that stipulative). So, being friends with Socrates and being friends with Kassandra are incompatible and yet positive.

I am not completely confident of the counterexamples. But if they do work, then the best fix I know for the Goedelian arguments is to restrict the relevant axioms to strongly positive properties, where a property is strongly positive just in case having the property essentially is positive. (One may need some further tweaks.) Essentially actualizing w0 prevents one from being able to actualize anything else, and hence isn’t positive. Likewise, essentially being friends with Socrates limits one to existing only in worlds where Socrates does, and hence isn’t positive. But, alas, the argument becomes more complicated and hence less plausible with the modification.

Another fix might be to restrict attention to positive non-relational properties, but I am less confident that that will work.

Voluntariness of beliefs

The following claims are incompatible:

  1. Beliefs are never under our direct voluntary control.

  2. Beliefs are high credences.

  3. Credences are defined by corresponding decisional dispositions.

  4. Sometimes, the decisional dispositions that correspond to a high credence are under our direct voluntary control.

Here is a reason to believe 4: We have the power to resolve to act a certain way. When successful, exercising the power of resolution results in a disposition to act in accordance with the resolution. Among the things that in some cases we can resolve to do is to make the decisions that would correspond to a high credence.

So, I think we should reject at least one of 1-3. My inclination is to reject both 1 and 3.

Friday, February 23, 2018

More on wobbling of priors

In two recent posts (here and here), I made arguments based on the idea that wobbliness in priors translates to wobbliness in posteriors. The posts, while mathematically correct, neglect an epistemologically important fact: a wobble in a prior may be offset by a countervailing wobble in a Bayes’ factor, resulting in a steady posterior.

Here is an example of this phenomenon. Either a fair coin or a two-headed coin was tossed by Carl. Alice thinks Carl is a normally pretty honest guy, and so she thinks it’s 90% likely that a fair coin was tossed. Bob thinks Carl is tricky, and so he thinks there is only a 50% chance that Carl tossed the fair coin. So:

  • Alice’s prior for heads is (0.9)(0.5)+(0.1)(1.0) = 0.55

  • Bob’s prior for heads is (0.5)(0.5)+(0.5)(1.0) = 0.75.

But now Carl picks up the coin, mixes up which side was at the top, and both Alice and Bob have a look at it. It sure looks to them like there is a head on only one side of it. As a result, they both come to believe that the coin is very, very likely to be fair, and when they update their credences on their observation of the coin, they both come to have credence 0.5 that the coin landed heads.

But a difference in priors should translate to a corresponding difference in posteriors given the same evidence, since the force of evidence is just the addition of the logarithm of the Bayes’ factor to the logarithm of the prior odds ratio. How could they both have had such very different priors for heads, and yet a very similar posterior, given the same evidence?

The answer is this. If the only relevant difference between Alice’s and Bob’s beliefs were their priors for heads, then indeed they couldn’t get the same evidence and both end up very close to 0.5. But their Bayes’ factors also differ.

  • For Alice: P(looks fair | heads)≈0.82; P(looks fair | tails)≈1; Bayes’ factor for heads vs. tails ≈0.82

  • For Bob: P(looks fair | heads)≈0.33; P(looks fair | tails)≈1; Bayes’ factor for heads vs. tails ≈0.33.

Thus, for Alice, that the coin looks fair is pretty weak evidence against heads, lowering her credence from 0.55 to around 0.5, while for Bob, that the coin looks fair is moderate evidence against heads, lowering his credence from 0.75 to around 0.5. Both end up at roughly the same point.
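Here is a quick numerical check of the example (a sketch of my own; “looks fair” is modeled as an observation that is certain given a fair coin and impossible given the two-headed coin):

```python
# Check of the Alice/Bob example. A fair coin gives heads with probability 1/2,
# the two-headed coin with probability 1; "looks fair" occurs iff the coin is fair.

def analyze(p_fair):
    p_heads = p_fair * 0.5 + (1 - p_fair) * 1.0            # prior for heads
    p_looks_fair_given_heads = (0.5 * p_fair) / p_heads     # = P(fair | heads)
    p_looks_fair_given_tails = 1.0                          # tails entails a fair coin
    bayes_factor = p_looks_fair_given_heads / p_looks_fair_given_tails
    posterior_odds = bayes_factor * p_heads / (1 - p_heads)
    return p_heads, bayes_factor, posterior_odds / (1 + posterior_odds)

for name, p_fair in (("Alice", 0.9), ("Bob", 0.5)):
    prior, bf, post = analyze(p_fair)
    print(f"{name}: prior {prior:.2f}, Bayes factor {bf:.2f}, posterior {post:.2f}")
# Alice: prior 0.55, Bayes factor 0.82, posterior 0.50
# Bob:   prior 0.75, Bayes factor 0.33, posterior 0.50
```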

Thus, we cannot assume that a difference with respect to a proposition in the priors translates to a corresponding difference in the posteriors. For there may also be a corresponding difference in the Bayes’ factors.

I don’t know if the puzzling phenomena in my two posts can be explained away in this way. But I don’t know that they can’t.

A slightly different causal finitist approach to finitude

The existence of non-standard models of arithmetic makes defining finitude problematic. A finite set is normally defined as one that can be numbered by a natural number, but what is a natural number? The Peano axioms sadly underdetermine the answer: there are non-standard models.

Now, causal finitism is the metaphysical doctrine that nothing can have an infinite causal history. Causal finitism allows for a very neat and pretty intuitive metaphysical account of what a natural number is:

  • A natural number is a number one can causally count to starting with zero.

Causal counting is counting where each step is causally dependent on the preceding one. Thus, you say “one” because you remember saying “zero”, and so on. The causal part of causal counting excludes a case where monkeys are typing at random and by chance type up 0, 1, 2, 3, 4. If causal finitism is false, the above account is apt to fail: it may be possible to count to infinite numbers, given infinite causal sequences.

While we can then plug this into the standard definition of a finite set, we can also define finitude directly:

  • A finite set or plurality is one whose elements can be causally counted.

One of the reasons we want an account of the finite is so we get an account of proof. Imagine that every day of a past eternity I said: “And thus I am the Queen of England.” Each day my statement followed from what I said before, by reiteration. And trivially all premises were true, since there were no premises. Yet the conclusion is false. How can that be? Well, because what I gave wasn’t a proof, as proofs need to be finite. (I expect we often don’t bother to mention this point explicitly in logic classes.)

The above account of finitude gives an account of the finitude of proof. But interestingly, given causal finitism, we can give an account of proof that doesn’t make use of finitude:

  • To causally prove a conclusion from some assumptions is to utter a sequence of steps, where each step’s being uttered is causally dependent on its being in accordance with the rules of the logical system.

  • A proof is a sequence of steps that could be uttered in causally proving.

My infinite “proof” that I am the Queen of England cannot be causally given if causal finitism is true, because then each day’s utterance will be causally dependent on the previous day’s utterance, in violation of causal finitism. However, interestingly, the above account of proof does not guarantee that a proof is finite. A proof could contain an infinite number of steps. For instance, uttering an axiom or stating a premise does not need to causally depend on previous steps, but only on one’s knowledge of what the axioms and premises are, and so causal finitism does not preclude having written down an infinite number of axioms or premises. However, what causal finitism does guarantee is that the conclusion will only depend on a finite number of the steps—and that’s all we need to make the proof be a good one.

What is particularly nice about this approach is that the restriction of proofs to being finite can sound ad hoc. But it is very natural to think of the process of proving as a causal process, and of proofs as abstractions from the process of proving. And given causal finitism, that’s all we need.

Wobbly priors and posteriors

Here’s a problem for Bayesianism and/or our rationality; I am not sure exactly what to do about it.

Take a proposition that we are now pretty confident of, but which was highly counterintuitive so our priors were tiny. This will be a case where we were really surprised. Examples:

  1. Simultaneity is relative.

  2. Physical reality is indeterministic.

Let’s say our current level of credence is 0.95, but our priors were 0.001. Now, here is the problem. Currently we (let’s assume) believe the proposition. But if our priors were 0.0001, our credence would have been only 0.65, given the same evidence, and so we wouldn’t believe the claim. (Whatever the cut-off for belief is, it’s clearly higher than 2/3: nobody should believe on tossing a die that they will get 4 or less.)

Here is the problem. It’s really hard for us to tell the difference in counterintuitiveness between 0.001 and 0.0001. Such differences are psychologically wobbly. If we just squint a little differently when looking mentally a priori at (1) and (2), our credence can go up or down by an order of magnitude. And when our priors are even lower, say 0.00001, then an order of magnitude difference in counterintuitiveness is even harder to distinguish—yet an order of magnitude difference in priors is what makes the difference between a believable 0.95 posterior and an unbelievable 0.65 posterior. And yet our posteriors, I assume, don’t wobble between the two.

In other words, the problem is this: it seems that the tiny priors have an order of magnitude wobble, but our moderate posteriors don’t exhibit a corresponding wobble.

If our posteriors were higher, this wouldn’t be a problem. At a posterior of 0.9999, an order of magnitude wobble in priors results in a wobble between 0.9999 and 0.999, and that isn’t very psychologically noticeable (except maybe when we have really high payoffs).

There is a solution to this problem. Perhaps our priors in claims aren’t tiny just because the claims are counterintuitive. It makes perfect sense to have tiny priors for reasons of indifference. My prior in winning a lottery with a million tickets and one winner is about one in a million, but my intuitive wobbliness on the prior is less than an order of magnitude (I might have some uncertainty about whether the lottery is fair, etc.) But mere counterintuitiveness should not lead to such tiny priors. The counterintuitive happens all too often! So, perhaps, our priors in (1) and (2) were, or should have been, more like 0.10. And then the wobble in the priors will probably be rather less: it might vary between 0.05 and 0.15, which will result in a less noticeable wobble, namely between 0.90 and 0.97.
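For concreteness, here is the arithmetic behind both wobble claims (my own quick check, using the standard odds-times-Bayes-factor update):

```python
# Fix the Bayes factor implied by an "actual" prior/posterior pair,
# then see where the posterior lands when only the prior wobbles.

def odds(p):
    return p / (1 - p)

def posterior(prior, bayes_factor):
    post_odds = odds(prior) * bayes_factor
    return post_odds / (1 + post_odds)

# Wobble at tiny priors: actual prior 0.001 and posterior 0.95.
bf = odds(0.95) / odds(0.001)
print(round(posterior(0.0001, bf), 2))      # ~0.65: belief is lost

# Wobble at moderate priors: actual prior 0.10 and posterior 0.95.
bf = odds(0.95) / odds(0.10)
for p in (0.05, 0.10, 0.15):
    print(p, round(posterior(p, bf), 2))    # ~0.90, 0.95, ~0.97
```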

Simple hypotheses like (1) and (2), thus, will have at worst moderately low priors, even if they are quite counterintuitive.

And here is an interesting corollary. The God hypothesis is a simple hypothesis—it says that there is something that has all perfections. Thus even if it is counterintuitive (as it is to many atheists), it still doesn’t have really tiny priors.

But perhaps we are irrational in not having our posteriors wobble in cases like (1) and (2).

Objection: When we apply our intuitions, we generate posteriors, not priors. So our priors in (1) and (2) can be moderate, maybe even 1/2, but then when we updated on the counterintuitiveness of (1) and (2), we got something small. And then when we updated on the physics data, we got to 0.95.

Response: This objection is based on a merely verbal disagreement. For whatever wobble there is in the priors on the account I gave in the post will correspond to a similar wobble in the counterintuitiveness-based update in the objection.

Thursday, February 22, 2018

In practice priors do not wash out often enough

Bayesian reasoning starts with prior probabilities and gathers evidence that leads to posterior probabilities. It is occasionally said that prior probabilities do not matter much, because they wash out as evidence comes in.

It is true that in the cases where there is convergence of probability to 0 or to 1, the priors do wash out. But much of our life—scientific, philosophical and practical—deals with cases where our probabilities are not that close to 0 or 1. And in those cases priors matter.

Let’s take a case which clearly matters: climate change. (I am not doing this to make any first-order comment on climate change.) The 2013 IPCC report defines several confidence levels:

  • virtually certain: 99-100%

  • very likely: 90-100%

  • likely: 66-100%

  • about as likely as not: 33-66%

  • unlikely: 0-33%

  • very unlikely: 0-10%

  • exceptionally unlikely: 0-1%.

They then assess that a human contribution to warmer and/or more frequent warm days over most land areas was “very likely”, and no higher confidence level occurs in their policymaker summary table SPM.1. Let’s suppose that this “very likely” corresponds to the middle of its confidence range, namely a credence of 0.95. How sensitive is this “very likely” to priors?

On a Bayesian reconstruction, there was some actual prior probability p0 for the claim, which, given the evidence, led to the posterior of (we’re assuming) 0.95. If that prior probability had been lower, the posterior would have been lower as well. So we can ask questions like this: How much lower than p0 would the prior have had to be for…

  • …the posterior to no longer be in the “very likely” range?

  • …the posterior to fall into the “about as likely as not range”?

These are precise and pretty simple mathematical questions. The Bayesian effect of evidence is purely additive when we work with log odds instead of probabilities, i.e., with log p/(1 − p) in place of p, so a difference in prior log odds generates an equal difference in posterior ones. We can thus get a formula for what kinds of changes of priors translate to what kinds of changes in posteriors. Given an actual posterior of q0 and an actual prior of p0, to have got a posterior of q1, the prior would have to have been (1 − q0)p0q1/[(q1 − q0)p0 + (1 − q1)q0], or so says Derive.

We can now plug in a few numbers, all assuming that our actual confidence is 0.95:

  • If our actual prior was 0.10, to leave the “very likely” range, our prior would have needed to be below 0.05.

  • If our actual prior was 0.50, to leave the “very likely” range, our prior would have needed to be below 0.32.

  • If our actual prior was 0.10, to get to the “about as likely as not range”, our prior would have needed to be below 0.01.

  • If our actual prior was 0.50, to get to the “about as likely as not range”, our prior would have needed to be below 0.09.

Now, we don’t know what our actual prior was, but we can see from the above that variation of priors well within an order of magnitude can push us out of the “very likely” range and into the merely “likely”. And it seems quite plausible that the difference between the “very likely” and merely “likely” matters practically, given the costs involved. And a variation in priors of about one order of magnitude moves us from “very likely” to “about as likely as not”.
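As a sanity check on the formula and the plugged-in numbers, here is a minimal computation (my own sketch; it just inverts the odds form of Bayes’ theorem):

```python
# Given that prior p0 led to posterior q0, find the prior needed to reach posterior q1.

def odds(p):
    return p / (1 - p)

def prior_needed(p0, q0, q1):
    bayes_factor = odds(q0) / odds(p0)        # implied by the actual prior/posterior
    target_prior_odds = odds(q1) / bayes_factor
    return target_prior_odds / (1 + target_prior_odds)

q0 = 0.95  # actual posterior ("very likely")
for p0 in (0.10, 0.50):
    edge_very_likely = prior_needed(p0, q0, 0.90)        # bottom of "very likely"
    edge_as_likely_as_not = prior_needed(p0, q0, 0.66)   # top of "about as likely as not"
    print(p0, round(edge_very_likely, 2), round(edge_as_likely_as_not, 2))
# 0.1 -> 0.05 and 0.01; 0.5 -> 0.32 and 0.09, matching the bullets above.
```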

Thus, as an empirical matter of fact, priors have not washed out in the case of global warming. Of course, if we observe long enough, eventually our evidence about global warming is likely to converge to 1. But by then it will be too late for us to act on that evidence!

And there is nothing special about global warming here. Plausibly, many scientific and ordinary beliefs that we need to act on have a confidence level of no more than about 0.95. And so priors matter, and can matter a lot.

We can give a rough estimate of how differences in priors make a difference regarding posteriors using the IPCC likelihood classifications. Roughly speaking, a change between one category and the next (e.g., “exceptionally unlikely” to “unlikely”) in the priors results in a change between a category and the next (e.g., “likely” to “very likely”) in the posteriors.

The only cases where priors have washed out are those where our credences have converged very close to 0 or to 1. There are many scientific and ordinary claims in this category. But not nearly enough for us to be satisfied. We do need to worry about priors, and we better not be subjective Bayesians.

Yet another life-based argument against thinking machines

Here’s yet another variant on a life-based argument against machine consciousness. All of these arguments depend on related intuitions about life. I am not super convinced by them, but I think they have some evidential force.

  1. Only harm to a living thing can be a great intrinsic evil.

  2. If machines can be conscious, then a harm to a machine can be a great intrinsic evil.

  3. Machines cannot be alive.

  4. So, harm to a machine cannot be a great intrinsic evil. (1 and 3)

  5. So, machines cannot be conscious. (2 and 4)

Tuesday, February 20, 2018


Mere criticism is a statement that something—an action, a thought, an object, etc.—falls short of an applicable standard. But sometimes instead of merely criticizing a person, we do something more, which I’ll call “castigation”. When we castigate people to their face, we are not merely asserting that they have fallen short of a standard, but we blame them for it in a way that is intended to sting. Mere criticism may sting, but stinging isn’t part of its intent. Mill’s “disapprobation” is an example of castigation:

If we see that ... enforcement by law would be inexpedient, we lament the impossibility, we consider the impunity given to injustice as an evil, and strive to make amends for it by bringing a strong expression of our own and the public disapprobation to bear upon the offender.

But now notice something:

  1. Castigation is a form of punishment.

  2. It is unjust and inappropriate to punish someone who is not morally culpable.

  3. So, it is unjust and inappropriate to castigate someone who is not morally culpable.

In an extended sense of the word, we also castigate people behind their backs—we can call this third-person castigation. In doing so, we express the appropriateness of castigating them to their face even when that castigation is impractical or inadvisable. Such castigation is also a form of punishment, directed at reputation rather than the feelings of the individual. Thus, such castigation is also unjust and inappropriate in the case of someone lacking moral culpability.

I exclude here certain speech acts done in training animals or small children which have an overt similarity to castigation. Because the subject of the acts is not deemed to be a morally responsible person, the speech acts have a different significance from when they are directed at a responsible person, and I do not count them as castigation.

Thus, whether castigation is narrow (directed at the castigated person) or extended, it is unjust and inappropriate where there is no moral culpability. Mere criticism, on the other hand, does not require any moral culpability. Telling the difference between castigation and mere criticism is sometimes difficult, but there is nonetheless a difference, often conveyed through the emotional load in the vocabulary.

In our society (and I suspect in most others), there is often little care to observe the rule that castigation is unjust absent moral culpability, especially in the case of third-person castigation. There is, for instance, little compunction about castigating people with abhorrent (e.g., racist) or merely silly (e.g., flat earth) views without investigating whether they are morally culpable for forming their beliefs. Politicians with policies that people disagree with are pilloried without investigating whether they are merely misguided. The phrase “dishonest or ignorant”, which should be quite useful for criticism that avoids the risk of unjust castigation, gets loaded to the point where it effectively castigates a person for possibly being ignorant. This is not to deny, of course, that one can be morally blameworthy for abhorrent, silly or ignorant views. But rarely do we know an individual to be morally culpable for their views, and without knowledge, castigation puts us at risk of doing injustice.

I hope I am not castigating anyone, but merely criticizing. :-)

Here is another interesting corollary.

  1. Sometimes it is permissible to castigate friends for their imprudence.

  2. Hence, sometimes people are morally culpable for imprudence.

In the above, I took it that punishment is appropriate only in cases of moral wrongdoing. Mill actually thinks something stronger is the case: punishment is appropriate only in cases of injustice. If Mill is right, and yet if we can rightly castigate friends for imprudence, it follows that imprudence can be unjust, and the old view that one cannot do injustice to oneself is false.

Monday, February 19, 2018

Leibniz on PSR and necessary truths

I just came across a quote from Leibniz that I must have read before but it never impressed itself on my mind: “no reason can be given for the ratio of 2 to 4 being the same as that of 4 to 8, not even in the divine will” (letter to Wedderkopf, 1671).

This makes me feel better for defending only a Principle of Sufficient Reason restricted to contingent truths. :-)

Life, thought and artificial intelligence

I have an empirical hypothesis that one of the main reasons why a lot of ordinary people think a machine can’t be conscious is that they think life is a necessary condition for consciousness and machines can’t be alive.

The thesis that life is a necessary condition for consciousness generalizes to the thesis that life is a necessary condition for mental activity. And while the latter thesis is logically stronger, it seems to have exactly the same plausibility.

Now, the claim that life is a necessary condition for mental activity (I keep on wanting to say that life is a necessary condition for mental life, but that introduces the confusing false appearance of tautology!) can be understood in two ways:

  1. Life is a prerequisite for mental activity.

  2. Mental activity is in itself a form of life.

On 1, I think we have an argument that computers can’t have mental activity. For imagine that we’re setting up a computer that is to have mental activity, but we stop short of making it engage in the computations that would underlie that mental activity. I think it’s very plausible that the resulting computer doesn’t have any life. The only thing that would make us think that a computer has life is the computational activity that underlies supposed mental activity. But that would be a case of 2, rather than 1: life wouldn’t be a prerequisite for mental activity, but mental activity would constitute life.

All that said, while I find plausible the thesis that life is a necessary condition for mental activity, I am more drawn to 2 than to 1. It seems intuitively correct to say that angels are alive, but it is not clear that we need anything more than mental activity on the part of angels to make them be alive. And from 2, it is much harder to argue that computers can’t think.

Thursday, February 15, 2018

Physicalism and ethical significance

I find the following line of thought to have a lot of intuitive pull:

  1. Some mental states have great non-instrumental ethical significance.

  2. No physical brain states have that kind of non-instrumental ethical significance.

  3. So, some mental states are not physical brain states.

When I think about (2), I think in terms similar to Leibniz’s mill. Leibniz basically says that if physical systems could think, so could a mill with giant gears (remember that Leibniz invented a mechanical calculator running on gears), but we wouldn’t find consciousness anywhere in such a mill. Similarly, it is plausible that the giant gears of a mill could accomplish something important (grind wheat and save people from starvation or simulate protein folding leading to a cure for cancer), and hence their state could have great instrumental ethical significance, but their state isn’t going to have the kind of non-instrumental ethical significance that mental states do.

I worry, though, whether the intuitive evidence for (2) doesn’t rely on one’s already accepting the conclusion of the argument.

Beyond binary mereological relations

Weak supplementation says that if x is a proper part of y, then y has a proper part that doesn’t overlap x.

Suppose that we are impressed by standard counterexamples to weak supplementation like the following. Tibbles the cat loses everything but its head, which is put in a vat. Then Head is a part of Tibbles, but obviously Head is not the same thing as Tibbles by Leibniz’s Law (since Tibbles used to have a tail as a part, but Head did not use to have a tail as a part), so Head is a proper part of Tibbles—yet, Head does not seem to be weakly supplemented.

But suppose also that we don’t believe in unrestricted fusions, because we have a common-sense notion of what things have a fusion and what parts a thing has. Thus, while we are willing to admit that Tibbles, prior to its injury, has parts like a head, lungs, heart and legs, we deny that there is any such thing as Tibbles’ front half minus the left lung—i.e., the fusion of all the molecules in Tibbles that are in the front half but not in its left lung.

Imagine, then, that there is a finite collection of parts of Tibbles, the Ts, such that there is no fusion of the Ts. Suppose next that due to an accident Tibbles is reduced to the Ts. Observe a curious thing. By all the standard definitions of a fusion (see SEP, with obvious extensions to a larger number of parts), after the accident Tibbles is a fusion of the Ts.

So we get one surprising conclusion from the above thoughts: whether the Ts have a fusion depends on extrinsic features of them, namely on whether they are embedded in a larger cat (in which case they don’t have a fusion) or whether they are standalone (in which case their fusion is the cat). This may seem counterintuitive, but artefactual examples should make us more comfortable with that. Imagine that on the floor of a store there are hundreds of chess pieces spilled and a dozen chess boards. By picking out—perhaps only through pointing—a particular 32 pieces and a particular board, and paying for them, one will have bought a chess set. But perhaps that particular chess set did not exist before, at least on a common-sense notion of what things have a fusion. So, one will have brought it into existence by paying for it. The pieces and the board now seem to have a fusion—the newly purchased chess set—while previously they did not.

Back to Tibbles, then. I think the story I have just told about denying weak supplementation and unrestricted fusions also suggests something else that’s really interesting: that the standard mereological relations—whether parthood or overlap—do not capture all the mereological facts about a thing. Here’s why. When Tibbles is reduced to a head, we want to be able to say that Tibbles is more than its head. And we can say that. We say that by saying that Head is a proper part of Tibbles (albeit one that is not weakly supplemented). But if Tibbles is more than its head even after being reduced to a head, then by the same token Tibbles is more than the sum of the Ts even after being reduced to the Ts. But we have no way of saying this in mereological vocabulary. Tibbles is the fusion or sum of the Ts when that fusion is understood in the standard ways. Moreover, we have no way of using the binary parthood or overlap relations to distinguish how Tibbles is related to the Ts from relationships that are “mere sum” relationships.

Here is perhaps a more vivid, but even more controversial, way of seeing the above point. Suppose that we have a tree-like object whose mereological facts are like this. Any branch is a part. But there are no “shorn trunks” in the ontology, i.e., no trunk-minus-branches objects (unless the trunk in fact has no branches sticking out from it). This corresponds to the intuition that while I have arms and legs as parts, there is no part of me that is constituted by my head, neck and trunk. And (this is the really controversial bit) there are no other parts—there are no atoms, in particular. In this story, suppose Oaky is a tree with two branches, Lefty and Righty. Then Lefty and Righty are Oaky’s only two proper parts. Moreover, by the standard mereological definitions of sums, Oaky is the sum of Lefty and Righty. But it’s obvious that Oaky is more than the sum of Lefty and Righty!

And there is no way to distinguish Oaky using overlap and/or parthood from a more ordinary case where an object, say Blob, is constituted from two simple halves, say, Front and Back.

What should we do? I don’t know. My best thought right now is that we need a generalization of proper parthood to a relation between a plurality and an object: the As are jointly properly parts of B. We then define proper parthood as a special case of this when there is only one A. Using this generalization, we can say:

  • Head is a proper part of Tibbles before and after the first described accident.

  • The Ts are jointly properly parts of Tibbles before and after the second described accident.

  • Lefty and Righty are jointly properly parts of Oaky.

  • It is not the case that Front and Back are jointly properly parts of Blob.
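One way to record the proposal a bit more explicitly (my gloss; the post itself leaves the relation primitive): write JPP(the As, B) for “the As are jointly properly parts of B”. At a minimum, if JPP(the As, B), then each of the As is a part of B; and x is a proper part of y if and only if JPP holds of the one-membered plurality consisting of x alone and y. The moral of the Tibbles and Oaky cases is then that JPP is not definable from parthood and overlap: after the second described accident Tibbles is, by the standard definitions, a fusion of the Ts, and yet the Ts are jointly properly parts of Tibbles, whereas Front and Back are not jointly properly parts of Blob even though the parthood and overlap facts run in parallel.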

Wednesday, February 14, 2018

Mereology and constituent ontology

I’ve just realized that one can motivate belief in bare particulars as follows:

  1. Constituent ontology of attribution: A thing has a quality if and only if that quality is a part of it.

  2. Universalism: Every plurality has a fusion.

  3. Weak supplementation: If x is a proper part of y, then y has a part that does not overlap x.

  4. Anti-bundleism: A substance (or at least a non-divine substance) is not the fusion of its qualities.

For, let S be a substance. If S has no qualities, it’s a bare particular, and the argument is done.

So, suppose S has qualities. By universalism, let Q be the fusion of the qualities that are parts of S. This is a part of S by uncontroversial mereology. By anti-bundleism, Q is a proper part of S. By weak supplementation, S has a part P that does not overlap Q. That part has no qualities as a part of it, since if it had any quality as a part of it, it would overlap Q. Hence, P is a bare particular. (And if we want a beefier bare particular, just form the fusion of all such Ps.)

It follows that every substance has a bare particular as a part.
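In symbols, a compressed restatement of the argument (writing ≤ for parthood, < for proper parthood, and O for overlap): let Q be the fusion of { q : q is a quality and q ≤ S }, which exists by 2 and satisfies Q ≤ S. By 1, the qualities of S are exactly its quality-parts, so by 4 we have Q ≠ S, and hence Q < S. By 3, there is a P ≤ S with ¬O(P, Q). If some quality q had q ≤ P, then q ≤ S by transitivity, so q ≤ Q, and hence O(P, Q), contradiction. So no quality is a part of P, and therefore, by 1, P has no qualities: it is a bare particular.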

[Bibliographic notes: Sider thinks that something like this argument means that the debate among constituent metaphysicians over bare particulars is merely verbal. Not all bare particularists find themselves motivated in this way (e.g., Smith denies 1).]

To me, universalism is the most clearly false claim. And someone who accepts constituent ontology of attribution can’t accept universalism: by universalism, there is a fusion of Mt. Everest and my wedding ring, and given constituent ontology, the mountainhood that is a part of Everest and the goldenness of my ring will both be qualities of EverestRing, so that EverestRing will be a golden mountain, which is absurd.

But universalism is not, I think, crucial to the argument. We use universalism only once in the argument, to generate the fusion of the qualities of S. But it seems plausible that even if universalism in general is false, there can be a substance S such that there is a fusion Q of its qualities. For instance, imagine a substance that has only one quality, or a substance that has a quality Q1 such that all its other qualities are parts of Q1. Applying the rest of the argument to that substance shows that it has a bare particular as a part of it. And if some substances have bare particular parts, plausibly so do all substances (or at least all non-divine substances, say).

If this is right, then we have an argument that:

  1. You shouldn’t accept all of: constituent ontology, weak supplementation, anti-bundleism and anti-bare-particularism.

I am an anti-bundleist and an anti-bare-particularist, but constituent ontology seems to have some plausibility to me. So I want to deny weak supplementation. And indeed I think it is plausible to say that the case of a substance that has only one quality is a pretty good counterexample to weak supplementation: that one quality lacks even a weak supplement.

Tuesday, February 13, 2018

Theistic multiverse, omniscience and contingency

A number of people have been puzzled by the somewhat obscure arguments in my “Divine Creative Freedom” against a theistic modal realism on which (a) God creates infinitely many worlds and (b) a proposition is possible if and only if it is true at one of them.

So, here’s a simplified version of the main line of thought. Start with this:

  1. For all propositions p, necessarily: God believes p if and only if p is true.

  2. There is a proposition p such that it is contingent that p is true.

  3. So, there is a proposition p such that it is contingent that God believes p. (1 and 2)

  4. Contingent propositions are true at some but not all worlds that God creates. (Theistic modal realism)

  5. So, there is a proposition p such that whether God believes p varies between the worlds that God creates. (3 and 4)

Now, a human being’s beliefs might vary between locations. Perhaps I am standing on the Texas-Oklahoma border, with my left brain hemisphere in Texas and my right one in Oklahoma, and with my left hemisphere I believe that I am in Texas while with my right one I don’t. Then in Texas I believe I am in Texas while in Oklahoma I don’t believe that. But God’s mind is not split spatially in the same way. God’s beliefs cannot vary from one place to another, and by the same token cannot vary between the worlds that God creates.

An objection I often hear is something like this: a God who creates a multiverse can believe that in world 1, p is true while in world 2, p is false. That's certainly correct. But then the propositions God believes are necessary ones: it is necessary that in world 1, p is true, and necessary that in world 2, p is false. And God has to believe all truths, not just the necessary ones. Hence, at world 1, he has to believe p, and at world 2, he has to believe not-p.

Proper classes as merely possible sets

This probably won’t work out, but I’ve been thinking about the Cantor and Russell Paradoxes and proper classes and had this curious idea: Maybe proper classes are non-existent possible sets. Thus, there is actually no collection of all the sets in our world, but there is another possible world which contains a set S whose members are all the sets of our world. When we talk about proper classes, then, we are talking about merely possible sets.

Here is the story about the Russell Paradox. There can be a set R whose members are all the actual world’s non-self-membered sets. (In fact, since by the Axiom of Foundation, none of the actual world’s sets are self-membered, R is a set whose members are all the actual world’s sets.) But R is not itself one of the actual world’s sets, but a set in another possible world.

The story about Cantor’s Paradox that this yields is that there can be a cardinality greater than all the cardinalities in our world, but there actually isn’t. And in world w2 where such a cardinality exists, it isn’t the largest cardinality, because its powerset is even larger. But there is a third world which has a cardinality larger than any in w2.

It’s part of the story that there cannot be any collections with non-existent elements. Thus, one cannot form paradoxical cross-world collections, like the collection of all possible sets. The only collections there are on this story are sets. But we can talk of collections that would exist counterfactually.

The challenge to working out the details of this view is to explain why it is that some sets actually exist and others are merely possible. One thought is something like this: The sets that actually exist at w are those that form a minimal model of set theory that contains all the sets that can be specified using the concrete resources in the world. E.g., if the world contains an infinite sequence of coin tosses, it contains the set of the natural numbers corresponding to tosses with heads.

Saturday, February 10, 2018

Counting goods

Suppose I am choosing between receiving two goods, A and B, or one, namely C, where all the goods are equal. Obviously, I should go for the two. But why?

Maybe what we should say is this. Since A is at least as good as C, and B is non-negative, I have at least as good reason to go for the two goods as to go for the one. This uses the plausible assumption that if one adds a good to a good, one gets something at least as good. (It would be plausible to say that one gets something better, but infinitary cases provide a counterexample.) But there is no parallel argument that it is at least as good to go for the one good as to go for the two. Hence, it is false that I have at least as good reason to go for the one as to go for the two. Thus, I have better reason to go for the two.
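In symbols (my gloss, writing ⪰ for “at least as good as”): A+B ⪰ A, since adding the good B to A yields something at least as good, and A ⪰ C, since the goods are equal; so, assuming transitivity, A+B ⪰ C. There is no parallel route to C ⪰ A+B, and if that claim is therefore false, A+B is strictly better than C.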

This line of thought might actually solve the puzzles in these two posts: headaches and future sufferings. And it's very simple and obvious. But I missed it. Or am I missing something now?

Friday, February 9, 2018

Counting infinitely many headaches

If the worries in this post work, then the argument in this one needs improvement.

Suppose there are two groups of people, the As and the Bs, all of whom have headaches. You can relieve the headaches of the As or of the Bs, but not both. You don’t know how many As or Bs there are, or even whether the numbers are finite or infinite. But you do know there are more As than Bs.


  1. You should relieve the As’ headaches rather than the Bs’, because there are more As than Bs.

But what does it mean to say that there are more As than Bs? Our best analysis (simplifying and assuming the Axiom of Choice) is something like this:

  2. There is no one-to-one function from the As to the Bs.

So, it seems:

  3. You should relieve the As’ headaches rather than the Bs’, because there is no one-to-one function from the As to the Bs.

For you should be able to replace an explanation by its analysis.

But that’s strange. Why should the non-existence of a one-to-one function from one set or plurality to another set or plurality explain the existence of a moral duty to make a particular preferential judgment between them?

If the number of As and Bs is finite, I think we can do better. We can then express the claim that there are more As than Bs by an infinite disjunction of claims of the form:

  4. There exist n As and there do not exist n Bs,

which claims can be written as simple existentially quantified claims, without any mention of functions, sets or pluralities.
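For instance (my own illustration), reading “there exist n As” as “there exist at least n As”, the n = 2 case of (4) is:

  ∃x ∃y (Ax ∧ Ay ∧ x ≠ y) ∧ ¬∃x ∃y (Bx ∧ By ∧ x ≠ y),

i.e., there are at least two As and there are not at least two Bs, with no quantification over functions, sets or pluralities.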

Any such claim as (4) does seem to have some intuitive moral force, and so maybe their disjunction does.

But in the infinite case, we can’t find a disjunction of existentially quantified claims that analyzes the claim that there are more As than Bs.

Maybe what we should say is that “there are more As than Bs” is primitive, and the claim about there not being a one-to-one function is just a useful mathematical equivalence to it, rather than an analysis?

The thoughts here are also related to this post.

Thursday, February 8, 2018

Ersatz objects and presentism

Let Q be the set of all relevant unary predicates (relative to some set of concerns). Let P_Q be the powerset of Q. Let T be the set of abstract ersatz times (e.g., real numbers or maximal tensed propositions). Then an ersatz pre-object is a partial function f from a non-empty subset of T to P_Q. Let b be a function from the set of ersatz pre-objects to T such that b(f) is a time in the domain of f (this uses the Axiom of Choice; I think the use of it is probably eliminable, but it simplifies the presentation). For any ersatz pre-object f, let n(f) be the number of objects o that did, do or will exist at b(f) and that are such that:

  1. o did, does or will exist at every time in the domain of f

  2. o did not, does not and will not exist at any time not in the domain of f

  3. for every time t in the domain of f and every predicate F in Q, o did, does or will satisfy F at t if and only if F ∈ f(t).

Then let the set of all ersatz objects relative to Q be:

  • O_Q = { (i, f) : i < n(f) },

where i ranges over ordinals and f over ersatz pre-objects. We then say that an ersatz object (i, f) ersatz-satisfies a predicate F at a time t if and only if F ∈ f(t).

The presentist can then do with ersatz objects anything that the eternalist can do with non-ersatz objects, as long as we stick to unary predicates. In particular, she can do cross-time counting, being able to say things like: “There were more dogs than cats born in the 18th century.”
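Here is a toy, finite Python sketch of the construction (my own illustration: the predicate set, the sample tensed “history”, and helper names like born_in are all invented for the example; the choice function b drops out in a finite setting). It shows the cross-time count of dogs and cats coming out right:

# Toy model of the ersatz-object construction (illustration only; data invented).
Q = {"dog", "cat"}                 # the relevant unary predicates
CENTURY = range(1701, 1801)        # ersatz times: years of the 18th century

# The tensed facts the presentist grants: for each (past) object, the years at
# which it did exist and the predicates from Q it satisfied in each such year.
HISTORY = [
    {y: frozenset({"dog"}) for y in range(1710, 1725)},
    {y: frozenset({"dog"}) for y in range(1740, 1752)},
    {y: frozenset({"dog"}) for y in range(1770, 1780)},
    {y: frozenset({"cat"}) for y in range(1730, 1746)},
    {y: frozenset({"cat"}) for y in range(1760, 1775)},
]

# An ersatz pre-object is a partial function from times to subsets of Q,
# represented here as a dict from years to frozensets of predicates.
def n(f):
    """Number of objects that fit the pre-object f exactly (clauses 1-3)."""
    return sum(1 for o in HISTORY if o == f)

# The pre-objects that actually occur, and then O_Q = {(i, f) : i < n(f)}.
pre_objects = [dict(p) for p in {tuple(sorted(o.items())) for o in HISTORY}]
O_Q = [(i, f) for f in pre_objects for i in range(n(f))]

def ersatz_satisfies(obj, F, t):
    """(i, f) ersatz-satisfies F at t iff F is in f(t)."""
    _, f = obj
    return t in f and F in f[t]

def born_in(obj, F, years):
    """The object's lower temporal boundary falls in `years`, and it is F then."""
    _, f = obj
    start = min(f)
    return start in years and ersatz_satisfies(obj, F, start)

dogs = sum(1 for o in O_Q if born_in(o, "dog", CENTURY))
cats = sum(1 for o in O_Q if born_in(o, "cat", CENTURY))
print(dogs > cats)  # True: more ersatz dogs than ersatz cats born in the 18th century

Note that two objects with identical trajectories would simply yield n(f) = 2 and hence two ersatz objects (0, f) and (1, f), which is the point of the index i.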

Extending this construction to non-unary predicates is a challenging project, however.

Presentism and counting future sufferings

I find it hard to see why on presentism or growing block theory it’s a bad thing that I will suffer, given that the suffering is unreal. Perhaps, though, the presentist or growing blocker can say that it is a primitive fact that it is bad for me that a bad thing will happen to me.

But there is now a second problem for the presentist. Suppose I am comparing two states of affairs:

  A. Alice will suffer for an hour in 10 hours.
  B. Bob will suffer for an hour in 5 hours and again for an hour in 15 hours.

Other things being equal, Alice is better off than Bob. But why?

The eternalist can say:

  1. There are more one-hour bouts of suffering for Bob than for Alice.

Maybe the growing blocker can say:

  2. It will be the case in 16 hours that there are more bouts of suffering for Bob than for Alice.

(I feel that this doesn’t quite explain why B is twice as bad, given that the difference between B and A shouldn’t be grounded in what happens in 16 hours, but never mind that for this post.)

But what about the presentist? Let’s suppose presentism is true. We might now try to explain our comparative judgment by future-tensing (1):

  3. There will be more bouts of suffering for Bob than for Alice.

But what does that mean? Our best account of “There are more Xs than Ys” is that the set of Xs is bigger than the set of Ys. But given presentism, the set of Bob’s future bouts of suffering is no bigger than the set of Alice’s future bouts of suffering, because if presentism is true, then both sets are empty as there are no future bouts of suffering. So (3) cannot just mean that there are more future bouts of suffering for Bob than for Alice. Perhaps it means that:

  4. It will be the case that the set of Bob’s bouts of suffering is larger than the set of Alice’s.

This is true. In 5.5 hours, there will presently be one bout of suffering for Bob and none for Alice, so it will then be the case that the set of Bob’s bouts of suffering is larger than the set of Alice’s. But while it is true, it is similarly true that:

  5. It will be the case that the set of Alice’s bouts of suffering is larger than the set of Bob’s.

For in 10.5 hours, there will presently be one bout for Alice and none for Bob. If we read (3) as (4), then we have to likewise say that there will be more bouts of suffering for Alice than for Bob, and so we don’t have an explanation of why Alice is better off.

Perhaps, though, instead of counting bouts of suffering, the presentist can count intervals of time during which there is suffering. For instance:

  6. The set of hour-long periods of time during which Bob is suffering is bigger than the set of hour-long periods of time during which Alice is suffering.

Notice that the times here need to be something like abstract ersatz times. For the presentist does not think there are any future real concrete times, and so if the periods were real and concrete, the two sets in (6) would both be empty.

And now we have a puzzle. How can fact (6), which is just a fact about sets of abstract ersatz times, explain the fact about how Bob is (or is going to be) worse off than Alice? I can see how a comparative fact about sets of sufferings might make Bob worse off than Alice. But a comparative fact about sets of abstract times should not. It is true that (6) entails that Bob is worse off than Alice. But (6) isn’t the explanation of why.

Our best explanation of why Bob is worse off than Alice is, thus, (1). But the presentist can’t accept (1). So, presentism is probably false.

Wednesday, February 7, 2018

A really weird place in conceptual space regarding infinity

Here’s a super-weird philosophy of infinity idea. Maybe:

  1. The countable Axiom of Choice is false,

  2. There are sets that are infinite but not Dedekind infinite, and

  3. You cannot have an actual Dedekind infinity of things, but

  4. You can have an actual non-Dedekind infinity of things.

If this were true, you could have actual infinites, but you couldn’t have Hilbert’s Hotel.

Background: A set is Dedekind-infinite if and only if it is the same cardinality as a proper subset of itself. Given the countable Axiom of Choice, one can prove that every infinite set is Dedekind infinite. But we need some version of the Axiom of Choice for the proof (assuming ZF set theory is consistent). So without the Axiom of Choice, there might be infinite but not Dedekind-infinite sets (call them “non-Dedekind infinite”). Hilbert’s Hotel depends on the fact that its rooms form a Dedekind infinity. But a non-Dedekind infinity would necessarily escape the paradox.
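For a concrete illustration of the background notion: the natural numbers are Dedekind infinite, since n ↦ n+1 maps them one-to-one onto the proper subset {1, 2, 3, ...}; that self-map is exactly what Hilbert’s Hotel exploits when every guest moves from room n to room n+1 to free up a room. A non-Dedekind infinite set admits no such one-to-one correspondence with any of its proper subsets, which is why the hotel trick cannot be run on it.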

Granted, this is crazy. But for the sake of technical precision, it’s worth noting that the move from the impossibility of Hilbert’s Hotel to the impossibility of an actual infinite depends on further assumptions, such as the countable Axiom of Choice or some assumption to the effect that if actual Dedekind infinities are impossible, then non-Dedekind ones are as well. These further assumptions are pretty plausible, so this is just a very minor point.

I think the same technical issue affects the arguments in my Infinity, Causation and Paradox book (coming out in August, I just heard). In the book I pretty freely use the countable Axiom of Choice anyway.

Tuesday, February 6, 2018

Another intuition concerning the Brouwer axiom

Suppose the Brouwer axiom is false. Thus, there is some possible world w1 such that at w1 our world w0 is impossible. Here’s another world that’s impossible at w1: the world w at which every proposition is both true and false. Thus at w1, possibility does not distinguish between our lovely world and w. But that seems to me to be a sign that the possibility in question isn’t making the kinds of distinctions we want metaphysical modality to make. So, if we are dealing with metaphysical modality, the Brouwer axiom is true.
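(For reference: the Brouwer axiom is the schema p → □◇p, and in Kripke semantics it corresponds to the symmetry of the accessibility relation; so its failure is exactly the situation just described, a world w1 that is possible relative to w0 while w0 is not possible relative to w1.)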

You can't run this argument if you run this one.

Generating and destroying

Thanks to emails with a presentist-inclining graduate student, I have come to realize that on eternalism there seems to be a really interesting disanalogy between the generation of an item and its destruction, at least in our natural order.

Insofar as you generate an item, it seems you do two things:

  • Cause the item to have a lower temporal boundary at some time t1.

  • Cause the item to exist simpliciter.

But insofar as you destroy an item, you do only one thing:

  • Cause the item to have an upper temporal boundary at some time t2.

You certainly don’t cause the item not to exist simpliciter, since if an item is posited to exist simpliciter, it exists simpliciter (a tautology, of course). It is a conceptual impossibility to act on an existing item and make it not exist, though of course you can act on an item existing-at-t1 and make it not exist-at-t2 and you can counteract the causal activity of something that would otherwise have caused an item to exist. However, existing-at-t isn’t existing but a species of location, just as existing-in-Paris isn’t existing but a species of location.

I suppose one could imagine a world where generation always involves two separate causes: one causes existence simpliciter and another selects when the object exists. In that world there would be an analogy between the when-cause and the destroyer.

(Maybe our world is like that with respect to substance. Maybe only God causes existence simpliciter, while we only cause the temporal location of the substances that God causes to exist?)

I suppose one could see in all this an instance of a deep asymmetry between good and evil.

Monday, February 5, 2018

A heuristic argument for the Brouwer axiom

Suppose that:

  1. We cannot make sense of impossible worlds, but only of possible ones, so the only worlds there are are possible ones.

  2. Necessarily, a possible worlds semantics for alethic modality is correct.

  3. Worlds are necessary beings, and it is essential to them that they are worlds.

Now, suppose the Brouwer axiom, that if something is true then it’s necessarily possible, is not right. Then the following proposition is true at the actual world but not at all worlds:

  4. Every world is possible.

(For if Brouwer is false at w1, then there is a world w2 such that w2 is possible at w1 but w1 is not possible at w2. Since w1 is still a world at w2, at w2 it is the case that there is an impossible world.)

Say that the “extent of possibility” at a world w is the collection of all the worlds that are possible at w. Thus, given 1-3, if Brouwer fails, the actual world is a world that maximizes the extent of possibility, by making all the worlds be possible at it. But it seems intuitively unlikely that if worlds differ in the extent of possibility, the actual world should be so lucky as to be among the maximizers of the extent of possibility.

So, given 1-3, we have some reason to accept Brouwer.

Counting down from infinity

In one version of the Kalaam argument, Bill Craig argues against forming an infinite past by successive addition by asking something like this: Why would someone who had been counting down from infinity have been finished today rather than, say, yesterday? This argument puzzles me. After all, there is a perfectly good reason why she finished today: because today she reached zero and yesterday she was still on the number one. And yesterday she was on one because the day before she was on two. And so on.

Of course, one can object that such a regress generates no explanation. But then the Kalaam argument needs a Principle of Sufficient Reason that says that there must be explanations of such regressive facts and an account of explanation according to which the explanations cannot be found in the regresses themselves. And with these two assumptions in place, one doesn’t need the Kalaam argument to rule out an infinite past: one can just run a “Leibnizian style” cosmological argument directly.

Saturday, February 3, 2018

Generalizing the Euthyphro argument

On my reading of the Euthyphro, what is supposed to be wrong with the idea that the pious is what the gods love is that, as even Euthyphro knows, the gods love the pious because it is pious. (The famous dilemma is largely a rhetorical flourish. It’s clear to Socrates and Euthyphro which horn to take.)

It seems to me that little is gained when one says that the pious is what it is appropriate for the gods to love. For it is just as clear that the reason that it is appropriate for the gods to love pious actions is that the pious actions are pious.

Once we see this, we see that a similar objection can be leveled against many appropriateness views, such as Strawson’s view that the blameworthy is what makes appropriate certain reactive attitudes. For it is clear that these reactive attitudes are appropriate precisely because the person is blameworthy.

There might be some things that are to be accounted for in terms of appropriate responses. But I can’t think of any other than perhaps pure social constructions like money.

Friday, February 2, 2018

A non-dimensionless infinitesimal probabilistic constant?

Suppose you throw a dart at a circular target of some radius r in such a way that the possible impact points are uniformly distributed over the target. Classically, the probability that you hit the center of the target is zero. But suppose that you believe in infinitesimal probabilities, and hence assign an infinitesimal probability α(r)>0 to hitting the center of the target.

Now, α(r) intuitively should vary with r. If you double the radius, you quadruple the area of the target, and so you should be only one quarter as likely to hit the center. If that’s right, then α(r)=β/r² for some infinitesimal constant β.
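Spelling out the scaling: doubling the radius multiplies the area by four, so α(2r) = α(r)/4, and more generally α(λr) = α(r)/λ² for every λ > 0. Hence β = r²·α(r) is independent of r, and α(r) = β/r². Since α(r) is a probability and so dimensionless, β must carry the dimensions of r², i.e., of an area.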

This means that in addition to the usual constants of physics, there is a special infinitesimal constant measuring the probability of hitting the center of a target. Now, there is nothing surprising about general probabilistic stuff involving constants like π and e. But these are dimensionless constants. However, β is not dimensionless: in SI units, it is expressed in square meters. And this seems incredible to me, namely that there should be a non-dimensionless constant calibrating the probabilities of a uniformly distributed dart throw. Any non-dimensionless constant should vary between worlds with different laws of nature—after all, there will be worlds where meters make no sense at all (a meter is 1/299792458 of the distance light travels in a second; but you can have a world where there is no such thing as light). So, it seems, the laws of nature tell us something about the probabilities of uniform throws. That seems incredible.

It is so much better to just say the probability is zero. :-)

Thursday, February 1, 2018

Leibniz: a reductionist of the mental?

Leibniz talks about all substances having unconscious perceptions, something that threatens to be nonsense and to make Leibniz into a panpsychist.

I wonder if Leibniz wasn’t being unduly provocative. Let me tell you a story about monads. If Alice is a monad, Alice has a family of possible states, the Ps, such that for each state s among the Ps, Alice’s teleological features make it be the case that there is a state of affairs s* concerning the monads—Alice and the other monads—such that it is good (or proper) for Alice to have s precisely insofar as s* obtains.

This seems a sensible story, one that neither threatens to be nonsense nor to make its proponent a panpsychist. It may even be a true story. But now note that it is reasonable to describe the state s of Alice as directly representing the state of affairs s* around her. Teleological features are apt to be hyperintensional, so the teleological property that it is good for Alice to have s precisely insofar as s* obtains is apt to be hyperintensional with respect to s*, which is precisely what we expect of a representation relation.

And it seems not much of a stretch to use the word “perception” for a non-derivative representation (Leibniz indeed expressly connects “perception” with “representation”). But it doesn’t really make for panpsychism. The mental is teleological, but the teleological need not be mental, and on this story perceptions are just a function of teleology pure and simple. In heliotropic plants, it is good for the plant that the state of the petals match the position of the sun, and that’s all that’s needed for the teleological mirroring—while plants might have some properly mental properties, such mirroring is not sufficient for it (cf. this really neat piece that Scott Hill pointed me to).

If we see it this way, and take “perception” to be just a teleological mirroring, then it is only what Leibniz calls apperceptions or conscious perceptions that correspond to what we consider mental properties. But now Leibniz is actually looking anti-Cartesian. For while Descartes thought that mental properties were irreducible, if we take only the conscious perceptions to be mental, Leibniz is actually a reductionist about the mental. In Principles of Nature and Grace 4, Leibniz says that sometimes in animals the unconscious perceptions are developed into more distinct perceptions that are the subject of reflective representation: representation of representation.

Leibniz may thus be the first person to offer the reduction of conscious properties to second-order representations, and if these representations are in fact not mental (except in Leibniz’s misleading vocabulary), then Leibniz is a reductionist about the mental. He isn't a panpsychist, though I suppose he could count as a panprotopsychist. And it would be very odd to call someone who is a reductionist about the mental an idealist.

Of course, Leibniz doesn’t reduce the mental to the physical or the natural as these are understood in contemporary non-teleological materialism. And that’s good: non-teleological naturalist reductions are a hopeless project (cf. this).


For almost three years, I’ve occasionally been thinking about a certain mathematical question about infinity and probability arising from my work in formal epistemology (more details below). I posted on mathoverflow, and got no answer. And then a couple of days ago, I saw that the answer is trivial, at least by the standards of research mathematics. :-)

It’s also not a very philosophically interesting answer. For a while, I’ve been collecting results that say that under certain conditions, there is no appropriate probability function. So I asked myself this: Is there a way of assigning a finitely additive probability function to all possible events (i.e., all subsets of the state space) defined by a countable infinity of independent fair coin tosses such that (a) facts about disjoint sets of coins are independent and (b) the probabilities are invariant under arbitrary heads-tails reversals? I suspected the answer was negative, which would have been rather philosophically interesting, suggesting a tension between the independence and symmetry considerations in (a) and (b).
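Stated a bit more explicitly (my formalization of the question): the state space is Ω = {0,1}^ℕ, one coordinate per coin, and we ask for a finitely additive probability function P defined on all subsets of Ω such that (a) whenever S and T are disjoint sets of coins, any event E determined only by the coordinates in S is independent of any event F determined only by the coordinates in T, i.e., P(E ∩ F) = P(E)·P(F), and (b) for every set A of coins, P is invariant under the map that reverses heads and tails on exactly the coordinates in A. The further condition mentioned below, which cannot be had, is (c): invariance of P under every permutation of the coordinates.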

But it turns out that the answer is positive. This isn’t philosophically interesting. For the conditions (a) and (b) are too weak for the measure to match our intuitions about coin tosses. To really match these intuitions, we would also need a third condition, invariance under permutations of coins, and that we can’t have (that follows from the same method that is used to prove the Banach-Tarski paradox). It would, however, have been interesting if just (a) and (b) were too much.

Oh well.

Wednesday, January 31, 2018

A consideration in favor of factory farming

In the US, the number of days per week that people in the under $25,000 income bracket (2012 data) consume meat is approximately the same as in higher income brackets. It seems very plausible to think that what makes this equality in the consumption of meat possible is the affordability of meat resulting from factory farming.

Suppose this is so, and suppose that factory farming were eliminated. Then, there might be little change in the consumption of meat by people at the top of the income scale, as they could afford meat from small and expensive operations, but we would expect a significant decrease in the consumption of meat at the lower end of the income scale. Since people’s feelings of happiness and misery are largely tied to comparative evaluations, this inequality would be likely to lead to increased misery in those at the lower levels of the income scale who are already living lives of quiet desperation.

Moreover, observe that the impact that a pleasure has on a life intuitively depends on what other pleasures are available in life. Suppose I ceased to eat meat. But my own life is really great, and opportunities for pleasure abound. I have a wonderful family, a really fun job, access to Baylor’s incredible recreational facilities, books, video games, microcontrollers from Aliexpress, and enough leisure to enjoy all these. Losing regular access to the gustatory pleasures of meat would make only a small dent in my subjective wellbeing. (It would mostly be a nuisance: I would have to think harder how to fulfill my high caloric needs.)

But imagine that I am really poor. I am trying to support my family by holding down multiple low-paying jobs. There are few pleasures I have time for in my life and much stress. In that context, losing regular access to meat would take away one of the few pleasures in life—the pleasure of eating meat with a regularity that in the past mainly the rich could afford. Moreover, the pleasures of eating are often not just solitary pleasures but communal: I would lose sharing this pleasure with other family members living difficult lives.

In other words, the elimination of factory farming would have a highly unequal effect: the rich could still have their meat, and meat plausibly makes for a much smaller contribution to the well-being of the rich given all the other opportunities for pleasure available to them; but the poor would lose much of their access to one of the much smaller number of pleasures in their life, which would be a much greater loss.

Now, of course, an argument like this won’t justify every inhumane farming method in the name of making meat available to the poor. But it seems to me to be a serious argument in favor of some factory farming practices. Moreover, there is here a positive, albeit weak, argument in favor of even those of us who could afford meat farmed more humanely to eat factory farmed meat: in doing so, we contribute to the economies of scale that make meat affordable to those in lower income brackets to a historically unprecedented degree.