Problems of ontology
If there were a theme to my week of browsing the internet, it would be arguments over ontology warping people’s thinking. I suppose it’s a legitimate subject for discourse, but the more I see it lead people astray, the less useful it seems. Somehow intelligent, thoughtful people come to believe crazy things when they start worrying about what is “really real”, especially regarding morality. This is not a new debate, of course, but it was reignited by the publication of Sam Harris’ book The Moral Landscape, which argued that morals are “true” in some sense and could be determined scientifically. On the off-chance I haven’t posted this before: I was sympathetic to this argument until I discovered a PhD thesis by Joshua Greene entitled The Terrible, Horrible, No Good, Very Bad Truth about Morality and What to Do About it. Here’s a link to all 250+ pages of it (worth reading at least some of it), but you might just want to read this shorter article. The basic idea is quite simple: morality is a property of minds, not of the natural world, and therefore is not “true” in any universal way. That doesn’t mean you should go out and kill your neighbor or rob a bank, but even if it did, the facts about morality would be the same. There is the Truth about morality, and, separately, What to Do About it. That’s the most frustrating objection I’ve heard to moral anti-realism, so I thought I’d get it out of the way before continuing.
This week’s troubles started with a post on Cosmic Variance about another moral realist, and why he was wrong. Good on Sean for setting the record straight, but I was surprised to see that the moral realist he was arguing with was none other than Richard Carrier, who so spectacularly and elegantly defined naturalism as “no ontologically basic mental entities”. Naturalism is perhaps a discussion for another post, though I think I may have brought it up before. If not, here’s the link. If he was advocating moral realism, perhaps I should at least consider his view. After reading his argument, I was surprised by the subtle missteps in his reasoning. I suspect they stem mainly from Carrier’s desire to recover what he sees as “beneficial” aspects of Christian doctrine, such as an absolute moral force, along with goodness, kindness, and other things I really would call unmitigated goods.
Carrier manages to agree with me on almost every philosophical fact, and yet calls his view realism, whereas I call myself an anti-realist. Situations such as these suggest that at least one of us is failing to make our beliefs pay rent in anticipated experience. I think Carrier’s desire to find a naturalistic source for the good bits of Christianity gives him the motive, but luckily I don’t have to speculate on exactly where he went wrong, since he provides an explicit discussion of his reasoning in his post on moral ontology. He uses a number of examples in his post, but I think the first is sufficient to explain his logic:
Take, for instance, the scariness of an enraged bear: a bear is scary to a person (because of the horrible harm it can do) but not scary to Superman, even though it’s the very same bear, and thus none of its intrinsic properties have changed. Thus the bear’s scariness is relative, but still real. It is not a product of anyone’s opinions, it is not a cultural construct, but a physical fact about bears and people. Thus the scariness of an enraged bear is not a property of the bear alone but a property of the entire bear-person system.
Certainly you cannot observe bear-scariness under a microscope or pick it up with a radio antenna, but, he claims, it’s not solely a mental phenomenon. Therefore, assuming we aren’t Superman, we ought to believe bears are scary. Given this definition of “ought”, it’s only a few (completely valid) philosophical jumps to oughts for values. Given that we have certain goals, goals like happiness and fulfillment that are common to almost all intelligent agents, there are certain instrumental values we ought to hold, like the rule of law, free expression, etc. Thus, he concludes, since there are values grounded in real life that we should hold regardless of any other rational belief, morality is real. I don’t deeply disagree with this, although I feel it’s slightly misleading given what moral realists usually believe.
But, as I said, I think the real problem comes when you try to use these beliefs about morality to constrain your expectations of the world. Although this is absolutely essential to the pursuit of rationalism, I think Carrier can be forgiven for not including it in his article, since he usefully covered so much other philosophical ground. I will save the details for my next post, but in case you’re reading this before I’ve written it, ask yourself this: if a highly intelligent (and therefore not irrationally amoral) alien or robot suddenly came to our planet, what “morals” would you expect it to have by Carrier’s definition, assuming you have no prior information about its beliefs and goals?