About Me

Australian philosopher, literary critic, legal scholar, and professional writer. Based in Newcastle, NSW. My latest books are THE TYRANNY OF OPINION: CONFORMITY AND THE FUTURE OF LIBERALISM (2019) and AT THE DAWN OF A GREAT TRANSITION: THE QUESTION OF RADICAL ENHANCEMENT (2021).

Monday, April 11, 2011

Currently reading - Value and Virtue in a Godless Universe by Erik J. Wielenberg

Thanks to one of my beloved commenters for recommending this book. I'm glad to be reading it, and there is certainly some interesting stuff in it - including a very transhumanist sort of approach to the idea of morally enhancing ourselves (see Chapter 4).

The book is really just too Kantian for my taste. I've never found much of Kant's philosophy to be at all convincing or even plausible, so I'm inevitably going to be out of sympathy with someone who wants to help himself, without a lot of argument, to a broadly Kantian way of looking at the world. More particularly, the book relies on a naive objectivism about values and moral obligations that it never really earns. E.g., Wielenberg's approach to divine command theories of ethics is to reject them based on their alleged incompatibility with such claims as that pain is intrinsically bad and falling in love is intrinsically good.

Now, pain certainly seems bad to me. I avoid it, as almost all of us do. I do, in fact, think that there is a point in the vicinity of the one that Wielenberg makes about pain, so I have some sympathy for what he says, and this would be worth some deeper exploration. After all, a powerful being that commanded us to inflict otherwise-gratuitous pain on each other would be regarded (by us, as we are) as evil. In fact, this would be pretty much a paradigm case of what we mean by an "evil" being. This suggests that our notion of evil - whatever exactly it turns out to be - is not infinitely flexible, and that it certainly does not amount to an idea of disobedience to a god's commands. It actually has something more to do with the malicious infliction of pain and suffering.

We'd be inclined to regard disobedience to a powerful being that commanded the infliction of gratuitous pain as good.

All the same, Wielenberg is very quick to insist that we just know that pain is intrinsically bad. Do we really know that? I know that I want to avoid pain for myself. I know that I am sympathetic to others, and therefore want to avoid pain for them as well. I know that I'll therefore make an unfavourable evaluation of anyone who is disposed to inflict pain avoidably and gratuitously, or who commands that this be done. What I don't know is that any other rational being, irrespective of its own desires and values, must make the same evaluations as I do or else be simply and factually wrong (or caught out by an error of reasoning of some sort).

That's a very difficult thing to demonstrate; it seems quite counter-intuitive once we start working through the detail (what exactly is a sadistic Martian's mistake when it says "What is that to me?" at the prospect of causing pain to Earthlings?); and it has never been satisfactorily demonstrated. The idea that it must be like that looks very like a projection of our values onto the external universe and/or a product of socialisation.

Wielenberg provides an interesting manifesto for the possibility of value and virtue in a god-free world. I could, however, have done with something a bit more rigorous at key points in the argument.

18 comments:

That Guy Montag said...

There's a lot of theory behind this question, so I'm not going to pretend it's disinterested, but I'm curious to see if other people have any similar thoughts, because it seems to me to be key to even starting to get to grips with a genuinely empirical moral theory.

The question is: if we assume that morals really exist, and that we're not simply born understanding them, what kind of account would be necessary to learn about them?

For example, one answer some people will suggest is that morals are simply inculcated in us by our societies and social surroundings. What I want to see is what sort of theory would take us from seeing this inculcation not as a culture imposing a particular set of actions on me, but instead as playing a part in letting me see real moral facts such as "this moral principle is a brute fact about this society", or something to that effect. Would a story that could do this sort of a job be amenable to, or equivalent to, scientific investigation? Would it work to explain how I could come to see someone else's pain as something to be avoided? Could it tell a story that could show how I might view my own pain as needing to be avoided? Would it be continuous with a more general theory of perception?

I hope this doesn't sound too arrogant, but my last question is particularly for you, Russell. I'm sure you're aware that one of the most common arguments for Rationalism more generally is that it's needed to explain perception. John Locke, for instance, turned his blank slate into a bare cupboard, but all of the module theories of mind treat experience in the same way, only instead of, say, a conceptual scheme, the translating element is the structure of the module in the brain. Doesn't this suggest that Kant might have something to say about pain, which I'll admit I'm slightly naively assuming should be seen as perceptual?

strangebeasty said...

I think that moral philosophy is in need of a positive account of the phenomenon of value judgment that can be sensibly opposed to the naive and false idea of value as something objective or inherent. I still haven't seen it go beyond a criticism of the idea of moral objectivity as apparently untenable. I agree that it is in fact untenable and false, but we don't really have an argument if we don't have anything convincing to say about what's really going on. This is one of the reasons I've been pushing the notion of commitment as the basis that remains for moral judgments when the more obviously subjective wants, preferences, and emotions are put aside. Commitments are often both unconscious and grounded in personal identity, which allows them to appear as though they are sources of intensely important objective perceptions even though they actually motivate subjective evaluations.

Paul W. said...

That Guy Montag:

"The question is if we assume that morals really exist, and that we're not simply born understanding them, what kind of account would be necessary to learn about them?"

I'd think it would look a lot like accounts of other forms of learning, like visual perception, language learning, and intuitive "theory of mind" (which autistic folk have trouble with).

In all three areas, there are clearly strong genetically determined neurological biases to detect certain kinds of patterns, think and learn about what the detected patterns mean, hit on certain kinds of concepts, and carve the world up accordingly.

In at least the first two cases, there are maturational issues---we're not born knowing those things, and the brain circuitry grows and develops, partly in interaction with the world.

That development could be mostly "maturational" in the sense that within broad parameters of environmental stimuli, with "enough" experiences of the right very general sorts, a very specific module structure will reliably emerge.

For example, kittens raised in an environment with no vertical features don't develop proper circuitry in the visual cortex for detecting tall thin things, and are doomed to run into chair legs thereafter. They have the genes for vertical feature detection, and some of the resulting circuitry, but without the right stimuli, it doesn't develop into a working vertical object detector.

(In almost any remotely normal environment, kittens will develop very similar visual systems, with all the usual parts doing all the usual things, but in a freaky environment with no vertical features, they're screwed.)

I'd expect morality to be very roughly analogous---we have genes that evolved to grow a "proper" morality "module," sorta, but environment plays a big role in whether it develops properly.

A big theoretical question is how specific the preprogrammed "module" structure is---how reliably does a certain fairly specific "normal" structure emerge, across a broad range of environments?

Anonymous said...

That so many people think this is a specially challenging and hot problem for atheists testifies to widespread, total ignorance of philosophical ethics, and to the existence of an awful lot of Protestant atheists trying to recover from their childhoods.

Catholics don't grow up believing anything remotely like the divine command theory, anyway.

People studying philosophical ethics spend as much time as it takes to read Euthyphro on that notion and never turn back.

March Hare said...

I think there's a much deeper problem with pain (or pleasure): when you get down to brass tacks, it is simply the flow, or pattern, of information in a brain, which is itself a cascading electrochemical reaction.

Can we truly say such a thing is objectively good or bad? Is there a difference if the substrate is a brain, a computer simulation, or simply lots of pens and paper? When you get down to the nitty-gritty, it all gets a little bit complex.

The argument is simple to make in subjective terms, but the moment you try to make it objective you run into all sorts of complications and paradoxes. Which is, I guess, why we're both more or less subscribers to moral error theory.

Havok said...

A more recent and (possibly) more sophisticated defence of moral realism from Wielenberg is found in "In Defense of Non-Natural Non-Theistic Moral Realism".
Not having read more than chapter 2 of the book, I'm not sure how this compares to it.

Russell Blackford said...

I can't deal quickly with all the thoughts in these comments, but keep up the discussion, folks. Meanwhile, some sketchy, staccato thoughts.

I don't think there's any great difficulty in principle with why we dislike and avoid painful things. Painful things have in common their ability to hurt us, and not necessarily a lot else. Generally, however, they are things that damage our bodily functioning or can even threaten our lives. There seems to be a clear enough reason why we'd find those things painful. It's not a perfect match, especially under modern conditions where we can often control penetrations of the body (e.g. with scalpels and vaccination needles) so that they are actually doing us good - but they still hurt.

I don't think there's any further question about why we generally find painful experiences unpleasant. That seems like one question too many. But there's a question as to how we come to fear pain itself, as opposed to painful things. Again, it doesn't seem to me to be difficult in principle - we have the intellectual capacity to abstract away the varied causes of pain and identify the experience of pain in the abstract as (to say the least) unpleasant ... and so we can have these discussions of pain in the abstract, not just of things that hurt.

The more difficult question is why we care about each other's pain, not just our own. That does seem to be largely innate to us as a species, even if it needs a certain amount of socialisation to reinforce it. So there's presumably an evolutionary story as to how it came about. Human societies would be very different (if they were even possible) if we didn't more-or-less universally respond in this sympathetic way to others' pain. But whether we can ever get a really detailed and robust scientific account of how it came about remains to be seen.

It doesn't look that difficult in principle to get a scientific account of how it develops within individual lives, and there's quite a body of research on this, even though I'm not the person to start wheeling out the references.

My view about moral systems, very roughly, is that they exist to meet certain needs (and we can tell a story about what a need is) of creatures like us who have certain capacities, certain vulnerabilities, and a responsiveness to each other that enables us to bond and form families and societies. Moral systems impose positive obligations on us and they also impose restrictions on us. In doing so, they do contribute to the welfare of conscious creatures, as Sam Harris would say, but more particularly they contribute to the viability of the societies concerned - it's not surprising that moral systems are heavily weighted to the interests of various in-groups. These days, we have all sorts of reasons (we could discuss these) to prefer moral systems that are less weighted to the in-group. But I think it's unreasonable to expect a moral system to give no extra weight to the in-group at all, or to expect it to require individuals to give no extra weight to their own interests. That kinda neglects why human beings need moral systems in the first place - mainly to assist mutual welfare by restricting or regulating selfishness, but not to eliminate selfishness altogether, and not to function in a way that gives no advantage to the particular society.

In the end I see particular human moral systems as constructions built on an evolutionary foundation. But they're not arbitrary constructions - they serve functions, and not just any system will work. Also, we can rationally evaluate some of these systems as better than others.

Svlad Cjelli said...

It's problematic that "pain" can mean either
"things that cause the sensation of pain, and how they cause the sensation of pain"
or simply "the sensation of pain".

Things that cause the sensation of pain can be either good things or bad things. The sensation of pain itself, I would argue, is good when the thing causing it is bad, and bad when the thing causing it is good.

The last paragraph has unwritten exceptions, as hinted by "I would argue". There is for example the matter of degree, as in the sensation being inappropriately severe even when the causing thing is bad. Another objection may come from any person enjoying the experience of various bodily stress functions, such as a masochist, curious individual or thrill-seeker.

Finally, maybe "suffering" is less ambiguous than "pain"?

Dave Ricks said...

Gaius Sempronius Gracchus cleared my head. I have a similar opinion, not from reading philosophy, but from seeing dogs play nice at a dog park. You can't explain that!*

But really, if we pick any species of social mammal, we see some playing, and some fighting. How are humans any different?

I see the Golden Rule as a verbal expression of some prelingual emotion E in mammals. If I'm correct, then expressing this emotion E as a verbal rule VR (or modeling it as a mathematical optimization MO) is a fine thing (for Sam H.), just don't mistake VR (or MO) for E.**

Of course, morality is more than the Golden Rule. Humans are different from other animals in the predictive power of our intelligence, which gives us foresight of our consequences, and in the increasing power of our tools (vaccines, A-bombs, etc.), so we have dilemmas. Mathematical optimization is one approach to resolving dilemmas, but I won't mistake math for emotions that can be manipulated to bring out a vote.

Morality = emotion + debating dilemmas***

*being ironic
**wink @ E (for Sam H.)
***my opinion, other views welcome
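A minimal sketch of what "modeling it as a mathematical optimization" might look like, assuming a toy welfare-maximization reading of the Golden Rule; the function name, actions, and utility numbers below are all invented for illustration, not anything Dave Ricks or Sam Harris actually specifies:

    # Golden Rule as optimization: weight everyone's utility equally
    # (mine and yours alike), then pick the action with the best total.
    def golden_rule_choice(actions, utilities):
        """Return the action maximizing the equally weighted sum of utilities."""
        return max(actions, key=lambda a: sum(utilities[a]))

    actions = ["share", "hoard"]
    utilities = {
        "share": [3, 3],  # (me, you): modest gain for both, total 6
        "hoard": [5, 0],  # big gain for me, nothing for you, total 5
    }
    print(golden_rule_choice(actions, utilities))  # -> share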

That Guy Montag said...

Russell:

Thanks for the full response.

Anyway, I think there are a range of points in the responses that I found interesting.

First, there's March Hare's point about the flow of information, which I think is worth examining more fully, because the kind of theory I think we need here is one that gives a story about the relationship between the mind and the world.

Another interesting point is Paul's examination of perception from a neuroscience perspective. It seems to me that this kind of account opens the door for looking at the cognitive side of perception, which I've said before I think has a role to play; in particular, I think this is where normative theories fit best.

The final point is a bit of a paraphrase of Svlad's point (sorry if I've misinterpreted it): that the question with pain isn't necessarily whether it motivates us, but rather whether it's the sensation of pain that counts as the reason, or the fact that pain represents a harm that counts as a reason. Just as an aside, I want to be very clear that I'm using 'represent' very loosely here: I don't want to commit myself to any particular kind of theory about cognitive content.

Now Russell from what I can tell there are roughly two lines to the position you're putting forward. The first is that when we properly understand morals, they need to be seen as the individual's will or desires pushing outwards, a very Humean position. The second point is that moral facts, if they exist at all, will be grounded in a kind of social aggregate of those desires. Now this makes sense, because you're not appealing to any fundamentally different kind of thing. The only tension I can think of is that there might be a question of how an evolutionary account imposes itself on desires, but overall this seems to me to be a coherent moral system.

Where I would want to come in at this moment is just to suggest that there's at least one other coherent way to answer this question. I think it would be possible to develop Svlad's point to the stage where what looks like emotional affect becomes really just the perceptual end of a very different kind of throughput of information. Maybe a good analogy would be making a decision about some area of dispute. We might start off pretty certain about a particular line of argument A. Someone raises B and points to various problems with A, and in turn we feel less certain about A. C is raised as a tentative alternative option, but we then find that it resolves the issues both A and B set out to resolve. We might still not feel too confident, but as issue after issue is resolved we can come to be certain about the rightness of C. The point is that each step of this account includes a subjective element, the degree of certainty with which we hold a particular point, but what is doing the shifting around will be various reasons. Now, I realise there is an argument here that we could always say that what counts as a reason just corresponds to our desires. Whether that works better as an account or not is not what I'm trying to establish here; all I'm trying to show is that it's possible to construct a plausible account which doesn't put subjective desires at the starting point, but instead places them as responding to, or being responsible to, the world.
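One toy way to formalize this picture of certainty being shifted around by reasons, offered as a gloss rather than anything Montag commits to: treat the degrees of confidence in rival positions A, B, and C as probabilities, and let each newly raised consideration rescale them according to how well each position handles it. All weights below are invented for illustration:

    # Credences in rival positions, updated Bayesian-style as reasons come in.
    def update(credences, weights):
        """Rescale each credence by its weight and renormalize to sum to 1."""
        scaled = {h: credences[h] * weights[h] for h in credences}
        total = sum(scaled.values())
        return {h: p / total for h, p in scaled.items()}

    credences = {"A": 0.6, "B": 0.3, "C": 0.1}  # start out fairly sure of A
    reasons = [
        {"A": 0.2, "B": 0.7, "C": 0.9},  # someone raises a problem for A
        {"A": 0.3, "B": 0.3, "C": 0.9},  # C resolves what A and B set out to
        {"A": 0.4, "B": 0.4, "C": 0.9},  # issue after issue favours C
    ]
    for weights in reasons:
        credences = update(credences, weights)
    print(credences)  # credence has migrated from A toward C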

Svlad Cjelli said...

"The final point is a bit of a paraphrase of Svlad's point, sorry if I've misinterpreted it, that the question with pain isn't necessarily whether it motivates us, but rather whether it's the sensation of pain that counts as the reason, or the fact that pain represents a harm that counts as a reason. Just as an aside I want to be very clear that I'm using represent very loosly here: I don't want to commit myself to any particular kind of theory about cognitive content."

The interpretation looks correct, as far as I can tell.
I'd also like to repurpose this response-post to bemoan how tangled and amalgamous the mental totality of a human is. ("Mental totality" because terms like mind are often understood as a limited part of what goes on in the head.)
When we consider the possibility that there is no strong sense of self to accept at face value as the authority on what a human wants, it seems very daunting to sort out the "desires".

That Guy Montag said...

Svlad:

I agree. It seems to me that if we follow some of the thinking from, say, modular theories of mind, the self becomes a very strange sort of object, and that seems like a good argument against the idea that there can be a self for subjectivism to be motivated from.

My motivation more generally, though, is that I don't think this is a purely negative answer. Scepticism about the self, at least when it's motivated by modern theories of the mind, seems to me to lead to realism: if there is no self to motivate morals, either morality doesn't exist or it's external; and if it's external, the puzzle is what sort of state I need to be in in order to perceive it, and only then, maybe, what it is that I perceive that motivates it. Once I've started the project of trying to figure out what state I need to be in in order to say I have perceived an external fact about morals, I'm already treating morals as real things to be perceived.

So to bring this back to your response: how does this reflect on desires? Well, step one is that I don't deny what motivates the subjectivists: there is a sensation of being motivated to act. It seems to me right that we respect this well-grounded intuition; I only need to shift the end point away from the self, to the reasons themselves. To be motivated is to perceive a reason for action. From there, it seems plausible that the story we can start to tell will be a scientific one, with motivations as observations. Just as with vision, we can have better and worse evidence for having an experience, more or less vivid kinds of experiences, tools that can either improve or confound our experience, and a conclusion that is motivated by the sum of those observations.

Svlad Cjelli said...

Yes, if we dislodge, so to speak, the desires from the self, then they're external in a certain sense, specifically external to the self. But I think they can still be said to be internal in a sense of how they relate to the concert, or collective, of selfless entities involved.
Around this point, I begin to lose my confidence on the topic, but we seem to be heading in the general direction of reasons that might be considered objectively valid/binding in some sense of physical determinism, i.e. caused causes rather than everyday psychological motivations or reasoned reasonings - though the difference seems to lie mostly in scale to me.

March Hare said...

TGM/SV, I think the further you go down this road, the closer you get to the idea that desires etc. are all simply in the brain, and that what we desire is not a change in the external world but simply an alteration/stimulation in the brain.

How that relates to morality or subjective vs. objective I'm too busy at work to properly think about.

That Guy Montag said...

Svlad:
Okay, that last comment of mine was about as clear as mud. Sorry.

I'm going to go off and have a think about how I might make my point clearer, but as a rough guide: I want to get away from talking about what morals are and instead to talk about how we come to know about them. As part of that, I want to stop talking about desires as tangible things made up of a self-created will, and instead to see them as the conscious, mind-dependent part of a flow of causes from a real moral fact to an action. I have to be careful about this analogy, but I really think that we need to see desires as essentially the same as colours.

I do want to just briefly address the point you make about where you end up once you've started to accept scepticism about the self. As I see it, your thought is that if our moral ontology isn't grounded in the idea of a solid self, then the way to ground it will be in solid scientific facts about human beings as a whole. Now this might ultimately be true; it might also be enough to develop a thoroughgoing realism about morals. For my part, though, that kind of thinking is a bit too far down the road. My goal is just to show how it's possible for us to have evidence of real moral facts and, from there, to find out what they consist in. My own suspicion is that no one thing counts as a moral fact, and that actually there are as many moral facts as there are moral questions: the facts about whether or not I should cheat on my girlfriend are different from the facts about whether or not we should ban the burkha.

Svlad Cjelli said...

I have a vague sense that seeing desires as colours might be fruitful, though I still haven't got it figured out.

Yes, that's roughly what I was thinking, and it's farther down the road than I would like to go as well. Honestly, ethics and meta-ethics have never quite been my cups of tea.

That Guy Montag said...

MH:

I suspect thinking like that is going to form some part of an overall picture of morals for sure. I wonder if it's maybe a bit too much to include it at this level of the analysis though? I'm having a hard enough time thinking about this simply from the level of common, everyday experience of the way the world impacts on us. If I try to include discussion of brain states I think it will just make it too complicated to get anything across.

Svlad:

Metaethics has never been my strong suit either, and you're being far too kind by calling me farther down any such road: I'm just a slightly pretentious second-year philosophy student. Right now, not sounding completely batshit nuts is about as high as I'm aiming.

Svlad Cjelli said...

It's as good a place to aim as any, or better.