I must raise this one before (a) becoming more intermittent for a week or so, and (b) moving to some other subjects for a while. It's relevant to the "counting grains of sand" issue that we were discussing the other day.
On pages 71-73 of The Moral Landscape, Sam briefly discusses the work of Derek Parfit. Quite properly, he notes on page 71 (and explains over the page in more detail) that Parfit has shown how consequentialist theories of morality lead to "troubling paradoxes", whether we are concerned to maximise total units of welfare (or whatever) or average units. He (Sam) refers to this as one of the "practical difficulties for consequentialism", but it's not just a practical difficulty. It's a challenge to the very coherence - or at least the intuitiveness, when probed - of any kind of consequentialism that claims we are objectively required to maximise something (I take a consequentialist approach myself, in a broader sense).
Unless Parfit's paradoxes can be solved, consequentialism, in the relevant sense, is in deep conceptual and theoretical trouble, not merely practical trouble. Sam does not claim to be able to solve them, and he offers no good grounds for confidence that they can be solved.
Perhaps they can be. (My own "solution" is to deny that there are objectively binding moral principles and that we are trying to maximise anything; to base our decisions on a plurality of values; and generally to take a pluralistic approach to how we ought to act. I have an article about this in a fairly recent issue of The Journal of Medical Ethics, where I apply it to the life extension debate).
But in the end, he concedes that summation of "welfare" cannot be our only metric (while saying that at the extremes there must be some kind of metric - this is just starting to get good, and is as close as he comes to addressing the metric problem).
But he then fobs off the problem by remarking that certain moral questions are difficult to solve in practice, and that nothing untoward follows from the practical difficulty or impossibility of knowing the consequences of our thoughts and actions. It's as if this were just another case of not being able, in practice, to count the grains of sand on the beach.
Unfortunately, it's not like that. Parfit's paradoxes, which Sam cannot solve (any more than I can without cheating), are not about the practical difficulty of knowing the consequences of our actions or of counting well-being units. They go far deeper. To treat them as if they were about practical difficulties is, alas, to make an elementary error.
Once again, I feel I'm being hard on someone whom I admire and consider an ally. However, I do wonder why he was satisfied with this passage, or why the people whom he consulted let it through in this form. It's probably better that I don't speculate here - it's presumably just an honest error, after all. But this passage provides an opportunity for the book to go to the next level ... and it actually starts to get there before dealing with the problem as if it were merely about knowing the practical results of our actions (something that Parfit is not on about at all, as TML itself makes clear).
The following section is also interesting, but, whatever else it does, it doesn't address the questions raised by Parfit (among other things it argues that perhaps we should continue to accept that people will be biased towards loved ones, etc., as they may actually end up working to maximise global well-being; it's a kind of rule-utilitarian argument, but not one that addresses the deep problems about maximising).
Oh well, shall we talk about something else for now?