- Your car is running low on oil.
- If your car runs out of oil, the engine will seize up.
- You don’t want your car’s engine to seize up.
- Therefore, you ought to change the oil in your car.
My problem with it is that it's only a hypothetical imperative. We can imagine a rational being somewhere in the universe that, for whatever reason, doesn't care whether its car's engine seizes up, or might even want it to seize up. This kind of reasoning can deliver us "oughts" (and even Hume accepted this). What it can't do is deliver us oughts that transcend our desires, oughts that are objectively binding on us irrespective of our desire sets.
I don't see any reason to think that our desire sets will all be the same, even after rational reflection. Why - without cheating and introducing a concept of rational reflection that is already moralised - would they not be sensitive to our differing initial desires? So even if we imagine a sort of "ought" that is about the relationship between desire-after-rational-reflection and what conduces to its fulfilment, there's still no reason that I can see - other than a leap of faith - why we will all end up acting in the same way in circumstances C.
To take this a bit further, most of us do have a lot of common desires, and there are often ways that we can compromise and come up with collective ways of acting that fulfil a lot of them. Once again, this will set boundaries to what kinds of systems of law and morality societies come up with in practice. These systems are not just arbitrary - i.e., not just any system will do. But that's a long way from saying either that there is just one "correct" system or that everyone has good reasons, after reflection, to go along completely with her society's local system (as in vulgar moral relativism).
35 comments:
"What it can't do is deliver us oughts that transcend our desires..."
It also can't give us green oughts. To hell with that! I like the color green. Perhaps I will stand on a street corner with a petition and collect signatures in protest.
I will need a chant.
"1,2,3,4, we want oughts transcending desires!/
5,6,7,8 also, oughts should be green, because a lot of people like that color and few people strongly dislike it so it's a good choice according to a lot of utility functions that people actually hold which are quite sensitive to lowering anyone's utility, but don't forget that many people do hold green as a favorite color too even if not as much as blue, so choosing green creates a lot of positive value, and once we collectively decide to make oughts green that will clear the hurdle of deciding what to do and then we can work on figuring out how to do it."
Hmm could use some work.
Russell, this is going to be necessarily broad, necessarily laymanesque, and may highlight that I haven't read the back-and-forths between you, Massimo and Mr Harris as closely as I could have (although I won't beat myself up too much on that front; I'm not employed to read interesting philosophical blogs, sadly), but I'm struggling to see a particularly drastic difference between what Carrier is saying and what Harris is saying.
They're both saying you can, to some extent, get ought from is, unless I'm mistaken. Harris, effectively: the ought is "We want human well-being." With the goalposts shifted like that, his arguments become more plausible, but with various issues which you and others have laid out. Carrier's, effectively: "Ought is our desires." And again, the goalposts are shifted a little, but once they are, you can reasonably freely say a lot of stuff about what one ought to do, as long as you know the desire in advance to place into the equation.
Now, granted, Carrier is far less prominent than Harris, so the ripples of his little blog may not stretch so far. But it seems odd that Harris's book and ideas are so immediately and widely ripped to shreds while is/ought is still seen as an untouchable pillar (unless this is an assumption too far on my behalf), and yet the three quarters of Carrier's blog that I've read so far say the same, or something very similar, and seem to be part of a school of thought that's been around for a while, with no one kicking up a stink like they have with Harris.
I guess my question is what's the difference between Moral Realism and what Harris has been proclaiming, and is there any reason that Sam touched such a nerve other than the obvious notoriety he already has?
Hi Russell--
I would like to see the hypothetical tightened-up version of the argument so that it would actually be logically valid. Do you disagree that, in order for the word "ought" to appear in the conclusion, it (or a synonym) has to appear in the premises? (Probably you don't disagree with that.) Then the question is whether that premise has the character of something purely empirical, a fact about the world -- which I don't think it will.
I suspect you are trying to *define* "ought" in a way that is free of the kind of assumptions I am talking about. (I could be wrong about that, so feel free to set me straight.) But I don't think it can be done. Or if you insist, you will end up making a statement about the empirical situation you are identifying with "ought," rather than what most people think of as the moral imperative associated with it.
All I'm saying is: something as innocent as "we ought to act in a way so as to make happen things we think ought to happen" is still a premise, one that should be stated explicitly, and not confused with a definition or an empirical statement.
I completely understand the non-universality of the imperative, that's not what I'm taking issue with.
"All I'm saying is: something as innocent as "we ought to act in a way so as to make happen things we think ought to happen" is still a premise, one that should that be stated explicitly, and not confused with a definition or an empirical statement."
"What the Tortoise Said to Achilles"
David, I think there's less "stink" because a book by a best-selling author is of much more cultural significance than a blog post. Also, it attracts discussion over a much greater period of time.
But in my case, I also think that what Carrier is saying is basically correct up to a certain point in a way that I don't think is so with Harris. I do think that Carrier's approach will fail to deliver a truly objective morality, but I think the basic kind of moral psychology he's operating with is about right.
As far as I can see, quickly, his view is very similar to Michael Smith's which I go along with part of the way.
I really need to look at Carrier's piece in much more detail - or preferably read the more formal version in the Loftus book (but, again, all these books cost money and libraries can't justify buying them all).
Sean, I haven't forgotten you - I'll get back to you on your comment.
Oh, David - I should add for your sake that I don't see the is/ought thing as sacrosanct and neither did Hume (or Mackie if it comes to that). The is/ought question is really about how you can go from "is" to "ought" without mentioning a desire in your premises and deriving a merely hypothetical "ought". Hume was well aware that hypothetical imperatives can be derived from the combination of facts about the world and desires. The problem isn't that you can't derive oughts at all, it's that you can't derive oughts solely from facts about the world without mentioning desires, and that if you mention desires you inevitably end up deriving oughts that are contingent on the desire sets of agents (and are in that sense subjective). We don't seem to be able to derive oughts that are independent of the desire sets of agents (and in that sense objective).
No offence to Sam, but Carrier seems to have a better grasp of the problem than he has, or at least than he had before he got into this debate. :)
A lot of the kerfuffle with Harris was that there was something that he wasn't "getting" about Hume, etc., whereas Carrier does seem to get this stuff. But again, his attempt to tackle it looks very like Michael Smith's, from a quick read of his post, and I've never believed that that will get us all the way to an objective morality.
Well, getting back to this, I don't think that "ought" in the context we're talking about (the car engine, etc.) means more than a claim that doing something is in accordance with instrumental rationality. And I don't think that means anything in addition to some course of action conducing to fulfilling a desire (or averting something feared or obtaining something valued, or something of the kind). So we don't need to put "ought" in the premises each time, but we do need to put the reference to a desire in the premises.
Thus, I can say:
P1. Sean desires that his car's engine not seize up.
P2. If Sean oils his car's engine it will conduce to his car's engine not seizing up.
C. (Other things being equal) it is instrumentally rational for Sean to oil his car's engine.
And if we want we can replace C. with:
C'. (Other things being equal) Sean ought to oil his car's engine.
So, instrumental or practical rationality has a logic of its own that enables us to deduce the kind of ought we're currently talking about, as long as we make reference in the premises to both an agent's desire and some fact about what course of action would help to fulfil it.
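Purely as an illustration of that "logic of its own" - this is my own sketch, not Carrier's or Smith's formalism, and every name in it (Agent, desires, conduces, instrRational, rule) is a placeholder I've made up - the practical syllogism can be written out in a proof assistant such as Lean, with the bridging rule stated once as a hypothesis:

```lean
-- A minimal sketch, under made-up names: the practical syllogism treated as
-- an ordinary derivation. "rule" stands for the proposed rule of practical
-- logic: a desire premise plus a means premise yield an instrumental "ought".
example
    (Agent Outcome Action : Type)
    -- "agent a desires outcome o"
    (desires : Agent → Outcome → Prop)
    -- "doing x conduces to outcome o"
    (conduces : Action → Outcome → Prop)
    -- "doing x is instrumentally rational for agent a"
    (instrRational : Action → Agent → Prop)
    -- the rule of practical logic, stated once
    (rule : ∀ (a : Agent) (o : Outcome) (x : Action),
              desires a o → conduces x o → instrRational x a)
    (sean : Agent) (noSeize : Outcome) (oilEngine : Action)
    -- P1: Sean desires that his car's engine not seize up
    (P1 : desires sean noSeize)
    -- P2: oiling the engine conduces to the engine not seizing up
    (P2 : conduces oilEngine noSeize) :
    -- C: (other things being equal) oiling the engine is instrumentally rational for Sean
    instrRational oilEngine sean :=
  rule sean noSeize oilEngine P1 P2
```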
The trouble is that this is a long way from getting us moral oughts as I believe they are usually understood, i.e. reasons for acting that are desire-transcendent.
Now Smith and Carrier want to say that real practical rationality isn't about fulfilling your actual desires but about fulfilling the desires you'd have on full rational reflection, or some such thing. But even so, that will still only give us reasons for acting that are relative to facts about ourselves as individuals. It won't give us reasons for acting that are independent of facts about individuals' psychological makeup. Worse, as long as we'd have different desires even after rational reflection, we'd still each have reasons to act in various different ways - sometimes only slightly different, sometimes very different. The way that is rational for you to act, with your set of rationally reflective desires, will not be guaranteed to be rational for me to act with mine.
Given that there are things we all care about in common, we might reach a social contract whereby we all agree to act within certain constraints. That's familiar from Hobbesian reasoning, prisoner's dilemma problems, and so on. Actual moral codes may be like this - imperfect, culturally-evolving solutions to such problems - so, again, I don't think that actual moral codes are just arbitrary. They cannot take just any form at all.
But nor can we say that it will always be most rational for a particular individual X who has desire set D to act in circumstances C in accordance with the local code. In many cases, it won't be.
Presumably Carrier addresses this somewhere in his long post, so I suppose I need to read it more carefully. But it seems to me to be an insurmountable problem for someone who wants a fully objective morality.
Cheers Russell,
Especially for the secondary clarification. I mentioned Carrier's blog and your responses over at my usual haunts and someone suggested I check out some John Searle (his book, Making the Social World), which certainly sounds interesting enough. More importantly, though, he suggested I check out the podcasts of Searle's Berkeley lectures.
I did a double take: this may be old news to some of you, but I just discovered how many lectures are up on iTunes from Berkeley, Oxford, Stanford, etc. All for free!
Again, apologies if it's old news, but I was like a kid in a candy store.
Coincidentally, this was recently posted by Michael De Dora over at 'Rationally Speaking'.
A repost of my comment on Michael's article:
"I know that Massimo and Russell Blackford have raised important criticisms of Sam Harris's arguments in 'The Moral Landscape', but I share Harris's conviction that there are such things as 'beneficial-thus-better' values and 'harmful-thus-worse' ones.
Yes, Harris assumes human well-being as a premise and therefore fails to solve Hume's is-ought problem. But in the context of human life, of human aims and aversions, surely this is an acceptable premise on which to construct a quasi-objective morality? Isn't this the pragmatic path to take, even if it means failing to address Hume's (perhaps insoluble) problem?"
~
From what I understand of the arguments for sophisticated moral relativism, they are convincing enough for me to abandon my previously held naive (and ignorant) moral objectivism. But as I mentioned in the above comment, surely Harris and his fellow moral quasi-objectivists may validly posit human well-being as the premise of morality, regardless of the is-ought issue? Because it's a practical place to start?
I realise that this may come across as being dismissive of the philosophical complexities (which Harris has been predictably accused of). But I admire Harris for focusing on the practicality of morality, on ideas that encourage better ethics that hopefully result in greater and more widespread human well-being, Hume notwithstanding.
" ... that everyone has good reasons, after reflection, to go along completely with her society's local system (as in vulgar moral relativism)."
I'm surprised. Is that what the vulgar moral relativism is? Somehow I've avoided hearing this before.
It seems possible to justify virtually any action using the car/oil argument.
For example:
- Henry is seen committing a serious crime.
- If Henry allows the witness to live, he will go to jail.
- Henry doesn't want to go to jail.
- Henry ought to kill the witness.
Essentially, the argument boils down to the premise Sean mentioned in his post, “We ought to do that which would bring about what we want.” How is that a moral argument unless you add additional value judgments on top?
Russell, to make your cleaned-up version of the argument valid by the ordinary standards of logic, you obviously have to add the premise that you are taking as implicit:
P3. It is instrumentally rational to do things that cause your desires to be fulfilled. (Or something along those lines.)
Now, of course you can say that it is just a definition. If so, that's fine, but definitions can always be removed by replacing the term with its meaning. In which case, what you have actually proven is
C''. In order for his desires to be fulfilled, Sean should oil his car's engine.
Again, fine, but not all that interesting. The reason I'm being pedantic about this is that even when your "oughts" are purely instrumental, logic dictates that some specification of what they are must be included in the premises of your argument. Just a demand of logic, irrespective of the substance of what we're talking about. Of course it really only becomes important or controversial when we start talking about morality, but if we can't get the logical underpinnings right, there's not much hope for us.
My reading of Harris (and most moral realists I've encountered) is that he accepts the basic is-ought distinction, and isn't actually trying to bridge it, despite his high-level rhetoric.
He's shot himself in the foot by saying that the is-ought distinction is bogus, but then arguing exactly as though he accepts it. His high-level rhetoric sucks---it's actually false---but what he's selling is mostly reasonable modest moral realism.
His discussion of sociopaths is crucial. He's making it quite clear that he doesn't believe in absolute Objective Prescriptiveness---if you don't have any basic moral values, you can't get to them with logic from non-moral facts.
"we ought to act in a way so as to make happen things we think ought to happen"
This type of 'ought' comes up often when I'm trying to convince people that humans don't have metaphysical free will. To make the argument that we are essentially like robots following a program, I usually give an example like:
Say you’re building a deck. You have a lot of screws to screw into the wood. You choose to use a screw gun to do this job instead of a screwdriver. Why did you choose the screw gun? Because it's easier. Why would you choose the easier option?
"Uh... because it's easier."
Yes, but why do what's easier? If your goal is to build a deck quickly and effectively, why would you do the thing that is in line with your goals?
"It's just common sense!"
"Common sense," indeed. That is your program, I tell them. We are programmed to carry out the actions that are in line with our goals. The buck stops here. Saying that "Sean ought to oil his car's engine" or "it is instrumentally rational for Sean to oil his car's engine" is really just acknowledging - it seems to me - that humans naturally agree to a plan of action that is in line with our goals. Because what is the alternative? The alternative would be a person who has the choice between a screwdriver and a screw gun, knows that the latter will be infinitely easier and more effective, wants to get the job done quickly and easily, yet uses the screwdriver anyway! That would be some pretty inconvenient programming to have.
So I see this "ought" as something like a verbal expression of a neural rule. "If your thoughts, knowledge, desires, goals, etc. point toward plan X, DO X."
We just naturally accept that.
I should clarify that I don't see this "ought" as having a whole lot to do with morality, as it is a kind of "logical ought" - different from a "you ought not murder people", which in its unqualified version involves no if/then statement.
Hmmm . . . Russell, it looks to me like you're actually agreeing with Sean. Carrier's offering an argument for instrumental utility (in which case the premise "you don't want your car's engine to seize up" is taken to contain a proposition about rational choices, thus justifying the conclusion about instrumental rationality), and Sean is saying that you can't derive moral oughts that way. I think you agree that you can't derive moral oughts that way. So . . . you agree, right?
Jason, how am I agreeing with Carrier? Aren't I saying that you won't get all the way to a fully objective morality via this approach? I've been criticising this sort of moral rationalist position. But it's a very different position from the moral naturalist position that Harris defends, so of course the criticism isn't the same.
But I've always thought that Michael Smith has a point, and I read Carrier as putting a version of the same argument. If we reflected rationally most of us would reach a lot of convergence on how to act in any circumstances, C. A lot of the disagreement is surely factual disagreement, inconsistencies in our individual desire sets, etc. But the idea that we'd reach full convergence from different starting points with different desire sets seems to me to be a leap of faith.
Consider the car example, which I've used in the past and which Carrier also uses. If we think very carefully about what we want in a car, read road test reports, go for test drives, etc., we will reach a lot of convergence as to what we evaluate as a "good" car and what we evaluate as a "bad" car. Some of us may reach agreement that car X is better than car Y. But it's a leap of faith to think we'll all reach total convergence. For example, some of us just do value fuel economy more than performance, and vice versa. There's no reason to think that a lot of reflection on our desires and a lot of facts will change that. So even after full rational reflection you might continue to evaluate car X as "better" than car Y and I might continue to evaluate car Y as better than car X. Both of us may be perfectly rational, and there is no truth as to which of us is correct.
To me, that's an anti-realist position about the goodness of cars. If someone else wants to call it a sophisticated realist position, or a revisionary realist position, well fine. It might be a matter of terminology, and that's not really what I care about. But whatever you call it, it's a position that says:
1. Evaluations by Person A are not binding on Person B, who has a different desire set from Person A even after rational reflection.
2. Evaluations of cars leave room for perfectly rational disagreement with no further truth as to who is right and wrong.
3. But evaluations of cars can, nonetheless, be perfectly rational.
4. When motoring writers, for example, judge the "Car of the Year" they are not using standards that are just arbitrary.
If, however, people meant, when making evaluations of cars, that they were enunciating something like facts about the world, something that is objectively binding on others irrespective of their desire sets, they'd be saying something false every time they utter a car-evaluation. Car-evaluations never succeed in doing that. So if that were the correct semantics of car evaluations we'd have to adopt an error theory of car evaluations.
I'm sure there's also a useful train of thought in that example, Russell, if you substitute "piece of music" for "car." Certainly, I would like to cudgel some people over the head with it. jussayin.
It applies to other things as well - e.g. the standards that are used by the Booker Prize jury are not just arbitrary. But that doesn't mean that whatever they decide each year is binding on everyone else, regardless of what it is that we actually want from a novel.
Aficionados of novels do, in fact, converge on a lot of agreement in what they want, as they become more sophisticated, but there's no reason to think that all these people with different starting points are going to converge in the end on exactly the same standards of excellence in a novel. So a Booker jury member may make judgments that are not just arbitrary, and may reflect a whole lot about what novels can do to meet the widespread desires of human beings. But people do differ, and in the end there will always be room for rational disagreement about which is "better" - The Satanic Verses or The Lord of the Rings?
In modern societies, most people more or less accept this about most evaluations, at least when pushed. But most people seem very disinclined to accept it about that set of evaluations that we call moral evaluations.
The way you frame this argument depends rather heavily on a somewhat hidden assumption: You seem to be presuming -- without any argument stated here, although I suspect that you could provide arguments when pressed -- that the only value claim premises available are desires, which are more-or-less by definition individual and subjective and therefore only a basis for instrumental ought claims. However, such an assumption rules out in advance the possibility that there are states of affairs which are objectively, as a matter of fact, valuable to any and every human being, whether or not any particular human happens to subjectively value them or recognize them as valuable.
The primary problem with Harris' argument is that he consistently muddles the distinction between what is subjectively valued and what is objectively valuable. The problem with your argument is that it is not clear what your basis is for ruling out the very possibility of states of affairs which are objectively valuable to humans irrespective of our desires (which, I think, is what must be ruled out for error theory to be justified). I'm not saying you do not have or cannot produce such an argument; but I have not seen any evidence of such an argument in the many posts you've made in recent months on ethical and metaethical matters.
G Felis
Sorry, Jason - I misread your comment to say that I agree with Carrier, and so misunderstood what you were getting at. My bad.
You actually wrote that I agree with Sean. Yes, basically I do. But I still think we can have these "practical syllogisms" without needing to write in each time:
P. It is instrumentally rational to do that which conduces to your desires.
What I think we should do is adopt a logical rule (like modus ponens or modus tollens, or whatever) enabling us to go from a premise about desires and a premise about an action conducing to the satisfaction of desires to a conclusion about what it is instrumentally rational to do (and what you "ought" to do in that sense).
I suppose you could work with a rule that says you're always entitled to write in Sean's extra premise. They come to the same thing, and it looks to me as if you could have two systems of practical logic that handle it in different ways (just as there are different but more or less equivalent formulations of the propositional calculus).
There's doubtless lots of formal work done on this somewhere, but I'm no logician beyond undergraduate standard, so it's not something I'd know about.
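Just to make the equivalence concrete - again, this is only my own sketch under made-up names, not any formal system from the literature - stating the rule once, universally, immediately gives back each instance of Sean's extra premise, so the two bits of bookkeeping license exactly the same conclusions:

```lean
-- A minimal sketch, under made-up names: from the standing rule of practical
-- logic we recover, for any particular argument, the instance of Sean's extra
-- premise ("it is instrumentally rational to do what conduces to fulfilling
-- your desire"), so either formulation yields the same instrumental "oughts".
example
    (Agent Outcome Action : Type)
    (desires : Agent → Outcome → Prop)
    (conduces : Action → Outcome → Prop)
    (instrRational : Action → Agent → Prop)
    -- the rule, stated once and for all
    (rule : ∀ (a : Agent) (o : Outcome) (x : Action),
              desires a o → conduces x o → instrRational x a)
    (sean : Agent) (noSeize : Outcome) (oilEngine : Action) :
    -- the extra premise, instantiated for the Sean/oil argument
    desires sean noSeize → conduces oilEngine noSeize →
      instrRational oilEngine sean :=
  rule sean noSeize oilEngine
```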
Svlad - it looks to me as if vulgar moral relativists are committed to something like what I said. I.e. I'm thinking of someone who thinks "morally good" just means "required or encouraged in the culture concerned", but still thinks that terms like "morally good" are action-guiding. If you take that view, it looks to me as if you can reflect all you like but at the end of the day you are supposed to follow the moral code of your own society.
But there's a question, I suppose, as to whether anyone has a view that vulgar when you push them. I suspect that most people who seem to be vulgar moral relativists actually have something else going on.
GF, the problem is that I just don't find this idea of something being "objectively valuable" intelligible. To me, something could only be "objectively valuable" if someone who fails to value it is thereby making some sort of mistake about the world or perhaps committing something like a logical error. But I don't see how that can ever be the case.
Someone who is fully informed about the world and making no logical error can always ask why she should value something that she just doesn't value - and if, ex hypothesi, she cannot be answered with any facts about the world that she doesn't already possess or with any critique of her reasoning process, she gets to say of anything that we do manage to tell her, "What is that to me?"
What if we tell her that something just is valuable, and her failure to appreciate this is the mistake she's making about the world? But this just seems circular. She wants to know what it is that she has wrong, and if that's all we can tell her, we haven't advanced. And by this point the claim that something has this extra property of just being "objectively valuable" sounds downright spooky - I can't get any grip on what it now amounts to.
And she can always reply, "Fine, if you say so. But I'm going to go on not valuing it." And when she does that, it's hard to see how she is doing anything wrong (in the sense of mistaken). Certainly she's not making any mistake of instrumental reasoning - she's not doing anything that is somehow counterproductive to her own goals, which don't at all involve obtaining or furthering or respecting or whatevering this thing. So why should she care about our claim? What more is there that we can say to compel her to care about this thing that we've described as "objectively valuable"? If we say "its objective value", again that's circular. She replies, "Whatever," and goes on not caring.
I don't think there's any such thing as objective value. We project our evaluations onto the world.
There are objective properties of things. There are things and properties of things that are widely valued. There are mutual agreements to treat certain things as having value ("I'll place a value on your life if you place a value on mine"). But again, I can make no sense of the further claim that there is this property of being objectively valuable.
"So even after full rational reflection you might continue to evaluate car X as "better" than car Y and I might continue to evaluate car Y as better than car X. Both of us may be perfectly rational, and there is no truth as to which of us is correct."
Neither position "about the cars" is correct. In fact, neither is a position about the cars; both are incoherent as stated, or at least incomplete. It's true you may believe that you believe that "car Y is better than car X," but this is a wrong belief about a belief, not a belief about cars. As a belief about cars, what you stated would be incoherent.
"But people do differ and in the end there will always be room for rational disagreement about which is "better" - The Satanic Verses or The Lord of the Rings?"
The word "better" has insufficient content to have meaning, there is no room for rational disagreement about any specific meaning of "better" once the necessary context is filled in. Our brains are used to saving sentences from meaninglessness by filling in the most obvious meaning from context. Here, none of several meanings is most obvious.
The ambiguity is in the word "better", not reality. The ambiguity is in the map, not the territory.
About cars, the only correct beliefs of the two perfect rationalists Russell and Brian may well be these.
False: "The same car is better for both Russell and Brian according to Russell," "The Mazda is better for Russell according to Russell," "The Honda is better for Brian according to Russell," "The same car is better for both Russell and Brian according to Brian," "The Mazda is better for Russell according to Brian," "The Honda is better for Brian according to Brian."
True: "Different cars are better for Russell and Brian according to Russell," "The Mazda is better for Brian according to Russell," "The Honda is better for Russell according to Russell," "Different cars are better for Russell and Brian according to Brian," "The Mazda is better for Brian according to Brian," "The Honda is better for Russell according to Brian."
Also true: "Neither car is the better car according to R or B," "Neither car is the square-circle car according to R or B," "Neither car is the vital dualist epicyclic luminiferous aether car according to R or B," "Neither car is the dfsgsdh car according to R or B."
Also true: ""The Mazda is the better car," "The Honda is the better car," and "Both cars are better" are all incoherent according to R and B."
"No perfect rationalist believes 'The Mazda is the better car,' 'The Honda is the better car,' or 'Both cars are better,' regardless of how much information he or she has" according to Russell and Brian. Etc. for other beliefs about beliefs.
"Perfect rationalists with identical information will reach the same conclusion," is true.
"'Perfect rationalists with identical information will reach the same conclusion,' is true according to R and B ," is true. Etc.
@Anonymous
"...desires, which are more-or-less by definition individual..."
I'm not so sure that they are, exactly.
I see. What I was expecting to see was "oneself", "one's group" or "the group of people who are in agreement" rather than "one's society".
With "the group of people who are in agreement", reflection could possibly make one no longer a part of that group, effectively nullifying the reason to go along with it.
I see. Well, yeah, true. But most of the people who seem to have relativist views that have not been deeply developed seem to see it in terms of societies or cultures. But yes, it could be some other group.
I think it's more psychologically attractive at the level of cultures and societies because people imagine that it gives a mandate for non-interference with other societies/cultures. That seems to be an important part of the attraction - it connects up with a certain kind of inter-cultural tolerance, etc., which may or may not be desirable.
i just read that whole carrier piece and cannot believe how awful it is. simply from the standpoint of rhetoric, it is unclear, repetitious, full of non-sequiturs, jumps around, and is generally undisciplined as argumentative exposition. for a writer who claims to be a physicalist, this character is the worst kind of metaphysician: let us treat words as actual things, and then let the words do things that real objects cannot do. He's like Hegel!
moreover in the argument itself, despite diminishing the idea that the greatest part of valuation or morality is cultural (and thus relative), he repeatedly references the cultural milieu surrounding various kinds of value. it's all mixed up! there's a word for this kind of thing, and it's not philosophy. i'm sorry to snark on a PHD, but the word i have in mind starts with b-u-l and ends with h-i-t.
it is unbelievable that anyone would elevate that article by treating it as worthy of serious response. it is so confused, simply trying to make sense of it well enough to critique it is a complete waste of time. "Moral Realism" AGAIN?
the contortions! the twisted logic! join the circus!
Actually, I find nothing at all mysterious about the notion of something being objectively valuable to humans as humans: oxygen, for example. We are organisms with needs, and our opinions and ideas have bugger all to do with those facts. The tricky part is finding that which is both objectively valuable *and* a sensible basis for what we want from ethical theory (which was the problematic open question I raised last week).
Oxygen is valuable to us insofar as we wish to stay alive. It's not valuable to any possible rational creature and it's not valuable to someone who doesn't wish to stay alive. It's valuable relative to a desire set.
To take it a bit further, I have no difficulty at all with morality being built on desire sets (or purposes) that are extremely common or virtually universal among human beings - so much so that we talk about "needs" rather than "desires". In fact, I think that's what they typically are built on. E.g., we need some level of social peace to achieve most of our desires, including our desire to live with a degree of security.
We do need oxygen. Although strictly speaking I'd say that oxygen is something we require for a purpose or to meet a desire - the purpose of continuing to live or the desire to stay alive! Needs are relative to some kind of requirement that we have. So when I say, "I need X", someone can reply "What do you need it for?" asking me what goal it will help me meet, or what desire it will fulfil, or what purpose I will use it for. In some cases, I may need it just for my survival.
My problem is with the idea of something that transcends these needs and desires - something that just is valuable, not valuable to someone or for some purpose, or whatever.
For example, if some kind of behaviour, say phi-ing, is going to cut off the supply of oxygen to us all, sure, we'd all assess that behaviour as "bad". Phi-ing is massively counterproductive to very basic and universal desires or purposes or "needs", and I'd expect every society's moral code to have a rule that forbids phi-ing or drastically restricts it.
Similarly, all societies have at least some restrictions, usually pretty severe ones, on killing members of the in-group. Without that sort of restriction we'd miss out on the basic security that we need for our peace of mind and all sorts of other things.
Wow, GT - that sure sounds harsh. I do want to get back to the detail of the article, though I'd rather read the version in the book, which will presumably be tighter.
Again, I think his approach is going to take him to some sort of relativism or else to something like Mackean error theory, because he has no non-circular way of guaranteeing total convergence after rational reflection.
Now, some people want to call relativist theories of the kind that he is (in my estimation) going to be forced towards "moral realist" theories. I think that's misleading. But be that as it may, I don't see how he can get a plausible objective morality out of this.
Well I don't usually rant but sheesh. On the bright side, at least the world is not made safe for Moral Realism on this argument. =D
I feel the same way about Carrier's piece; it is irredeemably awful in every way.
He keeps it interesting by intermixing many types of flawed reasoning with incorrect premises, making it hard to tell at a glance if any given sentence is out of place because it's somehow wrong, a non-sequitur, or one of several strands of argument carelessly tangled into a Gordian knot, as if woven by a drunken spider playing cat's cradle with a tumbleweed.
I wouldn't be quite as harsh as GT, but I agree that Carrier is not a serious or sophisticated philosopher, and I don't think he should be lauded as if he were. He might have some interesting insights to offer in certain philosophical areas, but I haven't observed that to be the case. In my estimation (see here and here), he confuses rather than elucidates basic concepts. I think his errors are serious enough to be pointed out, though, if only because he has an audience that should know better.
@Brian: I liked your first comment too. LOL.