In some of my recent posts, I have been defending the rationality of certain moderate kinds of moral relativism, while making the usual sorts of points that philosophers love about the incoherence of vulgar moral relativism. I've also been trying to convey some idea of why I don't like to wear the moral relativist tag, myself - even though I'd be in good company with Gilbert Harman (for example).
One reason why an intelligent person might become a rational moral relativist is that she might come to see the moralities of various societies as different ways of ensuring a degree of social coordination (gaining certain benefits from social interaction, avoiding certain things going badly wrong, and so on). If one looks at morality in this way, it may come to seem largely conventional, and therefore open to revision. However, this approach also involves placing a deeper value on actually gaining the desired degree of social coordination. That value will not be seen as merely conventional, and whichever moral norms seem especially vital to it will not be so easily revisable.
This does not entail that values relating to social coordination must be seen as "objective" in the strong sense that any rational being who rejects them is making an intellectual error. But there is a middle ground between something's being merely conventional, and hence readily open to revision, and its being objective in much the same sense as the laws of physics. The middle ground relates to those things that it is in our nature to value or fear, and which objectively have a power to affect human beings - inter-subjectively and cross-culturally. It seems to me that we have very good (if ultimately species-specific) reasons to build institutions and societies on the recognition of those things, without considering them merely conventional, and without supposing that they involve objective value (or disvalue) in the sense that Martians or psychopaths would be compelled, on pain of irrationality, to value or disvalue them.
Since there are various things that beings like us are naturally inclined to value (or to fear), it looks like there will be a plurality of sources for a rational morality for human beings. This, I think, must lead to a degree of pluralism in any workable normative system. Furthermore, some things may be valued/feared by human beings only in certain environments, but they might provide sources for moral norms whenever those environments actually obtain. An example might be the exercise of certain sophisticated kinds of personal autonomy that (arguably) would be incomprehensible in some kinds of societies but are typically highly valued once they develop.
To whatever extent it is true that commonsense moral thinking assumes morality is more objective than this, an error theory of commonsense moral thinking (or metaethical scepticism) is correct. That does not, however, entail that we lack good reasons for acting in accordance with at least some moral norms. Beings like us have reason to act in accordance with whatever moral norms are most important for human societies' obtaining some of the things that human beings rationally value and providing some protection against those things that human beings rationally fear. Note that this provides us with a test of actual or proposed moral norms. We can distinguish between core norms, which might be almost unrevisable, and more peripheral norms that might be long overdue for revision. By contrast, a vulgar relativist will advocate slavish acceptance of whatever norms actually prevail within her society (with all the problems this raises). At best, a vulgar relativist can act like an Old Testament prophet calling on her society to honour its own professed morality.
By this point, the theory of morality that I am developing (and which I elaborate in a slightly different way in my "Stem-cell research on other worlds" article in The Journal of Medical Ethics) bears little resemblance to vulgar moral relativism, though it is consistent with the idea that even the most vulgar of moral relativists are onto something.
In the end, that "something" is just that we need a morality which uses only cranes, no skyhooks. It should be capable of being grounded in the naturalistic account of the universe developed by modern science over the past 400 years. We should not have to rely on anything spooky, such as the will of a supernatural being, or the presence of objectively prescriptive non-natural moral properties, in order to have a well-grounded and workable morality with practical implications for the situations in which we find ourselves.