Thanks to one of my beloved commenters for recommending this book. I'm glad to be reading it, and there is certainly some interesting stuff in it - including a very transhumanist sort of approach to the idea of morally enhancing ourselves (see Chapter 4).
That said, the book is really just too Kantian for my taste. I've never found much of Kant's philosophy convincing, or even plausible, so I'm inevitably going to be out of sympathy with someone who wants to help himself, without a lot of argument, to a broadly Kantian way of looking at the world. More particularly, the book relies on a naive objectivism about values and moral obligations that it never really earns. For example, Wielenberg's approach to divine command theories of ethics is to reject them on the ground of their alleged incompatibility with claims such as that pain is intrinsically bad and falling in love is intrinsically good.
Now, pain certainly seems bad to me. I avoid it, as almost all of us do. In fact, I think there is a genuine point in the vicinity of the one Wielenberg makes about pain, so I have some sympathy for what he says, and it would be worth deeper exploration. After all, a powerful being that commanded us to inflict otherwise-gratuitous pain on each other would be regarded (by us, as we are) as evil. Indeed, this would be pretty much a paradigm case of what we mean by an "evil" being. That suggests that our notion of evil - whatever exactly it turns out to be - is not infinitely flexible, and that it certainly does not amount to an idea of disobedience to a god's commands. It has more to do with the malicious infliction of pain and suffering.
Conversely, we'd be inclined to regard disobedience to such a being as good.
All the same, Wielenberg is very quick to insist that we just know that pain is intrinsically bad. Do we really know that? I know that I want to avoid pain for myself. I know that I am sympathetic to others, and therefore want to avoid pain for them as well. I know that I'll therefore make an unfavourable evaluation of anyone who is disposed to inflict pain avoidably and gratuitously, or who commands that this be done. What I don't know is that any other rational being, irrespective of its own desires and values, must make the same evaluations as I do or else be simply and factually wrong (or caught out by an error of reasoning of some sort).
That's a very difficult thing to demonstrate; it seems quite counter-intuitive once we work through the details (what exactly is a sadistic Martian's mistake when it says "What is that to me?" at the prospect of causing pain to Earthlings?); and it has never been satisfactorily established. The idea that it must be like that looks very much like a projection of our values onto the external universe, or a product of socialisation, or both.
Wielenberg provides an interesting manifesto for the possibility of value and virtue in a god-free world. I could, however, have done with something a bit more rigorous at key points in the argument.