About Me

Australian philosopher, literary critic, legal scholar, and professional writer. Based in Newcastle, NSW. My latest books are THE TYRANNY OF OPINION: CONFORMITY AND THE FUTURE OF LIBERALISM (2019) and AT THE DAWN OF A GREAT TRANSITION: THE QUESTION OF RADICAL ENHANCEMENT (2021).

Sunday, September 30, 2007

Moral scepticism revisited

In the spirit of revisiting some past highlights of my posting here and elsewhere, here's what I wrote about moral scepticism on this blog eighteen months ago.
====

Moral scepticism is the philosophical position that will most frequently be defended on this blog.

As a moral sceptic should I believe that there is nothing wrong with murder, rape, dishonesty, cruelty, and mayhem ... and ...?

Well, actually no. I certainly don't believe any of those things, and nor should I. I believe there are very good reasons to prefer kindness to cruelty, loyalty to treachery, non-violent resolutions of conflict to violent ones. In fact, I probably share most (though perhaps not all) of the same core moral attitudes as you.

I can't speak for everyone who has ever claimed to be a moral sceptic, but all I mean by the expression is this. A great deal of ordinary commonsense thinking about morality, as well as a great deal of philosophical thinking about it, asserts, or simply assumes, something that is not true, as far as I can see. It asserts - or assumes - that morality is "objective" in a sense that transcends the desires, interests, values, and fears that human beings actually have.

By contrast, I see the moral norms that prevail in particular societies as arising historically from just those sorts of desires, interests, values and fears. Moral norms ("don't murder or lie"; "be kind") can very often be justified, but the justification will need to appeal to actual human desires, etc. Beyond a certain core, morality may be somewhat underdetermined, its detailed content a product of ongoing bargaining and compromise.

This way of thinking about morality may not make a lot of difference for many important purposes, but it does seem, to me at least, to entail some far-reaching changes to the way we think about morality in the abstract. It also affects how we should consider some of the practical moral decisions that arise socially. It can affect what substantive positions in moral philosophy are intellectually supportable. In particular, I think it implies that many of the more peripheral or unusual examples of whether such and such would be the "right" or "wrong" thing to do may not have clear or determinate answers. Again, some conventional moral wisdom may be without plausible justification.

All that said, most of our core commonsense morality can be justified quite easily. For example, it is not going to be terribly hard to justify having a rule that forbids killing people who fear dying, though there is a great deal more to be said about the detail of this - and I doubtless will say more in future blogs.
====

This seems fairly much right, and may do for practical purposes, but I'd like to be clearer these days in distinguishing descriptive claims about the phenomenon of morality and its roots from prescriptive claims about how we should act, all things considered.

The plausible meta-ethical positions seem to be a group of sophisticated moral relativist theories - the sort of position I associate with Gilbert Harman, David Wong, Neil Levy, Max Hocutt, and others - and the error theories (moral scepticism) of people like J.L. Mackie, Richard Garner, Richard Joyce, and Joshua Greene. There are various other positions in the same ballpark, such as the sophisticated non-cognitivism of Simon Blackburn. I am sometimes irritated when other philosophers assume that the only theory in this sort of ballpark is the popular but vulgar kind of moral relativism that we are all taught to avoid in first-year ethics courses.

I think the moral sceptics have the best story to tell, since it is difficult for any of the other non-objectivists to avoid identifying an error somewhere in commonsense meta-ethics. In particular, sophisticated relativists are going to have to say that morality is relative to some society, while admitting that this is not what ordinary people within societies think - therefore, they seem committed to the idea that ordinary people within societies are in error about the nature of morality. While the sophisticated moral relativists can ask us to reinterpret moral claims, translating them into a different framework in which they become relativised claims, that does not avoid the niggling fact that we know erroneous thinking is going on (if we accept a theory such as Harman's).

Vulgar relativist theories, of course, have all sorts of problems. Error theories and sophisticated relativist theories try to get around these by distinguishing questions about whether one has reason to support a moral system, or certain of its norms, from whether certain norms are in fact the norms of a moral system. The answer to the first might be "No" even though the answer to the second is "Yes." Or we might have reason to propose social acceptance of a new norm that is not currently a norm of the system.

Making that distinction is a bitter pill for many people to swallow - it may be counterintuitive. However, that doesn't worry me. This is one of the areas of discourse where I'd expect commonsense intuitions to be unreliable, since we are all socialised to think of certain moral norms as just "correct" (and people from the neighbouring society are socialised to think of their moral norms as "correct"). The psychological force of such intuitions need have nothing to do with evidence and everything to do with our nature as social animals, the need for some order and regularity, and so on. I don't see it as in any way a criticism of sophisticated moral relativism or of error theory.

The question is, what do we do once we realise that the moral norms around us (and often internalised by us) are not "correct" in any unproblematic sense? That's where a sophisticated theory is needed (I currently like to call my own version "naturalistic moral pluralism", but it is technically an error theory). Vulgar moral relativism is, of course, a dismal failure at this point.

Saturday, September 29, 2007

Sex, Drugs, Death, and the Law

I'm re-reading a favourite book: Sex, Drugs, Death, and the Law, by David A.J. Richards. My immediate interest is Richards' theoretical account of the limits of the criminal law, but he is very good on the specifics of issues.

Throughout the book, he argues against over-criminalisation of such things as homosexuality, prostitution, drug use, and euthanasia, basically relying on solid liberal arguments for non-intervention by the state in matters of personal choice (in the absence of secular harm). This is all solid stuff. However, I am pleasantly surprised - since I'd forgotten - how impressive he can be when analysing the history and content of various moral ideals that the state has so often promoted into a public morality and attempted to impose by law.

At one point, he nicely picks apart one of St. Augustine's famous arguments: that sex is something inherently shameful and degrading. Augustine argued that human beings universally insist on (1) having sex unobserved and (2) covering their genitals in public. He concluded from this that the only explanation for such facts is that we experience sex as inherently degrading because it involves loss of control, and it can be made morally acceptable only by a controlled intention to procreate.

Augustine's two facts are, in any event, open to some debate and qualification, though the first is typical of human beings in all cultures and the second could probably be restated more accurately without totally undermining Augustine's point. There are various things that can be said about such facts from an anthropological perspective, but Richards makes a nice point that applies to the first one, in particular, though it might also have a connection to the second. Most of us would, indeed, be embarrassed to have sex in front of an audience, and I'm confident that public sexual performances are altered by the fact that those concerned are attempting to put on a pose and keep something about themselves concealed. But this embarrassment factor is not because we think sex is inherently degrading.

Richards attributes our unwillingness to be observed while having sex to the familiar fact that sex is a "profoundly personal, spontaneous, absorbing experience" in which the participants "express intimate fantasies and vulnerabilities which typically cannot brook the sense of an external, critical observer." Sex is the kind of thing that requires privacy - not many of us want to be open to scrutiny from the world at large when we are in many ways at our most vulnerable and trusting, lowering the layers of psychological (as well as merely physical) self-protection that we need for other activities. As Richards suggests, this doesn't entail that we feel ashamed about sex or about our nature as sexual beings.

I think that this is all plausible and that Richards expresses the idea very well.

Doubtless, many people do feel some kind of shame or disgust about sexuality and sexual acts; they may base their moral beliefs on such feelings, and try to elevate those beliefs to the level of policy, imposing them on others who don't share them. I should add that the psychological basis for these sex-negative feelings may go beyond the point that Richards makes (the one about our feelings of vulnerability that make most of us unwilling to be observed in moments of sexual intimacy with our lovers). In addition to Richards' point, I expect that some folks have more deep-seated feelings of fear or disgust, or suspicion of our animal nature, or some mixture of these and similar things. Some might (I can imagine) have these feelings even in the absence of cultural teachings that instil or reinforce them. But none of this, of course, provides a rational basis for drawing conclusions about the inherent moral character of sex, let alone for advancing beyond that to conclusions about public policy.

Thursday, September 27, 2007

It's okay to change your mind

(For various reasons, I'm a bit squeezed on blogging time at the moment, so I've decided to republish some highlights from the past few years, among other entries. This piece first appeared in my old irregular column Eye of the Storm, on the Betterhumans website, back in March 2004, and has previously been republished on the IEET site. I still subscribe to the views expressed here, or most of them (I might not be so clearly opposed to designing the personalities of children by genetic means, though it's still a kind of enhancement that I'm not so comfortable with). Accordingly, I offer them to a new audience with only very slight modifications to keep the piece from being dated.)

===========
Almost everyone these days undertakes some sort of psychological self-improvement. From New Age to neuroscience, do-it-yourself books on mind modification weigh down bookstore shelves around the world. But in an age when genetic engineering and pharmaceuticals promise to allow mental reshaping far beyond anything possible with Seven Habits, we're being forced to confront the question of just how far we should go. It's one thing to increase our physical and mental capabilities, such as using genetic enhancement to extend our lifespan or drugs to increase our cognitive powers. It's another to make genetic or brain changes that alter our desires and emotions, changing what we want to use those enhanced capabilities for.

Arguably, the latter is a deeper change, one that could have an even greater impact on our nature. This thought is strengthened by bioethicist Erik Parens' description, in his introduction to an anthology of essays entitled Enhancing Human Traits: Ethical and Social Implications. The essays were the product of a seminar held at the Hastings Center in 1992. At the Hastings seminar, four scenarios for human enhancement were discussed, the last attracting the most heated opposition. The scenarios were as follows:

1. Our ability to resist disease is increased (most participants thought this was ethically acceptable, as long as "we assume that all persons will have equal access to such a new form of prevention").
2. Our ability to stay alert and get by without sleep is enhanced.
3. Our long-term memory is enhanced.
4. A reduction is made in our more ferocious psychological tendencies, with a corresponding increase in our generosity.

The fourth of these does seem to be the most challenging to our ethical thinking. It is not surprising that it received the most resistance. But are our desires and emotions sacrosanct?

Altered states

Human systems of morality are based, at least in part, on the social reconciliation of species-wide (though individually variable) desires and emotional responses, inherited from our evolutionary ancestry. If those desires and emotions changed, the conditions under which we interact and cooperate in societies would change as well. So would our various moral systems.

In his monumental study of the possible convergence of scientific and humanistic knowledge, Consilience, Edward O. Wilson predicts that future generations will actually recoil from redesigning human emotions and the epigenetic rules (or genetically inherited regularities) of human mental development, since these elements, he says, "compose the physical soul of the species."

"Alter the emotions and epigenetic rules enough," Wilson continues, "and people might in some sense be 'better,' but they would no longer be human. Neutralize the elements of human nature in favor of pure rationality, and the result would be badly-constructed, protein-based computers. Why should a species give up the defining core of its existence, built by millions of years of biological trial and error?"

Two initial points can be made in response to this. First, it is not obvious why Wilson portrays the choices as being between our current range of desires and emotions and none at all—the life of a rational, but emotionless, computer. It is certainly difficult to see why we would want to turn ourselves into totally emotionless beings, but this does not rule out changing certain aspects of our emotions. The way Wilson formulates this part of his argument, he is attacking a straw man.

Second — and this is a deeper issue — it is not clear what work the concept of "ceasing to be human" is doing in the argument. Our nature could change considerably without the outcome being that we were no longer human at all. Alternatively, even if we thought it was no longer appropriate to apply the word "human" to ourselves (or our descendants), where does that point lead us? Would we (or they) somehow have lost moral worth?

Not necessarily. We should concede that some imaginable changes would be for the worse. Perhaps there is something especially valuable about having the capacity for a wide range of emotions, including grief as well as joy. As I have discussed, we might well be horrified at a society that found ways to flatten our range of emotional responses. We don't want to turn ourselves into beings of shallow experience, or (as in Wilson's talk of "protein-based computers") without subjective experience at all.

But what if we encountered a "lost race" of beings almost like ourselves, yet with a slightly different range of typical desires and emotional responses, stemming from a different evolutionary history? Imagine, for the sake of argument, that this species turned out to be as keenly sentient and self-conscious as we are, and slightly more intelligent. Imagine that it communicated in complex languages, as we do, and had built up a rich tradition of art and culture. Imagine also that it was less disposed, by nature, to be aggressive or to experience some forms of jealousy.

It is far from clear that we would be these beings' moral superiors, or that a world which contained them, rather than us, would thereby be worse than our own. To make such judgments, we would need to know much more detail. Even then, the value of the two worlds might defy comparison.

If this is correct, there may be scope for considerable changes in human nature (and culture) without any diminution of our moral status, or any loss of value in the world — even if the changes meant that we could be considered, in a sense, nonhuman. Accordingly, consideration of our moral status does not in itself rule out even quite drastic steps to redesign human nature. It is all a matter of what, exactly, would be lost, and of what might be gained.

But why make changes?

Why seek to do any of this? Well, as individuals, we might have good reasons to try to free ourselves from at least some psychological traits that we have inherited from our evolutionary past. They might not suit our rational ideals of ourselves; or they might just be inconvenient for life in our modern environment. As a species, we might one day redesign ourselves on a wide scale if some consensus could be reached on desirable changes.

Take, for example, the fear of death. It is reasonable enough to have projects, relationships, commitments and interests that attach us to life, and thus to wish to go on living. The mechanism of fear might be useful to us in helping us stay alive, and a genetic predisposition to fear death may well have increased our evolutionary ancestors' inclusive fitness (their capacity to pass on their genes to succeeding generations). Granting all that, however, does the degree to which we sometimes fear death—the sense of nagging anxiety or even panic that the thought of death can cause—actually contribute to individual or social happiness? If we could reach into ourselves and rewrite our own emotional code, in order to harmonize our personalities with our rationally considered ideas of what constitutes a happy life, might we not reduce our fear of death in the abstract, while retaining fear responses to situations of immediate physical danger?

More generally, the particular range of desires and emotions that human beings currently have may not be the optimum for our happiness as individuals, or for useful social cooperation in modern environments. It was never designed for those purposes.

When I refer to our happiness as individuals I do not mean simply superficial feelings of pleasure. We might want far more than this. For example, as many philosophers have suggested, we might want to live in touch with reality, have deep feelings, create beauty, achieve remarkable things, exercise or challenge our physical and cognitive abilities, and so on. But there is no reason to think that our current range of desires and emotions is the most effective possible for helping us to achieve happy lives in this sense.

After all, to the extent that we have a species-wide repertoire of desires and emotions, it has an evolutionary explanation. Presumably this repertoire promoted our ancestors' inclusive fitness in the environment of evolutionary adaptation. However, what we most care about, whether as individuals or at the social level, is not the passing on of our genes. Some people have even made conscious decisions not to have children, and maximizing inclusive fitness is surely not what consciously motivates those who do. We all have plans and projects that are far more important to us than maximizing our inclusive fitness, which is quite simply not a conscious goal for most people.

To take another example, it seems clear that human beings as a species are inclined to be largely, but not entirely, monogamous. We are more monogamous than chimpanzees or bonobos, but it is a cliché of evolutionary biology that men are genetically programmed, at least to some extent, to stray into polygamous ventures. In his provocative book The Red Queen, Matt Ridley argues that women are also predisposed, to some extent, to extramarital liaisons. But at the same time, men and women are predisposed to sexual jealousy.

All of this causes much strife for individuals and our society. Might we not be better off if people were more perfectly monogamous? Alternatively, in a world of fairly reliable contraception, childless-by-choice couples, and greater intellectual sophistication about these things (from reading books by Matt Ridley, for example), might we not be even better off if people were less predisposed to sexual jealousy? Either way, our current mix of propensities does not seem optimal for our happiness, much as it may be explicable in Darwinian terms.

A more prosaic example is our love of sugar-rich foods. This was doubtless of benefit to our evolutionary ancestors, and helped them to pass on their genes, in an environment where sugar was relatively scarce. It is now positively damaging to our health, in a dramatically different environment where sugar is easy to produce and available in abundance. Perhaps we should change our psychology so that we have a greater desire for fibrous, vitamin-rich foods, and a lesser desire for sugar.

Alternatives and implications

Of course, we have many alternatives. We could cooperate socially to reduce the availability of sugary foods, or to make them less of a temptation by imposing advertising restrictions. As individuals, we can make conscious decisions not to act on our desire for sugar, or to do so only as an occasional treat. Still, the problem would be easier to solve if we had less desire for sugar in the first place.

In short, there is nothing fundamentally wrong about changing our psychology. The inherited repertoire of human desires and emotions is not inviolable. Perhaps the desires and emotions that should be preserved are those which we would endorse if we fully understood our own psychology and its evolutionary genealogy. There is no Archimedean point to which we can step, somewhere entirely out of our own desires and emotions, but we can at least look at what we really want in the environment that we now find ourselves in, and try to bring some elements of our desires and emotions into line with our rationally endorsed values and goals.

The difficulty is that we lack both the scientific knowledge and—let us face it—the wisdom to start all over again. In that light, some methods of changing ourselves would surely be more trouble than they are worth, and are not currently justifiable. If, for example, we tried to make inheritable changes to human psychological nature through germline genetic modification, we would be running monstrous risks. Genes typically have many effects (are pleiotropic), while even far simpler phenotypical characteristics than our psychological predispositions are affected by the cooperation of many genes (such characteristics are said to be polygenic). For the foreseeable future, the complex interactions of genes and human psychology may rule out the genetic redesign of the latter.

This also suggests that designing the custom-made personalities of individual children may never be feasible, and may be too risky to be attempted. That, in turn, may limit the other kinds of enhancements that we should make to children, since there is the risk in any individual case of a mismatch of alterable capabilities and practically unalterable psychological dispositions (Nicholas Agar is one philosopher who has discussed similar issues). When we are thinking about genetic modification, it seems rational to focus on increasing our resistance to aging and disease—and perhaps on increasing our general cognitive abilities—before we start tampering with our desires and emotions themselves, or giving our children individual, custom-made talents.

However, if our happiness as individuals is impeded by desires and emotions that we want to disown, there are more everyday ways to try to change ourselves than using genetic modification. Perhaps we are best off if we can make the changes we desire through individual self-examination and insight, associating with people who already seem to have the kind of species-atypical psychological makeup that we aspire to, reading books about the experiences of such people, and so on.

Yet some of the desires and emotions that we want to disown might be too deep for us to reach by these methods. In this case, I see nothing wrong in principle with more direct physical changes to ourselves - for example, if we can design safe, effective drugs that help reduce our craving for sugar (or our fear of death, and so on).

The point of this debate, then, should not be that there is a general moral rule against tampering with our inherited nature. Indeed, such tampering might be justified. Rather, we need to acknowledge that it would necessarily be a piecemeal, iterative process. It would begin with efforts by individuals to change those aspects of themselves that they rationally disapprove of. At one end of the spectrum of possibilities, a program of genetic alteration of the personalities of our children would be undesirable. All that said, there is no overriding objection to using technological means to modify our own personalities, and ultimately to reshape human nature. After all, self-help books are a type of technology too.

Saturday, September 22, 2007

Does science presuppose naturalism?

There's some interesting discussion going on around the web as to whether some form of naturalism - philosophical or methodological - is presupposed by science. Tom Clark has a useful post here, and there are some good exchanges on Richard Dawkins' site. I haven't yet had a chance to absorb this article by Yon Fishman, except for its abstract, but I promise to look for the full text.

It appears to me that Fishman may well be correct in his criticism of the reasoning in the Dover decision, though that would not imply that the Dover judge reached the wrong result, since the ultimate issue is not whether a body of beliefs with a supernatural element could be science but simply whether, in all the current circumstances, the teaching of Intelligent Design in state schools in the US is forbidden by the First Amendment. I think there's ample reason to believe that it is. In any event, even if it would be possible in principle for intelligent design to be genuine science - notwithstanding its supernatural element - there is no basis to believe that it is genuine science in its current form. Even if something along the way is not strictly correct, the main reasoning in the Dover decision does not seem to be marred by it (although there is surely room for further discussion about how that reasoning should best be recast).

Here is the best summary I can give as to why even methodological naturalism is not strictly required for the practice of science.

First, many scientists have historically proposed hypotheses that don't conform in any strict sense to a requirement of methodological naturalism: hypotheses that seem to require supernatural interventions, whether recurring or one-off. Some early theories of reproduction seem like this, as do theories that explain the geological strata in terms of Noah's flood, and we could probably think of other examples. I'd rather not say they were doing something other than science. Such an approach to science has not been fruitful, but it seems to me that it was a kind of science as science has been understood historically.

Second, science can indeed examine some of these hypotheses to see whether the evidence favours them, which is just as well. After all, we'd like to know whether these theories are actually likely to be correct without just ruling them out a priori.

As a massive understatement, the 19th-century Noah's-flood theory of geological strata and fossils seems untenable. Even without our modern knowledge of how geology actually works, we can see how it can't do the job it was supposed to do, and it's an embarrassment to those Young Earth Creationists who still rely on it. (I owe the thrust of the following to Philip Kitcher, though I'm giving a brief and garbled version for which Kitcher is not at all responsible.)

The Noah's-flood theory does give some sort of explanation as to why fish first occur lower in the geological strata than flying animals such as birds, but it should predict that flightless birds will appear lower in the strata than ordinary birds. Likewise, bats should be at the top of the strata, higher than primates. (This is very simplified. Why should animals that like water, such as fish, die in the flood before birds? The animals at the bottom should be whichever are most vulnerable to being killed by flooding. Flying birds, however, should be among the last to go, as they can easily get to higher ground and even fly above the waters until they eventually fall from starvation or exhaustion ... so, you get the idea. The point is that the theory fails to match the detail of fossils and strata, and had become impossible to hold onto even in the 19th century.)
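Purely as a toy illustration of the logic here (this sketch is mine, not Kitcher's, and the ranks are made-up stand-ins, not real stratigraphic data): the flood hypothesis yields an ordering prediction - burial order by vulnerability to drowning - that can be compared pairwise against the observed ordering of fossils in the strata, and the two disagree.

```python
# Hypothetical ranks: 1 = lowest stratum (buried first), larger = higher.
# These numbers are illustrative only.
observed_strata = {
    "fish": 1,
    "flightless_birds": 3,
    "flying_birds": 3,   # flightless and flying birds appear together
    "primates": 4,
    "bats": 4,           # bats appear alongside other mammals, not above them
}

# What a global-flood burial order would predict: the animals most
# vulnerable to flooding go under first; aquatic animals and strong
# fliers should end up near the top, if they are buried at all.
flood_prediction = {
    "fish": 5,
    "flightless_birds": 2,
    "flying_birds": 4,
    "primates": 3,
    "bats": 5,
}

def inversions(a, b):
    """Pairs of groups whose relative order differs between two rankings."""
    keys = list(a)
    return [
        (x, y)
        for i, x in enumerate(keys)
        for y in keys[i + 1:]
        if (a[x] - a[y]) * (b[x] - b[y]) < 0  # opposite relative order
    ]

conflicts = inversions(observed_strata, flood_prediction)
print(conflicts)  # several conflicting pairs, e.g. fish vs flying birds
```

Each conflicting pair is a place where the theory's prediction is contradicted by the record, which is the sense in which a supernatural hypothesis can still be tested against the evidence rather than ruled out a priori.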

Similarly, we can examine the evidence for such things as the power of intercessory prayer (as Tom Clark's post indicates), the efficacy of claimed supernatural powers, and so on. As long as those theories give systematic accounts of how things should happen, it's possible in principle to test whether the evidence favours them.

I'd say that the use of supernatural explanations has not been fruitful, and that these explanations should not be preferred as they are often ad hoc, fail the test of consilience, and so on, but they are not rejected a priori. They can be science, but so far they have been shown to be very bad science. Intelligent Design is arguably not science at all, because it is not able to postulate any system by which its seemingly supernatural "intelligence" works, but in principle there could be a version of ID that is more scientific. The trouble is that no one has any clue what it would be like - the idea made some sense in the 19th century, but it now appears to be a dead end, and we have plenty of evidence that efforts to promote ID are motivated by religious piety rather than by genuine efforts to augment biological science with new kinds of systematic explanation.

One kind of theory that science can never test in any systematic way is the self-insulating "deceptive creator" theory: e.g., the omphalos theory that God created the Earth 6000 years ago, complete with all the signs - such as fossils - of a much longer history. However, scientists, like anyone else, are entitled to dismiss this kind of theory as implausible and ad hoc rather than ruling it out merely because it posits something supernatural.

I should add that if we accept that all the above is correct, only to say that anything science can postulate is "natural" by definition, we are making methodological naturalism trivially true, in which case it gives no methodological guidance. We want to be able to test, rather than rule out a priori, a whole lot of claims about interventions by deities, the actions of individuals with anomalous psychic or magical powers, and so on. If these are supernatural, then testing the evidence for supernatural hypotheses is part of science. If they are considered part of "the natural", should they turn out to be real entities, forces, and so on, then methodological naturalism has no content. Indeed, once we define "the natural" in such a way, any kind of naturalism is devoid of substantive content - philosophical naturalism would boil down to something like, "The only kinds of things that exist are the ones that exist." Naturalism may end up having some irreducible vagueness to it, but it must have some content.

However you analyse it, neither philosophical naturalism nor methodological naturalism appears to be necessary for science. Philosophical naturalism is a meta-inference about what sorts of things are likely to exist - based on all the well-corroborated scientific inferences to hand - and it should be thought of as a philosophical theory rather than as part of science. It's a highly plausible theory - one that I subscribe to - even if difficult to state with total precision. Methodological naturalism is more a summary of the (again plausible) idea that supernatural hypotheses tend to be bad science. That may create a mild, practical presumption against using supernatural hypotheses, but it doesn't rule them out of all contention.

Friday, September 21, 2007

Living with Darwin

I've just finished reading Philip Kitcher's new book, Living with Darwin: Evolution, Design, and the Future of Faith. I unreservedly recommend this book to anyone who is interested in the issues mentioned in its sub-title. Kitcher is an outstanding philosopher of science, whose gifts include a knack for explaining why scientific inferences about unobservable things such as events in space, or in the remote past, can be favoured - overwhelmingly - by the evidence. His explanation of why the Darwinian picture was accepted in the 19th century, and should still be accepted, is as good as anything of the kind that I have read. It compares with Richard Dawkins' careful explanation of how complexity can evolve over evolutionary timescales, to be found in Climbing Mount Improbable, or with Kitcher's own account, in The Advancement of Science, of how Galileo was able to convince his contemporaries of such phenomena as the moons of Jupiter, even without a theory of the telescope or an instrument as user-friendly as any modern design.

Kitcher's analysis of genesis-based accounts of life's variety and the fossil record is a joy to read. In vivid prose, he examines the details of the creationists' arguments, showing just where these must go wrong, and how they became untenable back in the 19th century, even before such advances as radioactive dating techniques.

He also depicts the intellectual difficulties for providential forms of religion, and for all varieties of supernaturalism (though his case against providentialism is the more impressive aspect).

Over the years, Kitcher's views have hardened, and he now takes the stance that providential religion is almost impossible to reconcile with a Darwinian picture of life's history on Earth. He is so firm about this, and so persuasive, that we could just about include him among the so-called "New Atheists", but he makes an effort to distance himself from them, specifically mentioning Dawkins ... whom he obviously admires, but not for his passionately-expressed attacks on religious faith. In one interview that I've listened to on the net, Kitcher was at even greater pains to distance himself from Dawkins, who responded sharply on his website.

Kitcher is conciliatory towards what he calls "spiritual religion" and sympathetic to the emotional needs of religious believers in general; he clearly believes that Dawkins is insufficiently sensitive to the latter, particularly to people for whom belief in a providential deity is a source of comfort as they lead lives that are pretty tough, circumscribed, and insecure. I'm not convinced that Dawkins does fail to appreciate this, but as far as it goes the point is well worth emphasising.

I've made a similar point in the past when writing about the vision offered by Camus in his great essay, "The Myth of Sisyphus": as I stated then, the vision of an authentic, zestful life that Camus offers may well be attractive to a successful intellectual like Camus himself, involved in uniquely creative work that gives his life a sense of meaning. It may not be enough for people whose lives offer far less genuine freedom and unique creativity. This does not entail that Camus was wrong (or that Dawkins is, or Kitcher for that matter), but it should mute any scorn that any of us might feel for people who appear to be living inauthentic lives by the rigorous standards of French existentialism ... if, indeed, Camus is to be counted as an existentialist. Many goodhearted people might discern the bleakness of Camus' worldview, when it is explained to them, without finding anything liberating, or otherwise beneficial, in it. The same applies to the rigorous philosophical naturalism that Kitcher and Dawkins share.

Kitcher is very sensitive to this issue, and he explores how society, particularly US society, would need to evolve before such a naturalistic view - and reinterpretations of mainstream religion along lines that are compatible with it - could become psychologically acceptable to more people.

He expresses disagreement with less conciliatory Enlightenment apologists (though the only one he mentions by name is Dawkins) about two specific things: (1) we cannot categorically deny that we might one day find something that matches the claims of the transcendent, and, more importantly, (2) there is a possibility of "spiritual religion" (which sounds a bit like the "Einsteinian" kind that Dawkins discusses sympathetically in The God Delusion and elsewhere) that treats the Christian (say) teachings symbolically and rejects supernatural elements. I'm not sure that these really are disagreements with Dawkins, since the latter seems to acknowledge them both (as do I, if it comes to that). There is certainly a difference of emphasis, but Kitcher seems to think it is something more. Perhaps the real difference is that, despite the area of agreement between Dawkins and Kitcher, one of them (the British scientist) would actually like to see religion disappear entirely, while the other (the American philosopher) would be content to see it transformed.

Overall, Living with Darwin gets high marks from me, and I recommend it to the same people who would buy The God Delusion or Daniel Dennett's Breaking the Spell. Even those readers who are more sympathetic to the providential and supernatural elements of religion that Kitcher opposes will be interested by his observations about religion's possible future. At the same time, his careful analysis of the creation-evolution debate will be invaluable to anyone who wants to be able to explain just why the case for biological evolution is so overwhelming, and how the Intelligent Design movement fails to provide any sort of viable alternative.

Monday, September 17, 2007

You say phenomics, I say phonemics ... let's call the whole thing off

Today's edition of The Australian has a front-page story (long enough to be continued over on page 2) about the parlous standards of primary school teachers in the state of Victoria. This does sound a bit scary, since not one of a group of 40 teachers managed to get full marks on a spelling test at 14-year-old standard, with such words as "subterranean", "miscellaneous", "embarrassing", and "adolescence". Worse, these folks weren't making one silly slip-up each: their average score on the test was only seven out of eleven.

However, the effect of the story is ruined when it goes on to complain about how teachers don't know "phenomics" (defined as "the sounds that make up words") or possess "a good phenomic understanding to help spell words". How embarrassing. How positively subterranean.

Friday, September 14, 2007

Rosenhouse on Blackford on Dawkins

There's some interesting discussion over here at Jason Rosenhouse's EvolutionBlog - after I had a rush of blood and wrote a very long comment in response to a post on a related blog.

Actually, Jason's immediately subsequent post has more or less superseded the one entitled "Blackford on Dawkins", and is now picking up more traffic.

My thesis is that much of the current debate about Richard Dawkins and the so-called "New Atheism" is distorted by the fact that Dawkins' detractors read his work in a way that is deaf to issues of tone and nuance. In fairness, Dawkins' supporters (some of them) sometimes show the same tendency when dealing with people whom they see as opponents. It's an all-too-human response to views that we find frightening.

Thursday, September 13, 2007

Is technological innovation decelerating?

I was surprised to see this hoary question exhumed by Gwynne Dyer, in a recent article that argues for the technological deceleration thesis. There was some discussion of Dyer's piece going on over here, but it didn't seem to get far.

I expressed some thoughts of my own back in 1998, in an article called "Singularity Shadow", originally published in Quadrant magazine and available on my website. At the time, I was prompted by some (then recent) claims by Robert Zubrin. While I don't necessarily endorse every observation that I made when I wrote the article ten years ago, it still seems to me that I basically got it right: if we are looking for physically big things in the landscape, there has not been much qualitative change in developed countries since (say) the 1960s. I.e., we have much the same sorts of big things (buildings, ships, planes, lit-up cities, etc.). But if we are looking at the way technology has become smooth, ubiquitous, comfortable, conformable to our wills, then the qualitative change over the past few decades is impressive and continuing, with computers revolutionising the ways we live our lives - just as the motor car and the contraceptive pill did.

I'm not a decelerationist on this issue - but nor am I a radical accelerationist. Even Moore's law, which involves continuing doublings of computer power per dollar, does not entail that there will be extraordinary changes in our lived experience. An immensely greater increase in computer power sometimes produces little change of human significance, partly because we so often underestimate what is involved even in apparently simple goals such as robotic locomotion. Nonetheless, it is naive to think that modern computer technology, biotech, and such things as new materials have not altered the way we live and think. We don't need to see gigantic IBM mainframes wandering around the landscape, like Hollywood dinosaurs, to get the point that change happens. Sometimes less really is more.
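The arithmetic behind that point is striking when spelled out. Here is a minimal sketch, assuming an idealised doubling of computer power per dollar every two years (the doubling period is an illustrative assumption, not a figure from any particular source):

```python
def moores_law_factor(years, doubling_period=2.0):
    """Growth in compute per dollar after `years`, given one
    doubling every `doubling_period` years (idealised assumption)."""
    return 2 ** (years / doubling_period)

# Even over a few decades, the raw multiplier becomes enormous -
# which makes it all the more notable how modest the change in
# lived experience can be by comparison.
for years in (10, 20, 40):
    print(f"After {years} years: ~{moores_law_factor(years):,.0f}x")
```

The point of the sketch is just that exponential growth in raw computing power vastly outpaces any plausible measure of change in everyday life, which is why the two should not be conflated.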

Tuesday, September 11, 2007

Therapeutic cloning debate moves to Western Australia

Western Australia's lower house has now passed legislation to legalise therapeutic cloning, in line with federal legislation that has already been mirrored in Victoria and New South Wales.

Once again, the Catholic Church opposed this legislation, with the Archbishop of Perth threatening Catholic MPs if they voted in its favour. What is especially nauseating about this whole protracted process of getting some reasonable legislative reforms in Australia is the continued attempt by religious leaders to tear down any separation of church and state. They continue their efforts to impose a specifically religious - and barbaric - morality on the rest of us by means of the coercive power of the state. Yet again, it must be said: bishops and priests have no moral or intellectual authority, and their views deserve no credence in the formulation of public policy. We've given their medieval view of the world too much respect in the past, and it's about time we stopped. They can believe what they want, but there is no reason to defer to them when they meddle in the public sphere.

Sunday, September 09, 2007

Streamlining www.russellblackford.com

I've put in just a bit of effort over the last few days to streamline my website, www.russellblackford.com. The main page had been getting rather cluttered, but I think it is now simpler and more self-explanatory. I really need to go and look at some of the other pages that hang off it - it's quite an elaborate site, overall, and needs to be managed better than I usually do. However, the most important thing is for the main page to be an up-to-date and user-friendly resource.

Sunday, September 02, 2007

Writers' festival

I found my way to a few events at the Melbourne Writers' Festival over the past week or so. The most interesting, perhaps, was the "politics of atheism" panel yesterday morning, which was advertised as featuring AC Grayling, Humphrey McQueen, and Val Noone. In the event, McQueen dropped out and was replaced by a female academic whose name I didn't catch, unfortunately, since she was very engaging and informative. She'd done some research on the major pentecostal churches in Australia and had some interesting observations about their culture and their obsessions (political and otherwise). I've had enough dealings with such people - though a long time ago - to have recognised some of what she was saying.

Val Noone, a veteran lefty Catholic activist, seemed an unpleasant fellow. He spent his time ranting about the "false gods" of capitalism, technology, and the like, which was all irrelevant to the topic, and attacked Grayling quite personally.

By contrast, Grayling was a class act. I've made a note to myself that I must learn to be as patient with the likes of Noone as he was. Grayling endured the emotive and personal nature of Noone's remarks, while sticking to the issues and appearing quite calm and unruffled. The contrast made Noone appear churlish.