Jerry Coyne has a thread over here with some interesting discussion. I'm not too fazed by the issue of some people having perfect pitch, since I don't doubt that some people are better than others at a whole range of perceptual skills. We all perform some pretty impressive feats, such as recognising faces and voices, when, in the scheme of things, most human beings look and sound very alike. Even our ability to tell at a glance whether a particular four-legged critter is a cat or a small dog is a rather impressive feat. And then, of course, there's all the complexity of language. Our brains do process sensory data in complex ways that we're not aware of.
But if someone tells me that she has highly reliable intuitions as to whether we live in a world with a divine creator or one without ... I'm going to be very suspicious. If it turns out that she's experienced a large number of worlds with such a creator and a large number without, and she can now reliably sense which sort she's currently in without necessarily knowing how she did it ... well, it goes without saying that I'll be impressed. :D
Without independent evidence or some kind of highly impressive story like that, however, I'm not going to take other people as reliable God detectors. I'm very happy, though, that there are reliable pitch detectors, face recognisers, cat/dog distinguishers, and so on.
Tuesday, November 30, 2010
Poly this, poly that
Last time we got on to the topic of polygamy and polyamory, things did not go smoothly. It's a topic that I raise with trepidation. Still, I'm working on exactly that issue today in my current effort to get a good draft of The Book completed. I have to say something about marriage, and specifically about how the state should respond to Muslim polygamy. It will be briefish, because this could be a large study in itself, and I doubt that I know the best policy response. But something has to be said, however tentative.
By coincidence, there's an article up over at Butterflies and Wheels, by Homa Arjomand, arguing for the retention of a Canadian law that apparently prohibits polygamist relationships. It's created some interesting discussion, including my own contribution to the debate. You might like to have a look and add your two cents' worth.
New JET article by Nicholas Agar
Over at The Journal of Evolution and Technology we've published a new article by Nicholas Agar, in which he summarises some of the argument from his new book, Humanity's End.
Agar argues against what he classifies as radical enhancement, while distancing himself from bioconservative positions that oppose enhancement on principle. The abstract reads:
This paper summarizes a couple of the main arguments from my new book, Humanity’s End. In the book I argue against radical enhancement - the adjustment of human attributes and abilities to levels that greatly exceed what is currently possible for human beings. I’m curious to see what reaction this elicits in a journal whose readership includes some of radical enhancement’s most imaginative and committed advocates.
We'll be interested in receiving responses. Meanwhile, another sample:
In 2004 I published a book with the title Liberal Eugenics. It was a defense of genetic enhancement. So what’s a defender of enhancement doing turning around and attacking enhancement. Did I get religion?
Earlier in this piece, I suggested that the debate about human enhancement should mature beyond a simple duel between its opponents and defenders. A realistic, scientifically-informed presentation enables us to discriminate morally between different varieties and degrees of human enhancement. It reveals enhancement to be a way of treating human beings that can be good if practiced in moderation but dangerous if taken to extremes. Many of the influences humans direct at themselves fall into this category - drinking alcohol, exposure to direct sunlight, exercising, consuming saturated fats, and so on. Too much sun substantially elevates the risk of skin cancer. A moderate amount furnishes the body with requisite vitamin D. Alcoholism is a disease that destroys lives. But moderate drinking offers enjoyable experiences, promotes certain forms of sociability, and may reduce the risk of heart disease.
Sunday, November 28, 2010
Kenneth Lipp on Islamophobia
Further to my recent discussion of this issue, Kenneth Lipp now has a solid post on the subject.
Sample:
The term Islamophobia dates back to the late 1980s, but came into prolific usage after the September 11, 2001 attacks in the United States, to refer to types of political dialogue that appeared prejudicially resistant to “pro-Islamic argument."
Professor Anne Sophie Roald writes that steps were taken toward official acceptance of the term in January 2001 at the "Stockholm International Forum on Combating Intolerance", where Islamophobia was recognized as a form of intolerance alongside Xenophobia and Antisemitism.
In 1997, the British Runnymede Trust defined Islamophobia as the "dread or hatred of Islam and therefore, to the fear and dislike of all Muslims," stating that it also refers to the practice of discriminating against Muslims by excluding them from the economic, social, and public life of the nation. It purports to include the perception that “Islam has no values in common with other cultures, is inferior to the West and is a violent political ideology rather than a religion.” The trust’s website boasts the accomplishment-
“Through the work of the Commission on British Muslims and Islamophobia, Runnymede achieved tangible response from policy makers and the general public. For example, the Government approved the first state funding for specifically Muslim schools in late 1997, and there has been some improvement in media portrayals of Islam. The UK National Census in 2001 contained a question on religion.”
I chose this sample because it is useful to have a reminder that the current widespread use of the term "Islamophobia" has a history, and particularly a history of deliberate politicking for its acceptance. Now, that's not to deny that there can be elements of racism or cultural xenophobia feeding into criticisms of Islam (or even that criticism of Islam can sometimes feed back into racism and cultural xenophobia). I don't think we should forget this, especially when we see the criticism emanating from right-wing groups with faux-traditionalist stances on cultural issues and a general dislike of immigrants.
But nor is the idea of Islamophobia entirely innocent. It is all too easy to use to beat up on people who are attempting to engage in genuine dialogue about the nature of Islam - particularly its more extreme or political forms - and how non-Muslim citizens of liberal democracies should respond to it.
I should also observe that earmarking government funds to specifically religious schools and including a question about religion on a census don't necessarily seem like great policy outcomes. I can definitely see arguments for the latter, from a town planning viewpoint, but these are not the sorts of policies that knock my socks off. Just sayin'.
Keeping the humanities alive - and a bit on "other ways of knowing"
I missed this post by Jerry Coyne over at Why Evolution is True last week. It's a great post, and I'm totally on board with it - except I'm going to add a caveat or a gloss to the very last para. Meanwhile, sample:
I don’t know where I’d be now had I not gone to The College of William and Mary, a liberal arts school that enforced a wide education on everyone, even prospective scientists. It was there that I took a fantastic fine arts course from a charismatic professor who absolutely awakened my interest in art, leading to gazillions of museum visits in the last four decades. I had courses in Old English, Beowulf, Greek tragedy, ethics, Indian (i.e., Asian) art, economics, German scientific literature, and modern American fiction. Every one of these left a residuum in my neurons. I still can’t pass up an article on Beowulf.
Once again, I have to rub in the point that those horrible Gnu Atheists like Jerry, Richard, and Ophelia are not philistines or scientismists, if scientism is supposed to be some kind of devaluing of the humanities and of human experience in general.
But I do need to comment on this:
And I don’t give two hoots for a scientist who cares nothing for music, art, literature—or food! They’re missing a great swath of the world’s wonder. Those other disciplines aren’t really “ways of knowing,” but they’re ways of experiencing, and to die without that panoply of experience, had it been available to you, is to have lived in vain.
Absolutely right! And I have to reciprocate by saying that I don't give two hoots for humanities scholars who are ignorant of or hostile to the science disciplines.
But this whole "other ways of knowing" business is a pain in the arse. It's a phrase that tends to be used by people who want to devalue scientific knowledge, treating it as just one more interpretation of the world, no more true than that contained in mythology, holy books, reports of mystical experiences, etc., or at least by people who want to be able to say that whatever is contained in mythology, holy books, reports of mystical experiences, etc., may be true, and that human reason cannot check up on it.
I don't believe there are "other ways of knowing" in the sense that is usually intended. What I believe is simply that there are many techniques that are used to find out stuff. All of those techniques are available to scientists, just as they are to everyone else. However, science has refined some techniques to unprecedented levels of precision, control, systematicity, and so on, and has thus made progress with problems that were intractable for thousands of years ... but started to become more tractable around about the beginning of the seventeenth century.
It should also be pointed out that the techniques that science has refined to this extent are also available to humanities scholars, just as those used by humanities scholars are available to scientists. There's just one world and there's no clear demarcation as to what techniques are going to be useful to find out stuff about it.
However, it's important to emphasise that humanities scholars frequently do find out new stuff, or at least stuff that is new to the academy. You won't really discover entirely new stuff in an undergraduate course in the humanities, but I doubt that you will in an undergraduate science course either.
Why did I say "or at least new to the academy"? It's because humanities scholars are generally dealing with human experience on Earth, so the stuff they find out will often be stuff that is or was known to somebody. If textual-historical scholarship on the Bible reveals that the Gospel of Matthew relies heavily on the Gospel of Mark and must have been written later, that is finding out something that was once known (very likely by whoever authored the Gospel of Matthew!). If someone manages to resolve what happened to Queen Zenobia after the fall of Palmyra (was she beheaded, as some sources say, or was she taken back to Rome, led in triumph, but ultimately set free, as other sources say?), that will be finding out something that we don't currently know. Obviously, however, it was once known, for example to the Emperor Aurelian, who defeated her in battle.
I don't think there is any sharp line between the sciences and the humanities, but you can see, I hope, how humanistic scholars are often trying to find out stuff that was once known by human beings who are not around to ask but have left traces, or sometimes by human beings who are still around but have not organised their knowledge in a sufficiently systematic and public way for the purposes of the academy.
That's not all that humanities scholars do, of course: often they are trying to find rich, integrated, convincing ways to interpret such diverse things as novels, paintings, and statutes. But it's a big part of what they do. Scientists are often trying to find out stuff about the non-human world, particularly the workings of phenomena that are too small or too large or too distant for direct observation with our senses. Or about things that happened so long ago that there are no human records that provide a sort of memory of them.
That's not all that scientists do - I'm not forgetting psychology, human physiology, human genetics, etc., and I realise that science also studies human beings. I'm not using it as a definition of science. However, it's a big part of what really got science going in the seventeenth century and of what the sciences are focused on even now. Even when human beings are studied by scientists, it is likely to be different aspects of human beings from those that interest humanities scholars.
However, there's nothing spooky about the fact that humanities scholars and scientists are often trying to find out different things and that different techniques are likely to work for finding out these different things. An advanced knowledge of mathematics may be much more useful to a physicist than to an historian trying to settle what really happened to Zenobia. The latter may need to develop advanced skills in understanding a raft of ancient languages that are used in our conflicting records of poor Zenobia's fate. These languages may be of little use to a physicist.
There are no "other ways of knowing", if this refers to esoteric techniques that get us in touch with a supernatural realm. There are, however, numerous techniques for finding out stuff. Some of these techniques require no unusual training (I can look out the window and find out various things). Others may require advanced training, whether in mathematics, languages, the acquisition of extensive knowledge bases, developing certain ways of thinking about problems (yes, lawyers really are trained to think in a certain way, but there's nothing spooky about it ... it's continuous with how we're all trained in critical thinking), and so on.
A wide variety of techniques that are available as needed for all disciplines, but with different, sometimes dramatically different, disciplinary emphases on which are important? Yes. Spooky "other ways of knowing"? No.
Saturday, November 27, 2010
V. nears an end
The ending to the penultimate chapter of V. - though it's the last chapter that deals with events in the novel's mid-1950s "present".
Later, out in the street, near the sea steps, she inexplicably took his hand and began to run. The buildings in this part of Valletta, eleven years after the war's end, had not been rebuilt. The street, however, was level and clear. Hand in hand with Brenda whom he'd met yesterday, Profane ran down the street. Presently, sudden and in silence, all illumination in Valletta, houselight and streetlight, was extinguished. Profane and Brenda continued to run through the abruptly absolute night, momentum alone carrying them toward the edge of Malta, and the Mediterranean beyond.
Anyone want to discuss this? It's a very rich piece of prose, but I wish I had a clearer response to it. If you read it in the context of Profane's conversation with Brenda immediately before, in which she reads him a prose poem that sums up much of the tone and imagery of the book, and he confesses that all his experiences haven't taught him "a goddamn thing", it becomes even richer, but again even more resistant to a clear response, at least from me.
This is, of course, one of the most famous not-quite-endings in contemporary fiction, if "contemporary" means something like "written in the last 50 years", and there are many similar endings (usually real ones actually at the end of books) both before and after, which give it resonance as part of the classically American fictional mega-text. The ending of Huck Finn is a good example. Still, how exactly should one feel as Benny Profane and his new girlfriend run hand-in-hand through the streets of Malta, in total darkness, and after the chaotic, often surreal, experiences that have driven Benny along for the previous 450 pages?
20 essential works of utopian fiction
How many books have you read from this intriguing list? In my case, there are quite a few that I feel as if I've read because they are much discussed, etc., but alas I can only swear to having read 10 of the 20. I've read most of H.G. Wells at one point or another, and may have read the Wells items a long time ago, but I can't swear to them. In one other case, I haven't read the book, but have seen the movie.
Guess this is something else I need to brush up on.
H/T Tim Handorf
Friday, November 26, 2010
More on the Islamophobia question
I left dangling the claim made by Saba Mahmood (and of course similar claims have been made by many others) that such phenomena as the Danish cartoons involve an element of "racism", and so do not deserve our sympathy.
The thought here is that racism extends beyond dubious biological notions of race to include hostility to groups marked by religious and cultural characteristics. This characterization tends to undermine the value of satire directed at Islam, or at radical forms or manifestations of Islam, associating it with mere racial slurs. Concomitantly, it suggests that we should be unsympathetic to free-speech-based defences of such things as the Danish cartoons, in the face of violence, government condemnation, legislation, etc.
This issue won't go away, and it needs to be addressed with a bit more focus than I've given it so far. Islam is, of course, not a "race," or even an ethnicity. It is a belief system that posits an otherworldly order, with an almighty god (Allah) and numerous other supernatural beings, such as angels, Satan, and demons; a means of spiritual transformation (via submission to Allah); an eschatology (with Hell, Paradise, and a final judgment); and associated rituals and canons of conduct. Nothing in this is confined to any specific "race", and of course Islam has many millions of followers all over the world, the largest number in Indonesia. How, then, is satire directed at Islam, or at its iconic figures and symbols, in any way comparable to racism?
Perhaps the comparison can be made to stick, but Mahmood does little beyond suggesting that religion, like biology or ancestry, is not simply a matter of choice. That is obviously true. Well, it's obvious to me: religion is, in very many cases, a matter of socialization from parents and other elders in one's community, rather than a matter of individual judgment based on the seemingly superior evidence for one or another set of claims ("Jesus was the Son of God," "Muhammad was God's prophet," etc.).
But what follows? The true contrast with race is not that race is unchosen, whereas religion is unproblematically chosen - clearly it isn't, at least in typical cases. It is that racism is not, generally speaking, based on objections to doctrines, associated practices, and canons of conduct. Even where racism has been fueled by doctrinal disagreements, as with Christian anti-Semitism, it is possible to distinguish between doctrinal disagreement and racial hatred. Admittedly, some dislike of Islam, or impatience with Muslims and their spiritual leaders, may have a quasi-racist character, grounded in parochialism and xenophobia, and perhaps a dislike of Arabs in particular. But Islam also contains ideas, and in a liberal democracy these are fair targets for criticism or repudiation.
Religious doctrines influence the social and political attitudes of their adherents in ways that merit public comment (favorable or otherwise), and many religious leaders and organizations exert vast power. It is in the public interest that all this be subjected to monitoring and criticism. By contrast, nothing like this applies to the category of "race."
Even if some attacks on Islam are motivated by something like racist thinking, which may very well be the case, it doesn't follow that that's the sole motivation. In other cases, it may not figure in the motivation at all, or may play only a negligible part. There are independent and legitimate reasons why some people might wish to criticize Islam, or certain forms of Islam, or to express hostility towards it. These relate to their disapproval of various doctrines, canons of conduct, associated cultural practices, and so on, and to the power wielded by its leaders and organizational structures. Expressions of disapproval cannot simply be dismissed, a priori, with the assumption that they are improperly motivated.
Moreover, opponents of Islam, or some of its forms, cannot reasonably be expected to keep quiet when accused of racism or the quasi-racism of "Islamophobia." Such accusations are likely to inflame passions, even if they intimidate some individuals into silence. Furthermore, the state is not well placed to tease apart motivations: since Islam, particularly its more aggressively political forms, attracts hostility because of its ideas and its impact on the world, the state has little choice but to take anti-Islamic critique and satire at face value.
Accusations of racism, or something similar, may have some truth when applied to some of Islam's opponents, but they do not provide a good basis for suppressing, demonising, or marginalising anti-Islamic speech. Indeed, any state policy that equated hostility to Islam with racism, suppressing some speech and demonising the speakers, would tend to add to resentments against Islam in Western societies.
We do well, perhaps, to scrutinize ourselves as individuals, to be alert to possible racism, even unconscious, as part of our motivational set. That, however, is no reason for the state, or for any of us, to treat anti-Islamic satire as an ipso facto worthless form of speech with an improper motive behind it.
Thursday, November 25, 2010
More on Wagg's nonsense
Wagg says:
I believe that if you equate skepticism with anything other than science, you’ve missed the point. As for Christianity, skepticism has nothing to say except about testable claims associated therein. Bleeding statues? Yes, skepticism comes into play. Jesus rose and is in heaven? Seems unlikely, but there’s not a lot more to say.
This seems like a very good time to bash our heads against the desk. First, we don't encounter claims such as those about Jesus' alleged resurrection in isolation. We encounter them in the context of entire systems of thought whose plausibility on any one claim can depend on their plausibility across a whole range of issues. That is, in fact, one reason why religions take work to refute: a claim about Jesus' resurrection may be made by many different theological systems, and its plausibility within any one system will depend on whether the system as a whole seems plausible. If your claim that Jesus rose from the dead depends on the plausibility of your total system, but that in turn depends on a whole range of other dubious claims - perhaps, for example, claims about the age of the Earth or the provenance of the Bible - then the claim about Jesus will end up being in trouble. Sorting out the logic of this kind of thing can be quite tricky, and it goes far beyond the sophomoric "Seems unlikely."
Yes, it does seem unlikely. But there's a lot more to say about why it really is unlikely.
But set that aside. Here's the important point that I want to make. Why should skepticism just be about science? Why shouldn't it be about rational inquiry generally, including the kinds of rational inquiry carried out within the humanities?
Now maybe Wagg doesn't mean "science as opposed to the humanities". Perhaps by "science" Wagg really meant "rational inquiry" and he actually meant to say:
I believe that if you equate skepticism with anything other than rational inquiry, you’ve missed the point.
But there is a great deal that can be said by rational inquiry in general about the plausibility of the claim that Jesus rose from the dead. For Zeus's sake, why doesn't Wagg go and read a few books by Bart Ehrman to get a sense of what we know through the humanistic part of rational inquiry - meticulous examination of texts and their provenance, for example - about how reliable the biblical accounts of Jesus' resurrection are likely to be?
He can't have it both ways. If he's going to use "science" in its common sense of what goes on in science faculties rather than, say, humanities faculties, there is no reason at all to confine skepticism to science. There are many extraordinary claims that are best examined via the rational methods used in the humanities, e.g. by textual-historical scholars.
If, on the other hand, he wants to use the word "science" to mean the entirety of rational inquiry then there is a great deal that "science", as so defined, can say about a claim such as the one that Jesus rose from the dead.
This brings me back to another point from a couple of weeks ago, that there are many things that we know via the humanities and many other things that we know by ordinary kinds of observation that are not, in my view, sufficiently sophisticated or systematic to be termed "science". Some readers, particularly over at RD.net, seem to have interpreted that as an attack on science or some sort of defence of accommodationism in the manner of Eugenie Scott. It was, of course, the exact opposite.
My point was that non-accommodationists don't have to deny the obvious fact that the humanities, as well as the sciences, have something to offer us in obtaining knowledge of the world. Which techniques are most effective in making progress will depend on the circumstances. There are also circumstances where we can gain knowledge without doing anything as rigorous or systematic as what is done by either competent working scientists or by reputable scholars in the humanities.
However, science, the humanities, and ordinary experience are all continuous with each other and can be informed by each other. There are no sharp dividing lines. Nor are there any spooky, yet reliable, "other ways of knowing" that are discontinuous with them, enable us to take huge epistemic short-cuts, and give us knowledge of a supernatural world. At least, I don't see any reason to believe that there are.
Contrary to what some accommodationists seem to think, non-accommodationists are not committed to denying the value of, say, rigorous textual scholarship, or that of highly trained reading of literary texts to get a sense of their rhythm, tone, and dramatic intent (one of my examples was making a judgment about how to say a particular line in Macbeth). The idea that non-accommodationism about religion commits us to saying silly things about, for example, the uselessness or charlatanry of the humanities is simply false. It's a straw man argument, and we shouldn't give it credence.
(That's not to deny that a certain amount of what we've seen from the humanities in the last few decades really has looked like charlatanry, but the tendency seems to be receding to some extent, and the Sokal hoax and the circumstances that led to it shouldn't be allowed to put genuine scholarship in disrepute.)
This business with Wagg is a case in point. Rational inquiry has much to say about the alleged resurrection of Jesus. Much of what it has to say comes from that area of rational inquiry that we normally classify within the humanities rather than the sciences. If Wagg doesn't recognise the value of the humanities in the process of skeptical inquiry, or in the skeptic movement, so much for Wagg.
Currently reading: V. by Thomas Pynchon
In my spare moments over the next couple of months I'm going to work through Thomas Pynchon's novels, starting with those that I used to be very familiar with and wrote about at some length in an earlier phase of my life. I'm currently a couple of hundred pages into V., Pynchon's first novel, published in 1963, and am finding it a very strange work, coming back to it a quarter of a century after I last read it - it's a bizarre and often perplexing story of real and imagined conspiracies, intrigue, lust, and decadence, all conveyed in a tone that seems to mingle fascination and distaste.
I'll report back in a few days when I reach the end. I must also check what I wrote about it all those years ago when I discussed it in my doctoral dissertation on the "return to myth" in modern fictional narrative.
Wednesday, November 24, 2010
We need a broader skeptic movement - contra Jeff Wagg's idiocy and hypocrisy
PZ Myers has already replied to this idiotic and destructive post by Jeff Wagg, who puts the boot into a recent skeptics convention - Skepticon 3. [Edit: And Ophelia Benson has her say over here at Butterflies and Wheels.]
To be fair to Wagg, he also offers some (rather faint, insincere-sounding) praise to the organisers. But according to his line of argument, the program contained too many topics that were devoted to skepticism about religious claims. Oh noes! They're being skeptical about the truth claims of religion at a skeptics convention!
For Cthulhu's sake, what extraordinary claims can it possibly be more important to express skepticism about than those of religion?
Wagg says:
The pro-atheist cause is an entirely different endeavor with a community that overlaps strongly with the skeptical community. Skepticism is about drawing conclusions that are proportioned to the available evidence. That’s it. And I think keeping the two things separate i[s] vitally important.
What rubbish! First, skepticism has an honorable pedigree going back to ancient Greece, and the concept is far broader than this definition.
But even if you want to define "skepticism" very narrowly, part of what the "pro-atheist cause" does is subject claims about deities to rational examination, seeing what evidence exists for or against the existence of these entities. Conceptually, that is exactly the same thing as examining what evidence there is for or against the existence of Bigfoot or the efficacy of astrology. Of course, it's not as safe and comfortable, socially and politically, to get stuck into the evidence relating to deities as it is with that relating to Bigfoot, but that's all the more reason why people who wish to do so should be made welcome at skeptics conferences - and invited to present on atheism-related topics. The kind of unwelcoming stance that Wagg has taken to the atheist speakers is the last thing we need from people like him, who have some influence on the movement.
If anything, the skeptics movement should be moving to a wider definition, so that it embraces not just skepticism about religion but also skepticism about other extraordinary-but-popular claims that are difficult to square with the scientific picture of the world: such claims as those relating to the existence of libertarian free will. Let's have a genuine equal opportunity skeptic movement that goes well beyond relatively trivial claims about New Age woo, cryptozoology and the like - much as I love me some cryptozoology - and subjects claims that really matter to skeptical scrutiny.
Wagg's attitude conveys the implication that religion should, unlike astrology or claims related to Bigfoot, be protected from skeptical scrutiny. But such scrutiny is far more important in the case of religion, which wields enormous social and political power - unlike clubs for Bigfoot aficionados or Nessie spotters. Right now, we live in a time when it's crucially important to challenge the epistemic and moral credentials of religious leaders and organisations, and Wagg has placed himself on the side of frustrating that effort. Among other things, he is forcing people like me to waste our time replying to him.
What really annoys me is the way people like this, people who are attacking their own allies in public, behave as if they are the nice, reasonable, softly-softly ones. Talk about hypocrisy!
Cheers for Skepticon for dealing with an important issue in the depth that it deserves. As for Wagg, he deserves all the flak he's currently getting. He says, near the end of his diatribe:
And I fear the damage has already been done. I see a lot of good people leaving the skeptical community because they’re uncomfortable with the tone and disappointed with, frankly, the lack of skepticism presented by many people.
If people are leaving because they see skeptical scrutiny of claims about supernatural beings, then I wonder how "skeptical" they were in the first place. Good riddance to them, I say, joining in chorus with PZ on this occasion. And good riddance to Wagg, as well, if he leaves with them, as he's welcome to do. Hopefully he'll never again be seen in the skeptic movement, if publicly expressed skepticism about religion makes him so uncomfortable. It's his choice, of course - no one should be forced out. But the option is there for him to take.
The EXIT sign is right over there ...
Kitteh contest
I'm proud to be one of the, ahem!, celebrity judges - along with Ophelia Benson and Miranda Hale - of Jerry Coyne's kitteh contest. Check out the rules and send your entries to Jerry.
What use is the UN anyway?
Take the above as a genuine question if you like. Perhaps the UN has done some good over and above whatever the balance of nuclear terror has achieved in averting a third world war. Perhaps there has been real benefit in the body of international human rights law that it's developed, and in some of its programs. I'd welcome comments setting out just what benefits have been achieved via the UN that wouldn't have been achieved anyway. Let's hope that the list is impressive.
But I have to say that I have my doubts when the UN, in agreeing on a resolution that condemns arbitrary executions and gives some examples, is unable to issue a document that includes executions for sexuality among those examples. From a gay rights viewpoint, this is worse than no resolution at all. When such a reference is conspicuous by its absence, and when it's a matter of public record that the UN explicitly decided against including it, the implication can only be that the UN condones arbitrary executions of LGBT people.
When I see some of the outcomes from the UN, I sometimes shake my head and wonder, frankly, whether we'd be better off without it. Feel free to tell me why that's going too far and about some good things that it's accomplished - whenever I brood about what a shambles that organisation has become, I could do with some cheering up.
Tuesday, November 23, 2010
"The next step is to prohibit religious expression" - really?
Anti-religious speech is often criticized in a way that goes beyond refutation to a suggestion that it is somehow socially unacceptable. Consider Tom Frame's recent attack on what he calls "contemporary anti-theism," in which he includes such books as The God Delusion, by Richard Dawkins:
When it becomes acceptable, and even admirable, to mock and ridicule a person's religious convictions and customs — and especially when the intention is to provoke an indignant reaction — the next step is to prohibit the expression of religious sentiment in all public places and forums.
But this claim verges on paranoia. Frame is Australian, and should be well aware that there is no prospect in his (and my) home country of any prohibition on public expressions of religious sentiment - though he claims, vaguely, that there are "signs" to the contrary. If he were writing in the American context, where the presidency is intimately linked with religious ritual and involved in interaction with religious leaders, the claim would appear even more bizarre. The freedom of speech enjoyed by "contemporary anti-theists" is more than matched by that of their religious opponents.
Accordingly, Frame is legally free to liken these examples of "contemporary anti-theism" to religious fundamentalism.
But although this slur is perfectly legal, it is unfair: Frame suggests that "contemporary anti-theism" has "some of the characteristics of fundamentalism and, like all fundamentalisms, needs to be opposed." But which characteristics of fundamentalism is this anti-theism supposed to show? Frame does not say, and it is not at all clear.
For example, the anti-theists he refers to have no holy text that they treat as inerrant: they may give respect to each other's writings or to classics of science such as Charles Darwin's On the Origin of Species (1859), but that is a very different thing. They do not show extreme resistance to modernity through acts of violence, resistance to science and scholarship, and subordination of women. Dawkins and others may be confident of their positions, but not with the extreme dogmatism that clings to a position even when it is plainly contrary to robust scientific findings (liberal or moderate religious leaders may be equally confident of their positions, but that doesn't turn them into fundamentalists).
Frame appears to use the word "fundamentalism" for its hurtfulness rather than its accuracy. That is, of course, his legal right in a liberal democracy. However, his analysis exemplifies the illiberal view that satire and robust criticism are illegitimate forms of speech when directed at religion.
Stump an Atheist
I'm going to give a plug to this new blog created by Ryan Benson, which advocates an atheist perspective by answering questions from the public. From what I've read, Ryan is giving thoughtful, civil, and rather comprehensive answers, and this may well be one of the most effective ways to get the point across.
Go and check it out for yourself - you might like it.
Australian Book of Atheism now on sale
It went on sale in the stores yesterday, so line up and buy a copy for yourself ... and a few as ... ahem ... Christmas prezzies. Just a suggestion of course.
Monday, November 22, 2010
More on Islamophobia and stuff
Further to last night's post, I should add that Asad and Mahmood do not seem to be suggesting that the considerations they raise provide a basis for censorship of blasphemous speech, including images such as the Danish cartoons. They seem more concerned to foster understanding of the Muslim point of view, and perhaps to create a more sympathetic response from non-Muslims in Western societies. Nonetheless, it is worth considering whether the issues they discuss should affect the policies of the secular state. In that respect, there is a clear danger if the state acts on the basis of Asad's description of Muslim attitudes. What I call the Lockean model for relations between religion and state power does not propose that the various religions cease teaching their ideas, and arguing for them, merely that they cease jockeying for state power to impose doctrines and practices by fire and sword. This is completely inconsistent with a model that forbids one religion from "seducing" the adherents of another, or forbids opponents of religion from "seducing" the minds of religious believers generally.
Locke, of course, adduced secular reasons for prohibiting certain viewpoints, including those of Roman Catholicism, Islam, and atheism, but his reasons now appear weak. It is possible for conflicting viewpoints to co-exist in the one society, and pressing social problems don't usually arise from the presence of individuals with supposed extraterritorial loyalties, such as Catholics' loyalty to the Vatican or the Holy See (I say "usually" because there may be problems in some cases, as when crimes in one jurisdiction are abetted in another). Nor, contrary to Locke's fears, does the presence of atheists destroy social bonds, or the efficacy of contracts and the judicial system. The tendency has been to allow a wide range of viewpoints and not to silence them - they may compete in a marketplace of ideas.
While some Muslims may find this alien to their tradition, much the same could have been said of Christians not that long ago. It seems reasonable to hope that Islam can adapt to a social environment in which it is open to criticism, and in which it is relatively routine for individual citizens to change religious faiths or lose religious faith altogether.
What about Mahmood's suggestion that Muslims experience attacks on the Prophet in a uniquely painful and personal way? Again, note the dangers if this is pressed too far. I have no way of assessing the accuracy of Mahmood's claim, though it seems plausible that pious religious adherents would feel something of the pain she describes when attacks are made on iconic figures. Perhaps this is especially so where the religious adherent is Muslim and the figure concerned is Muhammad, but it is easy enough to imagine the pain that might be suffered by pious Jews or Christians in analogous circumstances (ridicule of Moses, perhaps, or of Jesus or the Virgin Mary).
An explanation such as Mahmood's may help non-Muslims to understand what is at stake, emotionally, when satirical attacks are made on Islam, and especially on the person of the Prophet. Perhaps something similar explains the passionate responses of others, such as devout Catholics, to what they see as sacrileges (e.g. Andres Serrano's photograph, Piss Christ, which portrays a small crucifix submerged in the photographer's urine). All this is consistent with the militancy and litigiousness of some Catholic organizations.
Perhaps, then, we need to absorb the lesson that certain images, and perhaps mere words in some cases, can have a very high emotional impact not only for Muslims but also for adherents of other religions. That hardly excuses violent retaliation, such as occurred following publication of the Danish cartoons, but high-impact offence can play a limited role in public policy. If it's correct that high-impact offence shades into harm, then the state has a legitimate role in protecting citizens from exposure to images and smells (for example) that produce, say, physical nausea. However, there's a catch here. There would be very few situations where exposure of others to nauseating smells has any communicative value. The situation is rather different with movies, artistic photographs, satirical cartoons, and philosophical novels! These can be avoided (even a newspaper can be closed) and they are protected by well-known free speech values.
The Danish cartoons, for example, caused offence to many people, but the immediate impact could be shut off by turning the page. A ban on such images would effectively mean that Islam was placed beyond satirical comment, as the images were not extreme as satirical cartoons go - no more so than run-of-the-mill cartoons that are published every day making fun of political proposals and events. What seems to have been most offensive about the cartoons was their ideas, for example of a linkage between Muhammad and modern-day Islamist terrorism. This, I submit, cannot be a ground for censoring newspapers.
Lost in space
I came across this the other day, and it will amuse a certain sub-component of my readership (as you were, the rest of you).
James Bradley on the fate and fortune of the book
Over at City of Tongues, James Bradley has a post on the future of books (I like his choice of illustration for it). Abridged large sample:
In part this is an argument about the material culture of the book, and the material culture of the tablet computer or ereader, and about the complex web of relationships and assumptions that shape our experience of text on a page and text on a screen. But it’s also about the way the medium shapes the message, and about the way our desire to replicate an old technology with a new one reveals a failure to come to grips with the real possibilities of the new. Think for a moment about the silly page-turning animations ereaders insist on inserting: aren’t they really the textual equivalent of curtains on a television? Indeed why do we need to retain the notion of the “page” at all? Why can’t text just continue down as we read, like a scroll? And if it did, what would this do to the metaphors and devices we use to shape and organise information, the chapters and sections of the analog world?
Part of the problem is the fact that codex books feel so natural to us we forget they are themselves a technology. An extremely successful one to be sure (indeed if one wanted a test of true technological success it might well be precisely this capacity to disappear, to be subsumed and naturalised into the culture). And like any technology they use us just as we use them, shaping not just the way we consume information but the way we think. Many of our important narrative forms – the novel, for instance, or narrative non-fiction – are forms which depend in fundamental ways on the physical nature of the codex book, and its emphasis upon linearity, closure, as well as the more subtle questions about page length and internal organisation I alluded to above.
This question is usually framed as one about literary form, a David Shieldsesque argument against the hegemony of the unifying narrative. But I’d suggest we need to take it a step further. Because as I’ve said before, if we’re reading books on a device that can handle video and sound, how long will it be before publishers and creators start taking advantage of those possibilities?
[...]
But I think it also makes it necessary to question some of the assumptions underpinning what ebooks are, and what they mean for literary culture and publishing more generally. Because while I’m less convinced than I was a year or two ago that long-form narratives like the novel and narrative non-fiction are going to go the way of the dinosaurs, I do think they’ll change, and that just as the novel evolved to take advantage of the codex book new forms will evolve to take advantage of tablets and ereaders.
Ethics classes to go ahead
See this story in The Sydney Morning Herald. Sample:
PARENTS will have the right to ethics classes as an alternative to scripture in their child's school even if the principal and the majority of the school community opposes them.
The state cabinet is expected to approve the introduction of ethics classes to primary schools today after a successful trial this year. They will begin as early as term one next year.
While the classes will be voluntary for schools, the Herald has confirmed that parents who want their children to attend the classes will be able to appeal to the Education Department if the principal opposes them.
As long as the St James Ethics Centre, which will run the classes, is able to provide volunteers and there is a reasonable number of children to attend them, the department will ensure they are offered.
Students in years 5 and 6 are likely to be the first to be offered the classes, because they are the years in which the trial was run. Eventually classes will be offered in years K-6.
Do clones have souls?
Here's a long message-board thread about whether comic-book clones are real characters (the point is illustrated in the scan in the original post, featuring a clash between the demonic-looking superhero Nightcrawler and the mercenary supervillain Scalphunter - the information you need is that the latter is a 6th Day-style clone of his original self).
You probably won't have time to read the whole thread, but it may amuse you to dip into it. At various points, it gets down to some serious and not-so-serious debate on such issues as whether clones would have souls and whether comics are corrupting children by presenting more-or-less sympathetic characters who have come into existence through cloning - thus undermining the idea that reproductive cloning would be morally wrong.
Some of the comments are quite insightful, as with the remark that Frankenstein's monster was a sensitive and intelligent being who was corrupted only by the unenlightened treatment he received from prejudiced humans. Whatever Mary Shelley's intentions, Frankenstein's crime wasn't defying God or violating nature but bringing into the world a person who would inevitably be mistreated. That's the real issue we'd face with cloning, or so the commenter seems to be saying.
There's much humorous discussion of how clones are handled as characters, but also some people expressing what seems like real unease. The argument about corrupting young readers is interesting - but of course, young people are exposed to an enormous amount of propaganda that reinforces popular moral ideas, including the idea that cloning is wrong or yucky. Why shouldn't they also be exposed to some narratives that implicitly present an opposed viewpoint? If the current manufactured consensus (which I, for one, have not joined) about the evil of reproductive cloning depends on the socialisation of children into the "correct" view, how much respect should we give it?
Not very much at all, I argue. If we had a safe technology for reproduction via somatic cell nuclear transfer, I'd have only slight misgivings about using it. The slight misgivings would relate mainly to the prejudice that the resulting children might face. Much of the opposition to human reproductive cloning strikes me as simply irrational, and it's a pity that we've moved so swiftly and often mindlessly to a supposed consensus that it ought to be banned.
Sunday, November 21, 2010
About those Danish cartoons ...
As is well known, the publication in September 2005 of the notorious Danish cartoons - a set of satirical cartoons depicting Muhammad, printed in the newspaper Jyllands-Posten - led to widespread violence, political debate, high-profile litigation, and numerous attempts to explain the depth of anger among (at least some) Muslims.
One such attempt is that of Talal Asad, who offers an explanation of distinctively Muslim attitudes to free speech, blasphemy, and anti-religious satire. His analysis stresses the sanctity of private thought in the Islamic tradition - religious and government authorities will not inquire into individuals' hidden motives and beliefs, scrutinizing them for heresy - as opposed to freedom to express thoughts in public. According to Asad, Islam differs from Christianity in its commitment to the privacy of thought. However, it does not protect attempts to seduce the thoughts of others away from their bond with God, their responsibility to coreligionists, and what it sees as metaphysical and moral truth. In Islam, so Asad assures us, such "seduction" is regarded as a kind of violence; it is dangerous to individuals and the social order, threatening discord and violence. For Muslims who take this seriously, "it is impossible to remain silent when confronted with blasphemy … blasphemy is neither 'freedom of speech' nor the challenge of a new truth but something that seeks to disrupt a living relationship."
Saba Mahmood takes a different, though perhaps complementary, approach. She describes a special form of pain felt by (some) pious Muslims, which amounts to a sense of personal loss and sorrow when confronted by ridicule of the Prophet. On this account, the loss and sorrow relate to Muhammad's role as a moral exemplar, someone to be imitated in many of the quotidian aspects of life - such as how he walked, slept, spoke, ate, and dressed. Thus, these Muslims respond to ridicule of the Prophet not with anger that a moral interdiction has been violated, but with a sense of having been violated and wounded. Like many other commentators, Mahmood also insists that such phenomena as the Danish cartoons involve an element of "racism" - extending this idea beyond biological notions of race to groups marked by religious and cultural characteristics. This characterization tends to undermine the value of satire directed at Islam, or at radical forms or manifestations of Islam, associating it with mere racial slurs, a form of speech that many of us see as essentially worthless.
How should a secular state respond to these analyses, which appear in Talal Asad et al., Is Critique Secular? Blasphemy, Injury, and Free Speech, Townsend Center/University of California Press, 2009?
Steven Paul Leiva on the future of printed books
With the popularity of ereaders really taking off just in the last year or two, this post over at Steven Paul Leiva's This 'n That blog is more than timely.
I share many of Leiva's biases, no doubt in part because we are from roughly the same generation, but people coming along after us are going to see the world differently. He has produced a long, meditative, and I think wise and insightful post.
Sample:
The book, what we are now calling the traditional book, “...a written or printed work consisting of pages glued or sewn together along one side and bound in covers,” has been both a more successful and less successful delivery system than the vacuum-packed can. It never alters the “taste” of its contents, but, because it is prone to wear and tear, mold and mildew, not to mention the evil of dust and the negligence of borrowers, it does not always have a long shelf life. However it is quite user friendly—portable with pages not difficult to turn, easy on the eyes depending on the type size, and usually of a warm, inviting feel. You can underline and write in the margins if you so choose to desecrate it. In essence, books travel well with us. Books can be boon companions. If you are a book reader—and who reading this blog wouldn’t be—books might well figure into highlights of your personal history. That book or series of books you shared with your best childhood buddy, say those Frazzeta cover-illustrated paperbacks of the Mars novels by Edgar Rich Burroughs; that beat up copy of Siddhartha you were reading while sitting around the collage quad that attracted the attention of that long-haired blonde beauty who looked just liked Michelle Phillips of The Mamas and the Papas; that Saul Bellow novel that kept dogging you when you were in your twenties—you loved it, you hated it, you loved it, you hated it—; that rare book of short stories from her favorite author that you “scored” in finding at a used book store and gave to her, her smile back convincing you that she was indeed the love of your life; those Dr. Seuss books you read out loud to your kids, acting them out in a gloriously foolish performance; that dark, dangerous, bloody serial-killer novel you read while sitting on a warm, sunny beach somewhere during your most wonderful vacation ever; that great jazz musician’s biography you read while on a long train trip, the train providing the rhythm section. You can remember the look, the feel, the touch, the cover, the heft of all of these books and you remember them with great fondness, yet isn’t it the content that really deserves to be part of your memory? Wasn’t the delivery system—the look, the feel, the touch, the cover, the heft—really just, dare I call it, an appendix to the content?
Smells can bring on a flood of memories, but are they memories of smells? The aroma of a great meal is delightful, but it is not the aroma that will nourish you.
Saturday, November 20, 2010
A better (non-spooky) account of human rights
This time I'm quoting from AC Grayling's Towards the Light: The Story of the Struggles for Liberty and Rights (Bloomsbury, 2007), pp. 260-61:
Critics have focused on the arbitrariness of the claim that nature or a deity has somehow magically endowed people with rights to life, liberty, property and happiness, when in fact the idea of these things is a human invention, and their existence as rights (in those dispensations where they indeed are rights) is the result of decisions to regard them as such. I call this the "arrogatory theory of rights"; experience and rational reflection show what is required to give individuals the best chance of making flourishing lives for themselves, and these framework requirements we institute as rights in order to make the chance of that flourishing available.
So we do not need to be wedded to the idea of "natural" rights at all. The proceeding just described - of people coming to see what they should lay claim to as the basis of their social and institutional arrangements - is acceptable and justifiable on its merits; one can make out an excellent case for saying that over time a consensus has been reached about the kinds of basic laws and principles required to so arrange things for individuals and groups that they can exist in ways we ("we" in those societies) recognise as desirable. It is for this that a robust and generalised concept of liberty is needed. As has often enough been pointed out (for example, by the philosopher H.L.A. Hart), if any idea of rights is to have content, the basic one must be liberty, for without it none of the others can apply.
The Australian Book of Atheism - a review
Nice review of Warren Bonett's book in Bookseller and Publisher, quoted on Scribe's site for the book. (I'm not mentioned, but that's cool; I'll get my glory or otherwise some other time.)
More from Wolterstorff
As I indicated in the Preface, I engage myself in the practice of philosophy as a Christian. That remained in the background for large stretches of the discussion: here, in our discussion of natural human rights, it has come into the foreground.
The following should be added: if one believes that there are natural inherent human rights, then the fact that the secularist cannot account for those rights, whereas the theist who holds convictions about God's love that I have delineated can do so, is an argument for theism (of that sort). Not a foundationalist argument, but an argument nonetheless. I believe that there are natural human rights. Human beings, all of them, are irreducibly precious.
(It won't surprise you that I think this is arguing backwards. I don't deny the usefulness of a body of international human rights law, though what we have is imperfect. But to me it's pretty obvious that all talk of "natural inherent human rights" is nonsense on stilts - certainly not the kind of thing that you can use as a premise in an argument for the existence of God.)
Wolterstorff on "doing" philosophy
From Nicholas Wolterstorff, Justice: Rights and Wrongs. Note that I am providing this for interest and/or comments, not because I endorse it, though there is clearly, I think, something to it:
Seldom anymore does the analytic philosopher assume that he is obligated qua philosopher to ground rationally what he says in certitudes; analytic philosophy as a whole is on the way to becoming "Anselmian." ... The philosopher, approaching the practice of philosophy from his life in the everyday, finds himself believing many things, both large and small. Perhaps he finds himself believing in physicalism. He then regards the challenge facing him as a philosopher not to be that of discarding all those convictions unless he can rationally ground them in certitudes; the challenge facing him is that of working out the nature and implications of his physicalist convictions in various areas of thought, doing so in such a way as to cope not only with the complications in his own mind but with the objections lodged against this line of thought by others. In principle these objections might prove so powerful that he gives up his physicalism. In place of the old foundationalist picture, the picture of an academic enterprise now being taken for granted by philosophers in the analytic tradition is what I call dialogic pluralism. The academic enterprise is a dialogue among persons of different perspectives. The goal of the enterprise remains to achieve agreement.
First, let me just note that the continual use of masculine pronouns is unfortunate. But having noted it, I'll pass over it.
I'll have some more to say about Wolterstorff over the next couple of days, but let me repeat that the passage has some force. It's not as if the man's a fool. On the other hand, there's something disappointing about it, and also something unnecessarily conservative: I just "find" myself with certain beliefs (perhaps, as in Wolterstorff's case, religious beliefs), so I do philosophy in a way that, in the first instance at least, sorts out the ramifications of those beliefs. So it seems to go. To Wolterstorff's credit, he thinks that objections to my starting beliefs may, in principle, lead me to revise them, but note that my aim is not to interrogate my starting beliefs sceptically or ruthlessly, or even with any vigour. Objections will occur to me, and I'll hear them from others in the philosophical enterprise, but my goal is to work out a view of the world based on whatever starting point I happen to have.
The admitted force of this approach is that foundationalist enterprises have been unsuccessful over the past four centuries, and there is little prospect of agreement on what the foundations even are or on how they could ever be established in the face of disagreement (after all, the things to be established include the methods by which we establish such things).
Nonetheless, I doubt that philosophy has yet become so tame, or that it should do so. Sure, we have to start somewhere, if only tacitly. For example, it's hard to see how we'd get anywhere unless we assumed that it is legitimate to apply logical rules such as modus ponens; that the senses are, if not exactly reliable, at least sufficiently useful and open to correction to help us investigate the world; that our memories are, once again, not totally reliable, but not so wildly in error as to be useless. We may not be able to state in any exhaustive and convincing way what rock-bottom assumptions we wish to use, and the quest for foundations of this kind may well be frustrating. It has been to date.
Still, it seems a bit much, when doing philosophy, simply to start with whatever ideas we have been socialised into accepting, even though they may be quite remote from anything that looks even like a reasonable candidate for being foundational. There is a difference between taking a fairly pragmatic approach to foundations and giving a free pass, in the first instance, to whatever you find yourself believing, however unreliable may be the process by which you came to the beliefs, and however shaky may be your grounds for justifying them.
As so often with such issues, there's much to say, but I don't think Wolterstorff can simply help himself to a whole heap of religious doctrine - as he does - just because he "finds himself" believing it.
This is, I suppose, why I'll always be a sceptic about religions, moral codes, and the like. They contain claims that are truly remote from ordinary beliefs about physical reality or social reality, and I think that we should do philosophy with a consciousness that the onus is on us to establish them in ways that are generally acceptable to others - not to people who are gung-ho epistemological sceptics, but at least to people who don't find themselves with whatever religious and moral convictions we happen to have been socialised into. Some morality may be indispensable, of course, to avoid disaster ... but much of it may not be.
When we engage in philosophical reflection, we don't have to start with something as comprehensive as physicalism, just with ordinary knowledge of the physical and social worlds, some widely-accepted assumptions about such things as sense perception, memory, and logic, and some of the healthy scepticism that has provided the fuel for science. Poke around, see what you find if you bracket off your more specific system of disbelief ... if you have one. Maybe you'll be surprised.
I think that philosophy is still a much more penetrating and unsettling enterprise than Wolterstorff wants it to be, and from my viewpoint that's the good thing about it.
Seldom anymore does the analytic philosopher assume that he is obligated qua philosopher to ground rationally what he says in certitudes; analytic philosophy as a whole is on the way to becoming "Anselmian." ... The philosopher, approaching the practice of philosophy from his life in the everyday, finds himself believing many things, both large and small. Perhaps he finds himself believing in physicalism. He then regards the challenge facing him as a philosopher not to be that of discarding all those convictions unless he can rationally ground them in certitudes; the challenge facing him is that of working out the nature and implications of his physicalist convictions in various areas of thought, doing so in such a way as to cope not only with the complications in his own mind but with the objections lodged against this line of thought by others. In principle these objections might prove so powerful that he gives up his physicalism. In place of the old foundationalist picture, the picture of an academic enterprise now being taken for granted by philosophers in the analytic tradition is what I call dialogic pluralism. The academic enterprise is a dialogue among persons of different perspectives. The goal of the enterprise remains to achieve agreement.
First, let me just note that the continual use of masculine pronouns is unfortunate. But having noted it, I'll pass over it.
I'll have some more to say about Wolterstorff over the next couple of days, but let me repeat that the passage has some force. It's not as if the man's a fool. On the other hand, there's something disappointing about it, and also something unnecessarily conservative: I just "find" myself with certain beliefs (perhaps, as in Wolterstorff's case, religious beliefs), so I do philosophy in a way that, in the first instance at least, tries to sort out the ramifications of those beliefs. So it seems to go. To Wolterstorff's credit, he thinks that objections to my starting beliefs may, in principle, lead me to revise them, but note that my aim is not to interrogate my starting beliefs sceptically or ruthlessly, or even with any vigour. Objections will occur to me, and I'll hear them from others in the philosophical enterprise, but my goal is to work out a view of the world based on whatever starting point I happen to have.
The admitted force of this approach is that foundationalist enterprises have been unsuccessful over the past four centuries, and there is little prospect of agreement on what the foundations even are or on how they could ever be established in the face of disagreement (after all, the things to be established include the methods by which we establish such things).
Nonetheless, I doubt that philosophy has yet become so tame, or that it should do so. Sure, we have to start somewhere, if only tacitly. For example, it's hard to see how we'd get anywhere unless we assumed that it is legitimate to apply logical rules such as modus ponens; that the senses are, if not exactly reliable, at least sufficiently useful and open to correction to help us investigate the world; that our memories are, once again, not totally reliable, but not so wildly in error as to be useless. We may not be able to state in any exhaustive and convincing way what rock-bottom assumptions we wish to use, and the quest for foundations of this kind may well be frustrating. It has been to date.
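(For anyone who wants that first rule spelled out: modus ponens is just the inference from "if P then Q" and "P" to "Q". As a purely illustrative aside - not something the argument here depends on - it can even be stated and machine-checked in one line in a proof assistant such as Lean:

    -- Modus ponens as a one-line Lean theorem: from a proof of P → Q and a proof of P, we obtain Q.
    theorem modus_ponens (P Q : Prop) (hpq : P → Q) (hp : P) : Q := hpq hp

Nothing in what follows hangs on the formalism; it's only there to make the example concrete.)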
Still, it seems a bit much, when doing philosophy, simply to start with whatever ideas we have been socialised into accepting, even though they may be quite remote from anything that looks even like a reasonable candidate for being foundational. There is a difference between taking a fairly pragmatic approach to foundations and giving a free pass, in the first instance, to whatever you find yourself believing, however unreliable may be the process by which you came to the beliefs, and however shaky may be your grounds for justifying them.
As so often with such issues, there's much to say, but I don't think Wolterstorff can simply help himself to a whole heap of religious doctrine - as he does - merely because he "finds himself" believing it.
This is, I suppose, why I'll always be a sceptic about religions, moral codes, and the like. They contain claims that are truly remote from ordinary beliefs about physical reality or social reality, and I think that we should do philosophy with a consciousness that the onus is on us to establish them in ways that are generally acceptable to others - not to people who are gung-ho epistemological sceptics, but at least to people who don't find themselves with whatever religious and moral convictions we happen to have been socialised into. Some morality may be indispensable, of course, to avoid disaster ... but much of it may not be.
When we engage in philosophical reflection, we don't have to start with something as comprehensive as physicalism, just with ordinary knowledge of the physical and social worlds, some widely-accepted assumptions about such things as sense perception, memory, and logic, and some of the healthy scepticism that has provided the fuel for science. Poke around, see what you find if you bracket off your more specific system of disbelief ... if you have one. Maybe you'll be surprised.
I think that philosophy is still a much more penetrating and unsettling enterprise than Wolterstorff wants it to be, and from my viewpoint that's the good thing about it.
Thursday, November 18, 2010
Thanks to all who sent me that review of 50 Voices of Disbelief
Youse are great!
I'd actually seen this review somewhere along the line - I think an advance copy was sent to Udo or to Wiley-Blackwell - but confirming that is still useful.
Sample:
Whether you are an atheist or not this volume contains a plurality of viewpoints and certainly stimulates all readers concerned by providing challenging discussions on a variety of vital issues such as the problem of evil (theodicy), the plurality of religious beliefs in the world and religious fanaticism, the bad influence of religion on the social life of human beings, the suffering and pain of all animals (and human beings) in the past and present, the supposed validity of the different rational proofs of God’s existence, and personal stories of why atheism is the only reasonable stance.
Good! Overall it's a positive review, though it's a little on the lukewarm side. The author seems to be a liberal-ish Christian who bemoans the fanaticism of religious fundamentalists and other extremists. He is unwilling to acknowledge the tendency for comprehensive apocalyptic worldviews to press in the direction of totalitarianism and fanaticism unless this is positively opposed. He also offers what strikes me as a remarkably lame response to the Problem of Evil, which I'll quote in the interests of fairness:
First, it would undermine the autonomy of human beings in the sense that they are absolved from preventing evil deeds themselves; secondly, it would promote laziness in people with regard to solving moral problems themselves and thereby human beings would be unable to develop important social and ethical virtues such as compassion, care etc.; and thirdly, such a world would obviously face the same difficult problems utilitarians have when they try to prevent evil (what is morally bad for one person is not necessarily morally bad for another person).
Leave aside the fact that the contributors anticipate and reply to most or all of these points.
Briefly, you don't get let off the hook for allowing suffering that it was in your power to prevent on the basis that preventing it would absolve someone else of the responsibility. Instead, you do whatever you can. That is not usually regarded as taking away the autonomy of others who are not stuck with the problem. On the contrary, it would usually be seen as manipulating other people if you deliberately refused to prevent suffering that is within your powers, deciding instead to offload the problem on those others. In controlled circumstances, it is okay to delegate a problem to subordinates, but an omnipotent being does not need subordinates. It can also be okay to give certain kinds of problems to students or trainees for educational purposes, but that is a controlled and justifiable interference with autonomy ... and it's not something you do if serious suffering is at stake should they foul up.
The business about the virtues is unconvincing. First, it gets things horribly back to front: virtues such as kindness are valuable because they are needed to counter suffering. Arguing that suffering is valuable because it brings about, among other things, the development of kindness is, frankly, not only illogical but also morally abhorrent.
In any event, an omnipotent God could give us all the neurology, etc., on which virtues such as kindness supervene, without having to put us through some sort of history of encountering actual suffering. We could be equipped, in our "programming", for the counterfactual possibility of suffering without it ever actually arising, and we could even be offered choices whether or not to bring it about (we'd freely decide not to, because that's what we'd be like).
Indeed, God himself, if he existed, would presumably be kind or compassionate whether any actual suffering existed or not - it's not something God had to learn. So the theological story itself considers kindness or compassion something that a being could possess with no particular history. It could simply be part of our nature, as created by God, just as it is part of God's nature.
All this is not to mention the millions of years of suffering that existed before any beings with the capacity to decide to oppose it ever existed. Why all that suffering, on such a vast scale and for so long, when it had no formative or educational value?
I don't know what utilitarianism has to do with the issue. The important point in the vicinity is that allowing intense and extensive suffering that you could have prevented is regarded as morally bad (to say the least) by ordinary human standards, and is certainly not a hallmark of benevolence. There may be a thought lurking here, somewhere, that our moral norms for each other should not be so demanding as to require that we seriously think or act like utilitarians, always trying to maximise global utility, given that, as individuals, we have interests of our own to pursue and it may not be reasonable to demand that we drop these or divert too many of our resources from pursuing them. There's much more to say about this, but, in short, human moral codes should (arguably) make some allowances so that they are not too burdensome for finite creatures. Or rather, they are developed for the needs of finite creatures and inevitably contain trade-offs.
But none of this applies to God: God is not a finite creature with limited resources and specific interests of the ordinary kind. He is supposed to be omnipotent, transcendent, and all-benevolent. It's not just that he works within certain relatively lenient deontic constraints that are (arguably) reasonable for finite creatures like us. His goodness, or at least his beneficence, goes far beyond that.
Like so many theodicies, the one under discussion shows two vitiating tendencies. First, it assumes that it's okay to assign God human limitations (rather covertly), wherever needed, thus failing to take his superlative abilities (particularly his omnipotence) seriously. At the same time, secondly, it allows him motivations that would be seen as monstrous if they appeared in finite beings like us. Thus, God is actually held to a more lenient moral standard than the rest of us.
Believe in such a God if you wish, but don't be surprised if others find this being appalling even if he exists. It's logically possible, of course, that we live in a world that is controlled by a God who is morally horrible by our ordinary standards. Fortunately, the motivation for believing in any God at all starts to dry up once you realise that no such being meets the warrants offered by theodicists.
All in all, it's a thoughtful review, but none of its criticisms do much to shake the arguments in the book.
Wednesday, November 17, 2010
A new review of 50 Voices of Disbelief, but ...
Apparently there's a new review of the book in the journal Ethical Theory and Moral Practice (Vol 13, No. 4, 2010).
I don't have easy access to this, so if anyone wants to let me know the gist of what is said (here or in an email) I'll be grateful.
Tuesday, November 16, 2010
Press release re World Philosophy Day (from the University of Queensland)
Sounds like an interesting talk if any of you can attend.
====
WORLD PHILOSOPHY DAY - THURSDAY 18 NOVEMBER
World Philosophy Day is a day for people to share thoughts, and to openly explore and discuss ideas and inspire public debate or discussion on society's challenges.
The objective is to make philosophy accessible and to create opportunities for rational reflection, discussion and to foster independent and critical thought.
Established by UNESCO in 2002, it is an international event observed annually on the third Thursday of November to honor philosophical reflection around the world.
Associate Professor William Grey will deliver an oration at the University of Queensland at 12 noon to mark the occasion. The lecture will take place at the St Lucia campus in Steele (Building 3) Lecture Room 206.
Dr Grey will reflect on 'What does it mean to be a Philosopher?' The role of philosophy and philosophers in contemporary society will be explored.
The significant social issue that Dr Grey will address concerns the challenge of ensuring a sustainable future. Dr Grey will question whether corporate-sponsored disinformation about the consequences of treating the atmosphere as an open sewer constitutes a crime against humanity, and perhaps also a crime against nature.
Dr Grey will also ask whether the misery and injustice inflicted on tens of millions by the Catholic Church's socially, ecologically and ethically indefensible ban on contraception may also come to be seen by future generations as a crime against humanity.
Following the talk there will be refreshments, light snacks and opportunity for philosophical discussion.
COPY OF PRESS RELEASE (FOR IMMEDIATE RELEASE) DATED: MONDAY 15 NOVEMBER 2010
CONTACT: William Grey Philosophy, HPRC
University of Queensland
QLD 4072 AUSTRALIA
tel: +61 (0)7 336 52099
fax: +61 (0)7 336 56266
Opinions and information in this press release shall be understood as neither given nor endorsed by The University of Queensland.
====
Monday, November 15, 2010
Jack Donnelly disses collective rights
For all the talk of excessive individualism, the problem in the world today is not too many individual rights but that individual human rights are not sufficiently respected. States and societies have a variety of claims on individuals and modern states have awesome powers to bring individuals to their knees; if necessary, to break their minds as well as their bodies. Human rights, and particularly legal rights, are among the few resources of individuals in the face of the modern state. The balance is already (always?) tilted against the individual. The only likely result of advocating collective human rights ... is a further strengthening of the forces of repression.
Every day we see individuals crushed by society. Rarely, if ever, do we see society torn apart by the exercise of individual human rights. Social disorder and decay are usually associated with the violation of individual human rights by the state or some other organized segment of society. Human rights are a rare and valuable intellectual and moral resource in the struggle to right the balance between society (and the state) and the individual. Unless we preserve their distinctive character and stand firm on their character as individual rights, their positive role in the struggle for human dignity may be compromised.
Discuss.
I'm not actually big on the misleading (IMO) expression "human dignity", but it'll do as a place marker in this context. I think these paras from Jack Donnelly's book on human rights sum up something important.
Sunday, November 14, 2010
"Eudaimonism has no room for compassion" - wtf?
Nicholas Wolterstorff claims, "No version of eudaimonism has room for compassion." This seems like madness.
Discuss.
No post today (spot the paradox)
There's more I want to say about science, scientism, the supernatural, rational inquiry, etc., the recent dominant themes of the blog. I'd like to bring it all together in one place, but have been spending time in the various threads rather than pulling it all together. Maybe I can get to that soon, especially since some of the reactions suggest that I need to spell out a bit more about the context of my thinking on these issues.
But this is it for today. I have Jenny getting home from the US tomorrow - yay! - should get to bed now, and have also been juggling some other things, so I guess I can look forward to a new batch of comments on the last two threads when I check in the morning.
Saturday, November 13, 2010
A nice para from Harris
For the purposes of this discussion, I do not intend to make a hard distinction between "science" and other intellectual contexts in which we discuss "facts" - e.g. history. For instance, it is a fact that John F. Kennedy was assassinated. Facts of this kind fall within the context of "science," broadly construed as our best effort to form a rational account of empirical reality. Granted, one doesn't generally think of events like assassinations as "scientific" facts, but the murder of President Kennedy is as fully corroborated a fact as can be found anywhere, and it would betray a profoundly unscientific frame of mind to deny that it occurred. I think "science," therefore, should be considered a specialized branch of a larger effort to form true beliefs about events in our world.
So, there is science as "broadly construed", which is our best effort to form a rational account of empirical reality, and there is science as "a specialized branch" of that effort. That seems fair enough to me.
We just need to be clear what we're talking about in a given context. E.g., if I ask a friend what she's studying at university, and she says "science", I'll assume with some confidence that she's referring to stuff she's doing in the science faculty - she's probably not, for example, majoring in French literature or in constitutional law. She's doing stuff that falls in the "specialized branch" that Harris refers to (and there's other perfectly good stuff she could be doing which doesn't fall there). In most contexts when we talk about science, this is what we have in mind, and there are historical, pedagogical, etc., reasons for that.
But science in the sense of the specialized branch isn't radically discontinuous from everything else. Though we can point to it as something specialized, we also have to acknowledge that there's no "hard distinction" between it and the rest of rational inquiry. Or as I'd put it, science (the specialised branch of inquiry) is continuous with other branches of rational inquiry.
I think that Harris is pretty much correct on these points. Doubtless y'all can think of some additional subtleties, but Harris seems to get things about right, at least in this quote.
Friday, November 12, 2010
Opinion piece (forthcoming) at The Drum
Just letting y'all know that I have an opinion piece coming up soon at the ABC site The Drum, in support of The Australian Book of Atheism. I'll let you know more when I know a bit more.
Thursday, November 11, 2010
These things I know - or do I?
Here are some claims that I am inclined to think true, and that I'm inclined, even after reflection, to think that I have justification or warrant for. In one or two cases, though, there may be some doubt about what is actually meant, let alone whether the claim is really true. Let's have a look:
"Right now, as I type this, the sun is shining outside my window."
"Macbeth is the main character in Macbeth."
"In Macbeth, Macbeth murders Duncan."
"Macbeth is a tragic hero."
"Macbeth loves Lady Macbeth, at least at the start of Macbeth."
"As I type, the US and Australian dollars are roughly at parity."
"Human reproductive cloning is illegal in Australia."
"I had spaghetti for lunch yesterday."
"Cheetahs are beautiful animals."
"MedellÃn was once racked by political instability and guerilla warfare."
"Maxwell Perkins was a great editor with a superb (if not unerring) ear for literature."
I've stolen the last couple of these, with minor modifications for concision and to make sure I think they are true, from Jerry Coyne's blog.
The first point I want to make about all the above is that, if I actually know these things to be true, it's not through science. My claim isn't necessarily that they are science defeaters, as if scientists are helpless to find out these sorts of things. I simply say that it wasn't through any distinctively scientific process that I found out any of the above. Nor did I find out these things by asking someone who did use a distinctively scientific process (or who in turn ... etc.). These are all things of a kind that could have been found out long before the methods that are distinctive of science coalesced into the beginnings of the institution or practice that we now know as science. Of course, we couldn't have known that human reproductive cloning was illegal, back in those days, because without science such a thing would not even make sense as something to ban. But we could certainly have consulted statute books, lawyers, and so on about various matters of law.
Nor could I have found out that it is illegal by looking it up on the internet ... not without the technoscience that makes the internet possible. But if I just go and look on a site like AustLII I am not doing anything distinctively scientific. I'm just doing the equivalent of reading a statute book, which could have been done in medieval times.
Whether there are some things that are, in principle, always going to be science defeaters is another thing. I don't make that claim and I'm not sure how exactly it could be settled. But I do, for example, claim that it would be impractical and unnecessary to try to use any distinctively scientific activities to find out whether or not Macbeth really is the main character of Macbeth. Just read the text or go and watch a production, and you'll come to that conclusion. Or ask someone who is trustworthy on such things.
But be warned, once you start drawing conclusions about Macbeth that go beyond the least sophisticated ("Macbeth is the main character of Macbeth"), you may need quite a bit of education in the English of the time, in the historical context, in the literary forms that had existed up until then, and so on. It's possible to pick that up in various ways, but there's a lot to be said for actual guided and formalised study with teachers who know what they're talking about.
Some of the other claims I've made are trickier: e.g. is it really true that cheetahs are beautiful animals?
Well, they sure look it to me. But there's still a nagging question about whether cheetahs are really beautiful and what the claim even means. Does it just mean that they have (perhaps unspecified) characteristics that strike "us" (whoever "we" are in this context) in a certain "aesthetic" way - or what? Some people may believe that cheetahs possess a rather spooky property of strongly objective beauty such that any rational being in the universe somehow makes an error about reality if it fails to see a cheetah as beautiful. I doubt, though, that many of us really think anything like that, even if people in some culturally-closed societies (and perhaps not just them) have thought that way.
Note that science may be able to do a great deal to clarify how many people really see cheetahs as beautiful, what is going on at the level of the functioning of the brain, what it is about cheetahs that produces these responses, etc. I'm not at all claiming that science has nothing to say about the beauty of cheetahs, or, indeed, about any of the issues raised by the statements I listed above. Still, these are all things that are known to me, if they really do constitute knowledge, with nothing distinctively scientific being involved in the way I found out.
I keep talking about "distinctively scientific", because I am well aware that scientists are able to look outside and see whether the sun is shining or whether it's a cloudy day. Or they can use indirect methods of various kinds to infer, "The sun is currently shining in Newcastle, Australia." However, I don't rely on anything especially scientific; I just look out the window.
Obviously I am talking about "science" in a sense that is narrower than "rational inquiry" (I'm prepared to assume that looking out the window is, or can be, an example of rational inquiry). As usual, I can't give you a precise definition. I don't think that phenomena such as science lend themselves to sharp definitions; they are inherently fuzzy, as are many of our concepts. However, it is possible to have a fairly rich conception of science that definitely covers some things and not others. For example, if I simply read Macbeth I am definitely not doing science in the sense under discussion. If I'm trying to work out stuff about the play as I read it (Who is the main character? Could it be Macbeth?), I am engaging in a form of rational inquiry, but it's a humanistic form of rational inquiry, not a distinctively scientific form. On the other hand, it seems to me that these can merge into each other. For example, humanistic scholars are quite capable of using more distinctively scientific approaches on occasion - computerised word frequency analyses provide one example where a scientific instrument is being used by textual scholars to assist with humanistic research. And of course, scientists can read texts, watch plays, learn languages and so on. No one has a monopoly on any of this.
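To make that concrete, here's a purely illustrative sketch - assuming nothing more than a plain-text copy of the play saved locally under the hypothetical filename macbeth.txt - of the sort of word-frequency count I mean, in a few lines of Python:

    # Minimal word-frequency sketch; assumes a local plain-text file called "macbeth.txt".
    import re
    from collections import Counter

    with open("macbeth.txt", encoding="utf-8") as f:
        text = f.read().lower()

    # Crude tokenisation: treat runs of letters and apostrophes as words.
    words = re.findall(r"[a-z']+", text)
    counts = Counter(words)

    # Show the twenty most frequent words and their counts.
    for word, n in counts.most_common(20):
        print(word, n)

Nothing in that little exercise interprets the play, of course; it merely generates data that a textual scholar might then put to humanistic use - attribution questions, observations about diction, and so on.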
I object to the expression "other ways of knowing" because people who use it tend to countenance methods, such as divine revelation and mystical insight, that seem to me to be pretty damn dubious as ways of finding out stuff. But there are obviously many things I can do to find stuff out - e.g. I can just go and look, in some cases, or read a novel in others, or rely on my memory. Scientists rely on a mix of these things as does everyone else. But not everyone else puts so much emphasis on studying phenomena that are very small, very ancient, or very distant, or using theoretical propositions about these phenomena as causal explanations, and not everyone else relies so heavily on such things as mathematical models, controlled experiments and hypothetico-deductive reasoning, and scientific instruments.
I view science as a distinctive phenomenon that arose in a recognisable form around the start of the seventeenth century, though of course it had precursors. Seventeenth-century thinkers realised that they were confronted by something new and powerful, though they did not invent the word "scientist" (that came much later).
The thing about science is that it's continuous with rational inquiry more generally. We need a word for the phenomenon that I've sketched in the last couple of paras, and "science" is the word we've inherited. So I prefer not to use the word to mean simply "rational inquiry", and I think there's at least some fuzziness about, for instance, whether what we call political science is really best classified as science (in practice, for pedagogical purposes, it usually isn't). The main thing, however, is to make sure that we're consistent in any given context as to how we conceive of science and use the word "science". As so often, we'll create confusion if we use the word in two or more different ways in the same discussion, or within the same argument.
Now, I don't accuse Jerry Coyne of doing that over here. He makes clear that he is using a broad conception of science while I am using a narrow one. It follows that we are not necessarily disagreeing with each other on anything.
But if we're not all careful to define our terms - or at least give some indication of how broad our concepts are - we run quickly into the possibility of confusion. Let's all tread carefully here. Oh, and it might be a good idea to apply the principle of charity and assume that our interlocutors are not saying weird or extreme things.
"Right now, as I type this, the sun is shining outside my window."
"Macbeth is the main character in Macbeth."
"In Macbeth, Macbeth murders Duncan."
"Macbeth is a tragic hero."
"Macbeth loves Lady Macbeth, at least at the start of Macbeth."
"As I type, the US and Australian dollars are roughly at parity."
"Human reproductive cloning is illegal in Australia."
"I had spaghetti for lunch yesterday."
"Cheetahs are beautiful animals."
"MedellÃn was once racked by political instability and guerilla warfare."
"Maxwell Perkins was a great editor with a superb (if not unerring) ear for literature."
I've stolen the last couple of these, with minor modifications for concision and to make sure I think they are true, from Jerry Coyne's blog.
The first point I want to make about all the above is that, if I actually know these things to be true, it's not through science. My claim isn't necessarily that they are science defeaters, as if scientists are helpless to find out these sorts of things. I simply say that it wasn't through any distinctively scientific process that I found out any of the above. Nor did I find out these things by asking someone who did use a distinctively scientific process (or who in turn ... etc.). These are all things of a kind that could have been found out long before the methods that are distinctive of science coalesced into the beginnings of the institution or practice that we now know as science. Of course, we couldn't have known that human reproductive cloning was illegal, back in those days, because without science such a thing would not even make sense as something to ban. But we could certainly have consulted statute books, lawyers, and so on about various matters of law.
Nor could I have found out that it is illegal by looking it up on the internet ... not without the technoscience that makes the internet possible. But if I just go and look on a site like AustLII I am not doing anything distinctively scientific. I'm just doing the equivalent of reading a statute book, which could have been done in medieval times.
Whether there are some things that are, in principle, always going to be science defeaters is another thing. I don't make that claim and I'm not sure how exactly it could be settled. But I do, for example, claim that it would be impractical and unnecessary to try to use any distinctively scientific activities to find out whether or not Macbeth really is the main character of Macbeth. Just read the text or go and watch a production, and you'll come to that conclusion. Or ask someone who is trustworthy on such things.
But be warned, once you start drawing conclusions about Macbeth that go beyond the least sophisticated ("Macbeth is the main character of Macbeth"), you may need quite a bit of education in the English of the time, in the historical context, in the literary forms that had existed up until then, and so on. It's possible to pick that up in various ways, but there's a lot to be said for actual guided and formalised study with teachers who know what they're talking about.
Some of the other claims I've made are trickier: e.g. is it really true that cheetahs are beautiful animals?
Well, they sure look it to me. But there's still a nagging question about whether cheetahs are really beautiful and what the claim even means. Does it just mean that they have (perhaps unspecified) characteristics that strike "us" (whoever "we" are in this context) in a certain "aesthetic" way - or what? Some people may believe that cheetahs possess a rather spooky property of strongly objective beauty such that any rational being in the universe somehow makes an error about reality if it fails to see a cheetah as beautiful. I doubt, though, that many of us really think anything like that, even if people in some culturally-closed societies (and perhaps not just them) have thought that way.
Note that science may be able to do a great deal to clarify how many people really see cheetahs as beautiful, what is going on at the level of the functioning of the brain, what it is about cheetahs that produces these responses, etc. I'm not at all claiming that science has nothing to say about the beauty of cheetahs, or, indeed, about any of the issues raised by the statements I listed above. Still, these are all things that are known to me, if they really do constitute knowledge, with nothing distinctively scientific being involved in the way I found out.
I keep talking about "distinctively scientific", because I am well aware that scientists are able to look outside and see whether the sun is shining or whether it's a cloudy day. Or they can use indirect methods of various kinds to infer, "The sun is currently shining in Newcastle, Australia." However, I don't rely on anything especially scientific; I just look out the window.
Obviously I am talking about "science" in a sense that is narrower than "rational inquiry" (I'm prepared to assume that looking out the window is, or can be, an example of rational inquiry). As usual, I can't give you a precise definition. I don't think that phenomena such as science lend themselves to sharp definitions; they are inherently fuzzy, as are many of our concepts. However, it is possible to have a fairly rich conception of science that definitely covers some things and not others. For example, if I simply read Macbeth I am definitely not doing science in the sense under discussion. If I'm trying to work out stuff about the play as I read it (Who is the main character? Could it be Macbeth?), I am engaging in a form of rational inquiry, but it's a humanistic form of rational inquiry, not a distinctively scientific form. On the other hand, it seems to me that these can merge into each other. For example, humanistic scholars are quite capable of using more distinctively scientific approaches on occasion - computerised word frequency analyses provide one example where a scientific instrument is being used by textual scholars to assist with humanistic research. And of course, scientists can read texts, watch plays, learn languages and so on. No one has a monopoly on any of this.
I object to the expression "other ways of knowing" because people who use it tend to countenance methods, such as divine revelation and mystical insight, that seem to me to be pretty damn dubious as ways of finding out stuff. But there are obviously many things I can do to find stuff out - e.g. I can just go and look, in some cases, or read a novel in others, or rely on my memory. Scientists rely on a mix of these things as does everyone else. But not everyone else puts so much emphasis on studying phenomena that are very small, very ancient, or very distant, or using theoretical propositions about these phenomena as causal explanations, and not everyone else relies so heavily on such things as mathematical models, controlled experiments and hypothetico-deductive reasoning, and scientific instruments.
I view science as a distinctive phenomenon that arose in a recognisable form around the start of the seventeenth century, though of course it had precursors. Seventeenth-century thinkers realised that they were confronted by something new and powerful, though they did not invent the word "scientist" (that came much later).
The thing about science is that it's continuous with rational inquiry more generally. We need a word for the phenomenon that I've sketched in the last couple of paras, and "science" is the word we've inherited. So I prefer not to use the word to mean simply "rational inquiry" and I think there's at least some fuzziness about, for instance, whether what we call political science is really best classified as science (usually, in fact, it isn't for pedagogical purposes). The main thing, however, is to make sure that we're consistent in any given context as to how we conceive of science and use the word "science". As so often, we'll create confusion if we use the word in two or more diffferent ways in the same discussion, or within the same argument.
Now, I don't accuse Jerry Coyne of doing that over here. He makes clear that he is using a broad conception of science while I am using a narrow one. It follows that we are not necessarily disagreeing with each other on anything.
But if we're not all careful to define our terms - or at least give some indication of how broad our concepts are - we run quickly into the possibility of confusion. Let's all tread carefully here. Oh, and it might be a good idea to apply the principle of charity and assume that our interlocutors are not saying weird or extreme things.
Wednesday, November 10, 2010
Scientism
Wikipedia has an article on scientism which contains a lot of information, but also shows that the word "scientism" is used in a wide range of ways, usually pejorative.
Is there a phenomenon that could be called "scientism" and which deserves to be denounced? I suppose there is. I suppose there are people who think the humanities are worthless, or that public policy should starve arts faculties of funds, or perhaps that we could somehow understand, let's say, Macbeth without developing any sensitivity to Shakespeare's language ... perhaps by applying the methods distinctive of science (though how you would use controlled experiments, for example, to interpret Macbeth is far from clear). The idea that science (defined narrowly in contradistinction to humanistic forms of inquiry) could answer every question would, in my view at least, be untenable. I don't see how science, narrowly defined, can tell you how sympathetic you should be to Macbeth when he learns of his wife's death and replies, "She should have died hereafter." The distinctive techniques of narrowly-defined science are not going to tell a literary scholar, an actor, or a director how that line should be spoken.
If "scientism" refers to some of the more extreme or loony viewpoints mentioned in the previous paragraph, then I think it's a bad thing. I don't think we should be closing faculties of arts or humanities, or that distinctively scientific techniques are much use in understanding or staging Macbeth, or that studying great literature is useless anyway, or anything remotely in this ballpark. But then again, I don't see many other people expressing those sentiments either. Once again, someone may think these things, but if I see Richard Dawkins, for example, accused of thinking these things I'm not going to be too impressed.
The thing is, if you're going to denounce someone for "scientism" or complain that her ideas lead to "scientism", or are somehow reliant on "scientism" - and if this is meant to be a serious criticism - you must be using the word "scientism" in a sense that denotes something horrible or foolish or otherwise worthy of denunciation. It's no use denouncing someone for "scientism" and then, when called on it, explaining that you were using the word in some other, more technical, non-pejorative sense (perhaps that the person takes a logical empiricist approach to philosophy). That's equivocation. It's cheating to apply the word in some non-pejorative sense that you secretly have in mind while at the very same time trading on the pejorative connotations of its other senses.
A word like "scientism" lends itself too readily to this kind of argumentative cheating. So much so that I think that intellectually honest people should stop using the word; and, frankly, when I see people using it in current debates I am automatically suspicious of their intellectual honesty. I'm pretty sure I'm not the only one.
Tuesday, November 09, 2010
Voting out judges
Nothing much to say here, but this piece by Derek C. Araujo is worth a look if you don't usually frequent the Center for Inquiry's site. It's about Supreme Court judges getting voted out in Iowa in the recent American elections.
Sample:
The notion that "the will of the people" rules supreme in the United States is seductive, but wrong in an important sense. Under our system of government, political majorities cannot use the ballot box to trample the rights of minorities. And the only people standing between minority rights and a tyranny of the majority are the men and women in black robes so frequently maligned by conservative forces.
Conservative critics of the judiciary make a great show of respecting "the Constitution." In my judgment, their stance is unprincipled. Judicial decisions they agree with must be respected under the Constitution and the rule of law; when shown a judicial decision they don't like, however, they fulminate against "activist" judges and "robed masters" who are "unaccountable to the people." Never mind that unaccountability to political forces is an essential component of the system of government enshrined in the Constitution.
The idea that (at least some) judges are elected in 39 of the American states is incredible. Araujo again:
This practice is virtually unknown in the rest of the world, and with good reason. As the Founders of the U.S. well knew, if judges are to render objectively fair decisions based on law rather than public opinion, the judiciary must be independent. If judges are to prevent the majority from trampling the rights of minorities and unpopular parties, they must be shielded from electoral politics, special interests, the need to raise ever-increasing amounts of money to fund elections, the force of public opinion, or fear of rebuke by powerful political forces. The Founders therefore guaranteed that federal court judges would not be elected, but would be appointed for life, subject to removal only on impeachment and conviction of "high crimes and misdemeanors."