About Me

Australian philosopher, literary critic, legal scholar, and professional writer. Based in Newcastle, NSW. My latest books are THE TYRANNY OF OPINION: CONFORMITY AND THE FUTURE OF LIBERALISM (2019); AT THE DAWN OF A GREAT TRANSITION: THE QUESTION OF RADICAL ENHANCEMENT (2021); and HOW WE BECAME POST-LIBERAL: THE RISE AND FALL OF TOLERATION (2024).

Friday, December 31, 2021

My essay "The Making of a Cancel Culture" in TPM

My essay "The Making of a Cancel Culture" has appeared online in The Philosophers' Magazine. Check it out.

Sample:

In this essay, largely aimed at academic philosophers, I focus on university campuses. However, the present-day culture and praxis of cancellation extend much further.

In many cases, we’re entitled (relative to widespread norms of free and candid speech) to express ideas that are not especially scholarly, or not scholarly at all, but have a place in the rough-and-tumble of everyday debate. Some kinds of vilification of individuals or groups, or violations of personal privacy, might lie beyond the pale of democratic toleration, but wherever, exactly, the boundaries lie, this should still leave a vast zone of expressive freedom. When the stakes seem high enough, however, it’s tempting to contract the zone of what feels tolerable, and to excuse cruel behaviour to people who seem like our enemies.

Tuesday, December 14, 2021

Submission to Human Rights Committee now appearing

Submissions to the Parliamentary Joint Committee on Human Rights, relating to the Australian federal government's legislative package on religious discrimination, are now appearing on the relevant parliamentary site. So far, this includes my own submission as well as others by several academics and organizations.

Saturday, December 11, 2021

My submission to the latest inquiry on the religious discrimination Bills

 TO: Committee Secretary

Parliamentary Joint Committee on Human Rights
Department of the Senate
PO Box 6100
Parliament House
CANBERRA ACT 2600
AUSTRALIA

FROM: Dr Russell Blackford

30 Birchgrove Drive
Wallsend, NSW 2287

E-mail: russell.blackford@newcastle.edu.au

Phone: [redacted]

Inquiry regarding Religious Discrimination Bill 2021 and related bills

Introduction

1. I refer to the current inquiry relating to the government’s religious discrimination legislative package, including the Religious Discrimination Bill 2021 (“the Bill”), and thank you for the opportunity to make this submission.

2. I am an academic philosopher with a specialization in legal and political philosophy, including issues relating to liberal theory, secular government, and traditional civil and political liberties such as freedom of religion and freedom of speech. I have published widely on these topics. In particular, my published books include Freedom of Religion and the Secular State (Wiley-Blackwell, 2012) and The Tyranny of Opinion: Conformity and the Future of Liberalism (Bloomsbury Academic, 2019). My formal qualifications include an LLB with First Class Honours from the University of Melbourne and a PhD in philosophy from Monash University, where my doctoral dissertation applied ideas from liberal theory and philosophy of law to certain topical issues in bioethics.

3. I also have extensive practical experience as an industrial advocate working in the federal jurisdiction and as a workplace relations solicitor with a major commercial firm in Melbourne. I have considerable expertise in workplace relations and employment law, and in anti-discrimination law.

4. I currently hold an appointment as Conjoint Senior Lecturer in Philosophy at the University of Newcastle, though I do not, of course, purport to represent the views of the university.

Scope of submission

5. The draft Bills are complex and much of their content deals with issues arising from tensions between different strands of public policy. As a result, there is much room for argument about the values and priorities that have shaped the current legislative package. It is noteworthy that the Bills do not generally deal with the topic of freedom of religion, which is a freedom from persecution or imposition of religion by state power. They do not, for example, seek to strengthen and extend the protection given by s. 116 of the Australian Constitution. Instead, they are a contribution to anti-discrimination law.

6. In this brief submission, I will confine myself to just two key areas of concern: first, the definition in the Bill of “religious belief or activity”; second, the nature of a “statement of belief” and the importance of allowing vigorous public discussion and debate about religion.

Religious belief or activity

7. Fundamental to the legislative package is protection against discrimination in employment, and in various other domains of public life (education, accommodation, provision of goods and services to the public, etc.), based on religious belief or activity as defined. The definition of “religious belief or activity” is as follows: 

(a) holding a religious belief; or

(b) engaging in religious activity; or

(c) not holding a religious belief; or

(d) not engaging in, or refusing to engage in, religious activity.

 8. The first problem with this definition is that it does not clearly include the communication (or expression) of religious beliefs. An employer might, for example, claim that it has not unlawfully discriminated against an employee because of the mere fact that she is known or understood to hold a certain belief, or because of her participation in clearly religious activities such as ritual and worship. The employer might argue that it has lawfully discriminated against the employee because of her communication of her belief, or because of some aspect of her communication of it, such as its time, place, tone, or manner. In response, a court might hold that the communication of religious beliefs falls within “religious activity” or that it is implicit within “religious belief”. However, that is not clear and it cannot be assumed.

9. For reasons that are unclear to me, the Bill currently protects communication of religious beliefs in relation to the rules of qualifying bodies, but not in relation to areas such as employment. Compare s. 15 with, for example, s. 19. At best, this is confusing.

10. The legal effect of this difference is open to more than one interpretation. On one construction, however, it suggests that communicating religious beliefs is not included within the definition of religious belief or activity, but is a separate topic. If so, s. 15 provides that the rules of a qualifying body cannot generally forbid communication of religious beliefs, but it seems that an employer’s code of conduct probably can prevent communication of religious beliefs, even outside the workplace (or to use the language of the Bill, outside of practising the employee’s profession, trade, or occupation). In that case, this anomaly should be corrected.

11. Even if it were clear that communicating religious beliefs falls within “religious belief or activity”, consider the situation of a person who does not hold any religious belief or engage in any religious activity, but who does hold philosophical beliefs that are critical of religion and/or provide a non-religious alternative worldview, such as some form of secular humanism or philosophical naturalism. This person might communicate her beliefs about religion in public discussion and might engage in other activities that are aimed at undermining the credibility of religious doctrines, or at opposing the social and political influence of religious organizations. For example, she might be affiliated with a secular humanist organization, or the like, and take part in its activities.

12. This person should receive the same protection for her relevant beliefs, communications, and lawful activities as an adherent to a religion receives for her religious beliefs and communications and her lawful religious activities. Any other approach would be intolerably discriminatory. However, despite what is stated in paragraph 41 of the explanatory notes to the Bill in the Explanatory Memorandum, the current definition does not appear to have that effect. As worded, it protects only passively not holding a religious belief and passively not engaging in (or refusing to engage in) religious activity.

13. Accordingly, the definition of religious belief or activity needs to be modified so that it clearly includes communicating religious beliefs, and so that it includes holding and/or communicating beliefs that are actively critical of religion or are philosophical alternatives to religious beliefs. Furthermore, the definition needs to be modified to include not just non-participation in religious activity but also positive engagement in activity related to worldviews that are critical of religion and/or stand as alternatives to religious beliefs.

14. All of the problems identified under the current heading can be solved by adding the following to the current definition of religious belief or activity (perhaps with consequential amendments elsewhere in the Bill):

[(d) …]; or

(e) communicating a statement of belief; or

(f) engaging in any activity reasonably connected with a lack of religious belief, or of a particular religious belief, or reasonably connected with a critical attitude to religious belief generally or to a particular religious belief.

Statements of belief and public discussion of religion

15. If enacted, the legislative package will have the effect that a statement of belief is deemed not to be, solely in itself, discrimination under any of a list of federal and state anti-discrimination statutes. As far as it goes, this is welcome. It provides a valuable protection for one kind of speech, namely (subject to certain conditions) speech that expresses or communicates a religious belief, and speech that communicates a belief that the individual concerned genuinely considers related to his or her not holding a religious belief.

16. I expect that the courts would interpret the definition of a statement of belief broadly to include speech that communicates a critical attitude to religious belief or to a particular religious belief. Here, paragraphs 171 and 172 of the relevant section of the Explanatory Memorandum appear to be correct. Although this issue should be kept under review as case law develops, the proposed definition is probably broad enough to be workable and acceptable.

17. However, Note 1 inserted after sub-s. 12(2) is a matter of concern. This note also appears after sub-s. 15(3) (and see also paragraph 192 of the relevant section of the Explanatory Memorandum). It states: “A moderately expressed religious view that does not incite hatred or violence would not constitute vilification.” As far as it goes, this statement is correct. However, it is seriously misleading.

18. First, even an anti-religious view, or a view severely critical of religion or a particular religion, would not constitute vilification unless it incited hatred or violence. Though not defined in the Bill, hatred is an extreme emotion involving animosity, detestation, and calumny. Second, and more importantly, even statements of belief that are discourteous, disrespectful, satirical, mocking, or uncivil, or otherwise immoderate in their expression, would not constitute vilification unless they rose to the level of inciting either the extreme emotion of hatred or outright violence. While that much is clear as a matter of statutory interpretation, it is important not to create confusion with a note that conveys a contrary and misleading impression.

19. Thus, the note should be reworded to reflect the intention and meaning of the Bill. The note would be accurate – and more reassuring – if it stated as follows: "Robustly expressed statements of belief that do not incite hatred or violence do not constitute vilification. This guarantees a broad zone for vigorous public discussion of religion."

20. In that regard, compare the broad zone for academic discussion and debate recently identified by a unanimous High Court in Ridd v. James Cook University (13 October 2021). Here, the judges explained that ideas of academic or intellectual freedom provide a broad zone for vigorous discussion that rightly includes much that inevitably cannot be expressed with courtesy and respect.

21. Outside the relatively genteel environment of the academy, this idea applies even more strongly to certain kinds of discussion and debate conducted in the public sphere. These include political, cultural, moral, and, most importantly for current purposes, religious discussion and debate.

22. To expect that public discussion and debate about religion should, or could, typically proceed in a “moderately expressed” way is to fail to take the issues of disagreement seriously. For example, adherents of some religions sincerely regard other religions as not merely false but actually demonic. Some religions sincerely view themselves as engaged in a cosmic struggle of good versus evil against other religions and/or against unbelief. Some religions sincerely regard a wide range of conduct as sinful, and hence conducive to spiritual damnation or an equivalent, even though the conduct might be essentially harmless in its visible effects, and thus not a good candidate for legal prohibition or for ordinary kinds of social condemnation. Religious leaders and adherents often feel called upon by God to speak prophetically, using forceful rhetoric to call their society back to its traditional moral ideas and forms of worship. Conversely, many people with non-religious or anti-religious philosophies view religious beliefs as ill-founded, false, socially harmful, and damaging to the welfare of individuals in the everyday, empirical world. Such people might well be motivated to engage in satire, ridicule, and denunciation in the tradition of Voltaire.

23. It follows that, even more than with academic discussion and debate, there is a limit to how far public discussion and debate about religion can be universally, or typically, moderate in its expression. There is, for example, a limit to how courteously, respectfully, and otherwise moderately religious leaders can express the view that certain conduct is wicked, sinful, and abhorrent to God. There is a limit to how moderately rival views can be identified and opposed as heresy, or as the products of malevolent spiritual intelligences active in the universe. Likewise, there is a limit to how moderately one could affirm that some or all religious beliefs are illusory and harmful. Public disputation over these and similar issues is inevitably passionate, robust, and marked by a sense of great urgency.

24. While some viewpoints might lead to ugly and hostile speech appearing within the sphere of public discussion and debate, the general policy that has developed in recent centuries, as part of the emergence of Western liberal democracies, has been to tolerate rival viewpoints and their vigorous assertion. Since the seventeenth century, supporters of secular government and freedom of religion have hoped that the harshest attitudes would soften in an environment where, at least, no one need fear persecution with “fire and sword” for holding and communicating their religious or philosophical views. By and large, that approach has been successful, and there has been a discernible softening of attitudes over the past, say, 350 years, and even within current lifetimes. It remains prudent to allow vigorous discussion and debate to continue in the public sphere, with minimal interference from the government or from others with lawful authority such as employers. Participants in public discussion and debate about religion should not have to fear legal sanctions, or adverse social outcomes such as termination of their employment, for insufficiently “moderately expressed” views.

25. This is not to suggest that statements of religious or philosophical belief should lie entirely beyond the law, allowing a total free-for-all in this area. Although it is difficult to identify with exactitude, there is an outer boundary to toleration of vigorous discussion and debate about religion or anything else.

26. Within the present Bill, the boundary is set by reference to statements of belief that are malicious, threatening, intimidating, harassing, or vilifying, or which incite serious crime. It is worth emphasizing that nothing in the Bill protects anyone from a civil suit for defamation, should she communicate a statement of belief that includes defamatory content. Nothing protects an employee who has confronted a workmate, colleague, customer, client, patient, etc., with a statement of belief that is, in context, malicious, threatening, intimidating, or harassing. Again, nothing in the Bill protects an individual who has committed one of the crimes in s. 80 of the Commonwealth Criminal Code – see especially ss. 80.2A and 80.2B, where the essence of the relevant offences is intentional urging of force or violence against groups or their members. Such boundaries provide more than adequate limits to toleration of vigorous discussion and debate about religion.

Conclusion

27. In summary, I have offered and defended two specific recommendations for amendment of the Religious Discrimination Bill 2021:

• First, amend the definition of religious belief or activity as I have set out in paragraph 14 above. This may require some consequential amendments.

• Second, as per paragraph 19 above, delete the first note to sub-ss. 12(2) and 15(3) of the Bill, and replace it with more accurate wording as follows: "Robustly expressed statements of belief that do not incite hatred or violence do not constitute vilification. This guarantees a broad zone for vigorous public discussion of religion."

 Yours sincerely,

 Russell Blackford

9 December 2021

Tuesday, November 02, 2021

What is philosophy? A useful post and thread at Leiter Reports

This post and thread over on Brian Leiter's blog provides a good resource for folks interested in metaphilosophical discussion. I was pleased to see one person giving a shout-out to my 2017 book with Damien Broderick, Philosophy's Future: The Problem of Philosophical Progress.

Thursday, September 09, 2021

Paul Gilster on the science fiction of Edgar Allan Poe

Paul Gilster, a technology writer and specifically an expert on the prospects of interstellar travel and exploration, has published a superb essay on the science fiction of Edgar Allan Poe over on his Centauri Dreams site. It doesn't hurt that he's cited my "Science Fiction as a Lens into the Future" piece, which I have recently republished here. Check out both of them if this topic interests you.

I was not aware of Centauri Dreams until Gilster kindly contacted me today to let me know he'd cited my work. However, it looks like a wonderful resource, and I'll be browsing it further. If you have any interest in space travel you would probably benefit from doing likewise.

It was slightly galling to find that on the site's "about" page Gilster uses the trope of building star-faring craft as an intergenerational task like building a medieval cathedral. It's an excellent image, so the galling part of it is that he beat me to it! In my latest book, At the Dawn of a Great Transition: The Question of Radical Enhancement, I used the same image for the construction of a post-human, star-faring civilization. In future writings where I make that point, I'll be sure to cite Gilster's prior use of a similar idea!

In all, though, the point is just: do check out Centauri Dreams and particularly the newly published essay on Poe.

Tuesday, September 07, 2021

That's controversial!

My essay, "Oh No, That's Controversial!", has been published online by The Philosophers' Magazine. I discuss the recent launch of the Journal of Controversial Ideas, which is an initiative that I support for reasons that I explain in some detail. This does not mean that its performance has been perfect so far, but it has good intentions and has made a pretty good start. In my view, this journal does have a role to play - perhaps an important one.

Near the end of my essay, I observe: "I’m sympathetic to this project; for many reasons, I wish it success. One reason is that it seems to stand for an important point that’s seldom well understood. We can distinguish between abusive conduct toward individuals, on one hand, and, on the other hand, scholarly discussion of ideas that some people might find upsetting. It is one thing to mock, or taunt, or deeply denigrate individuals for their personal characteristics or aspects of their self-presentation, or, indeed, for their ideas. It’s an entirely different thing to communicate opinions on topics of general importance, even if they challenge others’ self-conceptions or dissent radically from a local consensus. Within very broad limits, advancing unpopular or dissenting opinions should not be seen as inviting abuse, censorship, or harmful consequences such as derailed careers and tainted reputations."

Monday, September 06, 2021

Submissions to the Australian government's current consultation on freedom of expression

The Senate Standing Committee on Legal and Constitutional Affairs has now published the submissions it has received on a proposed constitutional amendment to protect freedom of expression in Australia. These submissions include my own, which I earlier published on this blog.

This exercise involves consultation with interested parties over a specific proposal originally introduced as a Bill in 2019. The views of the responding individuals and organizations vary considerably. However, I was pleased to see some other parties making points similar to mine about: first, the need to protect everyone's use of communication technologies to receive and impart ideas and opinions (or some similar formulation), and not just to protect large news and media corporations and their employees; and second, the need for governments to meet a stronger test than the Bill provides if they are to justify encroachments on freedom of expression. The formula "reasonable and justifiable" is too weak, and could allow too many laws that restrict the freedom to pass constitutional muster.

On the second point, there seems to be almost a consensus that some tougher test is required to justify government encroachments on freedom of expression, though parties making this point have offered a variety of formulas to toughen up the requirement. The most popular view seems to be that the word "necessary" - or even the words "demonstrably necessary" - should be included in some way. My own proposal was that for most laws, outside of protecting individual reputation and privacy, the test should be the very strong one of "demonstrably necessary for the viability of an open, free and democratic society."

The words "for the viability of" are important here, as they make clear that I don't merely mean necessary to achieve some governmental purpose, but necessary for the viability of the society itself. Some laws would doubtless meet that test, such as laws against leaking military plans in advance, or against disseminating means of easily producing weapons of mass destruction. A law against genocidal hate propaganda would, I think, also survive, as campaigns using such propaganda can notoriously rip a society apart. But many existing laws would be in deep trouble under my version of the proposed new section of the Australian Constitution, and in my opinion they should be.

Friday, September 03, 2021

A terrific article in The Atlantic, and some observations on "cancel culture"

This piece, by Anne Applebaum, in The Atlantic, is important and excellent. It concerns what is now called "cancel culture" - not a term that I find particularly apt, but it's the one that seems to have stuck.

On a daily basis, I see many people - often people whom I otherwise like and respect - denying that this phenomenon exists, even though it is all around them. There are constant pressures to conform in act and (especially) speech. 

This is not entirely new. Alexis de Tocqueville famously observed nearly two hundred years ago that there was no freedom of opinion in, specifically, the United States. This was not because of government censorship, but because of a culture where anyone who dissented from the popular view would be subjected to so much opprobrium that it would not be worth their while to say what they really thought. This is now very much the environment within what could be called the academic and cultural Left. The slightest dissent from the current package of popular ideas within that environment will lead to a vicious backlash that will upend your life.

I've experienced this myself, if only in a relatively mild way - mainly in the years from 2011 to 2013, when I expressed some views that dissented from those of my online "liberal" (in the strange American sense) peer group. I have not had my actual, real-life friends go after me, but I can report that what I experienced in 2011 in particular was emotionally devastating at the time. Many people have experienced far worse, and have been subjected to punishments massively disproportionate to anything wrong that they might have done or were accused of doing.

Individuals who minimize the impact of cancel culture have, I suspect, never been on the receiving end. Frankly, though, if they haven't experienced it, that makes me suspicious. As I've seen, and experienced myself, it's so easy to be on the receiving end of cancel culture, even for very mild dissent from locally popular positions, that if you haven't ever been, I can only assume that you've never publicly expressed even a mildly heretical thought on social and political issues.

Tocqueville notwithstanding, it doesn't have to be like this. Perhaps I was fortunate in coming of age in the 1970s, a period when we actually made fun of the idea that we might be "politically correct" or "ideologically sound" in a Maoist or Stalinist way (note that leading Marxist thinkers such as Mao always emphasized the need for a so-called correct political line - this wasn't something just made up by 1990s right-wing culture warriors).

There was a general assumption back in the 1970s that a variety of opinions was a good thing and that we could and should enjoy political and philosophical arguments going late into the night - with no hard feelings attached. Perhaps we were naive and innocent in that way, but I do think something was lost when this freewheeling, tolerant culture was largely erased in the 1980s. It frightens me that most people under the age of, say, 55 have never experienced it.

Over the past 10 or 15 years, the use of social media to enforce conformity has made the environment much worse. It is now far easier for a mob to engage in a concerted campaign to go after an individual, and thus enforce conformity. We are back with a vengeance to what Tocqueville wrote about all those years ago. The environment is now scary, and even I am somewhat afraid to say - beyond my most intimate circle of friends - what I really think on important social and political issues, even though I think I have a contribution to make and despite the fact that I am fairly well buffered (financially and otherwise).

The victims, of course, are not the real enemies of the Left, such as fascists and right-wing culture warriors, who have their own supporters and are largely immune to cancellation. The victims tend to be humane people whose politics are at least somewhat liberal or left wing, but who are independent thinkers and don't go along with every view that happens to be fashionable within their milieu at the particular moment. Those people have valuable things to say and appropriately complex arguments to make. But it is dangerous for them to raise their head above the parapet. When they do so - and they are conspicuously shamed and punished - others with similar ideas learn that it's safest to conform.

This is an intolerable situation, and I wish we didn't have so many people minimizing it or denying that it exists.

The Economist on "The Threat from the Illiberal Left"

This article in The Economist is very good. It could almost be a summary of my 2019 book, The Tyranny of Opinion. You can read the whole thing with a free Economist account.

Thursday, August 26, 2021

My review of Tosi and Warmke's book, Grandstanding

In case you missed this review in The Philosophers' Magazine earlier this year, you can find it online here.

Monday, August 02, 2021

My submission to the current Senate consultation on freedom of expression

TO: Committee Secretary
Senate Legal and Constitutional Affairs Committee
PO Box 6100
Parliament House
Canberra ACT 2600

FROM: Dr Russell Blackford

30 Birchgrove Drive
Wallsend, NSW 2287

E-mail: russell.blackford@newcastle.edu.au

Consultation regarding Constitution Alteration (Freedom of Expression and Freedom of the Press) 2019

1. I refer to the above public consultation, and thank you for the opportunity to make this submission.

2. I am an academic philosopher with a special interest in legal and political philosophy, including issues relating to traditional civil and political liberties such as freedom of religion and freedom of speech. I have published widely on these topics. In particular, my published books include Freedom of Religion and the Secular State (Wiley-Blackwell, 2012) and The Tyranny of Opinion: Conformity and the Future of Liberalism (Bloomsbury Academic, 2019). My formal qualifications include an LLB with First Class Honours from the University of Melbourne and a PhD in philosophy from Monash University, where my doctoral dissertation applied ideas from liberal theory and philosophy of law to certain topical issues in bioethics.

 3. In short, I am an academic expert on issues to do with liberal theory and philosophy of law, including issues relating to freedom of speech. I am currently Conjoint Senior Lecturer in Philosophy at the University of Newcastle, though I do not, of course, purport to represent the views of the university.

 4. I have studied the proposed Bill to amend the Australian Constitution, and I can express my response to it quite briefly.

 5. I am generally supportive of the idea of giving greater constitutional protection to freedom of speech and expression.

 6. I doubt, however, that it is helpful to make specific reference to “the press”, since this is an ambiguous expression. Historically, “the press” referred to a technology, i.e. the printing press, and not to a social institution such as what is sometimes called the “institutional press” as it exists today, i.e. large news and media corporations. Thus, “freedom of the press” historically referred to the freedom not only of newspapers and professional journalists but also of lone – often scurrilous – pamphleteers. Today, this original meaning is often forgotten, and it is often assumed that “freedom of the press” means a special freedom (or set of privileges) for professional journalists and broadcasters employed by large news and media corporations. In fact, the freedom that should be protected is a freedom for everyone to use the media of mass communication to address their ideas and opinions to the public.

 7. It is worth noting that the large news and media corporations operating in Australia already possess enormous wealth, influence, and power, and that this can enable them to harm individuals whose reputations are smeared or whose privacy is invaded. The psychological and financial cost of legal action to protect individual reputation or privacy against these corporations is prohibitive for most individuals, and it should not be made even more difficult.

 8. In short, any constitutional amendment should clearly enhance the power of all citizens to use the media of mass communication to communicate ideas and opinions to the public, without further increasing the power of large news and media corporations relative to that of individual citizens.

9. At the same time, the proviso in the proposed Bill is too lax in how far it allows legislatures to interfere with the proposed freedom. The current wording, which uses the formula "reasonable and justifiable", could allow many legislative provisions that restrict individual freedom of expression more than is strictly necessary.

 10. Accordingly, I propose that the current substantive provision in the draft Bill be replaced with the following words, based in part on concepts in the European Convention on Human Rights and the Canadian Charter of Rights and Freedoms:

The Commonwealth, a State or a Territory must not limit freedom of expression, which includes the freedom to hold and express ideas and opinions, and in particular to receive and impart ideas and opinions by means of present and future communication technologies.

However, a law of the Commonwealth, a State or a Territory may limit the freedom of expression only if, and only to the extent that, the limitation is

(a) reasonable and justifiable to protect individual reputation or privacy; or

(b) for any other reason, demonstrably necessary for the viability of an open, free and democratic society. 

Yours sincerely,

 Russell Blackford

2 August 2021

Wednesday, July 14, 2021

How free is the will? Sam Harris misses his mark

by Russell Blackford

First published, in slightly abridged and different form, on the ABC Religion and Ethics Portal, 26-27 April 2012. Available at http://www.abc.net.au/religion/articles/2012/04/26/3489758.htm

The long conversation

For thousands of years, myth-makers, poets, philosophers, theologians, novelists and others have wrestled with a daunting question: whether, or to what extent, our lives are in our own hands (or minds).

To many people, future events seem to be laid down independently of any say that we might have in the matter. This characterized the worldview of Greek mythology. In Homer’s Iliad, the will of Zeus is depicted as binding on men and women, however much we mortals struggle and complain. Human attempts at rebellion are futile.

Oedipus is one mortal who struggles against his fate: he flees Corinth for Thebes, seeking to escape a terrible prophecy that he will kill his father and marry his mother. But, once again, his attempt is futile. The very act of journeying to Thebes brings him to a fatal (in every sense) confrontation with his true father, Laius, and to marriage to his true mother, Jocasta.

This theme in the long history of recorded thought finds popular expression even today, whenever we hear talk of how some important life event – when Jack meets Jill, perhaps – was “meant to be” and in the widespread, though perhaps only half-believed, idea that our days are numbered, with an appointed date of death for each of us (decided, perhaps, by God).

It finds expression in many ways in literary narrative and popular culture, as well, though most often the idea is resisted, as when Sarah Connor in Terminator 2: Judgment Day uses a knife to inscribe the defiant words, “NO FATE.”

Behind this lies the fear (or the assurance, depending on your disposition) that the future comes about in a way that is uncoupled from our deliberations, choices and actions. How might this be?

Well, an obvious way – or, at least, one that has seemed obvious to many people and cultures – is that our choices or their outcomes are controlled by overriding forces, such as the stars or the gods or a reified Fate or Destiny.

There are, however, other ways in which our deliberations, choices and actions could be, as it were, bypassed by events (to borrow from Eddy Nahmias). Thus, our deliberations would be futile on some portrayals of the relationship between mind and body.

If (as I tend to think) mental events just are, at another level of description, physical events in our brains, they can have causal efficacy much like any other physical events. Similarly, an interactionist form of mind-body dualism does not entail any scary sort of bypassing of our deliberations and choices.

However, the situation is rather different on other dualist theories of the mind. If our thoughts, emotions, and choices are mere epiphenomena, they cannot affect our bodies or the external world; nor can they do so if, as Leibniz believed, the mind and body are like parallel clocks in a pre-established harmony, but exercising no causal influence on each other. On either of these views, deliberation and choice cannot reach out to touch the physical world. So it goes.

To some thinkers, the very fact that there are facts (by which I simply mean true statements) about future events is sufficient reason for a kind of fatalism. Imagine that I am sick, for example – should I call for my doctor? Well, presumably there is a fact as to whether or not I will recover from my illness. If I am going to recover, then this will happen (so the doctor has nothing to do with it). Conversely, if I am not going to recover, it will happen this way (so the doctor has nothing to do with it). In either case, events will reach their conclusion whether I call the doctor or not. Thus, it is pointless for me to call her, as it cannot affect my fate! By similar reasoning, it is pointless to take any action at all.

Such is the conclusion of what was known in antiquity as the Lazy Argument. Fatalism about the future leads to a recommendation of passivism, though, really, we have no say in that either.  
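To make the shape of the Lazy Argument explicit, here is one schematic reconstruction of my own (the letters are merely labels introduced for this illustration, not drawn from any ancient source):

```latex
% A schematic reconstruction of the Lazy Argument (illustrative only).
% Requires amsmath (and amssymb for \therefore).
% R = "I will recover"; D = "calling the doctor makes a difference to whether I recover".
\begin{align*}
&\text{(1)}\quad R \lor \lnot R              && \text{Either I will recover or I will not.}\\
&\text{(2)}\quad R \rightarrow \lnot D       && \text{If I will recover, calling the doctor makes no difference.}\\
&\text{(3)}\quad \lnot R \rightarrow \lnot D && \text{If I will not recover, calling the doctor makes no difference.}\\
&\text{(4)}\quad \therefore\ \lnot D         && \text{Either way, calling the doctor is pointless.}
\end{align*}
```

So rendered, the inference from (1), (2) and (3) to (4) is valid; one natural diagnosis, in line with the Stoic responses discussed below, is that premises (2) and (3) quietly assume the outcome is settled independently of whether the doctor is actually called.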

Although some philosophers deny that there are facts about the future, the idea seems intuitively appealing. Moreover, Einsteinian relativity theory seemingly entails that the universe is a four-dimensional space-time manifold; in that case, there are facts about events that are (relative to us) in the future. Such facts also seem to be entailed by any theory of comprehensive causal determinism, since the current state of the world precisely determines all states of the world in the future.

Ancient philosophers knew nothing about Einsteinian theory, of course, but they were familiar with many such considerations, and they produced sophisticated responses. In particular, some Stoic philosophers developed arguments to counteract fatalist thinking and show that our actions really are “up to us.”

This is not the place to explore all the tendencies of Stoic thought, but the general idea was to accept causal determinism, and even to refer to the causal series as “fate” – and yet argue that our actions proceed by means of our individual characters.

On this approach, our beliefs about the world, our desires, our ingrained dispositions – in short, the motivating elements in our individual psychological make-ups – are the features that bring about our choices and actions, and thereby affect the course of events. If I am offered food, for example, my action of eating it (or else declining it) will be brought about partly by the circumstances of the offer, but also by the way I respond, which will result from my beliefs, desires, character, etc.

The point of all this is that humankind has engaged in a sophisticated discussion over many centuries as to whether, in some sense, our choices and actions, and the predictable consequences of our actions, are up to us, or whether, at some point in the order of events, we are bypassed, leaving our efforts essentially futile. The great cultural conversation certainly did not end with the Stoics, and it has continued to the present day. Throughout the medieval period, the issue became entangled with theological considerations, and to some extent this remains so.

Not surprisingly, recent analyses by professional philosophers have become increasingly fine-grained and technical, particularly, though not solely, where arguments about the role of causal determinism are involved.

What is “free will”?

Although we sometimes say to each other, “That’s up to you,” the usual English term for the up-to-us-ness discussed by Greek philosophers is “free will.” Whether or not this expression is very apt, it does capture one aspect of the long conversation – the idea that we are free from the control of Fate or other spooky forces.

More generally, it might be said that we are “free” if we choose on the basis of our own beliefs, desires and characters. If these things are unencumbered, then we can be said to be acting as we want to act, at least within the limits of our opportunities, physical and cognitive capacities, resources and so on.

Like many expressions that have become subjects of philosophical inquiry, “free will” may seem rather vague and elusive. However, there has now been a fair bit of empirical research (by Nahmias and his colleagues, among others) that provides some clues about the way that the idea of free will relates to the concerns of ordinary people who are not philosophers or theologians (“the folk,” as philosophers like to say).

Unfortunately, the published research is open to interpretation and, at this stage, it appears unlikely that there is a single “folk” conception. The folk, or many of them, do seem to have concerns about causal determinism, but mainly when it is described so that it sounds like fatalism or epiphenomenalism.

Speaking very generally, ordinary people are most likely to deny the existence of free will when they see our deliberations, choices and actions being overridden or bypassed in some way or another. For the folk, or most of them, the dominant idea in attributing free will to themselves and others seems to be a denial of fatalism.

Ordinary people are likely to affirm that “we have free will” if they dispute our subservience to forces such as the gods, the stars and Fate; and they are likely to say, in a particular situation, that “X acted of her own free will” if, in addition, X was not subjected to some more earthly kind of coercion (perhaps a gun at her head) or certain other kinds of frightening pressure (perhaps being required to make an important and complex decision in a very short time). Note, however, that I do not claim that these are the only ideas of free will existing among the general population.

Contemporary philosophers have what may or may not be a more stringent concept of free will: we possess free will only insofar as we can be morally responsible for our conduct. It is the capacity to act with moral responsibility. This conception of free will has dominated modern Western philosophy – though, again, I don’t claim that it is the only philosophical conception of free will (for more, see the Stanford Encyclopedia of Philosophy entry on the subject).

Moral responsibility for our actions may well require the absence of spooky controlling forces, coercion, unusual pressures and so on, but perhaps it also demands something more. The current debate among philosophers revolves around the circumstances under which people are, or should be held to be, morally responsible agents.

If we are very demanding about what capacities are required for moral responsibility, we may find ourselves denying that human beings possess free will in the philosophers’ more technical sense, but we might still mislead the folk if we say to them, evoking their understanding of the expression, “You do not have free will.” This is likely to convey the false – and perhaps demoralizing – claim that fatalism is true.

Could I have acted otherwise?

Sometimes the question of whether I had free will when I acted in a certain way is said to be a matter of whether I could have acted otherwise – say, could I have had coffee this morning instead of tea? In this example, it seems clear enough: yes, I could have chosen tea instead of coffee. Intuitively, the “could have acted otherwise” formulation works here.

However, this formulation is problematic because we sometimes seem, intuitively, to be “free” even when we cannot act otherwise, provided that we actually acted as we wanted to. At the very least, any intuitions to the contrary are rather murky.

Suppose I was offered coffee or tea, and I chose coffee (for no particular reason other than my preference for the taste of coffee). Unknown to me or to her, however, my host had accidentally filled both pots with coffee, so my counterfactual choice of tea would have been ineffectual: I did not have a live option of acting otherwise, in the sense of actually drinking tea (perhaps there was no time to make some tea even if we’d discovered what went wrong).

Let’s consider the situation: I thought about it briefly – “coffee or tea?” – and I chose what I wanted. Furthermore, I received the coffee I asked for, and I happily drank it. I don’t know about your intuitions, but it looks to me as if I acted of my own free will, in the everyday sense of that expression. Unsurprisingly, philosophers have developed many other examples of situations where I had no actual prospect of acting other than as I did, though intuitively I seemed to be acting freely.

What if my choice is about something morally significant, rather than about whether to choose coffee or tea for breakfast? Should I, or shouldn’t I, save this nearby drowning baby? Imagine that I don’t like babies, so I deliberately let the baby drown – but I would have failed to save it in any event, since, unbeknown to me, an invisible monster with a taste for adult humans was swimming between me and the baby. I might still be considered morally responsible for my conduct, since I actually acted as I wanted.

The upshot is that the idea that notions of free will are best conveyed by talk of “could have acted otherwise” is out of fashion among philosophers, though we still see this sort of talk in many popular discussions. Accordingly, it is worth saying a little more about it.

Note that the idea that I could have acted otherwise in a situation is ambiguous. In one sense, it is merely a formula that conveys roughly the following: I acted as I wanted, given the opportunities available to me. Here, saying that I “could” have acted in some other way means that I had whatever capacities, resources, equipment, proximity to other people and things, and so on were needed to do so (and, of course, I was not constrained by, say, another person’s coercion or an overriding spooky force).

Hence, what action I took depended upon my beliefs, desires, character and so on. If these psychological aspects of me had been different in some relevant way, my choice would also have differed. This idea does seem to capture much of what is meant by ordinary talk about “acting of your own free will,” and to reflect much of what is found in the cultural conversation dating back to the Stoics and beyond.

Alas, this first conception of “could have acted otherwise” has problems when tested against sufficiently ingenious counterexamples, perhaps involving people who suffer from inner compulsions, phobias, or special blocks on their ability to form certain desires. In some of these cases, we seem to get a “wrong” (that is, unintuitive) answer to whether someone acted of her own free will.

Still, this weak form of “could have acted otherwise” talk may be a harmless way of thinking about free will for many purposes. It conveys a rough idea of ordinary conceptions of what it is to act freely, though it probably can’t be used as a strict definition. At the least, it would need to be refined.

In another sense, however, to say that I “could have acted otherwise” might mean that, at the time of my action, there was some non-zero probability of my acting otherwise, even given my beliefs, desires, character and so on. That is, there is some non-zero probability of something different happening even if everything that could conceivably influence the outcome is the same, including everything that seems relevant about me and my motivations.

On this interpretation of “could have acted otherwise,” it might be argued that I am never able to act otherwise. I fail to do so even when I act as I want, perhaps with alternative opportunities available to me, and in circumstances where ordinary people would say unhesitatingly that I acted “of my own free will.”

But is this idea of “could have acted otherwise” even intelligible? What could possibly make a difference to how I act when seemingly everything that could affect my choice has been stipulated as being exactly the same? This is very mysterious.

On the weak interpretation, then, the idea of being able to act otherwise has problems as a strict definition of when actions are “up to us” or “of our free will.” However, unusual examples are needed to bring out the problems, and perhaps the idea will do as a rough indication of what we’re getting at with free will talk.

What we must not do, however, is adopt “could have acted otherwise” as our definition of free will, attracted by the common sense in the first, weak, meaning of the phrase, and then claim that human beings lack free will because (we triumphantly point out) no one can ever “act otherwise” in the far stronger, more mysterious, perhaps unintelligible, second meaning. We’d be tying ourselves in logical knots.

Free will and Free Will

All this brings me to the short book Free Will by Sam Harris. Throughout, Harris argues two things.

  • We can never act otherwise in the strong and mysterious sense, mainly because this is precluded by the fact, as he sees it, of causal determinism.
  • We are not “the conscious source of most of our thoughts and actions in the present.”

Harris appears, then, to think that free will means acting (1) in circumstances such that I could have done otherwise (in the strong, mysterious sense), and (2) by means of a process of deliberation that is entirely conscious. Since this does not happen, he concludes, we do not have (what he calls) free will.

As always, Harris writes clearly, persuasively, and with a certain rhetorical flair. In particular, he has an enviable gift for describing opposing views in ways that make them sound ridiculous – whether they are or not. Free Will – the book, that is – is entertaining and easy to read, and I’m sure it will sell plenty of copies.

However, I submit that the views Harris ridicules are not, in all cases, ridiculous at all, and that readers of his new book should subject it to sceptical scrutiny. Free Will provides neither any useful historical context (it ignores the long cultural conversation) nor any state-of-the-art analysis of the current philosophical positions and their respective problems (it ignores most of the professional literature).

Importantly, the concept of free will that Harris attacks so relentlessly bears little resemblance to either the dominant folk ideas (roughly speaking, that fatalism is false, and that we commonly act without coercion, with adequate time to think) or the technical concept used by most philosophers (we have the capacity to act in such a way that we are morally responsible for our conduct).

In fairness to Harris, some philosophers who define free will as a capacity to act with moral responsibility think that we can have moral responsibility only if we can “act otherwise” in the mysterious sense that I’ve discussed, one that makes us the ultimate sources of our own actions (somehow preceding or transcending even our own beliefs, desires and characters). However, this is neither a consensus view among philosophers today nor a view that especially dominates the long conversation about free will and related concepts. Nor does Harris himself offer much of an argument as to why anything like this is part of the very definition of free will.

To be fair, again, some philosophers do seem to resist the idea of unconscious decision-making. However, the idea that we are the conscious source of most of our thoughts and actions is not the standard philosophical definition of free will. Nor does it seem to dominate the thinking of the folk.

In fact, no plausible story could be told in which we make any of our decisions entirely consciously. Many are probably made entirely unconsciously. And I find it difficult even to make sense of consciously choosing my next thought. How is that supposed to work?

Harris defines “free will” as he does because he thinks that it is “the popular conception” or “the free will that most people feel they have,” while offering no evidence to support these bold assertions. Thus, even if he succeeds in showing that “free will,” as he defines it, does not exist – I agree that it doesn’t – that will not entail that either philosophers or the folk are incorrect when, employing their definitions or conceptions, they claim that we have free will.

Let me be clear on this: Harris may, indeed, have isolated one tendency in the thinking of some philosophers and some ordinary people. Perhaps he has met people who think about free will in a way that matches up with his definition, and I’m sure some readers will find that the definition rings true for them (the evidence suggests, remember, that ordinary people do not all think alike about free will – and philosophers certainly do not).

But Harris does not claim to be attacking one tendency, perhaps a dangerous one, in ordinary thinking or the philosophical literature. Nor does he limit himself to claiming (against the evidence to date) that it is the dominant tendency.

As far as he is concerned, he is writing about the true conception of free will, and anyone who disagrees is changing the subject. They are not talking about free will, he thinks, but only about “free will” – about an intellectual construction of their own making. That is almost the reverse of the truth, and if anything it is Harris who wants to change the subject by insisting on his own pet definition.

Harris on compatibilism

When Harris turns to the views of compatibilists, philosophers who think that the existence of free will is logically consistent with causal determinism, he accuses them of changing the subject, but that is unfair and untrue.

Rather, the views of the compatibilists, whether correct or not, have been a key component of the conversation for over two thousand years. He accuses them of producing a body of literature that resembles theology, primarily aimed, he suspects, at “not allowing the laws of nature to strip us of a cherished illusion.”

Just how offensive compatibilists ought to find this will depend, in part, on how they regard theology. If I think that theological discourse frequently contains clever, distracting rhetoric, duplicitous manipulations of standards (such as interpreting texts literally when it suits the theologian’s agenda to do so, but interpreting them in some other way whenever this is more convenient to her), and other slippery methods of argument, I might well think that I’ve been insulted if told that I write like a theologian.

Harris does not spell out the features of theological writing that he has in mind. However, it seems clear enough that he does not intend the comparison as a compliment, and that the thrust of his remarks at this point of his book is to accuse compatibilist philosophers of some form of intellectual dishonesty or at least wishful thinking.

That, however, is unfounded. From ancient times to the present day, compatibilist philosophers – whether Stoics, early modern thinkers such as Thomas Hobbes, Enlightenment figures like David Hume, or contemporary successors to the tradition such as Daniel Dennett – have attempted to do what philosophers do at their best: they have tried to reason clearly and carefully about a deep but elusive topic of general importance.

In doing so, they have needed to make distinctions, examine and refine concepts, and reveal nuances (and possible inconsistencies) in everyday thought. Unfortunately, this process can lead to a thicket of new conceptual problems.

Inevitably – so it goes – compatibilist theories have become more detailed and ramified than the relatively brief pronouncements from earlier writers such as Hobbes. At a minimum, recent compatibilists have had to elaborate and qualify their views as they’ve encountered objections, baffling science-fictional examples, and troubling classes of cases (such as whether psychiatric patients have free will if they are physically unimpeded when acting on their delusions).

But incompatibilists must face the same sorts of problems, and the resulting garden of competing analyses, each one making finer points than its predecessor, is no different from what we see in any other area of philosophy. If compatibilist philosophers write like theologians in some (unexplained) sense, so do their opponents.

Nor is there anything that, to a fair critic, “seems deliberately obtuse” about the idea of someone acting freely on her desire to commit a murder. Harris is correct that we often have conflicting desires, some of which we would rather not have (that is, we have second-order desires about our desires). But this is old news and exhaustively discussed in the philosophical literature.

A related point about psychic fractures could indeed be developed into a problem for free will, so let me flag that for later, but there is nothing obviously obtuse (let alone “deliberately” so) about assuming that the murderer acted on her strongest desire, and that her action revealed something important about what she was like as a person.

In the same passage, Harris goes on to claim that the deeper problem for compatibilists is that there is no freedom in doing as I want, such as when I reach for a glass of water to quench my thirst.

But really, where is the unfreedom? Where, as Hobbes would have put it, is the stop? Where is the thing that impedes me from doing what I want to do? Are there any spooky forces (the gods, the stars, Fate) in the vicinity, preventing me from acting as I want? How has anything in the situation, as described, led to my efforts being bypassed or blocked, or to my desires being frustrated?

Unless I am employing my own pet definitions, or unless, perhaps, I am one of those philosophers haunted by a very demanding notion of what is required for moral responsibility, I will have no good reason to see my action in drinking the water as anything other than up to me – as anything other than a “free” action.

Perhaps compatibilism is false, but the various attempts by Harris to dismiss it with little argument should not convince anybody – however robustly or amusingly he words them (“changing the subject,” “resembles theology,” “seems deliberately obtuse,” “a bait and switch,” “nothing to do with free will”). This sort of dogmatism and abuse can be fun, but it does little to advance philosophical understanding.

The future of free will

Allow me to confess, at this late stage, that I think that the concept of free will has problems, perhaps many of them, although I am not at all persuaded that causal determinism is the important issue.

One problem relates to the nature of coercion. How do I draw a principled line between actions that are coerced, or otherwise brought about in circumstances that seem to overwhelm me, and those that are not?

To some extent, this looks like a moral or even political judgment, and it is very arguable that, although not simply arbitrary, these sorts of judgments are not objectively binding. In some cases, at least, there may be no determinate answer as to whether I was coerced or acted freely.

Furthermore, the folk (and perhaps philosophers) are not worried only by outright coercion but also by other circumstances, such as whether there was adequate time to think. But where do we draw the line with something like that – for example, how much time is “adequate”? Again, how should we handle such things as compulsions and phobias – are they just another part of our desire-sets, or are they more analogous to external barriers to our actions?

Another problem relates to the largely unconscious nature of our decisions. No one should doubt this, and Harris is correct to emphasise it and to discuss the actual phenomenology of choice. Still, taken by itself it is not necessarily very threatening.

Imagine for a moment that my unconscious mind makes decisions in accordance with the same beliefs and desires that I endorse consciously, and imagine, more generally, that my unconscious and conscious minds are closely “in character” with each other. If that is so, delegating a great deal of decision-making to unconscious processes might even be an efficient use of scarce time for conscious thought.

The issue that Harris ought to press more strongly – and I foreshadowed this earlier – is that our unconscious minds may be rather alien to our conscious egos. I suspect that Freudian theory is largely bunk, but a large body of social psychology literature can be interpreted as confirming that our psyches are more fractured, and some of our true motivations stranger to us, than we like to think.

If this is so, we may be at the mercy of alien forces after all, at least to an extent – these are not external powers, and not exactly spooky ones, but actually components of ourselves.

But even if we press such points as hard as possible, folk ideas of free will might survive. Perhaps whether we act freely becomes a matter of judgment and degree, and the question of whether we do so in various particular cases does not have an entirely compelling answer.

Nonetheless, it might remain more false than true to tell the folk, “You do not have free will.” On the other hand, philosophical ideas of moral responsibility might be in more trouble as we insist on the difficulties. Much more needs to be considered here.

Finally, I acknowledge that some intuitions may favour incompatibilism. On the other hand, it remains the case – doesn’t it? – that we are not controlled by spooky powers, that our beliefs, desires and characters are not bypassed in some other way (as they would be if epiphenomenalism were true), and that these aspects of us appear to have causal power: they lead to choices, actions and consequences.

There is nothing especially arcane about these key points, and they are consistent with causal determinism as far as it goes. The worst problems for free will, I suggest, come from elsewhere.

After some two thousand years, the basics of a compatibilist approach remain attractive, and the burden of going forward seems to fall on opponents of free will, and particularly on incompatibilists such as Harris.

Harris himself needs to do more work, particularly in understanding and responding to the strengths in his opponents’ arguments. Until then, we should take his pronouncements on the topic of free will with a few grains of salt. So it goes.

Monday, June 28, 2021

Science Fiction as a Lens into the Future

(This is the written version of a talk presented to the Australian Defence College’s “Perry Group”, Canberra, 7 June 2019. It has previously been published by Bruce Gillespie in his excellent magazine SF Commentary. The following is almost identical to the SF Commentary text, but I’ve taken the opportunity to make a few small amendments and corrections.)


I.


First, thanks to all concerned at the Australian Defence College for organising this event, and especially to Professor Michael Evans for thinking of me and inviting me as your speaker. I’m honoured to be here and delighted to discover that science fiction is studied by a collection of people such as the college’s Perry Group.


It has been said (by the British novelist L.P. Hartley) that the past is a foreign country – that they do things differently there. A lesson from relatively recent human history is that the future is also a foreign country.


When I say “relatively recent”, we can put this in a broad perspective. Our species, Homo sapiens, is some 300,000 years old, and earlier human species from which we descended go back much further, millions of years, indeed, into the past. Homo sapiens has continued to evolve since the earliest fossilised specimens that we know of, becoming more gracile – or light-boned – in anatomically “modern” humans. The rise of agriculture dates back about 12,000 years, and something recognisable as civilisation, with large cities, writing, and bureaucratic social organisation, emerged in the Middle East and other locations about 5,000 years ago, give or take.


By contrast, what we now call European modernity is historically recent. If we could travel back to Europe in, say, 1500, or even 1600, CE, we’d find societies in which there was little sense of ongoing social change, though of course there had always been large-scale changes from specific events such as wars and conquests, plagues and famines, and various other kinds of human-caused and natural disasters. But changes in technology, work methods, social organisation, transport, and so on, happened too slowly to be transformative within a single lifetime. People were more aware of the daily, seasonal, and generational cycles of time than of gradual, progressive change driven by technology.


In the past, some religious and mythological systems described grander cycles of time than seasons and generations, some societies looked back to a lost golden age from which they thought they had degenerated, and Christian writings prophesied an eventual end of worldly things to be brought about by the intervention of God. But none of this resembles our contemporary idea of the future, in which human societies are continually transformed by advances in scientific knowledge and new technologies.


That said, the sixteenth century in Europe was an extraordinarily volatile period – it immediately followed the invention and development of the printing press, with all that that entailed for distributing ideas widely, and the European discovery of the New World. Exploration and colonialism brought the cultures of Europe into contact with what seemed like strange – sometimes hostile – environments and peoples. For some European intellectuals, this provoked a sense of the historical contingency and precariousness of existing cultures and civilisations. The practices and beliefs of particular cultures, including those of Europe, increasingly appeared at least somewhat arbitrary, and thus open to change.


The sixteenth century began with festering religious discontent that quickly led to the Protestant Reformation, whose beginning we could date from Martin Luther’s famous proclamations against Church practices, the “95 Theses”, in 1517. Europe was soon wracked by the great wars of religion that extended, in one form or another, deep into the seventeenth century (the Thirty Years’ War from 1618 to 1648 left much of the continent in ruins). The sixteenth century also saw the beginnings of modern science, including the radically transformative astronomy of Copernicus.


By the early decades of the following century, science had reached a form much closer to one we’d recognise today, especially with Galileo’s observations, experiments, and reflections on scientific methodology. (Galileo was active 400 years ago – he first demonstrated his telescope, and turned it to the heavens, in 1609, and it was in 1633 that he was interrogated by the Inquisition and placed under permanent house arrest for supporting the Copernican claim that the Earth revolves around the Sun.) The rise, consolidation, and extension of science, throughout the seventeenth-century Scientific Revolution, and beyond, challenged old understandings of humanity’s place in the universe. It was the early success of modern science, more than anything else, that led European thinkers of the eighteenth-century Age of Enlightenment to imagine future states of society with superior knowledge and wisdom.


Enlightenment ideas of progress involved intellectual – especially scientific – and moral advances, though with little of our emphasis today on new technology when we try to imagine the future. Enlightenment thinkers hoped, and worked, for societies that might be better than their own. They looked to continued intellectual progress accompanied by social reform. This way of thinking nourished the great political revolutions at the end of the eighteenth century – the American Revolution and the French Revolution – and the upheavals that these produced inspired even more conjectures and schemes involving future societies.


Even when we look at the work of great utopians and social thinkers from the early nineteenth century, however, in the wake of the Enlightenment, there is little emphasis on technological transformations of society. In 1800, let’s say, that thought was only in its infancy. The idea of the future that we possess today developed slowly and gathered force, responding to the Industrial Revolution, which commenced during the second half of the eighteenth century, at first in Britain, but then in other European societies. As the Industrial Revolution continued and renewed itself, with its steam engines, factories, and railroads, Europe and its colonies experienced something altogether new: continual – and visible – social change that was driven and shaped by advances in science and, above all, technology. As the nineteenth century rolled on, changes in the ways that things were done happened on a large scale and at a pace that could not be ignored. You could say that the nineteenth century was when humanity discovered the future.


II.


Much later, writing in the 1920s, the scientist and social commentator J.D. Bernal observed that human beings normally take accidental features of their own societies to be axiomatic features of the universe, likely to continue until supernaturally interrupted. Bernal added: “Until the last few centuries this inability to see the future except as a continuation of the present prevented any but mystical anticipations of it” (Bernal, The World, The Flesh and the Devil: An Inquiry into the Future of the Three Enemies of the Rational Soul, 1929). Humans might previously have imagined supernatural events in the future, such as the second coming of the Messiah, but they did not imagine events such as the invention of the steam engine, the spread of the railways, electricity, the telegraph, motor cars, and aviation. But, as Bernal goes on to elaborate, the assumption of a relatively static society ceased to be tenable. This provided the fertile social ground from which science fiction grew.


In his fascinating, if polemical, book A Short History of Progress (2004), the archaeologist and historian Ronald Wright makes the point that a citizen of London from 1600 CE would have felt reasonably at home two hundred years later, in the London of 1800. The city would have looked rather familiar. But, says Wright, warnings of threats to humanity from the rise of machines “became common in the nineteenth century, when, for the first time ever, wrenching technical and social change was felt within a single lifetime.” Wright immediately adds:


In 1800, the cities had been small, the air and water relatively clean – which is to say that it would give you cholera, not cancer. Nothing moved faster than by wind or limb. The sound of machinery was almost unknown. A person transported from 1600 to 1800 could have made his way around quite easily. But by 1900, there were motor cars on the streets and electric trains beneath them, movies were flickering on screens, earth’s age was reckoned in millions of years, and Albert Einstein was writing his Special Theory of Relativity.


Yet it took visionaries like H.G. Wells to grasp this, spell it out, and incorporate it in a new kind of fictional narrative. In addressing the great changes of the nineteenth century, Wright refers to the misgivings of many Victorians as they confronted the rise of industrial machinery and witnessed its sweeping social impact. This leads him to an observation about the beginning of what was originally called “scientific romance”:


As the Victorian age rushed on, many writers began to ask, “Where are we going?” If so much was happening so quickly in their century, what might happen in the next? [Samuel] Butler, Wells, William Morris, Richard Jefferies, and many others mixed fantasy, satire, and allegory, creating a genre known as the scientific romance. (Wright, A Short History of Progress, 2004)


In this passage and the discussion that follows in A Short History of Progress, Wright is concerned with both scientific knowledge and the industrial uses of technology. The latter greatly altered and increased production while also transforming work and its organisation, the means of transportation, and the landscape – not in all respects, by any means, for the better.


I have emphasised technology to this point, but we should not lose track of science itself, which continued to advance and to shape understandings of the world. As the sciences developed, their practitioners were able to study a great range of natural phenomena that had previously resisted human efforts. These included very distant and vastly out-of-scale phenomena such as those investigated by astronomers, very small phenomena such as the detailed composition and functioning of our bodies, and (somewhat later, with the advent of scientific geology) phenomena from deep in time before human artifacts, buildings, or written records. By the early decades of the nineteenth century, the sciences were starting to imagine, and communicate, the extreme depth of time as well as the vastness of space.

 Nineteenth-century geology suggested that we live on the surface of an incomprehensibly old planet, with the implication of a similarly incomprehensible number of years still to come. As you will know, this idea has since been confirmed, elaborated, and expanded by scientists from numerous disciplines, and, all in all, a new understanding of the cosmos has emerged.

 To sum up at this point, the revolutions in science and technology during the centuries of European modernity introduced new ideas about the universe, ourselves, and the future. All of this amounted to a revised world picture.


As a result, it is now established – and was known in outline to educated Europeans in the second half of the nineteenth century – that we inhabit a vast universe whose origins lie deep in time. Like other living things, we are the product of natural events taking place over many millions of years. In all meaningful ways, so Darwinian evolutionary theory revealed, we are continuous with other animal species. Anthropocentrism and human exceptionalism have been challenged from all directions. Furthermore, our particular societies and cultures are significantly mutable. Human societies have changed dramatically in the past – and we can be sure that this will continue.


III.     


All known social and cultural forms, and specifically those we have experienced in our individual lifetimes, are now revealed as contingent and temporary. Technological developments continually revolutionise the ways we work, play, plan, organise ourselves, and move from place to place. Even the relatively near future may turn out very strange by the standards of those now living. Not only is our origin as a species deep in time, our eventual destiny is unknown and perhaps lies in the very remote future (assuming we don’t find a way to destroy ourselves more quickly, or perhaps fall foul of a disaster such as a collision with an asteroid). This set of claims is the new worldview embraced, since the era of Queen Victoria, by most educated people  in Europe, the Anglosphere, and other industrially advanced countries. It seems almost commonsensical, when considered by secular-minded people from the vantage point of 2019. But let me make two important points about that.


The first is that these claims are not pre-scientific common sense. The overall picture constitutes a dramatic historical shift in human understanding of the universe and our place in it. Not so long ago, historically, such ideas would have been viewed within European Christendom (and most other parts of the world) as intolerably radical and heretical. They met with much resistance, and they still meet with resistance from some quarters.


The second point is that even now we tend to live without being fully aware of the implications of deep time and the new worldview that we’ve inherited from the Victorian generation. We live from day to day, and consider politics, social issues, and the like, forgetful of the deep past behind us, and we ignore the implication of a similarly deep future ahead of us. Indeed, what can we even do with that sort of knowledge in everyday situations?


Nonetheless, as Wells knew, the rapid changes of the nineteenth century implied the likelihood of rapid – perhaps more rapid – changes to come. That reasoning applies equally to us. We should assume that the current century, and the many centuries to follow, will see great changes to the world and to human societies. Our own society has not reached a point of stability, though again it’s not obvious what we can do with that sort of knowledge. Historically, this was all difficult to digest – and it remains difficult. But it offered new opportunities for storytelling.


In more than one sense, science fiction is the fiction of the future. In his 1975 book, Structural Fabulation: An Essay on Fiction of the Future, the American critic Robert Scholes produced a short account of science fiction that influenced me when I was young and remains, the best part of half a century after it was published, a remarkably shrewd introduction to the genre. Scholes covers some of the ground that I am dealing with in this paper, in describing how science fiction relates to human history, and especially to the history of how we’ve conceived of time and history themselves.


Scholes writes of science fiction as a kind of fiction that is about the future, but he also explains why that kind of fiction is inevitable in a world with a new conception of time, history, and progress, one in which the future will be, as it were, a country foreign to us, one where they do things differently. For Scholes, it seems, science fiction will thrive in the future, perhaps become a dominant narrative form, and produce great things. Science fiction has become far more visible and popular since 1975, and in my assessment Scholes has turned out to be right.


IV.


When did science fiction begin? Some proto-science fiction narratives appeared even in the seventeenth century, such as a strange little book by Johannes Kepler, called Somnium, Sive Astronomia Lunaris (this was completed around 1608–1609, but not formally published until 1634). Somnium is sometimes called the first science fiction novel, but it has none of the characteristics that we normally associate with novels, such as telling a complex story and including characters with at least some appearance of psychological plausibility. It is really just a geography (if that’s the right word) of the Moon’s surface, based on the best observations that had been made prior to astronomical use of telescopes. The scientific lesson is framed by a thinly developed fictional narrative that showcases the discoveries of the time and allegorises the scientific quest for knowledge.


Somnium is not a fully fledged science fiction novel, but it foreshadows themes that SF writers have explored ever since. There is a trust that science can obtain knowledge of kinds that had previously eluded human efforts. At the same time, there is the sense that Kepler wants to portray a physically greater cosmos than was previously imagined. Along with this goes a recognition of our relative smallness in the total scheme, and of our limited understanding. Kepler seems to suggest that things are not always as they appear to us from our vantage point on Earth.


Notwithstanding Somnium and some other early works, science fiction is very much a child of the nineteenth century. As has been said by others, it could not have existed as a field “until the time came when the concept of social change through alterations in the level of science and technology had evolved in the first place” (Isaac Asimov, Asimov on Science Fiction, 1981). As a result, we see little or nothing in the way of recognisable science fiction novels and stories until the nineteenth century, beginning with works such as Mary Shelley’s Frankenstein; or, The Modern Prometheus in 1818. Frankenstein famously depicts Victor Frankenstein’s use of scientifically based technology to create something entirely new in the world: a physically powerful, but unfortunately repulsive, artificial man. As is well known, the actual term “science fiction” was not coined for another century or so, with the rise of specialist SF magazines in the United States in the 1920s and 1930s.


Meanwhile, some of Edgar Allan Poe’s stories from the 1830s and 1840s have science fiction elements, and the SF author and critic James Gunn regards Poe’s “Mellonta Tauta” (1848) as possibly the first true story of the future (Inside Science Fiction, second ed., 2006). Unlike earlier narratives of future disasters, such as Mary Shelley’s The Last Man (1826), it portrays a future society with unfamiliar ideas and practices. “Mellonta Tauta” is set in the year 2848 – thus, one thousand years after its date of composition – and its Greek title can be translated as “future things” or “things of the future” (or it might, I dare say, with H.G. Wells in mind, even be translated as “things to come”). It’s a very peculiar story, even by Poe’s standards, taking the form of one character’s rambling, gossiping, speculation-filled letter to a friend. In fact, it is more like a series of diary entries, beginning on April 1 – April Fools’ Day, of course – and it is composed by a well-educated but deeply misinformed individual, who reveals that she is on a pleasure excursion aboard a balloon.


In Poe’s version of the future, humanity has explored the Moon and made contact with its diminutive people. However, much knowledge from the nineteenth century has become garbled and (at least) half lost. The story thus casts doubt on historians’ confident interpretations of the practices of other peoples living in earlier times. It is full of jokes, many of which are puzzling for today’s readers, and even when they’re explained it is often difficult to be sure exactly what ideas Poe is putting forward and which he is satirising. (Other material that Poe wrote at about the same time suffers from the same problems of interpretation.) Nonetheless, Poe laid a foundation for the development of satirical science fiction set in future, greatly altered societies.


A more substantial body of work that resembles modern science fiction emerged around 1860, particularly with the French author Jules Verne, who is best known for novels in which highly advanced (for the time) science and technology enable remarkable journeys – to the centre of the Earth, around the Moon and back, beneath the sea, and so on. H.G. Wells’s career as a writer of what were then known as scientific romances commenced two or three decades later, with a group of short stories that led up to his short novel The Time Machine (1895). The importance of this work for the later development of science fiction cannot be overstated. That great theorist of the genre, Darko Suvin, writes, without hyperbole, that “all subsequent significant SF can be said to have sprung from Wells’s The Time Machine” (Metamorphoses of Science Fiction, 1979). Wells followed up with his first full-length scientific romance, The Island of Dr Moreau (1896), and his extraordinary career was underway.


In the late nineteenth and early twentieth centuries, science fictional elements appeared in many utopias, dystopias (such as Wells’s When the Sleeper Wakes (serialised 1898–99)), and lost-world novels set in remote locations or even beneath the ground. The use of interplanetary settings took the idea of lost worlds and races a step further. The first published novel by Edgar Rice Burroughs, A Princess of Mars (originally in serial form in 1912), epitomised the trend. Planetary romance of the kind favoured by Burroughs defined one pole of early science fiction, emphasising action and adventure in an alien setting.


Another approach was the near-future political thriller. Works of this sort, most notably “The Battle of Dorking: Reminiscences of a Volunteer”, by George Tomkyns Chesney (not a full-length novel, but a novella originally published in Blackwood’s Magazine in 1871), were a prominent component of the literary scene in the late nineteenth and early twentieth centuries. They portrayed future wars and invasions, often involving racial conflict. These political thrillers typically contained melodramatic and blatantly racist elements, but they are noteworthy as serious speculations about near-future possibilities.


All of these forms of early science fiction have continued, in one way or another, to the present day. Literary scientific romances, particularly inspired by those of Wells, and by those of authors who reacted to him, have maintained a pedigree partly independent of, and parallel to, what I call “genre science fiction” (or “genre SF”), by which I mean science fiction aimed at a relatively specialist audience of SF fans and aficionados. Genre science fiction is a phenomenon dating from the 1920s, and there is an interesting story to tell about its development under the leadership of its first great editors – Hugo Gernsback and John W. Campbell – through to the present day. But for current purposes, we’ll have to skip over that. For more, see the opening chapters of my 2017 book, Science Fiction and the Moral Imagination: Visions, Minds, Ethics. Suffice to say that the pace of social, scientific, and technological change continued to accelerate. In response, as the twentieth century unfolded and segued into the twenty-first, narratives of technological innovation and humanity’s future prospects became even more culturally prominent.


I’ll also make short work here of the much-debated question of how we should define science fiction, and how, if at all, we can fence it off from other narrative genres or modes such as technothrillers, horror stories, and fantasy. In summary – see Science Fiction and the Moral Imagination again if you want more – I identify science fiction as combining three elements that we may call “novelty”, “rationality”, and “realism”.


I intend each of these in a specific and rather narrow sense: novelty, in that the narrative depicts some kind of break with the empirical environment of the author’s own society and historically recorded societies (this is what Darko Suvin refers to as the novum); rationality in the sense that whatever is novel is nonetheless imagined to be scientifically possible (at least by the standards of some future body of scientific knowledge), rather than magical or otherwise supernatural; and realism in the minimal sense that the events described are imagined as actually happening within the internal universe of the story – that is, the events, including the problems confronted by the characters, are to be interpreted literally, even if they have a further allegorical or metaphorical level of meaning. In other senses, of course, science fiction is not a variety of literary realism, but nor does it have the qualities of straightforward allegory, dream, or psychodrama.


Science fiction, then, is a kind of fictional narrative that is characterised by novelty, rationality, and realism. It typically and centrally imagines future developments in social organisation, science, and/or technology, though I hope I’ve said enough for it to be clear why it sometimes depicts amazing inventions in the present day, present-day invasions from space, or events that happened in the deep past, in prehistoric times. Science fiction can take many forms, but at its core it is fiction about the future.


V.


Although science fiction has a central concern with future societies, SF writers are not prophets and they cannot simply provide a transparent window that opens upon the future. Hence, the title of this paper refers to a lens into the future: something more probing – and perhaps more difficult to use, requiring more activity, interpretation, and skill – than a window overlooking a future vista. In some cases, setting narratives in the future (much like the use of extraterrestrial settings) merely provides writers with exotic locales for adventure stories, something that came in handy as a plot device during a time when the surface of the Earth was increasingly being explored and mapped. To be clear, there’s nothing terrible about adventure stories in exotic locales – I love them as much as anybody – but science fiction writers often engage more meaningfully than that with ideas of the future, or of possible futures.


Wells certainly thought – at least for most of his career – that it was possible to consider and imagine the future of humanity with some prospect of making successful predictions. He discussed exactly this topic in a famous lecture that he delivered to the Royal Institution in London in January 1902. This lecture, entitled “The Discovery of the Future”, helped to establish his reputation, and it was published as a small book not long after he delivered it. In “The Discovery of the Future”, he put the problem like this: “How far may we hope to get trustworthy inductions about the future of man?” (We’d now say something more like: “How far can we have a reliable science of the future of humanity?”)


For Wells, speaking and writing in 1902, the present had arisen from the past through the deterministic operation of scientific laws, and the future would follow from the present in the same deterministic way. However, he suggested that there was an asymmetry between the past and the future, or at least in how we perceive them. That is, we can be certain about many events that happened to us personally in the past, and which we remember clearly, whereas we do not know what lies in store for us, as individuals, in the future. We have no future-oriented equivalent of personal memory.


However, Wells said, things are different when it comes to future events involving large populations. By analogy, he argued, we can’t predict where individual grains of sand will fall if we shoot them from a cart, or even the shapes of the individual grains, which will vary greatly. But we can predict which grains – of what sizes and shapes – will tend to be found in different parts of the resulting heap. Wells considered the possibility that individual people of great energy and ability might be less predictable, and have greater effects on human destiny, than exceptionally large grains of sand. Nonetheless, he was strongly inclined to think that larger forces operating in history determined broad historical outcomes. For example, if Julius Caesar or Napoleon had never been born, someone else would have played a similar role in the history of Europe.


On this basis, Wells concluded that we have evidence available to us in the present that can help us to reconstruct the past, and that we also have information available to us now to help us predict how humanity’s future will unfold on a large scale. He was very conscious of human origins in deep time, and with that in mind he placed a special emphasis on humanity’s long-term destiny, the deep future of our species:


We look back through countless millions of years and see the will to live struggling out of the intertidal slime, struggling from shape to shape and from power to power, crawling and then walking confidently upon the land, struggling generation after generation to master the air, creeping down into the darkness of the deep; we see it turn upon itself in rage and hunger and reshape itself anew; we watch it draw nearer and more akin to us, expanding, elaborating itself, pursuing its relentless, inconceivable purpose, until at last it reaches us and its being beats through our brains and arteries, throbs and thunders in our battleships, roars through our cities, sings in our music, and flowers in our art. And when, from that retrospect, we turn again toward the future, surely any thought of finality, any millennial settlement of cultured persons, has vanished from our minds.


This fact that man is not final is the great unmanageable, disturbing fact that arises upon us in the scientific discovery of the future, and to my mind, at any rate, the question what is to come after man is the most persistently fascinating and the most insoluble question in the whole world. (“The Discovery of the Future”, 1902)


In “The Discovery of the Future”, Wells repudiated any idea of a static human society, even as part of some utopian blueprint:


In the past century there was more change in the conditions of human life than there had been in the previous thousand years. A hundred years ago inventors and investigators were rare scattered men, and now invention and inquiry are the work of an unorganized army. This century will see changes that will dwarf those of the nineteenth century, as those of the nineteenth dwarf those of the eighteenth. […] Human society never has been quite static, and it will presently cease to attempt to be static.


Wells made certain predictions about the nearer future, before our species is eventually superseded, such as the emergence, perhaps not for hundreds of years, or even for “a thousand or so” years, of a great world state. Toward the end of his lecture, he granted that humanity might be destroyed by a cataclysm of some kind, if not by the eventual death of the Sun itself, but he expressed his fundamental rejection of these outcomes and his belief in what he called “the greatness of human destiny”. He claimed to have no illusions about human failings, but he saw a path of ascent from the deep past to the deep future:


Small as our vanity and carnality make us, there has been a day of still smaller things. It is the long ascent of the past that gives the lie to our despair. We know now that all the blood and passion of our life were represented in the Carboniferous time by something – something, perhaps, cold-blooded and with a clammy skin, that lurked between air and water, and fled before the giant amphibia of those days.


For all the folly, blindness, and pain of our lives, we have come some way from that. And the distance we have travelled gives us some earnest of the way we have yet to go.


He concluded “The Discovery of the Future” with a radically optimistic sentiment that later found expression in much twentieth-century science fiction, and, I venture to add, in much current thought from transhumanists and similar thinkers about the human future:


It is possible to believe that all the past is but the beginning of a beginning, and that all that is and has been is but the twilight of the dawn. It is possible to believe that all that the human mind has ever accomplished is but the dream before the awakening. We cannot see, there is no need for us to see, what this world will be like when the day has fully come. We are creatures of the twilight. But it is out of our race and lineage that minds will spring, that will reach back to us in our littleness to know us better than we know ourselves, and that will reach forward fearlessly to comprehend this future that defeats our eyes.


All this world is heavy with the promise of greater things, and a day will come, one day in the unending succession of days, when beings, beings who are now latent in our thoughts and hidden in our loins, shall stand upon this earth as one stands upon a footstool, and shall laugh and reach out their hands amid the stars.


VI.


Let’s return, in conclusion, to one of Wells’s key questions in “The Discovery of the Future”: “How far may we hope to get trustworthy inductions about the future of man?” I conspicuously have not provided an answer, although I’ve reported Wells’s claim that we have considerable ability to predict the broad outlines, if not the detail, of humanity’s future. Wells certainly did not think that the future for individuals was predictable – alas! – but it was possible, he thought, to work out the future’s broad outlines for very large numbers of people, including humanity as a whole.


This idea seems to have been accepted, in large part, by the science fiction writers of the following several decades. You can find something like the same idea in Isaac Asimov’s Foundation series, begun in 1942, with its science of psychohistory developed by the main protagonist, Hari Seldon. Asimov even grapples with the impact of a truly remarkable human being – a kind of super-Napoleon – in the person of the Mule, a mutant with the extraordinary power to bend others’ emotions to his wishes. During the so-called Golden Age of science fiction, from the late 1930s to the end of the 1940s, something of a consensus picture of the long-term human future seems to have been shared by Asimov, Robert A. Heinlein, and others. They embraced a vision, much like that offered by Wells in “The Discovery of the Future”, of a destiny in the stars for humanity and whatever beings might descend from us.


However, this vision has become considerably less popular in genre science fiction since the 1950s, and it might now be disputed by many professional SF writers. Also, there is an obvious alternative to this way of thinking about science fiction. The alternative is that the point is not to reveal the actual human future, or even an approximation of it, so much as to investigate many possible futures. In short, science fiction is not predictive. On this approach, we could think of the future not as something determinate, but as something that could, at least as far as our practical knowledge ever extends, take many forms or go down many paths. If science fiction is a lens into this sort of future, it is a way for us to probe a dimension of possibilities, and to consider their implications. Science fiction can help us prepare for the real future by portraying possibilities. It is a lens into an indeterminate, but multiply imaginable, future.


Another approach, perhaps the dominant one in the tradition of scientific romance – that is, once again, in science fiction narratives outside of, and parallel to, genre SF – is to view imagined futures as most relevant and compelling when they are distorted pictures of the present, or its trends, created for the purpose of social commentary. If we think of it in this way, science fiction is not so much a lens into the future as a narrative form that uses imaginative pictures of the future to provide a lens into the present.


When we consider these models of science fiction and how it approaches the future, we might ponder H.G. Wells’s own enormous contribution to SF. Wells made some impressive predictions, not least about armoured military vehicles, the importance of aviation for future warfare, and, in The World Set Free (1914), the development of massively destructive atomic bombs (admittedly rather different in operation from those that were dropped on Japanese cities three decades later). Did Wells offer “trustworthy inductions” about humanity’s future? Perhaps he did to some extent, though by 1945, the year before his death, he’d become despairing about the future’s predictability. Was his science fiction a lens into the future in some sense, or even into the present, or into our world and the human situation in some other way?


This, I hope, gives us plenty to talk about, so let’s open up the discussion about science fiction and the future of our species.


Russell Blackford is a Conjoint Senior Lecturer in Philosophy at the University of Newcastle, NSW. He is the author of numerous books, including Strange Constellations: A History of Australian Science Fiction (co-authored with Van Ikin and Sean McMullen, 1999) and Science Fiction and the Moral Imagination: Visions, Minds, Ethics (2017).