Via David Chalmers' blog, I came across this review by Jerry Fodor of a new book by Galen Strawson and others: Consciousness and Its Place in Nature: Does Physicalism Entail Panpsychism? As Chalmers puts it, "Fodor comes surprisingly close to endorsing a form of property dualism with fundamental laws connecting physical processes and consciousness."
Now, philosophy of mind is not my research field. I teach some introductory philosophy of mind material in a couple of the first-year subjects that I'm involved in from time to time, but I make no claim at all to being a philosopher of mind or to keeping up with the detail of the debates. I am, however, strongly inclined to an overall position of metaphysical or philosophical naturalism, in which all that exist are the phenomena (entities, properties, forces, space-time geometry, etc.) investigated by science. That said, I've never understood why the sort of position that Fodor and Chalmers are describing should be thought incompatible with philosophical naturalism.
The position for which Fodor expresses some sympathy is this:
I suppose one can imagine a world where all the big things are made out of small things, and there are laws about the small things and there are laws about the big things, but some laws of the second kind don’t derive from any laws of the first kind. In that world, it might be a basic law that when you put the right sorts of neurons together in the right sorts of way, you get a subject of consciousness. There would be no explaining why you get a subject of consciousness when you put those neurons together that way; you just do and there’s the end of it. Perhaps Strawson would say that in such a world, emergence would be a miracle; but if it would, why isn’t every basic law a miracle by definition?
Indeed, what we do seem to know is that consciousness actually exists: more precisely, I know that I am conscious, and I am prepared to assume that you are; the world simply makes much more sense if I do not adopt a position of solipsism, but work on the basis that at least those other beings who are much like me are conscious, though it is an open question how far down this extends. Are chimps conscious? Are cows? Alligators? Oysters? Paramecia? Still, the fact that I and the people I find myself associating with have conscious experience does indeed appear to be undeniable.
The next point is that it is difficult to explain how consciousness can emerge from matter merely by way of the kinds of physical laws that we know or are developing. It is hard to see how they can explain consciousness while making no reference to it. As Strawson and Fodor both discuss, it does not seem to be like liquidity, where, in principle, we can explain the behaviour of liquid substances via an understanding of how molecules behave, which can in turn be explained by how atoms behave, and so on. Liquidity itself need never be mentioned: the basic laws will tell us how certain kinds of substances will coalesce and flow, etc. We can "eliminate" liquidity in a way that consciousness, it seems, cannot be eliminated.
All of this seems to entail that laws relating to the circumstances in which consciousness emerges from the functioning of some kinds of complex material substrates will have to refer to consciousness itself. Consciousness is not something that can be eliminated from the most basic equations. This, in turn, suggests that there are fundamental psychophysical laws that cannot be reduced to laws that do not mention consciousness.
But why is that so counterintuitive? It looks as if consciousness depends on matter, as if the nature of the dependence is lawful, and as if laws that never actually mention consciousness could not describe the dependence all by themselves. Does this not suggest that the laws governing the natural universe include irreducibly psychophysical ones?
Perhaps the worry is that these laws will be nothing like the fundamental laws of physics relating to, say, quantum events or the shape of space-time, but I'm not sure that that would trouble us if we were better placed to know more about what the psychophysical laws actually are. If we had some kind of handle on that, they might not seem any more intrinsically bizarre than anything else that science has discovered over the past 400-odd years.
The great difficulty that we face is that no one is in a position to observe consciousness directly (except his or her own). That makes it pretty much impossible to conduct experiments in which we predict that consciousness will be brought into being by such and such physical systems (in accordance with such and such conjectured psychophysical laws). Isn't that, however, just an epistemic limitation that we contingently labour under, no different in principle from the obvious fact that we are not well placed, epistemically, to determine such things as whether an alligator or an oyster is conscious? Yet, we accept the latter limitation on our knowledge and our ability to make progress. We may not like it, but we accept that it exists.
If all this reasoning goes through, the so-called Hard Problem of how to explain consciousness is, indeed, very difficult, but the difficulty is not that mind-boggling, entirely unknown, metaphysical concepts are needed. It is simply that, as an empirical fact, we are poorly situated to investigate (at least in any systematic manner) what regularities apply to the emergence of consciousness. Consciousness may be as much a part of the natural world as anything else, and as open, in principle, to causal explanation in terms of general laws. And yet, we might be in a situation where we are not well-placed to conduct the investigation and work out exactly what those laws are.
That would be unfortunate, of course, since it would mean that some scientific investigations would have to be recognised as very difficult for human beings to pursue to finality (indeed, it is hard to see how they could ever be pursued to finality, at least by us ... but who knows, in advance, the limits of our ingenuity?). It's unfortunate, yes. However, it doesn't seem especially counterintuitive or spooky. It is not letting in the supernatural, or anything as hard to conceive of as objective prescriptivity. It does not, for example, entail that full consciousness and all the associated cognitive capacities spring back into existence after a badly damaged brain is totally destroyed, that we are immortal, that there are substantial things ("minds") with no location in space, or more generally that consciousness is a substance that could survive independently of any material substrate (such as organised masses of neurons).
It would just be admitting that consciousness, though a part of the natural universe, cannot be eliminated from our most basic descriptions of how the universe operates, while also admitting something that seems plain anyway: the great contingent difficulty in working out for sure what physical systems are actually conscious and what systems are not. I could live with that. In fact, those admissions seem reasonably intuitive to my particular scientifically-aware sensibility.
This is a rare foray into philosophy of mind by someone who is relatively naive about the field, but I wonder what is actually wrong with the above analysis. Even if what I have sketched is a kind of dualism - a property dualism in that a new category of properties is being considered basic within the most fundamental laws governing the universe - I'm not sure that that is beyond the pale for philosophical naturalists. I am, of course, saying that there could be different degrees to which we are well- or ill-placed to investigate certain kinds of observed phenomena, but that does not seem very surprising, really, and it does not seem to constitute a betrayal of naturalism or of anything that I'd want to build on it.
16 comments:
Explaining how "liquidity" arises from intermolecular interactions is not, in fact, easy, as anybody who has studied kinetic theory can tell you. There are plenty of counterintuitive results in statistical mechanics, particularly when you get into critical point behavior.
Briefly put, there's a temperature above which liquid and gas are indistinguishable; you can turn one into the other by varying the pressure, but you never see a sharp phase transition (like boiling or condensation). Near this "critical point" of temperature and pressure, many different materials are observed to behave in exactly the same way. All their graphs collapse onto the same curve, and the details of their molecular compositions don't matter anymore. The set of all phenomena which have identical critical-point behavior is called a universality class.
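To make the "collapse onto the same curve" talk a bit more concrete, here is a toy sketch in Python (mine, with only illustrative constants, and using the humble van der Waals equation rather than anything as fancy as the renormalization group): written in reduced variables, the van der Waals equation of state is the same curve for every fluid, whatever its molecular constants a and b happen to be, which is the old "law of corresponding states".

```python
import numpy as np

R = 8.314  # gas constant, J / (mol K)

def vdw_pressure(v, T, a, b):
    """Dimensional van der Waals pressure for molar volume v and temperature T."""
    return R * T / (v - b) - a / v**2

def reduced_isotherm(a, b, T_r, v_r):
    """Reduced pressure p / p_c along an isotherm, in reduced volume and temperature."""
    v_c, p_c, T_c = 3 * b, a / (27 * b**2), 8 * a / (27 * R * b)
    return vdw_pressure(v_r * v_c, T_r * T_c, a, b) / p_c

v_r = np.linspace(0.5, 5.0, 200)   # reduced molar volume
# Two fluids with quite different molecular constants (values are only illustrative):
curve_1 = reduced_isotherm(a=0.14, b=3.9e-5, T_r=1.2, v_r=v_r)   # a "nitrogen-ish" fluid
curve_2 = reduced_isotherm(a=0.55, b=3.0e-5, T_r=1.2, v_r=v_r)   # a "water-ish" fluid

print(np.allclose(curve_1, curve_2))  # True: plotted in reduced units, the two curves collapse
```

Real critical-point universality is a much stronger (and stranger) statement than this mean-field toy, but it shows the flavour of "the details of the molecular composition stop mattering".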
(Aside: Some people are tempted to say that the renormalization-group methods used to study "critical phenomena" constitute a non-reductionistic science, or at least a science of reduced reductionism. I don't get what they're on about. Philosophically, when we talk about an electron, for example, we're using an idea which has proved useful in describing the results of many, many experiments; all these phenomena have the same equation, roughly speaking. The same holds true with critical phenomena. We're still "reducing" many disparate, complicated situations to a comparatively brief common description. The only difference is that the common description happens not to involve certain tiny pieces.)
Now, if I met a philosopher who said that the renormalization group and conformal field theory of critical-point behavior were "obvious" consequences of the ways molecules behave, I'd think that philosopher was smoking teh crack.
Fodor, Strawson and company need a course in statistical physics, or if that is too much to ask, a little Hofstadter. To wit:
We can conceptually describe a system — say a leak-proof box full of gas atoms — by specifying the position and velocity of every darn atom. This description would have something like 10^24 variables for a gram of hydrogen gas (even something so simple as that!). Working with 10^24 variables is not, to say the least, easy. So, we try to make progress by making a few simplifying assumptions. One extremely useful trick is to assume that all configurations of equal energy are equally likely. Start with that, and boom! you're in business, although justifying that initial statement is actually not so easy.
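If it helps to see that assumption doing actual work, here's a toy illustration of my own: two small "Einstein solids" share a fixed amount of energy, every microstate of the combined system gets equal weight, and the macroscopic division of energy comes out sharply peaked at equipartition, purely from counting.

```python
from math import comb

def multiplicity(n_oscillators, quanta):
    # Number of ways to distribute `quanta` of energy among `n_oscillators` (stars and bars).
    return comb(quanta + n_oscillators - 1, quanta)

N_A, N_B, Q = 300, 200, 100   # two toy solids sharing 100 quanta of energy
total = sum(multiplicity(N_A, qa) * multiplicity(N_B, Q - qa) for qa in range(Q + 1))

for qa in (0, 20, 40, 60, 80, 100):
    p = multiplicity(N_A, qa) * multiplicity(N_B, Q - qa) / total
    print(f"P(solid A holds {qa:3d} of the quanta) = {p:.3e}")

# The distribution peaks near qa = 60, i.e. energy shared in proportion to the
# number of oscillators: equipartition, obtained from nothing but equal weighting.
```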
The goal of statistical physics is not exactly to "eliminate" notions like "liquidity" or "gaseousness". After all, those notions are bloody damn useful; we'd just like to understand where they come from and, maybe, where their applicability has its limits. So, what we're really doing when we look at micro- or nano-scale behavior is trying to open the black box. If we understand what's going on inside, that's great, but after gaining this new insight, we often put the fine-scale behavior back in the black box and deal with the large-scale description, having availed ourselves of a new confidence in that "abstracted" description and the limits of its viability.
A concept like "liquidity" emerges with a certain weighty inevitability from the microscopic laws of physics — indeed, from laws which do not speak of "liquidity" themselves — but the fact of that emergence doesn't make us stop talking about liquids.
"Sure, an engineer could write down untold numbers of equations describing the position and momentum of every single molecule of gasoline and air in a car's cylinder, but 'pressure' isn't about this molecule or that one. It's an epiphenomenon that a very large number of different microstates give rise to. If we talk about statistical mechanics instead of thermodynamics we're missing the forest and the trees alike for the leaves," says John Armstrong.
Then there is the issue that we can modify different parts of our psychology by damaging different parts of the brain or eating different mushrooms. While we do not understand the full implications of these actions, scarring Broca's area or severing the corpus callosum is predictably different from a prefrontal lobotomy, and a 2C-B trip is consistently different from an LSD one. Consciousness is so contingent!
And, of course, there is the nagging worry which stems from realizing just how small a part of the Cosmos our "consciousness" truly is. Even the kind of matter of which we are made comprises the merest fraction of the Universe's total energy content, and the overwhelming majority of our kind of matter is not engaged in anything resembling conscious activity. Furthermore, our kind of matter did not exhibit this particular emergent behavior until, conservatively speaking, billions of years had passed. Can we claim "basic law" status for something so provincial?
Personally, I think the emphasis on "consciousness" in the philosophy of mind is misplaced. So much is going on in the brain which isn't "consciousness"! For a quick introduction to the experiments which support this, I'd recommend Timothy Ferris' The Mind's Sky (I'm not sure if there's a newer book on the same topics).
Exercise: In the following paragraphs, "consciousness" has been replaced by "photosynthesis". Which version, if either, is more compelling? Discuss.
All of this seems to entail that laws relating to the circumstances in which photosynthesis emerges from the functioning of some kinds of complex material substrates will have to refer to photosynthesis itself. Photosynthesis is not something that can be eliminated from the most basic equations. This, in turn, suggests that there are fundamental botano-physical laws that cannot be reduced to laws that do not mention photosynthesis.
But why is that so counterintuitive? It looks as if photosynthesis depends on matter, as if the nature of the dependence is lawful, and as if laws that never actually mention photosynthesis could not describe the dependence all by themselves. Does this not suggest that the laws governing the natural universe include irreducibly botano-physical ones?
To be honest, I see Fodor, Strawson and company as arguing from ignorance. They think they understand a physical description of matter (although I find their take on statistical physics quite mistaken), but because we can't put the ill-defined notion of "consciousness" on the same (mistaken) footing, they go veering off to panpsychism.
But Blake, you say (despite everything) that "A concept like 'liquidity' emerges with a certain weighty inevitability from the microscopic laws of physics — indeed, from laws which do not speak of 'liquidity' themselves ..." That is all I mean when I talk about elimination in this context (whatever anyone else means in other contexts): there is no reason to talk about "liquidity" in the fundamental laws, however useful the concept is for many other purposes and however much we keep talking about liquids for all those purposes.
The philosophical problem is that we do not seem to be able to replace the word "liquidity" (where it appears in what I just quoted from you and placed in italics) with the word "consciousness" and still have a true proposition. Or at least it's very difficult to see how that could be the case.
If that is right, then I don't think your photosynthesis example works: the claims that I made about consciousness in my original post, which still seem to me to be plausible even within a scientific view of the world, do not seem to be plausible when it comes to something like photosynthesis. If it's not right, I'd like to be enlightened on why it's not. I'd be very happy to be enlightened in that way, because that would actually suit my materialist instincts.
I like your point about provincialism, though.
Philosophy and I are like ships that pass in the night (with the odd broadside exchanged now and then), so I'm probably bound to get into terminological trouble. For example, take your definition of "elimination".
That is all I mean when I talk about elimination in this context (whatever anyone else means in other contexts): there is no reason to talk about "liquidity" in the fundamental laws, however useful the concept is for many other purposes and however much we keep talking about liquids for all those purposes.
I would have employed "elimination" in a stronger sense. At the scale and in the circumstances where Aristotle used "natural" and "violent" motions to try explaining things, Galileo, Newton and all that lot showed that forces and inertia were a better way to go; Aristotelian concepts of motion were forcibly expelled. In the circumstances where Maxwell thought the luminiferous aether was necessary to transmit light, Einstein showed that it wasn't. I would have said that Einstein eliminated the aether from our description of the physical world. (And a good thing, too, because as Hunter S. Thompson reminds us, there is nothing so helpless and depraved as a man in the depths of a luminiferous aether binge.) It's not that the aether remained a reasonable description on human or planetary distance scales while turning out, at the atomic level, to be made of lots of tiny particles — no, we found that it ain't there at all. Even in the circumstances where it was originally invoked, it's not necessary to get the right answers, and in fact it leads you into getting the wrong answers.
I had thought this was the sense in which eliminative materialists used the word, but like I said, "ships that pass in the night". From now on, I'll try to stick to your definition, so we can at least have the illusion of progress. ;-)
So, now that we have a definition of "eliminate", we should move on to codifying what we mean by fundamental and law. I think there's a big point here; I'll try expanding on my "photosynthesis" example first before moving on to more general concerns.
Given a thorough training in physics, one could be handed a sketch of a chlorophyll molecule, do some calculations and say, "Okay, this molecule will absorb light at such-and-such wavelengths, and when it absorbs energy, the electrons will move in this fashion, and that absorbed energy could be stored for later use in thus-and-so a way." That's a rough version of a physicist's take on chlorophyll.
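For the flavour of that first step, here's a back-of-the-envelope sketch, using the commonly quoted rough positions of chlorophyll a's two main absorption bands (treat the numbers as ballpark): the photon energies at those wavelengths are easily in the right range for shuffling electrons around in a pigment molecule.

```python
h  = 6.626e-34   # Planck constant, J s
c  = 2.998e8     # speed of light, m/s
eV = 1.602e-19   # joules per electron-volt

# Rough peak wavelengths of chlorophyll a's blue and red absorption bands.
for label, wavelength_nm in [("blue band", 430), ("red band", 660)]:
    energy = h * c / (wavelength_nm * 1e-9)
    print(f"{label}: {wavelength_nm} nm -> {energy / eV:.2f} eV per photon")

# Roughly 2.9 eV and 1.9 eV per photon: the right ballpark for the electron
# excitations described above, and nothing more specific than that.
```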
One could not start with the rules of quantum mechanics and the properties of the chemical elements and thence derive the existence of chlorophyll molecules! Physics allows many ways of harvesting light for energy, but nothing (in our current understanding) selects the particular way that flowers, algae and so forth do it on this Earth. We certainly couldn't predict the particular mix of chlorophylls, xanthophylls and carotenoids found in our biosphere. In fact, a physicist from another world, told the basic facts about our local star, would probably try to construct a photosynthetic apparatus which was some other color than green, to more effectively absorb the light from a class-G sun.
Physics allows many possibilities; biology chooses one or a few using the myriad accidents of evolution.
Contrast this with the case of fluid flow. Something like hydrodynamics arises in a very simple situation: an aggregate of atoms, weakly interacting with one another but largely free to move without hindrance. Next to the case of no interactions at all, this is about as simple as you can get: all the atoms are the same, none of them have internal structure, and so forth. This is the sort of scenario one would invent to test the most basic predictions of a theory, and for a wonder, we do see it represented in the physical world.
No doubt there's a better way to put it, but I think there's much more inevitability in the latter situation than in the former. And I think the former is much more analogous to the study of human mentality.
Next, I'd like to poke the concept of "liquidity" a little more thoroughly. Let's say, hubristically, that we understand liquidity, or that we have a good grasp on "fluid flow". I could give several reasons why we don't — solving the Navier-Stokes Equations is a million-dollar riddle, and life at low Reynolds number is deliciously weird — but let's say that fluid flow is a rather well-understood, tractable problem.
One fact about life is that lots of materials can "flow". Water gushes out of a pipe, sure, but sand can slip through fingers or an hourglass, and jellybeans can pour from a scoop into a plastic bag, ready to be eaten. The flow of sand is, in a certain regime, like the flow of cashews, or jellybeans, or even water. Although sand is more likely to "jam" or to make piles on a flat surface, we can recognize that a description of flowing water may also, under some conditions, be applicable to sand or to snack food.
This was one of the first scientific applications of parallel computing. As Feynman says in Most of the Good Stuff, he and his colleagues at Thinking Machines created a simulation of fluid flow by starting with a virtual "ball bearing", an object which could rotate in any one of six directions. By putting a whole bunch of these bearings in a regular array, they could mimic fluid-flow behavior in a simulation. It's interesting and useful because the flow of real ball bearings — and real water, real sand or real jellybeans — is like that of the simulated variety.
In other words, despite their differences on the small scale, all these objects can be "fluidified" when combined in aggregates and observed on a larger scale.
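For anyone who wants to poke at this, here's a minimal sketch of a lattice-gas cellular automaton in that spirit. To keep it short I've used the HPP model (a square lattice with four velocity directions) rather than the six-direction hexagonal model the Thinking Machines people actually used, so take it as an illustration of the idea, not a reconstruction of their simulation.

```python
import numpy as np

H, W = 64, 64
rng = np.random.default_rng(1)

# cells[d, y, x] == True means "a particle at (y, x) moving in direction d";
# directions: 0 = east, 1 = north, 2 = west, 3 = south.
cells = rng.random((4, H, W)) < 0.2
cells[:, :, : W // 4] |= rng.random((4, H, W // 4)) < 0.5   # a denser stripe to watch spread

def step(cells):
    e, n, w, s = cells
    # Collision rule: exactly two head-on particles at a site scatter by 90 degrees.
    ew = e & w & ~n & ~s
    ns = n & s & ~e & ~w
    e, w = (e & ~ew) | ns, (w & ~ew) | ns
    n, s = (n & ~ns) | ew, (s & ~ns) | ew
    # Streaming: every particle hops one site in its direction (periodic box).
    return np.array([
        np.roll(e, 1, axis=1),    # east
        np.roll(n, -1, axis=0),   # north
        np.roll(w, -1, axis=1),   # west
        np.roll(s, 1, axis=0),    # south
    ])

print("particles before:", int(cells.sum()))
for _ in range(200):
    cells = step(cells)
print("particles after: ", int(cells.sum()))   # conserved exactly, like mass in a fluid

density = cells.sum(axis=0)   # coarse-grained "fluid" density per site
print("density mean/std:", density.mean(), density.std())   # the dense stripe has spread out
```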
So, the laws of fluid motion are not just a large-scale description of one, particular system — say, a collection of water molecules — but are the laws governing the emergent behavior of many systems, each with their own distinct "atomic" properties.
Are those laws then "fundamental"?
In the parlance of most modern physicists, the answer would have to be "no", since "fundamental" is reserved for the laws governing the smallest-scale behavior. Equations such as those for fluid flow may be more inescapable, however, than lower-scale laws, since they apply to so many "fundamentally" disparate systems (perhaps in different regimes for each system).
Moving closer to the notion of "consciousness", let's consider computation. Back in the olden days, Alan Turing showed that anything which people wanted to call a "computation" could be done by a certain type of surprisingly simple machine (one whose most demanding characteristic was that its memory could always be expanded when necessary). Furthermore, Turing showed that all of these machines could be emulated, down to their last detail, by a "Universal" machine. That is, a Universal Turing Machine with a suitable program can do anything a particular Turing Machine can do.
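To show how little machinery is involved, here's a tiny Turing-machine simulator (my own sketch, nothing canonical about it): a finite table of rules, a read/write head, and a tape that can grow as needed. The sample rule table just increments a binary number, but the simulator doesn't care what table you feed it.

```python
def run_tm(rules, tape_str, state, blank="_", max_steps=10_000):
    tape = dict(enumerate(tape_str))   # unbounded tape; only non-blank cells are stored
    head = 0
    for _ in range(max_steps):
        symbol = tape.get(head, blank)
        if (state, symbol) not in rules:          # no applicable rule: the machine halts
            break
        state, write, move = rules[(state, symbol)]
        tape[head] = write
        head += move
    lo, hi = min(tape), max(tape)
    return "".join(tape.get(i, blank) for i in range(lo, hi + 1)).strip(blank)

# Rules for binary increment: walk right to the end of the number, then walk
# left turning trailing 1s into 0s until a 0 (or a fresh blank) takes the carry.
INCREMENT = {
    ("right", "0"): ("right", "0", +1),
    ("right", "1"): ("right", "1", +1),
    ("right", "_"): ("carry", "_", -1),
    ("carry", "1"): ("carry", "0", -1),
    ("carry", "0"): ("halt",  "1",  0),
    ("carry", "_"): ("halt",  "1",  0),
}

print(run_tm(INCREMENT, "1011", "right"))  # -> 1100  (11 + 1 = 12)
print(run_tm(INCREMENT, "111",  "right"))  # -> 1000  ( 7 + 1 =  8)
```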
Now, even in the abstract, there are many ways to build a Turing Machine. Feynman gives one possible UTM "state diagram" (essentially a flowchart for its operation) in his Lectures on Computation, and he points out that with sufficient cleverness, the design could be simplified. If one is allowed more states, one can use fewer symbols on the machine's memory tape, and vice versa. As Matthew Cook demonstrated, a certain fairly simple cellular automaton could be capable of universal computation.
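Cook's machine is even easier to write down: the entire "physics" of elementary cellular automaton Rule 110 is the eight-entry lookup table in the sketch below, and that turns out to be enough for universal computation (though persuading it to do anything useful is another story).

```python
RULE = 110
# Map each three-cell neighborhood (left, center, right) to the next state of the center cell.
TABLE = {tuple(int(b) for b in f"{n:03b}"): (RULE >> n) & 1 for n in range(8)}

def step(row):
    padded = [0] + row + [0]              # fixed zero boundaries
    return [TABLE[tuple(padded[i - 1:i + 2])] for i in range(1, len(padded) - 1)]

row = [0] * 60 + [1]                      # start from a single live cell at the right edge
for _ in range(30):
    print("".join(".#"[c] for c in row))
    row = step(row)
```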
All of the essential properties of Universal Turing Machines hold equally well for any of these conceptual instantiations. Knowing about any one of them, one could prove the undecidability of the halting problem or deduce the connection to Gödel's Incompleteness Theorem.
Then, when one tries to build a Turing Machine in the real world, the possibilities proliferate even more! We can try to make a UTM out of Tinkertoys, for example, or vacuum tubes, perhaps even out of silicon. If we prefer the cellular automaton, we can try to make a CA board game out of mechanical pieces, as in David Brin's Glory Season (1993). There exist lots of ways to do the job, but if we succeed at any of them, we'll be caring about the computations being done, not the particular motions of matter necessary.
(We could even go Plato one better and idealize the ways our physical system fails to match its "ideal form", using perhaps Claude Shannon's theory of communication to understand errors in transmissions.)
Now, in what sense are the behavior patterns of a Universal Turing Machine "fundamental"? They certainly don't depend directly upon the smallest-scale physical laws. As somebody (Ken Wilson?) once said, "You don't need to know quantum gravity to bake a pie." In fact, trying to consider the system at a low level can be counterproductive, precisely because many different configurations only distinguishable at the microscopic scale correspond to the same state of the computer. Many microstates belong to the same macrostate, in physics jargon.
Furthermore, knowing about UTMs doesn't let you make predictions about too many physical systems. It's a very provincial kind of knowledge, if you're talking in terms of the number of biological, geological or astrophysical objects you can understand.
My best guess is that "consciousness" is like a UTM, but more so.
Oops, hit "publish" too soon.
Short version: a "fundamental" assertion or central theorem in the study of computation, such as the claim that a certain set of interacting abstract components can perform any possible computation, is not the same thing as a "fundamental" physical law, like the conservation of energy.
It's going to take me a while to absorb all that, Blake. While I'm doing so, I should say that my use of the word "eliminate" and its cognates is not intended to be standard and may therefore be confusing. I just need a word for the concept that some properties arise in the way that you attribute to liquidity (as I understand you) but not to photosynthesis (as I understand you), i.e. a word to describe what I took to be the common situation where the deepest explanation of how we find that property in the world will involve physical laws that do not refer to the property itself.
As a bystander (pretty much) to arguments about consciousness, it seems to me that the crucial question is whether consciousness differs from other higher-level, readily-observable properties in that regard. You seem to be saying that it differs from some, but not from others - e.g. from liquidity but not from photosynthesis. If that's right, it seems that there is a sense in which properties such as photosynthesis are, indeed, fundamental, i.e. they cannot ultimately be explained by properties that are "lower down", and so they are emergent in a quite strong way. If that is true, the whole consciousness problem seems to be defused, if not strictly solved. Consciousness turns out to be no more problematic than, say, photosynthesis and we just need to know what to say about the general class of such properties. We may be left with two classes of properties but not a mental-physical property dualism. To me, that would be a surprising outcome, as it seems all too simple, but I've been surprised before and I'm open to such an account.
It's good to have a real physicist here to talk to. I wish I could tempt a real philosopher of mind to come here and talk about it. I'd like to see what's really on their minds, as it were, with all this, as I may have set up the issue in totally the wrong way from the point of view of someone like David Chalmers, in which case I may be wasting your time.
Russell wrote:
"Even if what I have sketched is a kind of dualism - a property dualism in that a new category of properties is being considered basic within the most fundamental laws governing the universe - I'm not sure that that is beyond the pale for philosophical naturalists."
Quite right, there's no antinomy between naturalism and dualism, and indeed Chalmers describes himself as a naturalistic dualist. Naturalism simply accepts what the best science has to offer, and if there are two sorts of fundamental entities or properties in nature, so be it.
But psycho-physical laws will be unsatisfying since they won't be able to transparently connect the categorically mental to the categorically physical. They'll get stuck with the problem of all dualisms, property or otherwise: how does the mental affect (or arise from) the physical, and vice versa? They will only describe correlations. So a naturalistic dualist like Chalmers won't supply what I'd find to be a satisfying explanation of consciousness.
A satisfying naturalistic explanation must somehow transparently connect the mental and physical, perhaps subsuming them as special cases of some third thing. One candidate for this third thing is representation, in that qualitative phenomenal consciousness might be what it's like to exist as a sufficiently ramified but still recursively limited representational system (see www.naturalism.org/kto.htm for one shot at this thesis with lots of references). And what's physical is of course what science traditionally represents as the fundamental constituents of nature. Thus the mental and physical can be understood as special cases of representation, one personal and private, the other impersonal and public.
If such is the case, then there's nothing fundamental about consciousness; rather, it emerges, like liquidity, though not at the physical level of description but at the representational level. Likewise, there's no ultimate, fundamental nature of the physical, only a set of concepts within our best unifying theory, a theory that, of course, is in the business of representation. Science cum philosophy might eventually show it's representation all the way down and all the way up, inside and outside.
Or so a unificationist like myself sometimes supposes…
Thanks for your input, Tom. Btw, I bought a copy of your book and read it last week. (For anyone who may be interested, I'll give it a plug: Encountering Naturalism: A Worldview and Its Uses. I enjoyed it, think I generally agree with it, and will return to it.)
On the immediate topic, your comments about interaction are important. It's one thing to say that certain physical systems have the property of being conscious and that there is some kind of lawful regularity (at whatever "level") about which systems have that property. It's another thing to see just how consciousness has any causal powers that can feed back and affect matter. It looks to me as if the kind of position Fodor is talking about will have some tendency to push us towards epiphenomenalism, which may seem wildly counterintuitive ... though what this discussion is tending to confirm for me is that it's difficult to see any intuitive position in philosophy of mind. Part of me thinks that that could make the whole field almost a waste of energy, which is pretty uncharitable of me, of course, but I suppose it's a natural response for someone who wants to do applied philosophy with political implications; however, another part of me thinks that it might be a reason to take more interest in its puzzles than I have done in the past.
Thanks for the plug!
The mind-body problem is the last redoubt of dualism, so it's worth tangling with on those grounds alone, plus it's endlessly fascinating and frustrating. But I'm with you on wanting to apply philosophy to social issues.
Note that it’s the *folk* theory of the mental vs physical that gives us the hard problem. The final theory should dissolve it, but of course it might do so by replacing or modifying or eliminating elements of the folk theory. We can’t suppose that our original, familiar concepts of the mental and physical will necessarily survive, in which case it may well *seem* as if the hard problem hasn't really been solved since those concepts are still operative in our everyday thinking. Thus, as you say, it may be that the solution to the hard problem won’t ever seem intuitive. We’ll still have the conflict between the conception of the qualitative/conscious and the conception of the extended/material and there’s no immediate or obvious or intuitive way to reduce one to the other, *given those concepts*. But the final theory might involve concepts that show how the folk notions do in fact reduce or get explained by something that is common to or underlying both.
Note also that it’s the *theory* that adjudicates between concepts, so that if there’s a dissonance between the pretheoretical mind-body distinction and what the theory says, the pretheoretical concepts have to go. Very much like our commonsense notions of space and time have to yield to the theoretical conceptions. So the ordinary mental-physical distinction might not be fundamental, but merely an artifact of how we’re constructed to experience and understand the world. Same thing, perhaps, for our firm conviction about the causal role of consciousness.
OK, this was apt (and unexpected) enough that I figured I should mention it here. Washington Post journalist Joel Achenbach writes in an online addendum to his most recent column,
The secret of everything is geometry. The structures have properties. A protein molecule performs its functions not because of any intent or 'power,' but because of how it is folded in three dimensions, and how, in that shape, it fits into various receptors in the body, like a key in a lock. If enough of these properties are in place, and are dynamic in certain ways, we can venture that we're dealing with something that meets our definition of being alive. Life, the exobiologist David Des Marais told me, is "an emergent characteristic that is fundamentally structural in nature." By "emergent" he means that life has no singular cause, like a life force, but is, rather, an overall trait of the system. A good analogy is that, when you get a bunch of water molecules together, you have something that has the property of being wet. No single molecule is even damp....
Coming here late, but this seems to be just the classic reductionist/emergent issue, with consciousness as the topic. It is more general than that. The issue is deciding whether certain physical rules can be used to predict all others. Is there some small subset of rules, like the Standard Model of particle physics, that can be used to predict all the other rules? It has been seen that these small sets of rules (laws) cannot be used this way.
As a metaphor, consider the case of high-level computer languages, with the machine language (ML) of the particular computer you are using as the equivalent of the Standard Model. You could NOT use the ML to predict the syntax of those high-level languages, ever, since there are infinite possibilities and the actual ones taken are contingent on history. The thing is, the high-level languages and rules must be CONSISTENT with the machine language.
Similarly, high-level physical rules, like those of photosynthesis, must be consistent with lower-level rules. You have to be able to go down the levels and remain consistent, but in many cases you could not have predicted what the higher-level rules would be, since they are contingent on history. This idea is built into physical science models all the way down to spontaneous symmetry breaking in particle physics.
I see no reason we cannot, in theory, understand consciousness in the same way. Suppose we get better and better tools to "see" brain activity at smaller and smaller scales, approaching the tens-of-neurons level, with the ability to imitate or affect synapse firing. We do experiments and see exactly what activity happens whenever a subject says she is thinking of a triangle. There is a given pattern of firings: it may not be in the same set of cells, it may have all sorts of different probabilistic rules associated with it, but there is a pattern we can pick out that 95% of the time means the person is thinking of a triangle. Now, as experimenters, we start using our devices to create patterns that meet our rules and, wow, in 95% of cases the subjects report that they are suddenly thinking of triangles. Continuing, we find patterns of firing that must be there whenever someone says they are awake. We disrupt them and there is always loss of consciousness. We learn to subtly change them and find we can manipulate conscious state, memory, etc.
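Just to illustrate what I mean by the 95% talk, here is a toy sketch with entirely made-up data: simulate firing rates for a few dozen model neurons, let "thinking of a triangle" shift a noisy, distributed subset of them, and see how often a simple nearest-centroid decoder picks the pattern out on held-out trials. None of this is real neuroscience; it is only the shape of the imagined experiment.

```python
import numpy as np

rng = np.random.default_rng(7)
n_neurons, n_trials = 40, 2000

# Made-up baseline firing rates plus a weak, distributed "triangle" signature.
signature = rng.normal(0.0, 1.0, n_neurons)
labels = rng.integers(0, 2, n_trials)                   # 1 = "triangle" trial
rates = rng.normal(5.0, 2.0, (n_trials, n_neurons))     # noisy baseline activity
rates += np.outer(labels, signature)                    # add the signature on triangle trials

# Fit the simplest possible decoder on the first half, test on the second half.
train, test = slice(0, 1000), slice(1000, None)
centroid_yes = rates[train][labels[train] == 1].mean(axis=0)
centroid_no  = rates[train][labels[train] == 0].mean(axis=0)

d_yes = np.linalg.norm(rates[test] - centroid_yes, axis=1)
d_no  = np.linalg.norm(rates[test] - centroid_no,  axis=1)
predicted = (d_yes < d_no).astype(int)

accuracy = (predicted == labels[test]).mean()
print(f"decoder accuracy on held-out trials: {accuracy:.1%}")   # roughly the mid-90s here
```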
Would we now understand consciousness, and would there be a reason to talk about duality or emergence in a different sense than for photosynthesis anymore? Consciousness would have physical equivalents. The consciousness "particles" would be patterns and sets of signals. All would be consistent with the Standard Model (let's say, anyway). I don't see any duality left in this scenario.
I don't have any trouble agreeing with Strong AI -- you could wire up lots of x86 boxes, put the right software on them, and they'd perfectly emulate human behavior. (Or other complex phenomena like liquidity or a Red Sox game.)
I'm also aware that the physical stuff going on in the brain is linked with thoughts and behaviors; I discover it anew every time I take in some caffeine. It's reasonable to say the mind is a product of the brain.
But I know I have subjective experience. I don't have any good reason to think it's necessary to produce my behavior, or that it's useful for anything at all. Other people can't observe it in me, because I could act just the same without it.
So I can reasonably wonder whether a particular x86 simulation of me has subjective experience, just like I wonder whether it has other traits of mine that aren't essential for emulation, like consisting mostly of water.
In principle, subjective experience could be made easy to observe, toy with, and study. Going in the direction of markk's comment, we could get really good at manipulating brains and discover a targeted way to flip on and off subjective experience, so I could be fully functional but whatever process currently determines that I do have subjective experience would determine that I didn't.
Or we might reach the time when we have multiple implementations of autonomous, self-interested AI based on wildly different strategies. We could discover that some implementation strategies produce critters that claim subjective experience and others don't. Notably, that could happen even if subjective experience is deeply tied in with how brains work and can't be "switched off" without making the brain nonfunctional.
I'm not trying to speculate about what will actually happen here; I'm just saying that subjective experience could in principle be observed and toyed with, giving us a better path than we have now to explaining it in terms of atoms or whatever.
So, my conclusion: we can't treat subjective experience as nearly as "solved" as Strong AI, even though a lot is known about computation and some is known about brains. But nor is subjective experience something inherently metaphysical that we can never observe and understand.
If that's right, it seems that there is a sense in which properties such as photosynthesis are, indeed, fundamental, i.e. they cannot ultimately be explained by properties that are "lower down", and so they are emergent in a quite strong way.
The words "fundamental" and "emergent" pretty much mean "I don't understand scientific explanation and reduction". We don't explain phenemena by lower down properties, but rather in terms of lower down properties. What's missing in the misunderstanding is structure -- higher level entities result from the structuring of lower level entities, and that structuring must be included in the explanation, not just the components and their properties. Car travel is not "fundamental" or "emergent" but rather can be explained in terms of components and their relationships. Photosynthesis is no different in this regard.
I don't have any good reason to think it's necessary to produce my behavior, or that it's useful for anything at all.
That strongly suggests that you haven't spent a lot of time trying to come up with reasons and haven't read much of the cognitive science literature. You might try Marvin Minsky's new book, "The Emotion Machine", as one source of good reasons.
I don't have any trouble agreeing with Strong AI -- you could wire up lots of x86 boxes, put the right software on them, and they'd perfectly emulate human behavior.
That's not Strong AI. Strong AI is the thesis that the box (or some box) would have the same mental states as a human -- "subjective experience" and all that.
It's one thing to say that certain physical systems have the property of being conscious and that there is some kind of lawful regularity (at whatever "level") about which systems have that property. It's another thing to see just how consciousness has any causal powers that can feed back and affect matter.
That's a category mistake. Properties of objects do not, per se, have causal powers, but the object has the causal powers it does as a consequence of its properties.