It's not common to see an argument that X is unlikely by analogy to an unlikely Y, where Y, if true, would entail X (among the other things Y entails). It's more common to see a straightforward fallacy of division.
An example of the fallacy of division: the fleet can't go faster than 30 knots, therefore no ship in it can go faster than 30 knots.
Hanson does not commit the fallacy outright, but he uses an analogy that is weak in the same way that a firm conclusion drawn from it would be fallacious. He writes:
"But what if our meagre success was because we just didn’t have the right grand unified theory of betterness? What if someone someday discovered the basics of such a theory? Well then this person might use his basic betterness theory to make himself better in health, wealth, sexiness, organization, work ethic, etc. More important, that might help him make his betterness theory even better. ... [W]e seem to have little reason to expect there is a useful grand unified theory of betterness to discover. ... But a bunch of smart well-meaning folks actually do worry about a scenario that seems pretty close to this one. Except they talk about “intelligence” instead of “betterness.” ... I put the word “intelligence” in quotes to emphasize that the way these folks use this concept, it pretty much just means “betterness.” (Well, mental betterness, but most of the betterness we care about is mental.)"
There is no reason to believe there is a theoretical concept of "betterness" that one could harness to improve itself: "betterness" seems too ill-defined and diffuse, and too dependent on choices that may be arbitrary. However, if there were such a theory, it would contain as a component the sub-theory of "better intelligence", that is, "betterness" as applied to intelligence.
If there is no theory of better intelligence, i.e. no general theory of inductive reasoning, then there can be no general theory of betterness; equivalently, were there a general theory of betterness, there would certainly be a theory of better intelligence. We may take the sheer implausibility of a general theory of betterness as weak evidence against each specific theory of "better X". However, there may still be a valid theory of "better X" for any particular X.
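To see why this evidence is only weak, here is a toy Bayesian calculation. The numbers are invented purely for illustration; the only structural input taken from the argument above is the entailment (a general theory of betterness would entail a theory of "better X", so no "better X" theory guarantees no general theory).

```python
# Toy Bayes update (all numbers invented for illustration): why "no grand
# theory of betterness" is only weak evidence against a specific "better X".
p_x = 0.5            # prior that a theory of "better X" exists
p_noY_given_x = 0.9  # even if "better X" exists, a grand unified theory
                     # can still fail to exist for many other reasons
p_noY_given_notx = 1.0  # the grand theory entails "better X", so
                        # no "better X" guarantees no grand theory

p_noY = p_noY_given_x * p_x + p_noY_given_notx * (1 - p_x)
p_x_given_noY = p_noY_given_x * p_x / p_noY
print(round(p_x_given_noY, 3))  # 0.474: a small downward shift from 0.5
```

The update is small precisely because the absence of a grand theory was already nearly certain on other grounds; only the gap between the 1.0 and 0.9 likelihoods does any evidential work.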
As it happens, there is a theory of optimal inductive learning, Marcus Hutter's AIXI, which is incomputable and therefore unimplementable in practice. However, it can be approximated, and one can estimate how far a given approximation falls short of the ideal. So there is a "better X" when X is intelligence, even if not when X is other things.
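As a gesture at what "approximating the ideal" can look like, here is a minimal sketch under loud assumptions: a computable Bayes mixture over a tiny hand-picked hypothesis class, standing in for the Solomonoff-style predictor inside AIXI. The hypothesis class, the name-length "complexity" prior, and every identifier are illustrative inventions, not anything taken from Hutter's work.

```python
# Toy illustration (not AIXI itself): a computable Bayes mixture over a
# small, hand-picked hypothesis class. AIXI weights every program q by
# 2^(-length(q)); here we restrict to a few simple "programs" so the
# mixture is actually computable.

import math

# Each hypothesis maps a time step to a predicted bit.
HYPOTHESES = {
    "all_zeros": lambda t: 0,
    "all_ones": lambda t: 1,
    "alternating": lambda t: t % 2,
}

# Crude complexity prior: 2^(-description length), here just name length.
prior = {h: 2.0 ** -len(h) for h in HYPOTHESES}
total = sum(prior.values())
prior = {h: w / total for h, w in prior.items()}  # normalise

def posterior(observations):
    """Bayes update: zero out hypotheses contradicted by the data."""
    post = dict(prior)
    for t, bit in enumerate(observations):
        for h, f in HYPOTHESES.items():
            if f(t) != bit:
                post[h] = 0.0
    z = sum(post.values())
    return {h: w / z for h, w in post.items()} if z else post

obs = [0, 1, 0, 1]  # data consistent only with "alternating"
print(posterior(obs))  # mass concentrates on "alternating"

# Dominance bound: the mixture's cumulative log-loss exceeds the best
# hypothesis's by at most -log2 of that hypothesis's prior weight.
print("regret bound (bits):", -math.log2(prior["alternating"]))
```

The printed bound is the standard dominance property of Bayes mixtures: cumulative log-loss exceeds that of the best hypothesis in the class by at most the negative log of that hypothesis's prior weight, which is the sense in which one can say how far the predictor sits from the (in-class) ideal.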
So it seems that the analogy falls apart once the details of intelligence are examined, just as the conclusion that no ship can exceed the fleet's speed falls apart once one examines what it means to be a ship in a fleet.
It just seems very terse, Robin, and pitched at a very high level of abstraction. Are you working on a longer version in which the specific positions you have in mind are identified and analysed, and the specific premises, together with alternative interpretations of them, are spelled out?
I actually have some sympathy for the argument, not least because I think I share your views about "betterness" to an extent. But at this stage I'm not confident that the analogy goes through.
3 comments:
What exactly would you like to see fleshed out more?