Will philosophy unlock the puzzle that is artificial intelligence?


Quantum physicist David Deutsch has penned a provocative, must-read article for Aeon magazine in which he argues that artificial intelligence might someday be possible, but only if a major breakthrough is made in our fundamental understanding of how human cognition and consciousness work. And to get there, he says, we’ll need to listen to what the philosophers have to say.

Deutsch opens his essay by noting how six decades of research into the subject have resulted in virtually no progress. He suggests that the very laws of physics imply that artificial intelligence must be possible. But the roadblocks to its development, he says, are bolstered by ill-defined concepts, human prejudices, grossly underrated and neglected areas of research, and plain human ignorance.

First and foremost, argues Deutsch, we need to properly distinguish between simple AI and what’s called artificial general intelligence, or AGI. An AI, says Deutsch, can be something as simple as a chatbot, or an algorithm that helps a program like Siri follow your commands on an iPhone.

An AGI, on the other hand, is an attempt to approximate the way a human mind works — including self-awareness. It’s only by carefully distinguishing between the two that we’ll be able to stop devaluing the potential for AGI and move forward, he says. Consequently, Deutsch suggests that we need to adopt a “philosophy of mind” approach to supplement our computer, cognitive, and neurological sciences. He writes:

Perhaps the reason self-awareness has its undeserved reputation for being connected with AGI is that, thanks to Gödel’s theorem and various controversies in formal logic in the 20th century, self-reference of any kind has acquired a reputation for woo-woo mystery. And so has consciousness. And for consciousness we have the problem of ambiguous terminology again: the term has a huge range of meanings. At one end of the scale there is the philosophical problem of the nature of subjective sensations (“qualia”), which is intimately connected with the problem of AGI; but at the other end, “consciousness” is simply what we lose when we are put under general anaesthetic. Many animals certainly have that.

AGIs will indeed be capable of self-awareness – but that is because they will be General: they will be capable of awareness of every kind of deep and subtle thing, including their own selves. That does not mean that apes who pass the mirror test have any hint of the attributes of “general intelligence” of which AGI would be an artificial version. Indeed, Richard Byrne’s wonderful research into gorilla memes has revealed how apes are able to learn useful behaviours from each other without ever understanding what they are for: the explanation of how ape cognition works really is behaviouristic.

This entry was posted in Metaphysics.