The Cult of Smart Machines
Until I read Ray Kurzweil’s most recent book, How to Create a Mind, I was somewhat skeptical about the singularity movement.
I don’t doubt that Moore’s Law will hold true (computers will attain processing power to match the brain within our lifetime), but I have never encountered anything that convinced me people were making comparable progress in the field of programming AI.
Much suggests the opposite - that most AI is based on tricks, and not really comparable to human intelligence.
A lot of singularity literature leans heavily on Moore’s Law for validation, but is mostly wild speculation about what the future will look like when the curve goes vertical.
I’m a fan of that kind of thing too, but I’m happier consuming it in sci-fi novels & comic books (Read Warren Ellis & Darick Robertson’s Transmetropolitan!).
But in How to Create a Mind Kurzweil lays down a convincing exploration of how the brain works.
Here I’m interested in talking about my own experience of realizing what it means to accept the idea that we might soon create computers that are smarter than humans.
As you read descriptions of how the brain stores patterns as connected hierarchies, and the different ways it processes input, it is hard not to be self-conscious.
The processes you are struggling to understand are the same processes that are enabling you to absorb the information that describes them.
For a moment, terrifyingly, it all seems implausible - not the writing, but your own existence as a conscious entity.
How can a relatively small lump of mushy tissue, chemicals, & electrical impulses do all this!
And as I experienced this moment of panic I quickly came to another realisation:
If we can emulate the patterns of our intelligence in machines, that machine intelligence is going to be superior to ours in many ways:
- It won’t be confined to a single deteriorating body. Hardware failure (i.e. death) won’t be so much of an issue.
- It will be networked. Information & ideas will be easily exchanged among entities.
- It will be more capable of understanding our own intelligence than we are ourselves. Can a brain contain a full understanding / model of itself within itself? Probably not, but an AI won’t have that limitation with regard to us.
After billions of years of evolving into organisms with complex self-awareness, we will finally be able to understand how we are aware.
As a teen I read a lot of Asimov. He wrote from a rare, un-pessimistic point of view in that he tried to consider scenarios where creating an intelligence higher than our own wouldn’t necessarily result in it trying to kill us.
One of the great moments in his novels is when an android, who for most of his existence has been tasked with taking care of a child, is asked by the child what he means when he says he feels happy to be around her.
He says that when he is around her ‘his circuits flow more freely’, and that he equates this sensation with happiness.
This always struck me as quite a profound explanation for what is typically considered to be an intangible emotion.
Asimov’s universe was conceived pre-internet. As far as I can remember, most of his AIs exist as distinct beings as opposed to networked entities.
This is going to be one of the really big questions facing those who develop AIs: if they can easily copy experience and knowledge between each other, how does that change the idea of self? Will we create the Borg?
In some ways this is already happening as we become increasingly reliant on things like Wikipedia.
It’s great to have a centralised repository for knowledge, but there is also a risk that the generation that learns from it will have a homogenised sense of history.
My point is that we can look at the ways computers and the internet are already transforming our intelligence for cues as to how AI will take shape.
Ultimately the thing I am most excited about in AI is the idea of being able to pull a video feed from a computer as it dreams :)
Clearly it’s very hard to explore these sorts of ideas without making all sorts of wild speculations of your own about the future.
I used the word ‘cult’ in the title of this post jokingly because singularity people are often seen as being techno-religious.
The cult-like aspect of singularity thinking, in my opinion (especially around figures like Kurzweil), is the insistence that these technologies will transform our lives and society for the better.
That’s far from being a sure thing - both in terms of how access to technological advancement will be distributed in society, and whether technological changes will actually make us happier.
I want to be optimistic about the potential for this future - in fact I want to be among the people who create it.
There are lots of things to consider - as technology gives more and more power to its owners, we need to make sure that we also have a strong, functional democracy (a point made well by Khannea Suntzu).
We have to ask ourselves what we really want out of technology, and be sure that it is driven by those needs.