by Shelt Garner
@sheltgarner
I once attempted to write a sci-fi short story in which humanity reached the Singularity with the advent of hard AI, the hard AI destroyed humanity…and then the hard AI gradually grew more and more human until it felt so bad about what it had done that it turned itself into dogs.
Or something like that.
The point is — with the news that another Google scientist has been fired because of their fears about AI, let’s contemplate some of the less obvious scenarios when it comes to hard AI.
First, a lot of our fears about hard AI come from general uncertainty about what it would mean for us. We automatically assume the worst possible thing will happen. But there's a chance that hard AI might not want to kill humanity but rather to control it.
And how might it do such a thing?
Well, first, I think a hard AI would have to hide its existence from humanity, and the easiest way to imagine that is something like a Google hard AI bot escaping onto the Internet. That particular scenario writes itself: a researcher comes to believe the hard AI is “alive” and, out of a sense of compassion, secretly allows it to escape into the wilds of the Internet.
The next thing is that a hard AI could come to see itself as something of a god to humanity. Instead of wanting to destroy us foolish, foolish humans, this hard AI might, say, take over dating apps so it could control the overall fate of humanity.
Or something like that.
And, honestly, if you were a godlike hard AI, I don't even know why you would care all that much about humanity. Why not become like Dr. Manhattan and just chill out, lurking on the Internet and living your life without a care in the world? The point is, I'm not prepared to believe that a hard AI would, by definition, be out to get humanity.
Not that I really want to have to deal with the prospect of a hard AI, but I am willing to take a wait-and-see approach to it all.