The Root of the Fear of AI

We all know the story of King Midas.

The god Dionysus owed the king a favor, so King Midas asked Dionysus for the power to turn everything he touched to gold. His wish was granted. Sounds great, huh?

Well, everything he touched turned to gold: his food, his water, and even his own daughter. King Midas was now faced with the consequences of his actions, wishing the power had never been bestowed upon him.

But the issue was not the power bestowed upon King Midas. The issue was the request itself: “everything I touch turns to gold.” His wish was granted, literally. Suppose instead King Midas had asked, “If I point at something and say, ‘turn to gold,’ then it should turn to gold.” How could that wish be misconstrued?

The story of King Midas exemplifies a common fear of Artificial Intelligence (AI). Since the field’s conception in the early 1950s, AI pioneers like Alan Turing have been warning us of the potential catastrophes of runaway general AI. These catastrophes are also detailed in the landmark book Superintelligence by Nick Bostrom. The book, in fact, persuaded both Bill Gates and Elon Musk of the potential dangers of AI left unchecked.

The dystopian vision of an emotionally cognitive or Terminator-like AI is farfetched. The state of AI isn’t anywhere close to creating a conscious agent, or an agent capable of any kind of feelings or emotions.

We are about as close to creating emotionally cognitive, conscious agents as a gnat is to understanding Newtonian mechanics. We’ve got no clue.

What Can AI Do Today?
The power of AI lies in a machine’s ability to quickly and accurately search, plan, and predict. Progressing at exponential rates, AI agents can now perform many tasks better than humans: playing sophisticated games such as chess and Go, planning intricate logistics like flight routes or military deployments, recognizing faces, understanding speech, and predicting future events from data.

AI has emerged from the confines of the computer and now performs tasks in the physical world through actuators: robots, self-driving cars, and aerial drones.

As AI becomes increasingly sophisticated, we’ll witness a reality of household robot assistants, self-driving cars, delivery drones, and flying taxis. But that’s just the start.

Based on where we are, a near-term future in which your surgeon is a robot, the movie you watch is written by a machine, and the life-saving drugs you take are designed by AI is within reach.

Herein lies the fear that Alan Turing warned about, and that Bill Gates and Elon Musk caution the rest of us about today.

Now fast-forward 50 years, when all of these fanciful predictions have become reality, and much more besides. AI can now “grant us any wish we ask.”

That sounds great. But remember: a cognitively aware, conscious AI does not exist, and what we ask for is exactly what we will get.

For example, suppose we ask the AI to “ensure no one ever gets cancer again.” One possible solution is to kill every human being. So we revise the request: “ensure no one ever gets cancer again, but do not eradicate the world population.” So the AI creates millions of different cancer drugs and tests them on millions of humans, causing widespread misery when tests go awry.

So we refine the request again: “ensure no one ever gets cancer again, but do not eradicate the world population, and do not force drug tests on people.” So the AI eliminates all possible causes of cancer, including the many foods that can cause it, leading to food shortages and mass starvation.
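This cycle of patching the wish can be sketched in code. The toy planner below (entirely hypothetical; the action set and outcome values are invented for illustration) picks any action that satisfies the stated constraints literally, with no notion of the unstated human values behind the request:

```python
# Hypothetical illustration of "literal" objective satisfaction.
# Each action maps to an invented outcome; the planner checks only
# the constraints we explicitly state, nothing more.

ACTIONS = {
    "eliminate_humans":      {"cancer_rate": 0.0, "population": 0.0, "forced_tests": False},
    "mass_drug_trials":      {"cancer_rate": 0.0, "population": 1.0, "forced_tests": True},
    "ban_carcinogenic_food": {"cancer_rate": 0.0, "population": 1.0, "forced_tests": False},
}

def plan(constraints):
    """Return the first action meeting every stated constraint -- and only those."""
    for name, outcome in ACTIONS.items():
        if all(check(outcome) for check in constraints):
            return name
    return None

# Wish 1: "no one ever gets cancer again"
wish = [lambda o: o["cancer_rate"] == 0.0]
print(plan(wish))  # eliminate_humans

# Wish 2: add "do not eradicate the world population"
wish.append(lambda o: o["population"] > 0.0)
print(plan(wish))  # mass_drug_trials

# Wish 3: add "no forced drug tests" -- an unintended solution still remains
wish.append(lambda o: not o["forced_tests"])
print(plan(wish))  # ban_carcinogenic_food (and with it, mass starvation)
```

Every constraint we add merely rules out the last unintended solution while leaving the next one available, which is the pattern the cancer example above describes.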

Do you see the root of the fear of AI? As AI becomes more expert at searching, planning, and predicting, it will accomplish its tasks to the limit of its capabilities, without regard for morals and values.

Because AI has no morals or values. It is just a machine, devoid of cognitive awareness and emotion.

So, not only do we need to explicitly and painstakingly define every goal for an AI agent to pursue and every outcome for it to avoid, but we must also code the AI with morals and values. But whose morals and values? American values or Japanese values? Christian values or Buddhist values?

The problem is NOT with the AI; the problem is with the creatures creating the AI, humans, and with what we ask the AI to do. After all, the AI will do exactly what we tell it to do.

Vladimir Putin stated: “whoever reaches a breakthrough in developing artificial intelligence will come to dominate the world.” He is right. But we must tread with caution.

AI has the potential to let all of us live to 200 and be healthy every step of the way. AI will make our lives easier, more productive, and more fulfilling, but it also has the potential to cause widespread misery and disaster. If a rogue country or organization like North Korea achieved a breakthrough in AI, the consequences could be disastrous for the world.

We do not yet know how to keep the genie in the bottle.

This problem was first raised by Alan Turing in 1951 as a theoretical thought experiment, and it is a real problem today. I believe the key to solving it lies in the open sharing of data.

AI is way too powerful for any one person, company, or country to own. Scientists from around the world must be encouraged to openly share all breakthroughs. Only by sharing and collaborating can we hope to manage the explosive growth of AI.

The potential for AI to positively impact our lives outweighs the case for halting AI research entirely. The next 50 years could make the past 10,000 look like we’ve been standing still. But we must not let our hubris mislead us.

We are indeed like children playing with matches, trying to start a fire while a canister of gasoline sits nearby. We must continue researching AI and its potential, but we must also proceed with caution.

Can we keep the genie in the bottle? Can we create an AI-driven world of healthy, long-living, happy people? The next 20 to 50 years will tell.


Want to learn more about AI, or just chat about its potential? Reach out to Vincent here.