“Apocalyptic pronouncements from scientists and entrepreneurs have driven [a] surge in interest” in artificial intelligence, writes Guardian science editor Ian Sample. But is it reasonable to expect that machines will one day willfully turn on their human creators?

“It was the inventor Elon Musk who last year said artificial intelligence might be the greatest existential threat that humans faced,” Sample continues. “Stephen Hawking joined in the chorus, warning that the development of full artificial intelligence could spell the end of the human race. The same year, the Oxford scientist Nick Bostrom published the thoughtful book Superintelligence, in which he made similarly gloomy predictions.”

Sample looks to the new British Channel 4 drama “Humans” to discuss the current interest in the possibilities of artificial intelligence. Citing the state of the art, he says the show “bolsters the misconception that human-like artificial intelligence is looming on the horizon.” Though scientists have made strides in artificial intelligence, their achievements are almost entirely limited to what researchers call “narrow AI” — that is, the creation of “smart algorithms for dedicated tasks.” These are the chatbots that can answer common sales inquiries. Assign one of them a task that falls outside its programming and it falls flat.

Sample quotes Murray Shanahan, professor of cognitive robotics at Imperial College London and a scientific adviser on the recent Alex Garland film, “Ex Machina,” as saying: “We really have no idea how to make a human level AI.” Shanahan rates the odds of technology reaching that level of sophistication as “possible but unlikely” between 2025 and 2050. In the second half of the century, he says, it becomes “increasingly likely, but still not certain.” Sample adds: “A case of if, not when.”

Shanahan continues: “The big hurdles are endowing computers and robots with common sense: being able to anticipate the consequences of ordinary, everyday actions on people and things. The other one is endowing them with creativity. And that is incredibly hard.”

Sample goes on:

The distinction between narrow and general artificial intelligence is crucial. Humans are so effective because they have general intelligence: the ability to learn from one situation and apply it to another. Recreating that kind of intelligence in computers could be decades away. Progress, though, is coming. Researchers at DeepMind, a London-based company owned by Google, made what they called “baby steps” towards artificial general intelligence in February when they unveiled a game-playing agent that could learn how to play retro games such as Breakout and Space Invaders and apply the skills to tackle other games.

But Nigel Shadbolt, professor of artificial intelligence at Southampton University, stresses that the hurdles that remain are major ones. “Brilliant scientists and entrepreneurs talk about this as if it’s only two decades away. You really have to be taken on a tour of the algorithms inside these systems to realise how much they are not doing.”

“Can we build systems that are an existential threat? Of course we can. We can inadvertently give them control over parts of our lives and they might do things we don’t expect. But they are not going to do that of their own volition. The danger is not artificial intelligence, it’s natural stupidity.”

— Posted by Alexander Reed Kelly.
