© 2024 Blaze Media LLC. All rights reserved.
Prominent physicist says world government needed to save humanity from future robot holocaust
British theoretical physicist Stephen Hawking said that a world government might be needed to save humanity from a robot holocaust. “The real risk with AI isn't malice but competence,” he said. “A super intelligent AI will be extremely good at accomplishing its goals, and if those goals aren't aligned with ours, we're in trouble." (Desiree Martin/AFP/Getty Images)


Famed cosmologist Stephen Hawking, known for his brilliant advances in theoretical physics, says that mankind may perish at the hands of artificial intelligence unless a world government is formed to protect us.

The world-renowned physicist, who has spoken out about the dangers of artificial intelligence in the past, believes we need to establish a way of identifying threats quickly, before they have a chance to escalate.

“Since civilisation began, aggression has been useful inasmuch as it has definite survival advantages,” he told The Times.

“It is hard-wired into our genes by Darwinian evolution. Now, however, technology has advanced at such a pace that this aggression may destroy us all by nuclear or biological war. We need to control this inherited instinct by our logic and reason.”

Hawking opined that "some form of world government” would be needed to head off such a possibility. But he admitted that organizing such a government has its own drawbacks, including the chance that it might turn into a dictatorship.

“But that might become a tyranny,” he added. “All this may sound a bit doom-laden, but I am an optimist. I think the human race will rise to meet these challenges.”

Hawking has said previously that super-smart robots might destroy the human race not for nefarious reasons, but because we program them poorly out of our own incompetence.

“The real risk with AI isn't malice but competence,” he said. “A super intelligent AI will be extremely good at accomplishing its goals, and if those goals aren't aligned with ours, we're in trouble."

He drew an analogy to how we treat ants, with the super-smart robots standing in for us, and the ants standing in for humanity.

“You're probably not an evil ant-hater who steps on ants out of malice, but if you're in charge of a hydroelectric green energy project and there's an anthill in the region to be flooded, too bad for the ants,” he said. “Let's not place humanity in the position of those ants.”

In other words, future robots may not intend to destroy humanity, but may do it by accident, or because of poor design by their human creators.

Hawking is credited with advancing human knowledge in cosmology, but libertarians might reject his political advice, given their opposition to instituting a one-world government.
