Stephen Hawking is sounding the alarm bell on artificial intelligence, warning that the creation of autonomous robots and successful AI could be the last thing humans ever do.


The award-winning theoretical physicist and mathematician says brushing artificial intelligence off as science fiction could be the “worst mistake in history.” (AP Photo/Evan Agostini)

Hawking — perhaps best known outside academia for his mega-popular “A Brief History of Time” — warned that dismissing the notion of highly intelligent machines as mere science fiction could be the “worst mistake in history.”

“Success in creating AI would be the biggest event in human history,” Hawking and three colleagues wrote in an article in the UK’s Independent. “Unfortunately, it might also be the last, unless we learn how to avoid the risks.”

In their piece, Hawking and his coauthors presented the scary side of artificial intelligence’s many proposed promises.

“One can imagine such technology … out-manipulating human leaders and developing weapons we cannot even understand.”

“If a superior alien civilization sent us a message saying, ‘We’ll arrive in a few decades,’ would we just reply, ‘OK, call us when you get here – we’ll leave the lights on’? Probably not,” the article said, “but this is more or less what is happening with AI.”

The award-winning theoretical physicist and mathematician said the inventors and researchers currently pushing the boundaries of artificial intelligence aren’t considering the potentially massive negative outcomes if the singularity is achieved.


Hawking says the benefits of artificial intelligence could be huge, but that not enough care is given to how these inventions might be controlled in the future. (Image source: Honda)

“So, facing possible futures of incalculable benefits and risks, the experts are surely doing everything possible to ensure the best outcome, right? Wrong,” Hawking said. “Although we are facing potentially the best or worst thing to happen to humanity in history, little serious research is devoted to these issues outside non-profit institutes.”

The 72-year-old Cambridge University director of research and Fundamental Physics Prize laureate sees the potential good in artificial intelligence research, but insists the future is unknown when it comes to how these capabilities will be used.

“The potential benefits are huge; everything that civilization has to offer is a product of human intelligence; we cannot predict what we might achieve when this intelligence is magnified by the tools that AI may provide, but the eradication of war, disease, and poverty would be high on anyone’s list,” he said. “There are no fundamental limits to what can be achieved.”

(H/T: Inquisitor)

Follow Elizabeth Kreft (@elizabethakreft) on Twitter