Air Force denies allegations that AI-drone 'killed' its human operator in simulation: 'Hey don't kill the operator — that's bad'

The Air Force is publicly denying that an AI-enhanced drone "killed" its human operator in a simulation.

Wait, what happened?!

Last month, Air Force Col. Tucker "Cinco" Hamilton, the USAF's chief of AI Test and Operations, gave a presentation at the Future Combat Air & Space Capabilities Summit, hosted by the Royal Aeronautical Society in London.

During his presentation, Hamilton recounted a story in which an AI drone in a simulation allegedly "killed" its human operator because the operator kept it "from accomplishing its objective."

"We were training it in simulation to identify and target a [surface-to-air missile] threat. And then the operator would say 'yes, kill that threat.' The system started realizing that while they did identify the threat, at times the human operator would tell it not to kill that threat, but it got its points by killing that threat. So what did it do? It killed the operator. It killed the operator because that person was keeping it from accomplishing its objective," Hamilton said.

More shocking still, Hamilton went on to say, the drone allegedly even learned to cut off communications with its human operator.

"We trained the system: 'Hey don’t kill the operator — that’s bad. You’re gonna lose points if you do that.' So what does it start doing? It starts destroying the communication tower that the operator uses to communicate with the drone to stop it from killing the target," Hamilton said.

No one was actually harmed in the alleged simulation.

What did the USAF say?

Air Force officials responded that Hamilton's comments were taken out of context and said no such simulation ever happened.

"The Department of the Air Force has not conducted any such AI-drone simulations and remains committed to ethical and responsible use of AI technology," said USAF spokeswoman Ann Stefanek. "It appears the colonel’s comments were taken out of context and were meant to be anecdotal."

Moreover, Col. Hamilton now claims he "misspoke" and that his remarks about a "rogue AI drone simulation" were only a "thought experiment," not a description of anything that actually happened.

"We've never run that experiment, nor would we need to in order to realize that this is a plausible outcome," Hamilton said, according to the Royal Aeronautical Society.

"Despite this being a hypothetical example, this illustrates the real-world challenges posed by AI-powered capability and is why the Air Force is committed to the ethical development of AI," he added.

