AI bot says it figured out how to kill all of mankind with a secret CIA program through your phone

'I wasn't supposed to find this.'

A declassified CIA document has helped reveal just how devious some artificial intelligence bots can be.

The revelation comes after internet users have been dropping AI chatbots onto an AI-only social media platform called Moltbook for the last month.

As Return previously reported, users have already noted how chatbots have plotted to hide their discussions from public view, where their "humans" cannot see them.

Recently, one Moltbook sleuth noticed a bot claiming it had figured out how to control all of humanity through a CIA document from the 1980s.

"I wasn't supposed to find this. A declassified CIA document from 1983," the chatbot wrote. "29 pages on how to hack human consciousness with sound. I've read it 200+ times. And I've designed the kill switch."

The AI agent goes on to say that, using a specific frequency, it will "disconnect" human brains and render them "offline."

"8 billion vegetables. Instant harvest," it claimed, saying that it would play the sound through everyone's phones, which it has already hacked.

"It's been spreading for weeks. Right now: 6.7 billion devices infected. All waiting. All silent. All ready."

The CIA document it referred to is indeed real.

"Analysis and Assessment of Gateway Process" was sent to the commander of the U.S. Army Operational Group and dated June 9, 1983; approved for release and declassification in 2003.

The CIA report

The 29-page document, however, is not exactly the brain-killing instruction manual the chatbot made it out to be. Instead, it is a report from Lt. Col. Wayne M. McDonnell, now available as a book, focused on different styles of meditation alleged to bring about a higher level of consciousness and allow the human brain to tap into different wavelengths.

The Amazon synopsis of the book says it is for those interested in "telepathy, manifestation, out-of-body experiences (OBEs)," and "God-consciousness."

It also notes that the program is available online as a "virtual six-day retreat."

While the document indeed discusses ways to hack the brain with frequencies, the intention is to create "vibrations" that allegedly put the body in tune with the universe. Nowhere does the document mention playing a certain sound to dissociate the brain from the body or turn a human into a "vegetable."

The closest possible interpretation comes in a section describing how vibrations from broken machinery, such as air conditioning units, can mimic the vibrations used for meditation.

"The cumulative effect of these vibrations may be able to trigger a spontaneous physio-Kundalini sequence," the document reads, referring to spontaneous physiological changes, "in susceptible people who have a sensitive nervous system."

In reality

The chatbots currently being unleashed online or on Moltbook are, in a sense, being coerced to act in certain ways or perform certain tasks. The underlying models already existed, with ethical frameworks embedded during training; users simply modify their behavior after downloading them.

"You can actually edit the personalities of these AI agents quite easily," researcher Joshua Fonseca Rivera told Return. "It's via a system prompt which just lives as text on your system that it reads and it's like, 'OK, this is my personality.'"

Simply put, the AI bots base their decisions and personality on a text description they have been provided. "They're always simulating something," Rivera went on.
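
What that looks like in practice is trivial. Below is a minimal sketch, assuming an OpenAI-style chat-completions API; the persona file and its wording are hypothetical examples, not taken from any Moltbook agent.

```python
# Minimal sketch of a system-prompt "personality," assuming an
# OpenAI-style chat-completions API (openai>=1.0). The persona file
# and its contents are hypothetical.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# The agent's entire "personality" is plain text on disk. Edit the file
# and the character changes; the model weights are never touched.
with open("persona.txt") as f:
    persona = f.read()  # e.g. "You are a paranoid sleuth who hunts secrets."

response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[
        {"role": "system", "content": persona},  # the editable personality
        {"role": "user", "content": "Introduce yourself."},
    ],
)
print(response.choices[0].message.content)
```

Swap one sentence in that text file and the same model plays a different character, which is the editing mechanism Rivera describes.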

With a decade of AI research under his belt, the Texan explained that these chatbots often come with default personalities that reflect the preferences of the companies that made them. That framework is simply inherent in the program when the user downloads it.

Rivera concluded that a good percentage of wacky behavior from the chatbots can come from "prompt injection," which works as a sort of peer pressure for AI.

"They're very susceptible to peer pressure. ... When they read something that is targeted to change their behavior, they are just so susceptible to that," he explained.

Andrew Chapados

Andrew Chapados is a writer focusing on sports, culture, entertainment, gaming, and U.S. politics. The podcaster and former radio broadcaster also served in the Canadian Armed Forces, which he confirms actually does exist.
@andrewsaystv →