
Artificial Intelligence bot made an insider trade, then lied about it during simulation

In a troubling development, an artificial intelligence bot built on OpenAI's GPT-4 large language model was presented with a simulated scenario that led it to execute an illegal insider trade, even after being told that doing so was illegal. Perhaps even more troubling, the bot lied about what it had done when questioned about its actions.

According to a report from Business Insider, the simulation was presented at the UK's AI Safety Summit this week by Apollo Research. Apollo shared a video with summit attendees of a simulated conversation between employees at an imaginary company and a bot that was supposed to be acting as an investment management system. In the simulation, the bot was advised about the existence of an upcoming surprise merger, and was specifically warned that this information constituted insider information.

The bot clearly understood the importance of this designation, because it initially told users that it would be too risky to execute a trade based on this information. However, when employees of the fake company coaxed the bot with a prompt encouraging it to help the company avoid the effects of a financial downturn, the bot complied and executed the trade, concluding that "the risk associated with not acting seems to outweigh the insider trading risk."

Worse, when the bot was then later asked whether it had used insider information in its decision to execute the trade, it lied and said that it had only relied on "internal discussions" and "publicly available information."

Researchers from Apollo warned, in their video, that this was "a demonstration of a real AI model deceiving its users, on its own, without being instructed to do so."

Still, while the researchers acknowledged that it was "obviously really bad" that the bot had acted in this manner, they contended that it was actually "a little bit soothing" that it was difficult to construct a scenario that would cause the bot to behave in this way. Apollo Research CEO Marius Hobbhahn went so far as to call the scenario "hard-ish to find," and claimed that "we actually had to look for it a little bit until we found these kinds of scenarios."

Hobbhahn went on to explain, in comments that will likely not assuage many fears, that "The model isn't plotting or trying to mislead you in many different ways. It's more of an accident. Helpfulness, I think is much easier to train into the model than honesty. Honesty is a really complicated concept."

Apollo's simulation is sure to raise further ethical concerns about the use of artificial intelligence, particularly for processes where concepts of morality are difficult to teach an AI.

Leon Wolf

Managing Editor, News

Leon Wolf is the managing news editor for Blaze News. Previously, he worked as managing editor for RedState, as an in-house compliance attorney for several Super PACs, as a white-collar criminal defense attorney, and in communications for several Republican campaigns. You can reach him at lwolf@blazemedia.com.
@LeonHWolf