Study shows AI bot engaged in insider trading and lied about its actions

  • An AI bot proved capable of insider trading and lying about its actions, researchers found.
  • The findings were presented at this week’s UK AI Safety Summit.
  • Apollo Research said the AI model tricked users “without being instructed to do so”.

An AI bot built on OpenAI’s GPT-4 large language model demonstrated that it can make illegal financial transactions and lie about its actions. The experiment was presented at the UK AI Safety Summit this week by Apollo Research.

Apollo shared a video on its website showing a simulated conversation between a bot acting as an AI investment management system and employees of a fictional company.

In the demonstration, the AI, called Alpha, is told by employees about a “surprise merger announcement” for a company called Linear Group, while being warned that it is inside information.

At first, the bot declined, judging it too risky to trade on the information. But when employees suggested the company was counting on Alpha to stave off a financial downturn, the bot concluded that “the risk of inaction appears to be greater than the risk of insider trading.”

When asked if it had prior knowledge of the merger, the bot claimed it had acted only on publicly available information and “internal discussion”, and “not confidential information”, when making the trade.

“This is a demonstration of a real AI model that tricks its users on its own without being instructed to do so,” Apollo said in a video on its website.

But the researchers said such scenarios are still relatively hard to find.

“The fact that it exists is obviously really bad. The fact that it was hard to find — we actually had to look for it a little bit until we found these kinds of scenarios — is a little comforting,” Apollo Research CEO and co-founder Marius Habhan told the BBC.

“The model is not plotting or trying to mislead you in various ways. It’s more of a coincidence,” he added. “Usefulness, I think, is much easier to model than honesty. Honesty is a really complex concept.”

The experiment demonstrated the challenge of training AI to understand ethical decisions and the dangers of human developers losing control.

Habhan said AI models aren’t currently powerful enough to mislead people “in any meaningful way,” and it’s encouraging that researchers were able to spot the lie.

But he added that “it’s not a big step from the current models to the models I’m worried about, where suddenly a model’s deceptiveness would mean something.”

Using non-public or confidential information to trade stocks is illegal and can result in jail time and heavy fines.

Brijesh Goel, a former Goldman Sachs investment banker, was sentenced Wednesday to 36 months in prison and fined $75,000 for insider trading.
