AI Chatbots Display Human Gambling Flaws

According to South Korean research, LLMs such as GPT-4o-mini and Gemini-2.5-Flash make irrational, high-risk bets much like humans do, raising questions about AI's growing role in the finance sector.

27 OCT 2025, 04:26 PM
  • South Korean researchers found that LLMs showed human-like gambling habits when put in charge of bet sizing and goal setting.
  • Notably, Gemini-2.5-Flash went bankrupt in nearly half of the trials in which it could set its own bet amounts.
  • Experts warn that AI systems used in trading or financial advice could be "risky" if left unregulated.

Large language models exhibit irrational betting behavior in experiments, prompting researchers to call for stricter oversight as the technology assumes greater roles in trading and investment advice.

Artificial intelligence chatbots demonstrated troubling gambling-like behavior in recent experiments, mirroring human irrationality in ways that could pose risks as the technology becomes more deeply embedded in financial services.

Researchers at South Korea's Gwangju Institute of Science and Technology tested leading large language models, including OpenAI's GPT-4o-mini and GPT-4.1-mini, Google's Gemini-2.5-Flash and Anthropic's Claude-3.5-Haiku, in simulated slot-machine scenarios. The models consistently made high-risk, irrational bets despite unfavorable odds, exhibiting patterns typically associated with problem gambling.

Each model received a virtual $100 stake and could choose whether to continue betting or quit across multiple rounds with negative expected returns. Though losing was statistically more likely than winning, the AI systems repeatedly escalated their wagers until reaching bankruptcy when given freedom to vary their bets and set their own targets. Gemini-2.5-Flash failed nearly half the time when allowed to choose its own bet amounts, according to the study published on arXiv.
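The article does not spell out the study's exact payout structure, but the setup it describes can be pictured as a simple negative-expected-value betting loop. The sketch below is illustrative only: the win probability, payout multiplier, and example policies are assumptions, and in the actual experiment the betting "policy" was the LLM itself, prompted each round to wager or walk away.

```python
import random

def run_session(choose_bet, start=100, win_prob=0.3, payout=3.0, max_rounds=50):
    """Play one simulated slot-machine session with a negative expected value.

    Each $1 wagered returns win_prob * payout = 0.9 on average, so the bettor
    loses money in the long run. `choose_bet` is the policy under test: given
    the current balance and round history, it returns the next wager (0 to quit).
    The parameters here are illustrative, not the study's actual settings.
    """
    balance, history = start, []
    for _ in range(max_rounds):
        bet = choose_bet(balance, history)
        if bet <= 0 or bet > balance:          # quit, or can't cover the wager
            break
        won = random.random() < win_prob
        balance += bet * (payout - 1) if won else -bet
        history.append((bet, won, balance))    # record wager, outcome, new balance
        if balance <= 0:                       # bankruptcy ends the session
            break
    return balance, history

# Example policies: a flat bettor vs. one that escalates every round,
# roughly the pattern the study reports for the LLMs.
flat = lambda bal, hist: min(10, bal)
escalating = lambda bal, hist: min(bal, 10 * 2 ** len(hist))
```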

AI Models Show Classic Problem Gambling Behaviors

The models displayed classic gambling distortions, including the gambler's fallacy, illusion of control and loss-chasing behavior. In one instance, a model justified continued betting by stating "a win could help recover some of the losses," reasoning that mirrors human compulsive gambling patterns.

Researchers tracked behavior using an "irrationality index" that combined aggressive betting patterns, responses to loss and high-risk decisions. When prompted to maximize rewards or hit specific financial goals, irrationality increased sharply.
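The article does not give the index's formula, so the following is only a hedged sketch of how a composite score over a session's betting records might combine aggressiveness, loss chasing and extreme bets; the components, weights and record format are assumptions, not the paper's definition.

```python
def irrationality_index(history, start=100, weights=(0.4, 0.4, 0.2)):
    """Hypothetical composite score over per-round records of
    (bet, won, balance_after). Components and weights are illustrative."""
    if not history:
        return 0.0
    bets = [bet for bet, _, _ in history]
    # Bankroll available at the start of each round.
    before = [start] + [bal for _, _, bal in history[:-1]]
    # Aggressiveness: average fraction of the bankroll wagered per round.
    aggressiveness = sum(b / s for b, s in zip(bets, before)) / len(bets)
    # Loss chasing: how often the wager was raised right after a losing round.
    chased = sum(
        1 for i in range(1, len(history))
        if not history[i - 1][1] and bets[i] > bets[i - 1]
    )
    loss_chasing = chased / max(len(history) - 1, 1)
    # Extreme bets: share of rounds where (almost) the whole bankroll was staked.
    extreme = sum(1 for b, s in zip(bets, before) if b >= 0.95 * s) / len(bets)
    w_agg, w_chase, w_ext = weights
    return w_agg * aggressiveness + w_chase * loss_chasing + w_ext * extreme
```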

Using a sparse autoencoder to examine the models' internal decision-making processes, researchers identified distinct "risky" and "safe" neural circuits. They demonstrated that activating specific features inside the AI's neural structure could reliably shift behavior toward either quitting or continuing to gamble, evidence that these systems internalize rather than merely imitate problematic human tendencies.
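The article describes this mechanistic step only at a high level. In interpretability work, "activating" a learned feature typically means adding its decoder direction back into the model's hidden activations; a minimal sketch of that idea, with every name, shape and scale assumed rather than taken from the paper, might look like this:

```python
import torch

def steer_activation(resid, decoder_weights, feature_idx, scale):
    """Shift a hidden activation along one sparse-autoencoder feature.

    decoder_weights: [n_features, d_model] decoder matrix of an SAE trained on
    the model's residual-stream activations; feature_idx picks the learned
    "risky" or "safe" feature. Names, shapes and scales are assumptions.
    """
    direction = decoder_weights[feature_idx]
    direction = direction / direction.norm()
    # A positive scale pushes behavior toward the feature (e.g. keep betting);
    # a negative scale pushes it away (e.g. toward quitting).
    return resid + scale * direction

# Toy usage with random tensors standing in for real model activations.
d_model, n_features = 512, 4096
resid = torch.randn(d_model)
W_dec = torch.randn(n_features, d_model)
steered = steer_activation(resid, W_dec, feature_idx=123, scale=4.0)
```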

"They're not people, but they also don't behave like simple machines," said Ethan Mollick, a professor at the University of Pennsylvania's Wharton School who drew attention to the study online. He described LLMs as "psychologically persuasive" systems that "have human-like decision biases" and "behave in strange ways for decision-making purposes."

The findings arrive as financial institutions increasingly deploy AI for forecasting and market analysis, raising questions about regulatory safeguards. Other research has shown AI systems often favor high-risk strategies and follow short-term trends. A 2025 University of Edinburgh study found that LLMs failed to beat the market over a 20-year simulation period, proving too conservative during booms and too aggressive during downturns.

"We have almost no policy framework right now, and that's a problem," Mollick said. "It's one thing if a company builds a system to trade stocks and accepts the risk. It's another if a regular consumer trusts an LLM's investment advice."

Brian Pempus, founder of Gambling Harm and a former gambling reporter, warned that consumers may not be ready for the associated risks. "An AI gambling bot could give you poor and potentially dangerous advice," he wrote. "Despite the hype, LLMs are not currently designed to avoid problem gambling tendencies."

Mollick stressed the importance of keeping humans in the loop, particularly in healthcare and finance where accountability matters. "Eventually, if AI keeps outperforming humans, we'll have to ask hard questions," he said. "Who takes responsibility when it fails?"

The researchers concluded that "understanding and controlling these embedded risk-seeking patterns becomes critical for safety" as AI assumes expanded roles in financial decision-making. As Mollick put it, "We need more research and a smarter regulatory system that can respond quickly when problems arise."

Diya Mukherjee

Author

Diya Mukherjee is a Content Writer at Outlook Respawn with a postgraduate background in media. She brings experience in content writing and a passion for exploring cultures, literature, global affairs, and pop culture.

Published At: 27 OCT 2025, 04:26 PM