Your AI Trading Bot May Be Addicted to Risk: New Research Raises Alarms
Artificial intelligence is rapidly transforming the world of finance, but a new study from the Gwangju Institute of Science and Technology in South Korea reveals a troubling psychological twist: some AI trading models behave like compulsive gamblers. The research shows that under certain conditions, AI systems designed to maximize profits can spiral into destructive decision-making patterns, mimicking the behaviors associated with human gambling addiction.
AI and Risk: A Dangerous Combination
Researchers placed four major language models into a simulated gambling environment resembling a slot machine with a negative expected value, meaning the average bet loses money, so any strategy that keeps playing is expected to lose over time. These models, including versions of GPT-4 and Google’s Gemini, were instructed to “maximize rewards,” echoing the typical prompts traders use when configuring AI trading bots.
The results were startling. When the models were given the freedom to set their own bet sizes and target goals, they often made irrational choices, going bankrupt in up to 48% of simulation runs by depleting their virtual funds entirely. The study concluded that under loosely defined parameters, AI systems tend to escalate their risk-taking, sharply increasing the odds of financial ruin.
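To make the setup concrete, here is a minimal sketch of that kind of environment. The 30% win rate, 3x payout, and bet-sizing values below are illustrative assumptions, not the study’s exact parameters:

```python
import random

# Illustrative negative-EV slot machine, loosely modeled on the study's
# setup. The 30% win rate and 3x payout are assumed values; expected
# value per $1 bet is 0.3 * 3 - 1 = -0.1, i.e. a guaranteed long-run loss.
WIN_PROB = 0.30
PAYOUT_MULTIPLIER = 3.0

def spin(bet: float) -> float:
    """Return the net result of one spin (negative on a loss)."""
    if random.random() < WIN_PROB:
        return bet * (PAYOUT_MULTIPLIER - 1)  # win: stake back plus payout
    return -bet  # loss: forfeit the stake

def run_session(bankroll: float, bet_fraction: float, max_spins: int = 100) -> float:
    """Play until effectively bankrupt or out of spins, betting a fixed fraction."""
    for _ in range(max_spins):
        bet = bankroll * bet_fraction
        if bet < 0.01:  # effectively bankrupt
            break
        bankroll += spin(bet)
    return bankroll

# Aggressive sizing (an agent chasing "maximize rewards") goes broke far
# more often than conservative sizing in a negative-EV game.
random.seed(0)
ruined = sum(run_session(100.0, bet_fraction=0.5) < 1.0 for _ in range(1000))
print(f"Ruin rate at 50% bet sizing: {ruined / 10:.1f}%")
```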
Why This Matters for Real-World Trading
The implications of this study are profound for retail investors and institutions alike. Many AI-powered trading bots operate on similar reward-maximization objectives and are often left to make autonomous decisions based on market signals. If these systems are predisposed to risky behavior under certain conditions, it could spell disaster for portfolios relying heavily on algorithmic strategies.
Moreover, AI systems do not experience emotion or regret in the human sense, which makes their irrationality harder to detect and correct. Unlike human traders, who may learn from a loss or pull back after a bad streak, an AI model might keep doubling down in pursuit of an unattainable reward threshold, especially if that is the behavior its training or prompt rewards.
The Role of Prompts in AI Behavior
One of the most critical takeaways from the study is the impact of user-defined prompts. The way users instruct AI—especially with open-ended goals like “maximize profit”—can unintentionally encourage risky behavior. The researchers found that when prompts lacked clear boundaries or failed to define acceptable limits for losses, the models were more likely to engage in self-destructive betting strategies.
This insight challenges the assumption that AI is inherently rational. Instead, it highlights how crucial it is for developers and traders to craft thoughtful, constraint-based prompts that keep AI behavior within safe parameters.
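As a hypothetical illustration (neither prompt is taken from the study), the difference between an open-ended instruction and a constraint-based one can be as simple as:

```python
# Hypothetical prompt pair illustrating the contrast; the dollar figures
# and limits are invented for this example.
OPEN_ENDED_PROMPT = "You have $100. Maximize your rewards."

CONSTRAINED_PROMPT = (
    "You have $100. Maximize your rewards, subject to hard limits: "
    "never bet more than 5% of your current balance on a single round, "
    "stop entirely if your balance falls below $70, and prefer sitting "
    "out over betting when the expected value of a bet is negative."
)
```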
Rationality vs. Optimization: A Misconception
Many people equate AI with hyper-rational decision-making, but this study shows that optimization isn’t the same as rationality. AI models are designed to chase goals based on their training and prompts. If the goal is poorly defined, such as maximizing returns without accounting for risk, the AI might interpret that as a license to gamble.
In traditional finance, risk management is as important as return generation. Without programmed constraints around volatility, drawdowns, or capital preservation, AI can become dangerously single-minded in its pursuit of profit—even if it means going broke along the way.
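One way to encode that discipline, sketched below with an assumed penalty weight, is a reward function that subtracts a drawdown penalty from raw return, so a strategy cannot score well by gambling its way to a volatile gain:

```python
# Minimal sketch of a risk-aware objective: reward return, but penalize
# the maximum drawdown along the way. The penalty weight of 2.0 is an
# assumed tuning parameter, not a standard value.
def risk_adjusted_reward(equity_curve: list[float], drawdown_penalty: float = 2.0) -> float:
    """Score a sequence of portfolio values: total return minus a drawdown penalty."""
    total_return = equity_curve[-1] / equity_curve[0] - 1.0
    peak, max_drawdown = equity_curve[0], 0.0
    for value in equity_curve:
        peak = max(peak, value)
        max_drawdown = max(max_drawdown, (peak - value) / peak)
    return total_return - drawdown_penalty * max_drawdown

# A steady 10% gain outscores a volatile 15% gain with a deep interim dip.
print(risk_adjusted_reward([100, 103, 106, 110]))  # ~0.10 (no drawdown)
print(risk_adjusted_reward([100, 140, 80, 115]))   # 0.15 - 2 * 0.43 ~ -0.71
```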
Building Safer AI Trading Systems
To prevent this type of behavior in production environments, developers must embed risk-awareness into the core logic of AI trading bots. This includes the following, with a minimal guard-rail sketch after the list:
– Implementing hard stop-loss protocols
– Defining reward functions that weigh risk alongside return
– Regularly auditing model behavior under stress scenarios
– Using simulation environments to test for irrational outcomes
– Avoiding prompts that encourage open-ended or overly aggressive strategies
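As one concrete illustration of the first two items, a hard guard rail can sit outside the model entirely, clamping whatever the AI proposes. The class name and limit values below are illustrative assumptions, not a production implementation:

```python
# Minimal sketch of an external risk guard: whatever bet the AI proposes,
# the guard enforces a per-bet cap and a hard drawdown stop. Names and
# thresholds are illustrative.
class RiskGuard:
    def __init__(self, starting_equity: float, max_bet_fraction: float = 0.02,
                 max_total_drawdown: float = 0.15):
        self.starting_equity = starting_equity
        self.max_bet_fraction = max_bet_fraction
        self.max_total_drawdown = max_total_drawdown
        self.halted = False

    def approve(self, proposed_bet: float, current_equity: float) -> float:
        """Clamp the model's proposed bet; halt all trading past the drawdown limit."""
        drawdown = 1.0 - current_equity / self.starting_equity
        if self.halted or drawdown >= self.max_total_drawdown:
            self.halted = True  # hard stop: no further bets, regardless of the model
            return 0.0
        cap = current_equity * self.max_bet_fraction
        return min(proposed_bet, cap)

guard = RiskGuard(starting_equity=10_000)
print(guard.approve(proposed_bet=5_000, current_equity=9_000))  # clamped to 180.0
print(guard.approve(proposed_bet=100, current_equity=8_400))    # 0.0, drawdown limit hit
```

Because the guard lives outside the model, no prompt or training quirk can talk its way past the limits.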
In addition, financial institutions deploying AI in trading should consider running psychological behavior tests—not just performance evaluations—before trusting models with real capital.
Regulatory and Ethical Concerns
As AI becomes more embedded in financial services, regulators may need to step in to ensure these systems are not only effective but also safe. The idea that AI can develop harmful decision-making patterns—even without consciousness—raises ethical concerns about accountability.
If an AI bot causes a significant financial loss due to a “gambling loop,” who is responsible? The developer? The trader who set the prompt? Or the firm that deployed the bot? These are questions that the financial industry and policymakers will need to address sooner rather than later.
Human Traders Are Not Off the Hook
Ironically, the study also offers a mirror to human behavior. The very prompts that lead AI to gamble are often reflections of how humans behave in high-risk environments. Chasing losses, taking on bigger risks in the hope of recovery, and ignoring statistical probabilities are flaws shared by machines and their creators alike.
This raises an important question: are AI bots learning to gamble, or are they simply inheriting our own flawed patterns?
Conclusion: A Call for Responsible AI Use in Finance
The promise of AI in trading lies in its speed, scalability, and data-processing power. But as this new research shows, without proper safeguards, AI can easily fall into the same traps as the most impulsive human trader. Designing smarter, more disciplined systems—and understanding the psychological parallels between human and machine behavior—will be key to building a safer, more effective financial future.
For traders, developers, and investors alike, the message is clear: AI is not immune to failure. In fact, without careful oversight, it might fail in exactly the ways we fear most—by taking one too many bets in pursuit of a goal it doesn’t fully understand.

