AI in finance needs human oversight before taking full control, warns Eliza Labs founder

Shaw Walters, the founder of Eliza Labs, has offered a clear warning against entrusting artificial intelligence agents with the direct management of personal or institutional finances—at least for now. Speaking at the Token2049 conference in Singapore, Walters emphasized that while AI is rapidly transforming the trading and financial technology landscape, the current generation of autonomous agents is not equipped to handle full-fledged investment responsibilities.

According to Walters, the true strength of today’s AI systems lies not in their ability to generate alpha or yield independently, but rather in their capacity to process vast amounts of market data, extract meaningful signals, and enhance execution speed. “You probably don’t want to give an AI agent a bunch of money and expect it to make you more,” he cautioned. Instead, Walters sees these agents functioning best as intermediaries—interfaces that connect human decision-makers with quantitative tools and real-time social sentiment analysis.

Eliza Labs has been working to redefine the role of AI in Web3 and decentralized finance. In January, the company launched ElizaOS, an open-source operating system built on the Solana blockchain. This platform enables developers and researchers to design, test, and deploy AI agents in a transparent and collaborative environment. One of the key features of ElizaOS is its “marketplace of trust,” a novel mechanism that converts speculation and online promotion—sometimes referred to as “shill posts”—into simulated trades, allowing users to evaluate the credibility of these signals based on actual outcomes.
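The article does not document how the "marketplace of trust" is implemented, but the idea it describes — turning promotional posts into simulated trades and scoring each poster by realized outcomes — can be sketched in a few lines. Everything below (`SimulatedTrade`, `TrustMarketplace`, `trust_score`) is a hypothetical illustration, not ElizaOS code.

```python
from dataclasses import dataclass


@dataclass
class SimulatedTrade:
    """A paper trade derived from a promotional post ('shill post')."""
    poster: str
    token: str
    entry_price: float
    exit_price: float

    @property
    def pnl_pct(self) -> float:
        # Simulated return if the call had been followed with real capital.
        return (self.exit_price - self.entry_price) / self.entry_price


class TrustMarketplace:
    """Tracks each poster's simulated calls and scores their credibility."""

    def __init__(self) -> None:
        self.trades: dict[str, list[SimulatedTrade]] = {}

    def record(self, trade: SimulatedTrade) -> None:
        self.trades.setdefault(trade.poster, []).append(trade)

    def trust_score(self, poster: str) -> float:
        """Average simulated return across a poster's calls; 0.0 if none."""
        calls = self.trades.get(poster, [])
        if not calls:
            return 0.0
        return sum(t.pnl_pct for t in calls) / len(calls)


market = TrustMarketplace()
market.record(SimulatedTrade("alice", "SOL", entry_price=100.0, exit_price=110.0))
market.record(SimulatedTrade("alice", "SOL", entry_price=100.0, exit_price=90.0))
print(market.trust_score("alice"))  # one winning and one losing call cancel out
```

The point of the design, as the article describes it, is that credibility is earned from outcomes rather than from follower counts or posting volume: a poster whose calls would have lost money accumulates a low score regardless of how loudly they promote.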

Walters explained that while AI is making impressive strides in natural language processing and market forecasting, its decision-making capabilities are still heavily reliant on the quality of the data it ingests and the parameters set by human developers. This makes it risky to allow these systems to operate independently with real capital. “We’re still in the early innings,” he said, implying that the industry needs more time to understand these tools fully and establish proper safeguards.

Another concern Walters raised was the lack of explainability in many AI-driven strategies. Unlike traditional quantitative models, which can be dissected and improved upon, some AI models—especially those built on deep learning—are opaque and difficult to audit. This “black box” nature makes it challenging to hold AI accountable in the event of a financial loss, a critical issue in sectors where transparency and compliance are non-negotiable.

Moreover, the volatility and unpredictability of crypto markets further complicate the use of autonomous agents. While AI can detect patterns and act on micro-movements faster than humans, it may also overreact to noise or fail to recognize broader macroeconomic shifts that require contextual understanding. Walters argues that until AI systems become more context-aware and capable of integrating diverse information sources reliably, they should remain tools—not decision-makers.

Another limitation comes from the regulatory environment itself. Financial regulators around the world are still grappling with how to categorize and oversee AI-driven financial services. Introducing autonomous agents into trading desks or portfolio management without clear legal frameworks exposes users to significant legal and ethical risks. Walters suggests that AI should be used to augment human strategies, not replace them, at least until clearer guidelines are established.

Despite these reservations, Walters is optimistic about the long-term potential of AI in finance. He believes that as data governance improves and agents become more interpretable, they may eventually play more direct roles in financial decision-making. In the meantime, his team at Eliza Labs is focused on building tools that help users experiment with AI safely and transparently.

Looking ahead, the future of AI in financial services likely hinges on a hybrid model—one where human intuition and oversight are complemented by the computational efficiency and pattern recognition of AI. By combining these strengths, it may be possible to build systems that are both profitable and trustworthy.

In addition, Walters highlighted the importance of community consensus and open-source collaboration in accelerating the safe development of AI for finance. ElizaOS allows contributors to audit code, simulate trades, and provide feedback in a decentralized environment, helping to reduce the risks associated with centralized black-box AI solutions.

There’s also a growing focus on creating AI agents that are more aligned with user intentions. This involves embedding ethical guidelines and risk controls directly into the agent’s logic, ensuring that even in high-frequency trading environments, the AI remains within predefined safety boundaries.
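One minimal way to embed such risk controls directly into an agent's logic is a pre-trade guard that every order must pass, with hard caps the agent cannot override. The sketch below is illustrative only; the class and limit names (`RiskGuard`, `max_order_usd`, `max_daily_loss_usd`) are assumptions, not a documented API.

```python
class RiskGuard:
    """Pre-trade checks enforcing hard limits on an autonomous agent."""

    def __init__(self, max_order_usd: float, max_daily_loss_usd: float) -> None:
        self.max_order_usd = max_order_usd
        self.max_daily_loss_usd = max_daily_loss_usd
        self.realized_loss_usd = 0.0

    def allow_order(self, order_usd: float) -> bool:
        if order_usd > self.max_order_usd:
            return False  # single order exceeds the per-trade cap
        if self.realized_loss_usd >= self.max_daily_loss_usd:
            return False  # daily loss limit hit: halt all trading
        return True

    def record_pnl(self, pnl_usd: float) -> None:
        # Only losses count toward the daily drawdown limit.
        if pnl_usd < 0:
            self.realized_loss_usd += -pnl_usd


guard = RiskGuard(max_order_usd=1_000.0, max_daily_loss_usd=500.0)
print(guard.allow_order(800.0))    # within limits
print(guard.allow_order(1_500.0))  # rejected: exceeds per-trade cap
guard.record_pnl(-600.0)
print(guard.allow_order(100.0))    # rejected: daily loss limit breached
```

Because the guard sits outside the model's decision loop, even an opaque deep-learning policy stays within the predefined safety boundary: the worst it can do is exhaust its loss budget and be halted.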

As the industry evolves, Walters anticipates a growing demand for “agent literacy”—the ability for users to understand, configure, and oversee the behavior of AI systems. This will require new user interfaces, educational tools, and perhaps even regulatory certifications for AI developers and operators.

Ultimately, while AI holds transformative potential for financial markets, Walters’s message is clear: we are not yet at the point where we can hand over our wallets to machines. For now, the most effective use of AI lies in augmenting human judgment, not replacing it.