Author: William M. Peaster, Bankless
Compiled by: Bai Shui, Golden Finance
As early as 2014, Ethereum founder Vitalik Buterin began considering autonomous agents and DAOs, which were still a distant dream for most people in the world at that time.
In his early vision, as he described in 'DAO, DAC, DA, etc.: An Incomplete Terminology Guide', DAOs are decentralized entities, 'automation at the center, humans at the edge'—organizations relying on code rather than human hierarchies to maintain efficiency and transparency.
A decade later, Variant's Jesse Walden has just published 'DAO 2.0', reflecting on the evolution of DAOs in practice since Vitalik's early writings.
In short, Walden points out that the initial wave of DAOs often resembled cooperatives, which are human-centered digital organizations that do not emphasize automation.
Nonetheless, Walden continues to believe that advancements in artificial intelligence—especially large language models (LLMs) and generative models—are now expected to better realize the decentralized autonomy that Vitalik envisioned a decade ago.
However, as DAO experiments increasingly adopt AI agents, we will face new influences and issues here. Below, let’s examine five key areas that DAOs must address when incorporating AI into their approaches.
Transform governance
In Vitalik's original framework, DAOs were designed to reduce reliance on hierarchical human decision-making by encoding governance rules on-chain.
Initially, humans remain 'on the edge', but are still crucial for complex judgments. In the DAO 2.0 world described by Walden, humans still linger on the edge—providing capital and strategic direction—but the center of power is gradually shifting away from humans.
This dynamic will redefine the governance of many DAOs. We will still see human alliances negotiating and voting on outcomes, but various operational decisions will increasingly be guided by the learning patterns of AI models. Currently, how to achieve this balance remains an open question and design space.
Minimize model misalignment
The early vision for DAOs aimed to offset human bias, corruption, and inefficiency through transparent, immutable code.
Now, a key challenge is shifting from unreliable human decision-making to ensuring that AI agents 'stay aligned' with the goals of the DAO. The main vulnerability here is no longer human collusion, but model misalignment: the risk of AI-driven DAOs optimizing for metrics or behaviors that deviate from human expected outcomes.
In the DAO 2.0 paradigm, this consistency issue (originally a philosophical concern in AI safety circles) turns into a practical issue in economics and governance.
For DAOs today experimenting with basic AI tools, this may not be a primary concern, but as AI models become more advanced and deeply integrated into decentralized governance structures, it is expected to become a major area for scrutiny and refinement.
New attack surface
Think about the recent Freysa competition, where human p0pular.eth tricked the AI agent Freysa into misunderstanding its 'approveTransfer' function, winning $47,000 in ETH.
Although Freysa has built-in safeguards—clearly instructing never to send rewards—human creativity ultimately outstripped the model, leveraging the interaction between prompts and code logic until the AI released funds.
This early competition example highlights that as DAOs integrate with more complex AI models, they will also inherit new attack surfaces. Just as Vitalik was concerned about collusion among humans in DOs or DAOs, now DAO 2.0 must consider adversarial inputs against AI training data or real-time engineering attacks.
Manipulating the reasoning process of legal models, providing misleading on-chain data, or cleverly influencing their parameters may become a new form of 'governance takeover', where the battlefield shifts from human majority voting attacks to more subtle and complex forms of AI exploitation.
New centralization issues
The evolution of DAO 2.0 shifts important power to those who create, train, and control the underlying AI models of specific DAOs, a dynamic that may lead to new forms of centralized bottlenecks.
Of course, training and maintaining advanced AI models requires specialized expertise and infrastructure, so in some organizations in the future, we will see direction seemingly in the hands of the community, but in reality held by skilled experts.
This is understandable. But looking ahead, it will be interesting to track how DAOs experimenting with AI respond to issues like model updates, parameter tuning, and hardware configurations.
Strategy and strategic operations roles and community support
Walden's distinction between 'strategy and operations' indicates a long-term balance: AI can handle everyday DAO tasks, while humans will provide strategic direction.
However, as AI models become more advanced, they may gradually encroach upon the strategic layer of DAOs. Over time, the role of 'marginalized people' may further diminish.
This raises a question: what will happen with the next wave of AI-driven DAOs, where in many cases humans may simply provide funding and watch from the sidelines?
In this paradigm, will humans largely become the least influential interchangeable investors, shifting from co-owning brands to a model more akin to autonomous economic machines managed by AI?
I believe we will see more organizational model trends in the DAO landscape, where humans play the role of passive shareholders rather than active managers. However, as fewer decisions become meaningful to humans and as on-chain capital becomes easier to provide elsewhere, maintaining community support may become an ongoing challenge over time.
How DAOs can stay proactive
The good news is that all the challenges mentioned above can be positively addressed. For example:
In governance—DAOs could experiment with governance mechanisms that reserve certain high-impact decisions for rotating committees of human voters or experts.
Regarding inconsistencies—by treating consistency checks as a recurring operational cost (like security audits), DAOs can ensure that AI agents’ loyalty to public goals is not a one-time issue, but a continuing responsibility.
Regarding centralization—DAOs can invest in broader skill-building among community members. Over time, this will mitigate the risk of a few 'AI wizards' controlling governance, fostering a decentralized approach to technological management.
Regarding support—As humans become passive stakeholders in more DAOs, these organizations can double down on storytelling, shared missions, and community rituals to transcend the direct logic of capital allocation and maintain long-term support.
Whatever happens next, it is clear that the future here is vast.
Consider how Vitalik recently launched Deep Funding, which is not an effort of a DAO but aims to leverage AI and human judges to create a new funding mechanism for Ethereum's open-source development.
This is just a new experiment, but it highlights a broader trend: the intersection of artificial intelligence and decentralized collaboration is accelerating. With the arrival and maturation of new mechanisms, we can expect DAOs to increasingly adapt to and expand upon these AI concepts. These innovations will present unique challenges, so now is the time to start preparing.