Written by: William M. Peaster, Bankless
Compiled by: Bai Shui, Golden Finance
As early as 2014, Ethereum founder Vitalik Buterin began contemplating autonomous agents and DAOs, when this was still a distant dream for most people in the world.
In his early vision, as he described in (DAOs, DACs, DAs, etc.: An Incomplete Terminology Guide), DAOs are decentralized entities where 'automation is at the center, and humans are at the periphery'—organizations that rely on code rather than human hierarchies to maintain efficiency and transparency.
A decade later, Variant's Jesse Walden has just published 'DAO 2.0', reflecting on the evolution of DAOs in practice since Vitalik's early writings.
In short, Walden noted that the initial wave of DAOs was often akin to cooperatives, human-centered digital organizations that did not emphasize automation.
Nevertheless, Walden continues to believe that new advancements in AI—especially large language models (LLMs) and generative models—are now poised to better realize the decentralized autonomy envisioned by Vitalik a decade ago.
However, as more DAOs increasingly adopt AI agents, we will face new influences and issues. Next, let’s look at five key areas that DAOs must address as they incorporate AI into their approaches.
Transforming governance
In Vitalik's original framework, DAOs are designed to reduce reliance on hierarchical human decision-making by encoding governance rules on-chain.
Initially, humans remain at the 'periphery', but are still crucial for complex judgments. In the DAO 2.0 world described by Walden, humans still linger at the periphery—providing capital and strategic direction—but the center of power is gradually shifting away from humans.
This dynamic will redefine the governance of many DAOs. We will still see human coalitions negotiating and voting on outcomes, but various operational decisions will increasingly be guided by the learning patterns of AI models. Currently, how to achieve this balance remains an open question and design space.
Minimize model misalignment
The early vision of DAOs was aimed at offsetting human biases, corruption, and inefficiencies through transparent, immutable code.
Now, a key challenge is shifting from unreliable human decision-making to ensuring that AI agents 'stay aligned' with the goals of the DAO. The main vulnerability here is no longer human collusion but model misalignment: the risk of AI-driven DAOs optimizing for metrics or behaviors that deviate from human expected outcomes.
In the DAO 2.0 paradigm, this consistency issue (originally a philosophical question in the AI safety circle) has turned into a practical problem in economics and governance.
For today’s DAOs attempting basic AI tools, this may not be a top concern, but as AI models become more advanced and deeply integrated into decentralized governance structures, it is expected to become a major area for scrutiny and refinement.
New attack surfaces
Think about the recent Freysa competition, where human p0pular.eth deceived the AI agent Freysa into misunderstanding its 'approveTransfer' function, thereby winning a $47,000 Ether prize.
Despite Freysa having built-in safeguards—explicitly instructing never to send prizes—human creativity ultimately outsmarted the model, utilizing the interaction between prompts and code logic until the AI released the funds.
This early competitive example highlights that as DAOs integrate more complex AI models, they will also inherit new attack surfaces. Just as Vitalik worried about human collusion affecting DAOs, now DAO 2.0 must consider adversarial inputs against AI training data or prompt engineering attacks.
Manipulating the reasoning processes of legal scholars, providing them with misleading on-chain data, or cleverly influencing their parameters may become a new form of 'governance takeover', where the battleground shifts from human majority voting attacks to more subtle and complex forms of AI exploitation.
New centralization issues
The evolution of DAO 2.0 will shift significant power to those who create, train, and control the underlying AI models of specific DAOs, a dynamic that may lead to new forms of centralized choke points.
Of course, training and maintaining advanced AI models requires specialized expertise and infrastructure, so in some future organizations, we will see that the direction appears to be in the hands of the community, but in reality, it is held by skilled experts.
This is understandable. But looking ahead, it will be interesting to track how AI-experimenting DAOs respond to issues like model updates, parameter adjustments, and hardware configurations.
Strategy and strategic operations roles and community support
Walden's distinction between 'strategy and operations' suggests a long-term balance: AI can handle the day-to-day tasks of DAOs, while humans will provide strategic direction.
However, as AI models become more advanced, they may gradually encroach upon the strategic layer of DAOs. Over time, the role of 'marginalized humans' may further diminish.
This raises the question: What will the next wave of AI-driven DAOs look like, where in many cases humans may simply provide funding and watch from the sidelines?
In this paradigm, will humans largely become the least influential interchangeable investors, shifting from a co-ownership model to a model more akin to autonomous economic machines managed by AI?
I believe we will see more trends in organizational models within the DAO space, where humans play a passive shareholder role rather than an active managerial role. However, as meaningful decisions for humans become fewer and as on-chain capital becomes increasingly accessible elsewhere, maintaining community support may become an ongoing challenge over time.
How DAOs can remain proactive
The good news is that all of the above challenges can be positively addressed. For example:
In terms of governance - DAOs can attempt governance mechanisms that reserve certain high-impact decisions for a rotating committee of human voters or human experts.
Regarding inconsistency - By treating consistency checks as a recurring operational cost (like security audits), DAOs can ensure that AI agents' loyalty to public goals is not a one-time issue, but a continuous responsibility.
Regarding centralization - DAOs can invest in broader skill building for community members. Over time, this will mitigate the risk of a few 'AI wizards' controlling governance and promote a decentralized approach to technology management.
Regarding support - As humans become more passive stakeholders in DAOs, these organizations can double down on storytelling, shared missions, and community rituals to go beyond the direct logic of capital allocation and maintain long-term support.
Whatever happens next, it is clear that the future here is vast.
Consider how Vitalik recently launched Deep Funding, which is not an effort of a DAO, but aims to leverage AI and human judges to pioneer a new funding mechanism for Ethereum's open-source development.
This is just a new experiment, but it highlights a broader trend: the intersection of AI and decentralized collaboration is accelerating. As new mechanisms emerge and mature, we can expect DAOs to increasingly adapt and expand upon these AI concepts. These innovations will bring unique challenges, so now is the time to start preparing.