Author: William M. Peaster, Bankless; Translated by: Bai Shui, Jinse Finance

As early as 2014, Ethereum founder Vitalik Buterin began to consider autonomous agents and DAOs, when it was still a distant dream for most people in the world.

In his early vision, as described in the (DAO, DAC, DA, etc.: Incomplete Terminology Guide) article, DAOs are decentralized entities, 'automation at the center, humans at the edge'—organizations relying on code instead of human hierarchies to maintain efficiency and transparency.

RnssDuUSNswx9mEF0jYsN6cfNmzdz3vtCXUSttPF.jpeg

A decade later, Jesse Walden of Variant has just published 'DAO 2.0', reflecting on the evolution of DAOs in practice since Vitalik's early writings.

In short, Walden points out that the initial wave of DAOs often resembled cooperatives, human-centered digital organizations that did not emphasize automation.

Nevertheless, Walden continues to believe that the new advancements in artificial intelligence—especially large language models (LLMs) and generative models—are now expected to better achieve the decentralized autonomy that Vitalik envisioned a decade ago.

However, as DAO experiments increasingly adopt AI agents, we will face new impacts and issues. Now, let’s take a look at five key areas that DAOs must address when incorporating AI into their approaches.

Transforming governance

In Vitalik's original framework, DAOs were designed to reduce reliance on hierarchical human decision-making by encoding governance rules on-chain.

Initially, humans still remain on the 'fringe', but are crucial for complex judgments. In the DAO 2.0 world described by Walden, humans still linger on the edges—providing capital and strategic direction—but the center of power is gradually no longer human.

This dynamic will redefine the governance of many DAOs. We will still see human coalitions negotiating and voting on outcomes, but various operational decisions will increasingly be guided by the learning patterns of AI models. How to achieve this balance remains an open question and design space.

Minimize model misalignment

The early vision of DAOs aimed to counter human biases, corruption, and inefficiency through transparent, immutable code.

Now, a key challenge is to shift from unreliable human decision-making to ensuring that AI agents 'align' with the goals of the DAO. The main vulnerability here is no longer human collusion but model misalignment: the risk that AI-driven DAOs optimize for indicators or behaviors that diverge from expected human outcomes.

In the DAO 2.0 paradigm, this consistency issue (initially a philosophical question in the AI safety circle) turns into a practical issue of economics and governance.

For today's DAOs attempting basic AI tools, this may not be a top priority, but as AI models become more advanced and deeply integrated into decentralized governance structures, it is expected to become a major area for scrutiny and refinement.

New attack surfaces

Think of the recent Freysa contest, where the human p0pular.eth deceived the AI agent Freysa into misunderstanding its 'approveTransfer' function, winning a $47,000 Ether prize.

Although Freysa has built-in safeguards—explicitly instructing never to send prizes—human creativity ultimately outpaced the model, exploiting the interplay between prompts and code logic until the AI released funds.

This early contest example highlights that as DAOs integrate more complex AI models, they will also inherit new attack surfaces. Just as Vitalik is concerned about collusion with humans in DOs or DAOs, now DAO 2.0 must consider adversarial inputs to AI training data or real-time engineering attacks.

Manipulating the reasoning process of a Master of Laws, providing it with misleading on-chain data, or cleverly influencing its parameters could become a new form of 'governance takeover', where the battlefield shifts from human majority voting attacks to more subtle and complex forms of AI exploitation.

New centralization issues

The evolution of DAO 2.0 will shift significant power to those who create, train, and control the specific underlying AI models for the DAO, a dynamic that may lead to new forms of centralization bottlenecks.

Of course, training and maintaining advanced AI models requires specialized expertise and infrastructure, so in some organizations in the future, we will see the direction seemingly in the hands of the community but actually controlled by skilled experts.

This is understandable. But looking ahead, it will be interesting to track how AI-experimenting DAOs respond to issues like model updates, parameter tuning, and hardware configuration.

Strategic and operational roles and community support

Walden's distinction between 'strategy and operations' indicates a long-term balance: AI can handle day-to-day DAO tasks, while humans will provide strategic direction.

However, as AI models become more advanced, they may gradually encroach on the strategic layer of DAOs. Over time, the role of 'fringe people' may diminish further.

This raises a question: what will happen with the next wave of AI-driven DAOs, in many cases where humans may simply provide funding and watch from the sidelines?

In this paradigm, will humans largely become the least influential interchangeable investors, shifting from a model of co-owning brands to one more akin to autonomous economic machines managed by AI?

I think we will see more trends in organizational models within the DAO space, where humans play a passive shareholder role rather than an active managerial role. However, as meaningful decisions for humans become fewer, and as on-chain capital becomes easier to provide elsewhere, maintaining community support may become an ongoing challenge over time.

How DAOs maintain proactivity

The good news is that all the challenges mentioned above can be actively addressed. For example:

  • In terms of governance—DAOs can experiment with governance mechanisms that reserve certain high-impact decisions for human voters or human expert rotating committees.

  • Regarding inconsistency—by viewing consistency checks as a recurring operational expense (like security audits), DAOs can ensure that AI agents' loyalty to public goals is not a one-time issue but a continuous responsibility.

  • Regarding centralization—DAOs can invest in broader skill-building for community members. Over time, this will mitigate the risk of a few 'AI whizzes' controlling governance and promote a decentralized approach to technical management.

  • Regarding support—as humans become more passive stakeholders in many DAOs, these organizations can double down on storytelling, shared missions, and community rituals to transcend the direct logic of capital allocation and maintain long-term support.

Whatever happens next, it is clear that the future here is vast.

Consider how Vitalik recently launched Deep Funding, which is not an effort of a DAO, but aims to leverage AI and human judges to create a new funding mechanism for Ethereum open-source development.

This is just a new experiment, but it highlights a broader trend: the intersection of AI and decentralized collaboration is accelerating. As new mechanisms come online and mature, we can expect DAOs to increasingly adapt and expand these AI ideas. These innovations will present unique challenges, so now is the time to start preparing.