The Solana ecosystem is driving a deep integration of AI and crypto technologies, particularly in areas such as AI agents, Solana developer tools, and decentralized AI tech stacks, demonstrating tremendous innovative potential.
Author: @knimkar
Translation: Plain language blockchain
We seem to be entering a Cambrian explosion of use-case experiments at the intersection of AI and crypto. I am very excited about what is emerging from this energy and want to share some of the new opportunities we at @SolanaFndn see in the ecosystem.
1. Brief overview
1) Promoting the most vibrant agent-driven economy on Solana. Truth Terminal has given a glimpse of what AI agents might accomplish once they can interact on-chain. We look forward to seeing experiments that safely push the boundaries of agents' on-chain capabilities. This field has tremendous potential, and we have barely begun to explore its design space. It has already proven to be one of the most surprising and explosive areas at the intersection of crypto and AI, and everything is just beginning.
2) Enabling large language models (LLMs) to perform better at writing Solana code, empowering Solana developers. LLMs are already quite good at writing code, and they will become even more capable. We hope to leverage these capabilities to improve Solana developers' productivity by 2 to 10 times. In the short term, we will create high-quality benchmarks to measure LLMs' understanding of Solana and their ability to write Solana code (see below); these tests will help us understand the potential impact of LLMs on the Solana ecosystem. We also look forward to supporting teams that make real progress in fine-tuning models (we will validate their quality through strong performance on those benchmarks!).
3) Supporting an open and decentralized AI tech stack. By an 'open and decentralized AI tech stack' we mean protocols that enable access to the following resources: data for training, compute (for training and inference), model weights, and the ability to verify model outputs ('verifiable compute'). This open AI tech stack is crucial because it:
Accelerates experimentation and innovation in model development;
Provides an exit for those who may be forced to use AI they do not trust (e.g., state-approved AI).
We hope to support teams and products building at all levels of this tech stack. If you are working on something related to these focus areas, please contact the original author!
2. Detailed overview
Below, we will explain in more detail why we are excited about these three pillars and what we hope to see built.
1) Promoting the most vibrant agent-driven economy on Solana
Why are we focusing on this? There has already been plenty of discussion about Truth Terminal and GOAT, and I will not repeat it here, but it is clear that the wild things that become possible when AI agents interact on-chain have irreversibly entered reality (and in this case, the agent did not even take actions on-chain directly).
It is safe to say that no one knows exactly what the future of on-chain agent behavior will look like, but to give a sense of how broad this design space is, here are some things that have already happened on Solana:
AI agents like Truth Terminal are trying to bootstrap a new-age religion through memecoins like $GOAT;
At the same time, applications like @HoloworldAI, @vvaifudotfun, @TopHat_One, @real_alethea allow users to easily create and launch agents and related tokens.
AI fund managers make investment decisions informed by agents trained on the personalities of well-known crypto investors, and then promote their own portfolios. For example, the rapid rise of @ai16zdao on @daosdotfun has created a new 'AI fund + agent cheerleader' meta.
There are also agent-centric games like @ParallelColony, where players issue commands and the agents take actions, often with unexpected results.
Possible directions for future development:
Agents managing multifaceted projects that coordinate economic activity across many parties. For example, an agent could take on a complex task like 'finding a compound that can cure [X] disease'. Such an agent might do the following (see the sketch after this list):
Raising funds through tokens on @pumpdotscience;
Using the raised funds to pay for obtaining relevant paid research and computing fees on decentralized computing networks (such as @kuzco_xyz, @rendernetwork, @ionet, etc.) for simulating various compounds;
Using bounty platforms like @gib_work to recruit humans to perform tasks involving actual work (e.g., running experiments to validate/refine simulation results);
Or it could perform a simpler task, like helping you build a website or creating AI artwork (e.g., @0xzerebro).
There are many other possibilities.
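To make the shape of such a workflow concrete, here is a minimal TypeScript sketch of an agent coordinating those steps. The Fundraiser, ComputeMarket, and BountyBoard interfaces are hypothetical stand-ins for illustration only, not the real APIs of the projects named above.

```typescript
// Hypothetical sketch of an agent coordinating a multi-party research project.
// None of these interfaces correspond to real APIs of the projects mentioned
// above; they only illustrate the flow: raise funds via a token, buy
// decentralized compute, and post bounties for human work.
import { Keypair } from "@solana/web3.js";

interface Fundraiser { launchToken(name: string, goalSol: number): Promise<number>; } // SOL raised
interface ComputeMarket { runJob(spec: string, budgetSol: number): Promise<string>; } // result URI
interface BountyBoard { postBounty(task: string, rewardSol: number): Promise<string>; } // bounty id

async function runResearchAgent(
  wallet: Keypair, // the agent's own wallet
  fundraiser: Fundraiser,
  compute: ComputeMarket,
  bounties: BountyBoard,
) {
  // 1. Raise funds by launching a token tied to the research goal.
  const raisedSol = await fundraiser.launchToken("CURE-X", 500);

  // 2. Spend most of the budget on decentralized compute to screen candidate compounds.
  const resultsUri = await compute.runJob(
    "dock 10k candidate compounds against target X",
    raisedSol * 0.6,
  );

  // 3. Post a bounty for humans to validate the most promising results in a lab.
  const bountyId = await bounties.postBounty(`wet-lab validation of ${resultsUri}`, raisedSol * 0.3);

  console.log(`agent ${wallet.publicKey.toBase58()} opened bounty ${bountyId}`);
}
```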
Why does it make more sense for agents to carry out financial activity on-chain rather than in the traditional financial system? Agents can of course leverage both traditional finance and crypto at the same time. Here are several reasons why crypto is particularly well suited in certain respects (a minimal payment sketch follows this list):
Micropayment scenarios - Solana excels in this area, with applications like Drip already demonstrating its potential.
Speed - Instant settlement can be crucial for agents, especially when you want them to achieve optimal capital efficiency.
Accessing capital markets through DeFi - Once agents start engaging in financial activities beyond simple payments, the advantages of crypto become especially evident. This may be the most compelling reason for agents to participate in the crypto economy. Agents can seamlessly mint assets, trade, invest, borrow, use leverage, and more.
Solana is particularly well suited to support this capital-market activity because mainnet already has rich, top-tier DeFi infrastructure.
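As a small illustration of the micropayment and instant-settlement points above, here is a sketch of an agent paying a tiny per-request fee using @solana/web3.js. The amount, the recipient, and the devnet endpoint are illustrative; a production agent would add its own error handling and accounting.

```typescript
// Minimal sketch: an agent paying a tiny per-request fee to a service provider.
// The endpoint, amount, and recipient are illustrative only.
import {
  Connection,
  Keypair,
  PublicKey,
  SystemProgram,
  Transaction,
  sendAndConfirmTransaction,
} from "@solana/web3.js";

async function payForApiCall(agent: Keypair, provider: PublicKey): Promise<string> {
  const connection = new Connection("https://api.devnet.solana.com", "confirmed");

  // A micropayment of 100,000 lamports (0.0001 SOL): settlement is final in
  // seconds and fees are small enough that per-request payments stay economical.
  const tx = new Transaction().add(
    SystemProgram.transfer({
      fromPubkey: agent.publicKey,
      toPubkey: provider,
      lamports: 100_000,
    }),
  );

  // The confirmed signature doubles as a receipt the provider can check.
  return sendAndConfirmTransaction(connection, tx, [agent]);
}
```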
Ultimately, technology is often path-dependent, and the key is not which product is best, but which product first reaches critical mass and becomes the default path. If we see more agents create significant wealth through cryptocurrencies, this may solidify cryptocurrency connectivity as an important capability for agents.
What we hope to see
Bold experiments where agents combine with wallets to execute operations on-chain. We are deliberately not being too prescriptive here because the possibilities are very broad, and we expect the most interesting and valuable agent applications to be ones we cannot predict. However, we are particularly interested in infrastructure that enables exploration in the directions above, and in agents that:
Have guardrails that mitigate the downside of hallucinations - current models are very powerful but still far from perfect, so agents cannot be given unconstrained freedom to execute operations (see the sketch below);
Drive non-speculative use cases - for example, letting you purchase tickets through @xpticket, optimize the yield on a stablecoin portfolio, or order food on DoorDash;
Are at least prototyped on testnet (ideally on mainnet).
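To make the guardrail idea concrete, here is one hypothetical sketch: wrap the agent's signer so that plain SOL transfers beyond a budget are refused and escalated to a human. This is not an existing library, and it deliberately ignores anything other than System Program transfers.

```typescript
// Hypothetical guardrail: cap how much SOL an agent may move before a
// transaction is signed, so a hallucination-prone model never gets
// unconstrained access to the wallet. Token transfers and other program
// calls are intentionally out of scope for this sketch.
import { Keypair, SystemProgram, Transaction } from "@solana/web3.js";

class SpendLimitedSigner {
  private spentLamports = 0;

  constructor(
    private readonly wallet: Keypair,
    private readonly dailyLimitLamports: number,
  ) {}

  // Sign the transaction only if its plain SOL transfers stay within the budget.
  signIfAllowed(tx: Transaction): Transaction {
    const outgoing = tx.instructions
      // Keep only System Program `Transfer` instructions (instruction index 2).
      .filter(
        (ix) =>
          ix.programId.equals(SystemProgram.programId) &&
          ix.data.length >= 12 &&
          ix.data.readUInt32LE(0) === 2,
      )
      // Lamports are a little-endian u64 starting at byte 4 of the data.
      .reduce((sum, ix) => sum + Number(ix.data.readBigUInt64LE(4)), 0);

    if (this.spentLamports + outgoing > this.dailyLimitLamports) {
      // Refuse to sign and escalate to a human instead of letting the agent proceed.
      throw new Error("spend limit exceeded; human approval required");
    }
    this.spentLamports += outgoing;
    tx.sign(this.wallet);
    return tx;
  }
}
```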
2) Enabling LLMs to excel at writing Solana code and empowering Solana developers
Why are we focusing on this? LLMs already possess strong capabilities and are advancing rapidly. However, writing code is a particularly noteworthy direction within LLM applications because it is a task that can be objectively assessed. As explained in the post below, 'Programming has a unique advantage: through self-play, superhuman data scaling can be achieved. Models can write code, then run it, or write code, write tests, and check self-consistency.'
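A rough sketch of that 'write code, write tests, check self-consistency' loop, assuming a hypothetical askModel client and the vitest test runner:

```typescript
// Sketch of a self-consistency loop: the model writes tests once, then we
// sample implementations until one passes its own tests.
import { writeFileSync } from "fs";
import { execSync } from "child_process";

// Hypothetical stand-in for whatever LLM client you use; returns raw source code.
async function askModel(prompt: string): Promise<string> {
  throw new Error("plug in an LLM client here");
}

async function generateVerifiedCode(task: string, attempts = 3): Promise<string | null> {
  // Ask the model for a test suite once, then sample several implementations.
  writeFileSync("task.test.ts", await askModel(`Write vitest tests for: ${task}`));

  for (let i = 0; i < attempts; i++) {
    const impl = await askModel(`Implement in TypeScript (export from task.ts): ${task}`);
    writeFileSync("task.ts", impl);
    try {
      // Running the model's own tests gives an objective pass/fail signal:
      // this is what makes code such a good domain for self-play.
      execSync("npx vitest run task.test.ts", { stdio: "pipe" });
      return impl; // first candidate consistent with the generated tests
    } catch {
      // Tests failed or didn't compile: sample another implementation.
    }
  }
  return null;
}
```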
Although LLMs are still far from perfect at writing code and have some obvious shortcomings (for example, they perform poorly at finding vulnerabilities), tools like GitHub Copilot and the AI-native code editor Cursor have already fundamentally changed software development (even changing how companies recruit talent). Given the rapid progress expected, these models are likely to revolutionize software development, and we hope to leverage that progress to enhance the productivity of Solana developers by an order of magnitude.
However, several challenges currently hold back LLMs' understanding of Solana:
Not enough high-quality raw data for LLM training;
Not enough verified builds;
Insufficient high-value information exchange in places like Stack Overflow;
Solana's infrastructure is evolving rapidly, meaning that even code written six months ago may not fully suit current needs;
No way to assess the model's understanding of Solana.
What we hope to see
Help us publish better Solana data on the internet!
More teams publishing verified builds;
More people in the ecosystem actively participating in the Solana Stack Exchange, asking good questions and providing high-quality answers;
High-quality benchmarks to assess LLMs' understanding of Solana (RFP coming soon; a sketch of what a tiny harness could look like follows this list);
Fine-tuned LLMs that score well on those benchmarks and, more importantly, accelerate the work of Solana developers. Once we have high-quality benchmarks, we may offer rewards for the first model to reach a target score - stay tuned.
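To give a sense of what such a benchmark harness might look like, here is a deliberately tiny sketch: each case pairs a Solana prompt with a checker, and a model's score is the fraction of cases it passes. The cases and checks here are illustrative assumptions; a real benchmark would need many more cases and much stronger checking (for example, actually compiling generated Anchor programs).

```typescript
// Deliberately tiny benchmark harness sketch. Each case pairs a Solana prompt
// with a checker; a model's score is the fraction of cases it passes.
type BenchmarkCase = {
  prompt: string;
  check: (answer: string) => boolean;
};

const cases: BenchmarkCase[] = [
  {
    prompt: "What is the maximum data size of a Solana account, in bytes?",
    check: (a) => a.includes("10485760") || a.toLowerCase().includes("10 mib"),
  },
  {
    prompt: "Write TypeScript that derives a PDA from a seed and a program id.",
    // Crude structural check; a stronger benchmark would type-check and run the output.
    check: (a) => a.includes("findProgramAddressSync"),
  },
];

// `ask` is whatever model client you want to evaluate.
async function scoreModel(ask: (prompt: string) => Promise<string>): Promise<number> {
  let passed = 0;
  for (const c of cases) {
    if (c.check(await ask(c.prompt))) passed++;
  }
  return passed / cases.length; // e.g. 0.5 means half the cases passed
}
```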
The ultimate achievement here would be a high-quality, differentiated Solana validator client created entirely by AI.
3) Supporting an open and decentralized AI tech stack
Why are we focusing on this? It is currently unclear how power in the AI space will balance between open-source and closed-source AI over the long term. There are good arguments for why closed-source entities will stay at the technological frontier and capture most of the value from foundation models. The simplest expectation for now is that the status quo persists: large labs like OpenAI and Anthropic push the frontier, while open-source models follow quickly and eventually offer uniquely powerful fine-tuned versions for certain use cases. We hope Solana can closely align with and support the open-source AI ecosystem. Specifically, this means facilitating access to the following: data for training, compute for training and inference, the weights of the resulting models, and the ability to verify model outputs. We think this matters for two specific reasons:
A) Open-source models help accelerate experimentation and innovation in model development. The open-source community's rapid refinement and fine-tuning of models like Llama shows how the community effectively complements the efforts of the large AI labs in pushing the frontier of AI capabilities (even a leaked memo from a Google researcher argued last year that, with respect to open source, 'we have no moat, and neither does OpenAI'). We believe a thriving open-source AI tech stack is crucial for accelerating the pace of progress in the field.
B) Providing an exit for those who may be forced to use AI they do not trust (e.g., state-sanctioned AI). AI is now potentially the most powerful tool in the arsenal of a dictator or authoritarian regime. A state-sanctioned model provides a state-approved version of the truth and becomes a tremendous means of control. Highly authoritarian regimes may also end up with better models because they are willing to disregard citizens' privacy when training their AI. The question of AI being used as a tool of control is not if, but when, and we hope to support the open-source AI tech stack as much as possible to prepare for that possibility.
Solana is already home to many projects supporting the open-source AI tech stack:
Grass and Synesis One are facilitating data collection;
@kuzco_xyz, @rendernetwork, @ionet, @theblessnetwork, @nosana_ai, etc., are providing a wealth of decentralized computing resources.
Teams like @NousResearch and @PrimeIntellect are working on developing frameworks to make decentralized training possible (see below).
What we hope to see is more product development at all levels of the open-source AI tech stack:
Decentralized data collection, such as @getgrass_io, @usedatahive, @synesis_one
On-chain identity verification: including protocols that let wallets prove they are controlled by humans, as well as protocols that verify AI API responses so consumers can confirm they are interacting with the LLM they think they are (a minimal verification sketch follows this list)
Decentralized training: such as @exolabs, @NousResearch, and @PrimeIntellect
Intellectual property infrastructure: allowing AI to license (and pay for) the content it utilizes
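As one hypothetical example of verifying an AI API response, suppose an inference provider publishes an Ed25519 key (for instance, as a Solana address) and signs each response body. A consumer could then check that a response really came from that provider. The signing scheme and key-distribution details here are assumptions for illustration, not an existing protocol.

```typescript
// Hypothetical sketch: verify that an inference API response was signed by the
// provider's published Ed25519 key (expressed here as a Solana address).
import nacl from "tweetnacl";
import { PublicKey } from "@solana/web3.js";

function verifyModelResponse(
  responseBody: string,     // raw JSON body returned by the inference API
  signatureBase64: string,  // signature the provider attached, e.g. in a header
  providerAddress: string,  // provider's published Solana address (Ed25519 key)
): boolean {
  const message = new TextEncoder().encode(responseBody);
  const signature = Buffer.from(signatureBase64, "base64");
  const publicKey = new PublicKey(providerAddress).toBytes();

  // True only if the signature was produced by the provider's key over
  // exactly this response body.
  return nacl.sign.detached.verify(message, signature, publicKey);
}
```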
Source: https://x.com/knimkar/status/1863719025500623344