At the Redacted conference in Bangkok, Thailand, Near Protocol announced plans to develop the world’s largest open-source AI model, featuring 1.4 trillion parameters. The proposed model would be roughly 3.5 times the size of Meta’s Llama model, and the initiative aims to position Near Protocol as a leader in the open-source AI ecosystem.
Collaborative Development Strategy
The ambitious project will be driven by competitive, crowdsourced research through Near Protocol’s newly established AI Research Hub. Starting November 10, contributors can begin training an initial 500-million-parameter model, the first stage in a multi-phase development plan.
Structured Progression Across Seven Stages
The project will advance through seven progressively larger and more complex model stages. Only the highest-performing contributors will move on to subsequent models, keeping the development process competitive while scaling model capability incrementally.
Emphasis on Monetization and Data Privacy
A distinctive aspect of the initiative is the pairing of monetization with robust privacy measures. Near Protocol plans to use encrypted Trusted Execution Environments (TEEs) to safeguard training data while rewarding contributors for their work. This approach is intended to keep data handling secure, incentivize ongoing contributions, and allow the model to receive continuous updates as the project evolves.
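Near has not published the technical details of this mechanism, but the general pattern behind TEE-backed contribution is straightforward: work runs inside an enclave, the enclave attests to what was executed and what it produced, and a verifier releases a reward only if that attestation checks out. The Python sketch below is purely illustrative; the enclave key, reward logic, and data structures are hypothetical assumptions, not drawn from Near Protocol’s design, and a real TEE would use hardware-backed keys rather than a shared secret.

```python
import hashlib
import hmac
import json

# Hypothetical shared secret standing in for an enclave's attestation key.
# Real TEEs (e.g. Intel SGX, AWS Nitro Enclaves) use hardware-backed keys.
ENCLAVE_KEY = b"demo-enclave-key"

def run_in_enclave(training_data: bytes, model_stage: str) -> dict:
    """Simulate work done inside a TEE: compute a result and attest to it."""
    result_digest = hashlib.sha256(training_data).hexdigest()
    payload = json.dumps({"stage": model_stage, "result": result_digest},
                         sort_keys=True).encode()
    attestation = hmac.new(ENCLAVE_KEY, payload, hashlib.sha256).hexdigest()
    return {"payload": payload, "attestation": attestation}

def verify_and_reward(submission: dict, reward: int) -> int:
    """Verifier: release the reward only if the attestation is valid."""
    expected = hmac.new(ENCLAVE_KEY, submission["payload"],
                        hashlib.sha256).hexdigest()
    if hmac.compare_digest(expected, submission["attestation"]):
        return reward   # contribution accepted, payout released
    return 0            # attestation failed, no payout

if __name__ == "__main__":
    submission = run_in_enclave(b"contributor training batch", "stage-1-500M")
    print("reward issued:", verify_and_reward(submission, reward=10))
```

The point of the pattern is that the verifier never needs to see the contributor’s raw data; it only checks that the attested computation ran as claimed before releasing payment.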
Implications for the Future
This move underscores Near Protocol’s commitment to advancing the frontier of open-source AI development. By targeting a new benchmark in open-source model size, the project aims to deliver high-performance AI systems while promoting a collaborative, community-centric development ethos.
Further details and updates will follow as the project progresses.