Excited to share that Marlin and @cortensor are joining forces to build more accessible, decentralized AI infrastructure. Cortensor’s community-driven AI inference network and Marlin’s TEE-based coprocessor network, Oyster, will integrate to offer users a powerful, flexible AI solution for both Web2 and Web3 applications.

Cortensor’s decentralized network harnesses distributed computation and open-source models to bring scalable, cost-effective AI services to a wide audience. By allowing contributors to run large language models (LLMs) and various other GenAI models, Cortensor is pushing AI beyond traditional boundaries, making it more accessible and affordable.

Marlin’s Oyster network complements this vision with a secure, high-performance TEE environment. Oyster nodes, designed for efficient, serverless execution, ensure that tasks can be processed quickly and verifiably while safeguarding data. With TEEs, Marlin provides a secure foundation that lets users delegate AI inference tasks with confidence.

Together, Cortensor’s and Marlin’s networks enable users to run AI inference on demand with robust security guarantees. This collaboration gives AI developers and businesses a reliable SLA, helping them build seamless AI-powered workflows with flexible cost and performance options. We’re also excited to support Cortensor’s community as it begins testing and onboarding more participants.