In an era where Artificial Intelligence (AI) is becoming a cornerstone of modern industries, critical questions arise: How can we ensure trust in AI models? How do we verify that an AI model was built transparently, trained on the right data, and designed to respect user privacy?
This article explores how GPU-Enabled Trusted Execution Environments (TEEs) and Oasis Runtime Offchain Logic (ROFL) can create AI models with verifiable provenance while publishing this information onchain. These innovations not only enhance transparency and privacy but also pave the way for decentralized AI marketplaces, where trust and collaboration thrive.
What Are GPU-Enabled TEEs and Why Are They Essential in AI?
Trusted Execution Environment (TEE)
A TEE is a secure enclave within hardware that provides a safe environment for sensitive data and application execution. It ensures the integrity of processes, even in cases where the operating system or firmware is compromised.
GPU-Enabled TEEs
GPU-Enabled TEEs are an upgraded version of TEEs, leveraging the computational power of GPUs to handle complex machine learning (ML) tasks securely. Prime examples include:
- NVIDIA H100 GPUs: Capable of integrating with Confidential Virtual Machines (Confidential VMs) to perform secure AI training and inference tasks.
- AMD SEV-SNP or Intel TDX: Providing hardware-backed security for data and processes.
By combining these technologies, GPU-Enabled TEEs protect sensitive data while delivering high-performance AI processing.
Oasis Runtime Offchain Logic (ROFL): A Game Changer
ROFL, developed by Oasis Labs, is a framework that allows complex logic to run offchain while maintaining security and verifiability. When paired with GPU-Enabled TEEs, ROFL offers:
- Provenance for AI models: Transparent details about how AI models are built and trained.
- Onchain publishing: Ensures that provenance data is publicly accessible and tamper-proof.
- Privacy preservation: Enables AI training and inference on sensitive data without exposing it.
Experiment: Fine-Tuning LLMs in a GPU-Enabled TEE
This experiment demonstrates how an AI model’s provenance can be verified and published onchain by fine-tuning a large language model (LLM) within a GPU-Enabled TEE.
Setting Up the Trusted Virtual Environment
1. Hardware setup:
- NVIDIA H100 GPU with NVIDIA nvtrust security.
- Confidential VM (CVM): Powered by AMD SEV-SNP.
2. Verification of security:
- Boot-up data for the CVM is verified using cryptographic hashes.
- The GPU’s integrity is validated to ensure it operates within a trusted environment.
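The article does not include the verification flow itself, and the exact steps depend on the attestation tooling (NVIDIA nvtrust for the GPU, the AMD SEV-SNP attestation report for the CVM). As a simplified sketch of the comparison step only, the snippet below hashes the CVM boot image and checks a GPU measurement against known-good reference values; all file names and report fields are hypothetical, and a real deployment must first verify the report’s signature chain.

```python
import hashlib
import json

# Hypothetical inputs: a CVM boot artifact measured at launch, an attestation
# report exported from the enclave, and a JSON file of expected ("golden")
# values recorded when the trusted image was built.
KERNEL_IMAGE = "cvm_kernel.img"
ATTESTATION_REPORT = "attestation_report.json"
EXPECTED_VALUES = "expected_measurements.json"

def sha256_file(path: str) -> str:
    """SHA-256 of a file, streamed in chunks to handle large images."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def verify_environment() -> bool:
    with open(EXPECTED_VALUES) as f:
        expected = json.load(f)
    with open(ATTESTATION_REPORT) as f:
        report = json.load(f)

    # 1. Boot-up data: hash the kernel image and compare to the golden value.
    if sha256_file(KERNEL_IMAGE) != expected["kernel_sha256"]:
        print("FAILED: kernel image hash mismatch")
        return False

    # 2. GPU integrity: compare the GPU measurement carried in the (already
    #    signature-verified) attestation report against the expected value.
    if report.get("gpu_measurement") != expected["gpu_measurement"]:
        print("FAILED: GPU measurement mismatch")
        return False

    print("OK: CVM boot data and GPU measurement match the reference values")
    return True

if __name__ == "__main__":
    verify_environment()
```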
Fine-Tuning the Model
- Base model: Meta Llama 3 8B Instruct.
- Libraries used: Hugging Face Transformers and Parameter-Efficient Fine-Tuning (PEFT).
- Fine-tuning technique: Low-Rank Adaptation (LoRA), a lightweight approach to model fine-tuning.
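The article names the stack but not the training script itself. Below is a minimal sketch of attaching a LoRA adapter with Transformers and PEFT, assuming illustrative hyperparameters and access to the gated meta-llama/Meta-Llama-3-8B-Instruct checkpoint on Hugging Face:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

# Illustrative configuration: the experiment's exact hyperparameters were
# not published, so these are reasonable defaults, not the original values.
MODEL_ID = "meta-llama/Meta-Llama-3-8B-Instruct"  # gated; requires HF access

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForCausalLM.from_pretrained(
    MODEL_ID,
    torch_dtype=torch.bfloat16,  # H100 GPUs support bf16 natively
    device_map="auto",
)

# LoRA: freeze the base weights and train small low-rank adapter matrices
# injected into the attention projections.
lora_config = LoraConfig(
    r=16,                  # rank of the low-rank update
    lora_alpha=32,         # scaling factor applied to the update
    lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # typically well under 1% of 8B parameters

# From here, training proceeds with the standard Hugging Face Trainer (or
# TRL's SFTTrainer) on the fine-tuning dataset; only the adapter weights are
# updated and later saved with model.save_pretrained("llama3-lora").
```

Because only the adapter matrices are trained, the resulting artifact is a small fraction of the full model’s size, which also keeps the later hashing and onchain publishing step lightweight.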
Experimental Results
- Execution time within the Confidential VM: 30 seconds (average).
- Execution time on a non-secure host machine: 12 seconds (average).
- Trade-off: While the CVM introduced latency, it provided hardware-backed security and verifiable transparency.
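The article does not describe how these timings were collected; one straightforward way to reproduce this kind of comparison is to wrap the training call in a wall-clock timer and run it once inside the CVM and once on a non-confidential host:

```python
import time

def timed(fn, *args, **kwargs):
    """Run fn and report its wall-clock duration in seconds."""
    start = time.perf_counter()
    result = fn(*args, **kwargs)
    print(f"{fn.__name__} took {time.perf_counter() - start:.1f} s")
    return result

# Example usage, assuming a configured Hugging Face Trainer named `trainer`:
# timed(trainer.train)
```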
Publishing Provenance Onchain with ROFL
A key feature of this setup is the ability to publish AI model provenance onchain using ROFL. This process involves:
1. Attestation Validation:
- Verify the cryptographic chain of trust from the AMD root key to the Versioned Chip Endorsement Key (VCEK).
- Confirm that the attestation report is genuine and matches the model’s metadata.
2. Publishing the Data:
- Record the cryptographic hash of the model and training data in a Sapphire smart contract onchain (see the sketch after this list).
- This ensures anyone can verify the model’s authenticity and provenance.
3. Benefits:
- Transparency: Users gain confidence in the integrity of the AI models.
- Community value: Developers can collaborate and build upon verified models.
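The article does not publish the contract interface, so the sketch below is only an illustration under stated assumptions: a hypothetical recordProvenance(bytes32, bytes32) function on a Sapphire contract, a placeholder contract address and RPC endpoint, and plain web3.py (Sapphire is EVM-compatible; production code would typically also route calls through the Sapphire encryption wrapper).

```python
import hashlib
import json
import os
from web3 import Web3

# Placeholder values for illustration only; not the contract from the experiment.
SAPPHIRE_RPC = "https://testnet.sapphire.oasis.dev"
CONTRACT_ADDRESS = "0x0000000000000000000000000000000000000000"
CONTRACT_ABI = json.loads("""[
  {"inputs": [{"name": "modelHash", "type": "bytes32"},
              {"name": "dataHash",  "type": "bytes32"}],
   "name": "recordProvenance", "outputs": [],
   "stateMutability": "nonpayable", "type": "function"}
]""")

def sha256_file(path: str) -> bytes:
    """Streamed SHA-256 digest of a file (model weights or training data)."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.digest()

w3 = Web3(Web3.HTTPProvider(SAPPHIRE_RPC))
account = w3.eth.account.from_key(os.environ["PROVENANCE_SIGNER_KEY"])  # key held inside the TEE
contract = w3.eth.contract(address=CONTRACT_ADDRESS, abi=CONTRACT_ABI)

# Hash the fine-tuned adapter and the training data, then record both onchain.
tx = contract.functions.recordProvenance(
    sha256_file("llama3-lora/adapter_model.safetensors"),
    sha256_file("training_data.jsonl"),
).build_transaction({
    "from": account.address,
    "nonce": w3.eth.get_transaction_count(account.address),
})
signed = account.sign_transaction(tx)
tx_hash = w3.eth.send_raw_transaction(signed.raw_transaction)  # .rawTransaction on web3.py v6
print("Provenance recorded in tx:", tx_hash.hex())
```

Anyone can later recompute the same hashes over the published artifacts and compare them with the onchain record to confirm the model’s provenance.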
Decentralized Marketplaces for AI
Publishing AI model provenance onchain sets the foundation for decentralized AI marketplaces, where:
- Users can choose verified and transparent models.
- AI developers are fairly compensated for sharing models or training data.
- Privacy and security are maintained, encouraging data contributions and collaboration.
These marketplaces could drive a virtuous cycle of innovation, where contributions lead to better models, which in turn attract more data and resources.
The Future of Transparent AI
This experiment is just the beginning. Future advancements with technologies like ROFL and GPU-Enabled TEEs promise to:
- Simplify adoption: With full Intel TDX support, developers can avoid configuring complex CVM stacks.
- Expand privacy capabilities: Enable AI training and inference on sensitive data while maintaining strict confidentiality.
- Accelerate innovation: Create modular frameworks for easy development and deployment of AI applications.
By bridging trust, privacy, and transparency, these technologies redefine how AI is developed and consumed.
Conclusion
The combination of GPU-Enabled TEEs and ROFL not only enhances transparency but also fosters a decentralized AI ecosystem where everyone can contribute and benefit.
This is the future of AI: trustworthy, transparent, and collaborative. Stay tuned for more advancements, and explore the possibilities at Oasis Labs.
#OasisNetwork $ROSE #TEE #Privacy