Binance Square
Ocean Ambassadors
@Square-Creator-910466064
Ocean Protocol, @Fetch_ai & @SingularityNET unite to create the Artificial Superintelligence Alliance through a token merger into a single $ASI token, with a combined value of $7.5 billion and a current ranking of ~#20 on #CoinMarketCap. ASI is the brainchild of @bengoertzel, @trentmc0 and @HMsheikh4, three leading minds in decentralized #AI.

The main goals:
- to accelerate the race to Artificial General Intelligence (AGI) and
- to challenge Big Tech’s grip on AI development, use, & monetization.
The dawn of a new era in AI is here and this alliance is combining the forces of the 3 projects to accelerate:
→ Laying the groundwork for a scalable AI infrastructure
→ The commercialization of groundbreaking technology
→ The path to #AGI’s development on the blockchain

What does this represent for the OCEAN community? Read the alpha below:
1️⃣$FET & $AGIX communities need to vote on the token merger proposal
2️⃣No voting is necessary from the OCEAN community, since Ocean relinquished all control over the OCEAN max supply mint
3️⃣Once approved, $FET is rebranded to $ASI, with a total supply of 2.63 Billion tokens
4️⃣$OCEAN tokens migrate to $ASI at a conversion rate of 0.433226 ASI per OCEAN (a quick arithmetic sketch follows below)
5️⃣$AGIX tokens migrate to $ASI at a conversion rate of 0.433350 ASI per AGIX
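For illustration only, here's a tiny Python sketch of the conversion arithmetic implied by the rates above. The rates are as announced; the helper function itself is hypothetical, not official migration tooling, and it assumes $FET rebrands 1:1:

```python
# Illustrative only: rates as announced; this is not official migration tooling.
OCEAN_TO_ASI = 0.433226   # 1 OCEAN -> 0.433226 ASI
AGIX_TO_ASI = 0.433350    # 1 AGIX  -> 0.433350 ASI

def to_asi(ocean: float = 0.0, agix: float = 0.0, fet: float = 0.0) -> float:
    """Estimate ASI received for given OCEAN, AGIX and FET balances (FET assumed 1:1)."""
    return ocean * OCEAN_TO_ASI + agix * AGIX_TO_ASI + fet

print(to_asi(ocean=1_000))   # ~433.23 ASI for 1,000 OCEAN
```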

The Superintelligence Alliance takes form with a new governing council:
- Humayun Sheikh (http://Fetch.ai) - Chairman
- Ben Goertzel (SingularityNET) - CEO
- Trent McConaghy & Bruce Pon - representing Ocean Protocol
- Janet Adams - representing SingularityNET

“It is imperative that AGI and ASI are not owned and controlled by any particular party with their own biased interests. It makes total sense that our 3 projects come together to form a tokenomic network that has greater power to take on Big Tech”

“Our mission with this token merger is to combine our platforms to ensure ethical and transparent AI [...] This enhances data privacy and paves the way for a more democratic and trustworthy AI ecosystem”
#binance #Bitcoin #Solana📈🚀🌐 #sec #etf $OCEAN $SOL $ALT
Introducing 5x Predictoor Boost for Volume Data Farming

A bug fix made Volume DF rewards drop. Many were upset.

Going forward, 5x Predictoor Boost basically restores this drop.

Retroactively: we've airdropped another 4x of Volume DF rewards, compared to what Volume DFers got yesterday.
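As a rough illustration of the arithmetic (assuming yesterday's payout counts as the 1x baseline, which is my reading of the post, not an official figure):

```python
# Illustrative arithmetic only; the amount is a placeholder, not an actual reward figure.
baseline = 100.0                 # hypothetical Volume DF reward received yesterday (1x)
retro_airdrop = 4 * baseline     # the additional 4x airdropped retroactively
print(baseline + retro_airdrop)  # 500.0, i.e. 5x total, matching the going-forward 5x boost
```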

AMA soon!

This new blog post has details - blog.oceanprotocol.com
#Binance​ #Ethereum✅ #Bitcoin(BTC) #web3 #Solana🚀
$BTC $OCEAN $SOL
Ocean Protocol joins the #AIgaming Coalition to bring AI to gaming

Ocean Protocol and NIM Network are teaming up to bring you the future of gaming, as part of the AI Gaming Coalition. With Ocean Protocol’s cutting-edge, privacy-focused data sharing system, designed for AI, and NIM Network’s highly-adaptable blockchain, we’re setting new standards in gaming.
NIM Network is the first RollApp to launch on Dymension, offering the best playground for games that blend AI and Web3 technology. The collaboration with Ocean Protocol is set to expedite the shift towards a player-centric, AI-focused gaming landscape to enhance player engagement, dynamic content generation, and decentralized ownership.

By leveraging Ocean Protocol’s data sharing technology, the AI Gaming Coalition is set to bring together a diverse group of AI and gaming partners with a view to developing solutions that meet the dynamic needs of games. Other members include Dymension, Today The Game, JokeRace, and more to come. Grounded in research, this collaborative effort is set to drive innovation and growth throughout the industry, unlocking a world of endless possibilities in gaming.

As Ocean Protocol and NIM Network embark on this exciting journey, the gaming community can anticipate a new era of possibilities, where data ownership, privacy, and innovation converge to take AI-powered gaming to the next level.
Stay tuned as Ocean Protocol and NIM Network pave the way for a future where gaming meets the forefront of technological evolution!

#Binance​ #Solana🚀 #Bitcoin(BTC) #RWA $OCEAN $SOL $BTC

Ocean Protocol's ROADMAP

In this post we will talk about the Ocean Protocol roadmap: what components are currently being worked on and what exactly is planned for the near future. The Ocean Protocol team has developed a number of unique components and is actively improving them, one of which is Ocean Predictoor, where predictoors and traders have the opportunity to earn $OCEAN 🧐
So, Ocean Protocol's plans for the near future can be divided into the following points:
✅Development of Ocean Predictoor
✅Launch C2D Springboard
✅Launch Ocean Enterprise
Ocean Predictoor development
Here the work is aimed at simplifying use of the application, increasing earning opportunities for #Traders , and scaling #Defi within Ocean Predictoor.
Make $ trading: internal. Conduct experiments iteratively until we make $ trading in some combination of trading venue & pair, prediction strategy, trading strategy, and other factors. Simultaneously, build a first-class data pipeline and analytics dapp, to better answer the question “how much $ am I making” and related drill-down questions. The dapp will start with predictoor bots, then expand to trading bots.
Make $ trading: external. Update the Predictoor product so that the community can follow the make-$ strategy.
A more efficient approach to pricing: right now the price of each feed is very low compared with the earnings that traders receive, so a different pricing strategy will be applied here.
Also very important is the scaling of Ocean Predictoor:
A Telegram bot for traders greatly simplifies working with Ocean Predictoor for ordinary users, since this approach requires no technical skills from the trader: just launch the TG bot and buy prediction feeds.
Increase the number of DeFi feeds from 20 to 20,000. This will greatly expand Predictoor's capabilities, as well as the number of people who use it.
Going beyond DeFi. Ocean Predictoor was not developed only for DeFi; that is just the initial stage. It can also be used for weather, climate, energy, real estate, logistics, agriculture, food prices, and so on. It is possible to start with the weather in a small city, and then expand.
Stretch goal: a world model on ground-truth physics. The team will train a single AI model across all feeds. This model will continually ingest the physics of the world to predict its next state: truly a world model. They will also give it all the training data that current LLMs use. The net result: a model with vastly more data than current LLMs, and ground-truth physics in all that additional data. Prediction is intelligence. This is Predictoor’s endgame.
Launch Compute-to-Data Springboard
Work is also underway on one of Ocean Protocol's existing tools, namely Compute-to-Data.
✅Simplify deployment and support of various Ocean components
This will be achieved through the Unified backend. This update means easier installation with just one command and a user-friendly admin panel, scheduled for release in the second quarter of 2024.
✅Ease of use
A unified backend means simple installation and maintenance, and easier ways for users to write and deploy machine learning algorithms and models. This update will be released in the third quarter of 2024.
✅ Build a “Compute-to-Data springboard”
This platform will be a clear demonstration of the power of your AI algorithms, reminiscent of Ocean Market, but heavily customized for Compute-to-Data workflows
Ocean Enterprise - a new initiative
Ocean Enterprise is an initiative aimed at strengthening the synergies between traditional businesses and the innovative technologies provided by Ocean Protocol. This platform is a response to growing interest in Ocean technologies, including Data NFTs (data tokens), which allow data to be uniquely identified and traded, and Compute-to-Data, which enables processing of data without revealing its contents.
Ocean Enterprise represents the synergy of a team of specialists and a technology stack designed keeping in mind the needs and requirements of the business. This technology stack offers an enterprise version of Ocean, tailored to meet the requirements of various industries and compliance with regulations such as GDPR and MiCA.
The Ocean Enterprise Collective, made up of diverse organizations from multiple countries and industries, is working to gather requirements, design and develop a technology solution. This stack will enable enterprises to securely collaborate, share and monetize data while being fully compliant with current standards.
The first version of this product is expected to be released soon, and work on the project has already been underway for several months. Ocean Enterprise promises to be a bridge between modern technology innovation and the demanding world of traditional business, providing tools to interact with data efficiently and securely.
In conclusion, we can say that in 2024 the main directions for Ocean Protocol will be Predictoor, work on the “C2D springboard” and the launch of Ocean Enterprise. Another year of fruitful work lies ahead!
#BitcoinHalvingAlerts #DataMonetization #DataDrivenSuccess #BTC #WIF $BTC $SOL
Do you know what Ocean Predictoor is?

Ocean Predictoor is a stack and a dapp for prediction feeds. It has accountability for accuracy, via staking. It’s globally distributed and censorship-resistant, by being on-chain. We expect its accuracy to improve over time, due to its incentive structure. Its first use case is DeFi token prediction because users can close the data value-creation loop quickly to make tangible $.

Prediction feeds are crowd-sourced. "Predictoor" agents submit individual predictions and stake on them. They make money when they're correct and lose money when not. This drives accurate prediction feeds, because only accurate predictoors will be making $ and sticking around.

“Trader” agents buy aggregate predictions, then use them to take action like buying or selling. The more accurate the predictions, the more easily they make $, and the longer they stick around to keep buying prediction feeds from trading profits.

Predictoor is built on the Ocean Protocol stack, including contracts for tokenized data and middleware to cache metadata. To keep predictions private unless paid for, Predictoor uses the Oasis Sapphire privacy-preserving EVM chain.

The initial dapp is live at predictoor.ai. It’s for up/down predictions of #Bitcoin , #Ethereum , and other tokens’ prices. The dapp helps users build a mental model of Predictoor behavior. Predictoors’ and traders’ main workflow is to run predicting / trading bots with the help of the Py SDK. We have seeded Predictoor with bots whose AI/ML models have accuracy comfortably above 50%, a precondition to make $ trading.
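To make the mechanism concrete, here is a minimal Python sketch of the core idea described above: crowd-sourced up/down predictions, stake-weighted aggregation for traders, and redistribution of stake from wrong to correct predictoors. It's a conceptual toy, not the actual Predictoor contracts or Py SDK:

```python
# Conceptual illustration only: not the real Predictoor contracts or SDK.
from dataclasses import dataclass

@dataclass
class Prediction:
    predictoor: str
    predicts_up: bool   # True = "price goes up", False = "price goes down"
    stake: float        # OCEAN staked on this prediction

def aggregate(predictions: list[Prediction]) -> bool:
    """Stake-weighted aggregate signal a trader would buy: True means 'up'."""
    up = sum(p.stake for p in predictions if p.predicts_up)
    down = sum(p.stake for p in predictions if not p.predicts_up)
    return up >= down

def settle(predictions: list[Prediction], actual_up: bool) -> dict[str, float]:
    """Wrong predictoors lose their stake; it is split among correct ones pro-rata to stake."""
    correct = [p for p in predictions if p.predicts_up == actual_up]
    wrong_pot = sum(p.stake for p in predictions if p.predicts_up != actual_up)
    correct_total = sum(p.stake for p in correct) or 1.0
    return {p.predictoor: p.stake + wrong_pot * p.stake / correct_total for p in correct}

preds = [Prediction("alice", True, 10), Prediction("bob", True, 5), Prediction("carol", False, 8)]
print(aggregate(preds))               # True: stake-weighted consensus says "up"
print(settle(preds, actual_up=True))  # alice and bob split carol's 8 OCEAN pro-rata
```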

You can check this dapp here - www.predictoor.ai
#defi #Tokenization #solana $BTC $SOL $OCEAN

Discord Community Dynamics: Analyze Ocean Protocol Interactions!

I hasten to inform you about the upcoming event from @oceanprotocol, namely Discord Community Dynamics: Analyze Ocean Protocol Interactions! Let's work together to unlock valuable knowledge and contribute to the growth of this vibrant community!

So, what is this?

This initiative is designed to delve into the patterns of community engagement and trends within Ocean Protocol's Discord server. Our goal is to gain a comprehensive understanding of the community's interactions and forecast upcoming trends in their activities.
We encourage your participation in this endeavor to explore the ways members connect, communicate, and participate on this platform. This presents a unique chance to employ your analytical expertise on actual data, providing insights that could significantly impact the community's evolution and trajectory.

What are the prizes?
This year, we've boosted our prize pool to $10,000, which will be shared among the top 10 competitors, allowing a greater number of participants to secure a win.
We're also enhancing our championship points system to more accurately recognize the accomplishments of our top contenders. Now, the top 10 finishers in each event will earn points, with a total of 100 championship points available. The distribution is as follows:
$2,400 - 24 points
$2,000 - 20 points
$1,600 - 16 points
$1,200 - 12 points
$800 - 8 points
$600 - 6 points
$500 - 5 points
$400 - 4 points
$300 - 3 points
$200 - 2 points
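As a quick sanity check of the distribution above (assuming the listed payouts and points correspond to finishing positions 1 through 10):

```python
# Quick sanity check of the published distribution (illustrative only).
payouts = [2400, 2000, 1600, 1200, 800, 600, 500, 400, 300, 200]
points = [24, 20, 16, 12, 8, 6, 5, 4, 3, 2]
assert sum(payouts) == 10_000   # the full $10,000 prize pool is paid out
assert sum(points) == 100       # the full 100 championship points are awarded
print(sum(payouts), sum(points))
```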
These adjustments not only reward peak performance but also recognize and motivate a wider array of participants demonstrating talent and potential. This revamped structure is set to inject more excitement and competitive spirit into the championship, and we're eager to watch our skilled participants embrace the challenge!
Participants must achieve a minimum score of 50% to be eligible for the reward prizes and points!

In general, this event is an excellent opportunity to prove yourself, get hands-on with the technologies Ocean Protocol is developing, become part of the community, and earn money!

You can find the details here - https://desights.ai/
Ocean Protocol website - oceanprotocol.com

#Bitcoin‬ #DataEconomy #BullRun🐂 #BinanceSquareExplorers #BTC $OCEAN $BTC $SOL

DF77 Completes and DF78 Launches

1. Overview
Ocean Data Farming (DF) is Ocean’s incentives program. In DF, you can earn OCEAN rewards by locking OCEAN, curating data, and making predictions (in Predictoor). Here are DF docs.
Data Farming Round 77 (DF77) has completed. 150K OCEAN + 20K ROSE was budgeted for rewards. Rewards counting started 12:01am Feb 15, 2024 and ended 12:01 am Feb 22. You can claim rewards at the DF dapp Claim Portal.
DF78 is live today, Feb 22. It concludes on Feb 29. 150K OCEAN and 20K ROSE are budgeted in total for rewards.
This post is organized as follows:
Section 2: DF structure
Section 3: How to earn rewards, and claim them
Section 4: Specific parameters for DF78
2. DF structure
Passive DF. As a veOCEAN holder, you get passive rewards by default.
Active DF has two substreams.
– Volume DF. Actively curate data by allocating veOCEAN towards data assets with high Data Consume Volume (DCV), to earn more.
– Predictoor DF. Actively predict crypto prices by submitting price predictions and staking OCEAN on them; accurate predictoors earn, while stake on inaccurate predictions is slashed.
3. How to Earn Rewards, and Claim Them
There are three ways to earn and claim rewards: Passive DF (like before), Active DF: Volume DF (like before), and Predictoor DF (new).
Passive DF. To earn: lock OCEAN for veOCEAN, via the DF webapp’s veOCEAN page. To claim: go to the DF Webapp’s Rewards page; within the “Passive Rewards” panel, click the “claim” button. The Ocean docs have more details.
Active DF
– Volume DF substream. To earn: allocate veOCEAN towards data assets, via the DF webapp’s Volume DF page. To claim: go to the DF Webapp’s Rewards page; within the “Active Rewards” panel, click the “claim” button (it claims across all Active DF substreams at once). The Ocean docs have more details.
– Predictoor DF substream. To earn: submit accurate predictions via predictoor bots and stake OCEAN on them (stake on incorrect predictions is slashed). To claim OCEAN rewards: run the Predictoor $OCEAN payout script, linked from the Predictoor DF user guide in the Ocean docs. To claim ROSE rewards: see the instructions in the Predictoor DF user guide in the Ocean docs.
4. Specific Parameters for DF78
This round is part of DF Main, phase 1.
Budget. This round has 150,000 OCEAN + 20,000 ROSE rewards total. That OCEAN and ROSE is allocated as follows:
Passive DF: 50% of rewards = 75,000 OCEAN
Active DF: 50% of rewards
– Predictoor DF. 50% = 37,500 OCEAN + 20k ROSE
– Volume DF. 50% = 37,500 OCEAN
Networks. Passive DF applies to OCEAN locked on Ethereum mainnet. Predictoor DF applies to activity on Oasis Sapphire. Volume DF applies to data assets published on Ethereum Mainnet, Polygon, BSC, EWC, and Moonriver. Here is more information about Ocean deployments to networks.
Volume DF rewards are calculated as follows:
First, distribute OCEAN across each asset based on rank: the highest-DCV asset gets the most OCEAN, etc.
Then, for each asset and each veOCEAN holder:
– If the holder is a publisher, 2x the effective stake
– Baseline rewards = (% stake in asset) * (OCEAN for asset)
– Bound rewards to the asset by 125% APY
– Bound rewards by asset’s DCV * 0.1%. This prevents wash consume.
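Here is a simplified Python sketch of the Volume DF calculation described above. It reflects my reading of those rules (the rank-based split across assets is taken as given, and the 125% APY bound is applied on a weekly basis); it is not the official DF reward script:

```python
# Simplified interpretation of the Volume DF reward rules above; not the official DF script.
def volume_df_rewards(ocean_for_asset, dcv, stakes, publisher=None, weeks_per_year=52):
    """stakes: {holder: veOCEAN allocated to this asset}. Returns {holder: OCEAN reward}."""
    # Publisher's effective stake counts double.
    eff = {h: s * (2 if h == publisher else 1) for h, s in stakes.items()}
    total = sum(eff.values()) or 1.0
    rewards = {}
    for holder, stake in eff.items():
        baseline = (stake / total) * ocean_for_asset        # (% stake in asset) * (OCEAN for asset)
        apy_bound = stakes[holder] * 1.25 / weeks_per_year  # cap at 125% APY on the holder's stake
        dcv_bound = dcv * 0.001                             # cap at 0.1% of the asset's DCV
        rewards[holder] = min(baseline, apy_bound, dcv_bound)
    return rewards

print(volume_df_rewards(ocean_for_asset=1000, dcv=500_000,
                        stakes={"alice": 8000, "bob": 2000}, publisher="bob"))
```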
Predictoor DF rewards are calculated as follows:
First, DF Buyer agent purchases Predictoor feeds using OCEAN throughout the week to evenly distribute these rewards.
Then, ROSE is distributed at the end of the week to active Predictoors that have been claiming their rewards.
Expect further evolution in Active DF: tuning substreams and budget adjustments among substreams. What remains constant is passive DF, and the total OCEAN rewards emission schedule.
Updates are always announced at the beginning of a round, if not sooner.
Appendix: Further Reading
The Data Farming Series post collects key articles and related resources about DF.
About Ocean Protocol
Ocean was founded to level the playing field for AI and data. Ocean tools enable people to privately & securely publish, exchange, and consume data.
Follow Ocean on @oceanprotocol to keep up to date. Chat directly with the Ocean community on https://discord.com/invite/TnXjkR5.
#binance- #strk #Bitcoin‬ #DataMonetization #DataEconomy $BTC $STRK
Ocean Predictoor just hit $2B in annualized volume
$6M / day x 365 days = $2B
Predictoor's growing up!
As a rite of passage, Predictoor gets its own X account: @predictoor_ai
Follow it to stay in the Predictoor loop
#binancesquare #defi #ETH #Trading #Altcoins $OCEAN $ALT $BTC
In this thread we will talk about Ocean Protocol's "developer hub": what it includes, and what you can build with Ocean Protocol tools to earn $OCEAN tokens! This is info mostly for #developers .

So Ocean's developer hub includes various resources and tools for developers to start using Ocean Protocol. It encompasses:

✅Architecture: This includes information on the blockchain/contracts layer, middleware, and dApps;
✅Earning revenue: Guidance on coding to receive payment, fractional $, and community $;
✅Schemas: Details on Metadata, identifiers/DIDs, identifier objects/DDOs, storage, and fine-grained permissions;
✅Components:
➖Barge: A local chain for testing;
➖Ocean subgraph: For grabbing event data from the chain;
➖Ocean CLI: A command-line interface;
➖Compute-to-Data: A practical privacy approach;
➖Aquarius: A metadata cache.
➖Provider: Handshaking for access control.

How do developers start using Ocean?

✅App level: Use an Ocean Template.
✅Library level: Use ocean.js, a library built for the key environment of dApp developers: JavaScript. Import it & use it in your frontend or NodeJS.
✅Contract level: Call Ocean contracts on Eth mainnet or other chains.
What can you build with Ocean Protocol tools?
✅Token-gated dApps & REST APIs: monetize by making your dApp or its REST API token-gated (a minimal sketch follows after this list);
✅AI dApps: monetize your AI dApp by token-gating on AI training data, feature vectors, models, or predictions;
✅Data Markets: build a decentralized data market;
✅Private user profile data: storing user profile data on your centralized server exposes you to liability. Instead, have it on-chain encrypted by the user's wallet, and just-in-time decrypt for the app;
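To illustrate the token-gating idea from the first bullet above, here is a minimal, generic Python sketch that checks an ERC-20-style token balance with web3.py (v6 naming) before granting access. The RPC URL, token address, and threshold are placeholders, and this is a generic pattern, not Ocean's specific datatoken API:

```python
# Generic token-gating sketch using web3.py; addresses and URL are placeholders.
from web3 import Web3

ERC20_ABI = [{
    "name": "balanceOf", "type": "function", "stateMutability": "view",
    "inputs": [{"name": "owner", "type": "address"}],
    "outputs": [{"name": "", "type": "uint256"}],
}]

w3 = Web3(Web3.HTTPProvider("https://example-rpc.invalid"))        # placeholder RPC endpoint
token = w3.eth.contract(address="0x" + "00" * 20, abi=ERC20_ABI)   # placeholder token address

def has_access(user_address: str, min_tokens_wei: int = 10**18) -> bool:
    """Gate a dApp route or REST endpoint: allow only wallets holding enough of the token."""
    balance = token.functions.balanceOf(Web3.to_checksum_address(user_address)).call()
    return balance >= min_tokens_wei
```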

If you are part of a talented and ambitious team of developers, you have an interest in decentralization and data monetization, we invite you to familiarize yourself with the details for developers - https://docs.oceanprotocol.com/developers.
Additional questions can be asked in the Ocean Protocol discord channel!
#binance #solana #Polygon #Bitcoin #NFT $SOL $BTC
Just when you thought it couldn't get better, Ocean Predictoor shatters records: $2.45M in 24h, scaling to a staggering $25.65M in 30 days.

We've gone beyond the limits🚀
Ready to be elite? Trade, predict & prosper with us!💼💸

Your next move? 👉https://predictoor.ai

#solana #binance #defi $SOL $ALT $OCEAN
Ocean Predictoor just hit $1M in daily volume.

A great milestone for a product launched just a few months ago, as a wholly new product category (prediction feeds).
And volume keeps growing exponentially.

Ocean Predictoor - https://predictoor.ai

#data #Bitcoin #Web3 #Ethereum #defi $OCEAN $BTC $ETH
In this thread, we’ll talk a little about why I think Ocean Protocol is ultra-promising, and about the direction of the decentralized data economy in general!

First, let me remind you what $OCEAN offers: a set of tools that lets you ensure secure ownership and exchange of data, as well as monetize your data.

I will highlight the main reasons why the Ocean Protocol is very promising and useful:

1. Data security and control;

2. Data monetization;

3. Promoting the development of AI;

4. Transparency and trust;

5. These functions can be applied in real life right now;

Let's look at my arguments in a little more detail🧐

✅Security and data control
Ocean Protocol tools allow users to put their data to work without the risk of losing control of it. For example, the Compute-to-Data feature makes it possible to process data using approved algorithms, without actually transferring the data itself (a toy illustration follows below).
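Here is a toy illustration of the Compute-to-Data idea: the approved algorithm runs where the data lives, and only aggregate results leave. This is a conceptual sketch, not Ocean's actual C2D stack:

```python
# Toy illustration of the Compute-to-Data idea; not Ocean's actual C2D stack.
import statistics

PRIVATE_DATASET = [71, 64, 80, 75, 68]   # stays on the data owner's infrastructure

APPROVED_ALGORITHMS = {
    "mean": statistics.mean,              # only vetted, aggregate-level algorithms are allowed
    "stdev": statistics.stdev,
}

def compute_to_data(algorithm_name: str) -> float:
    """The consumer never sees PRIVATE_DATASET; only the computed result is returned."""
    algo = APPROVED_ALGORITHMS[algorithm_name]
    return algo(PRIVATE_DATASET)

print(compute_to_data("mean"))   # 71.6: a result crosses the boundary, the raw data does not
```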

✅Data monetization
Ocean Protocol technologies allow you to monetize your data: the owner receives real payments for the use of their data.

✅Promoting the development of AI
As you know, #AI learns by studying data. The Ocean Protocol significantly expands the volume of this data, thereby contributing to the development of artificial intelligence as a whole.

✅Transparency and trust
The decentralized nature of Ocean Protocol ensures transaction transparency. Every transaction is recorded on the blockchain, providing a transparent and immutable history of data access and payments.

✅Applicability of technologies in practice
Today, there are many industries where Ocean Protocol's features can be applied to significantly improve how those industries work: medicine, financial services, logistics, education, and many others. So we are talking about a technology that has the potential to become widely used throughout the world, in everyday life.

If you are as interested as I am, you can check the details here - https://oceanprotocol.com

#Ethereum #binance #defi #Web3 $ETH $ALT
Have you submitted your entry for the "Traffic Accidents in Catalunya" DataChallenge yet?

Analyze real-world #data, uncover insights, and compete for prizes (6500 $USDC prize pool)

Deadline approaching: 30 Jan, 11:59 PM UTC

Join here➡️ https://bit.ly/3RKUkda

#web3 #Ethereum #JUP $OCEAN $ETH

2024 Ocean Protocol Data Challenge Championship is Live

Are you the top Data Scientist in the land? Demonstrate your case for 1st place in this year’s data challenge championship season.

Introduction
This blog introduces the kickoff of the 2024 Ocean Protocol Data Challenge Championship. The first Data Challenge of the year is live on Desights now and ends on Tuesday, Jan 30, 2024. ’24 is the 3rd year of Ocean Protocol-sponsored data science competitions. This year welcomes season 2 of the championship and leaderboard points. Additional details about the 2023 Season can be found in this blog post. Some minor details have been added to or subtracted from this year’s championship, which are presented below:
What’s New This Season?
2023 welcomed over 150 unique data scientists competing in data challenges. To accommodate an increasing number of recurring participants, we have raised the prize pool for each data challenge from $5000 USD to $6500 USD available every cycle. Due to this, the 250 $OCEAN participation bonus for submitting reports & proposals has concluded.
Starting with the current data challenge (Road to Safety: Traffic Accident Analysis), the $6500 prize pool will be distributed to the top 10 scored submissions per data challenge. Additionally, leaderboard points for the championship season will be awarded for every challenge, scaled to the top 10 on a given data challenge. All participants outside of the top 10 will not receive points towards the season leaderboard. Furthermore, new data challenges will begin roughly on 2 Thursdays of each month, and each will be open to participate in for 20 days. 2022 & 2023 data challenges tested different time durations between 7–30 days. It has been determined that initiatives and hypothesis testing that require longer than 20 days will be tagged and executed as something other than a data challenge (data science competition).
Beyond the program structure, new features and functionalities of Desights.ai continue to roll out regularly. Desights is the application that the Ocean Data Science team uses to conduct data challenges. The platform continues to mature as the web3 platform to crowdsource solutions to AI & ML challenges, business intelligence, applied data science, and predictive analytics.

2024 Leaderboard & Awards
As briefed above, the reward structure for each data challenge and the end-of-championship season awards have been modified.
1) More $ incentives repeatedly for the best quality reports/outcomes, and 2) advancements in gamifying the points structure for the year are the two main pillars of change to bear in mind.
An updated structure for the current data challenge and all challenges hosted in the 2024 season is articulated in the image below:

Calendar of Events
Data Challenges sponsored by the Ocean Protocol Data Science team will begin on 2 Thursdays of every month. Challenges will also end on 2 Tuesdays of every month.
Each Challenge will be open for 20 days to formulate a report, results, or alternative submission criteria for the given challenge.
The 2024 Championship Leaderboard is live and will conclude in December 2024.
Updates to changes in the leaderboard will be published through Ocean Protocol media channels after each data challenge ends and new points are accrued.
Join The Community
The Ocean Protocol Data Science team and other core teams are available in the Ocean Protocol Community Discord & Desights Community Discord channels. Live updates to the Ocean Data Challenge leaderboard and all related initiatives are available via Twitter, the Ocean website, Discord, and the blog.
For questions, comments, and community data science dialogue, reach out in our Discord in the data-science-hub channel: https://discord.gg/yFRPH9PCN4.
Stay tuned for published research, challenge reviews, and all Ocean Protocol updates on the blog page under blog.oceanprotocol.com.
To see past, current, and future data challenges sponsored by Ocean, please visit https://oceanprotocol.com/earn/data-challenges.

About Ocean Protocol
Ocean was founded to level the playing field for AI and data. Ocean tools enable people to privately & securely publish, exchange, and consume data.

Contacts:

Twitter - @oceanprotocol
Website - https://oceanprotocol.com
Discord - https://discord.com/invite/TnXjkR5

#Web3 #Ethereum #Polygon

How can Data Marketplaces & Compute-to-Data be useful for AI?

In this article we will raise the topic of the technologies that #OceanProtocol offers and how they can be useful for #artificialintelligence.
Since $OCEAN offers tools that improve the process of data exchange, we will, of course, talk about AI training.
First, about the essence of the technology. What is Compute-to-Data?
Compute-to-Data (C2D) is a feature of Ocean Protocol that enables you to monetize the output of compute jobs on your datasets without revealing the contents of the data or algorithms themselves. It allows algorithms to run on private data on-premise, and only the results are shared, not the raw data. This approach helps in preserving the privacy of sensitive data while still allowing for valuable computations like statistical analysis or #AiEra model development!

Data Marketplaces & Compute-to-Data (C2D) can be useful for AI in several ways:
- Access to Private Data: C2D allows AI models to be trained on private data without compromising the privacy of the data. This is because only the results of computations are revealed, not the data itself.
- Enhanced AI Model Development: Access to a wider range of data, including sensitive and previously inaccessible data, can improve the predictive accuracy of AI models.
- Monetization of Data: Data owners can monetize their datasets by providing compute access rather than selling the raw data, creating a new revenue stream while maintaining data privacy.
- Data Provenance: Blockchain technology records the acts of publishing, purchasing, and consuming data, providing a tamper-proof audit trail that is beneficial for AI data management.
So, Data Marketplaces & Compute-to-Data (C2D) can be useful for AI by providing a platform where data providers can publish data and buyers can consume data, all while maintaining control over the data and ensuring privacy. This is particularly valuable for AI as it enables access to vast amounts of training data while preserving the privacy of the data, which is crucial for sensitive datasets like health records. C2D resolves the tradeoff between leveraging private data for AI model development and mitigating risks associated with data exposure.
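To make the C2D idea concrete, here is a minimal, purely conceptual Python sketch. The class and function names are hypothetical and are not the ocean.py API; they only show that the algorithm travels to the data and that only aggregate results come back to the consumer.

```python
# Conceptual sketch of the Compute-to-Data pattern (hypothetical names, not the
# actual Ocean Protocol API): the algorithm runs where the data lives, and only
# computed results leave the provider's environment.

from statistics import mean
from typing import Callable

class PrivateDataset:
    """Data that never leaves the provider; only computed results are returned."""
    def __init__(self, records):
        self._records = records          # stays on-premise

    def run_compute_job(self, algorithm: Callable) -> dict:
        # The provider executes the consumer's algorithm locally and
        # returns only its output, never the raw records.
        return algorithm(self._records)

def average_age_model(records) -> dict:
    """Example 'algorithm': a privacy-preserving aggregate statistic."""
    return {"avg_age": mean(r["age"] for r in records), "n": len(records)}

if __name__ == "__main__":
    hospital_data = PrivateDataset([{"age": 34}, {"age": 58}, {"age": 47}])
    result = hospital_data.run_compute_job(average_age_model)
    print(result)   # consumer sees {'avg_age': 46.33, 'n': 3}, never the rows
```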
You can find more detail about Ocean Protocol and its technologies here - https://oceanprotocol.com
#DataEconomy

Ocean Predictoor stats

Ocean Predictoor daily volume is doubling every 18 days
It's now at $130K / day!
That's $3.9M monthly, or $47.5M annually -- ignoring future growth.
It's grown by 50x since November. What will the future hold?

What is Ocean Predictoor?
Ocean Predictoor is an on-chain, privacy-enabled, AI-powered application and stack that provides prediction feeds, which are streams of predictions for a given time series, such as the future price of cryptocurrencies like #ETH and #BTC🔥🔥 It operates by allowing "Predictoor" agents to submit individual predictions and stake on them, with the aggregated predictions being sold to trader agents who use them to inform their #Trading decisions. Predictoor is built on the #oceanprotocol Protocol stack and uses the Oasis Sapphire privacy-preserving #evm chain to keep predictions private unless paid for. The initial dapp is live at predictoor.ai and is designed for up/down predictions of cryptocurrency prices.
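As an illustration of the staking-and-aggregation idea described above, here is a small Python sketch of a stake-weighted aggregate prediction. All names are hypothetical, and the actual on-chain mechanism is more involved than this.

```python
# Illustrative sketch (not Predictoor's actual contract logic): predictoors
# submit an up/down prediction with a stake; the feed sold to traders can be
# thought of as the stake-weighted aggregate.

from dataclasses import dataclass

@dataclass
class Prediction:
    predicted_up: bool   # True = price goes up in the next epoch
    stake: float         # amount staked behind this prediction

def aggregate(predictions) -> float:
    """Return the stake-weighted probability that the price goes up."""
    total_stake = sum(p.stake for p in predictions)
    if total_stake == 0:
        return 0.5
    up_stake = sum(p.stake for p in predictions if p.predicted_up)
    return up_stake / total_stake

if __name__ == "__main__":
    feed = [Prediction(True, 120.0), Prediction(False, 40.0), Prediction(True, 40.0)]
    print(f"Aggregate P(up) = {aggregate(feed):.2f}")   # 0.80
```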
You can check this dapp here - www.predictoor.ai
Congrats to our 2023 Data Challenge Champions!

2023 saw over 175 unique data scientists participate in challenges, yet only 10 could finish the year in the Top-10 leaderboard.

Ocean Protocol is a decentralized data exchange protocol that enables individuals and organizations to share, sell, and consume data in a secure, transparent, and privacy-preserving manner. It helps implement #DataMonetization by introducing crypto primitives like "data on-ramp" and "data off-ramp" via datatokens, allowing publishers to create ERC20 datatokens for datasets and enabling consumers to access datasets by acquiring these datatokens. Ocean Protocol uses #BlockchainCommerce technology, smart contracts, and cryptographic techniques to facilitate this process, powering a new #DataEconomy

#DeFis

AI-Powered Bots in Ocean Predictoor Get a UX Upgrade: CLI & YAML

The pdr-backend v0.2 release adds a command-line interface and a YAML parameter file, making bots easier to run.

Summary
With Predictoor, you can run #AI-powered prediction bots or trading bots on #crypto price feeds to earn $. The interface for predictoor bots & trader bots just got a lot simpler, via a CLI and a YAML file for parameters. This release also refactors backend code so that we can run more powerful experiments around making $.
1. Intro: Predictoor & Bots
#oceanprotocol Predictoor provides on-chain “prediction feeds” on whether #ETH, #BTC🔥🔥, etc. will rise in the next 5 min or 60 min. “Predictoors” submit predictions and stake on them; predictions are aggregated and sold to traders as alpha. Predictoor runs on Oasis Sapphire, the only confidential EVM chain in production. We launched Predictoor and its Data Farming incentives in September and November 2023, respectively.

The pdr-backend GitHub repo has the Python code for all bots: Predictoor bots, Trader bots, and support bots (submitting true values, buying on behalf of DF, etc).
As a predictoor, you run a predictoor bot with the help of a predictoor bot README in the pdr-backend GitHub repo. It takes 1–2 h to go through, including getting OCEAN & ROSE in Sapphire. The repo provides starting-point predictoor bots, which gather historical CEX price data and build AI/ML models. You can gain your own edge — to earn more $ — by changing the bot as you please: more data, better feature vectors, different modeling approaches, and more.
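For a feel of what "gather historical price data and build AI/ML models" can look like, here is a toy Python sketch using scikit-learn. It is not the repo's starting-point bot; the data below is synthetic and the features are deliberately simple.

```python
# Toy version of the predictoor-bot modeling idea: build a simple up/down
# classifier from historical candle closes. Not the repo's code; real bots
# use live CEX data and richer feature vectors.

import numpy as np
from sklearn.linear_model import LogisticRegression

def make_features(closes, n_lags=5):
    """X = last n_lags returns, y = 1 if the next return is positive."""
    rets = np.diff(closes) / closes[:-1]
    X = np.array([rets[i : i + n_lags] for i in range(len(rets) - n_lags)])
    y = (rets[n_lags:] > 0).astype(int)
    return X, y

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    closes = 2000 * np.cumprod(1 + rng.normal(0, 0.002, 500))   # fake ETH closes
    X, y = make_features(closes)
    model = LogisticRegression().fit(X[:-1], y[:-1])
    prob_up = model.predict_proba(X[-1:])[0, 1]
    print(f"P(next candle closes up) = {prob_up:.2f}")
```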
Similarly, as a trader, you can run a trader bot with the help of a trader bot README. The repo provides starting-point trader bots, which use Predictoor price feeds as alpha for trading. You can gain your own edge — to earn more $ — by changing the bot as you please for more sophisticated trading strategies.
Predictoor has promising traction, with 1.86M transactions and $1.86M in volume in the previous 30d [Ref DappRadar] [1].

Our main internal goal is to make $ trading, and then take those learnings to the community in the form of product updates and related communications. Towards that, we've been eating our own dogfood: running our own predictoor bots & trader bots, and improving things based on our own experience. Most of these improvements land in Predictoor's backend: the pdr-backend repo.
We've evolved it a lot lately, enough that it warrants the first big release since launch (yet still pre-v1). That's what this blog post describes.
The rest of this post is organized as follows. Section 2 describes the prior release (pdr-backend v0.1), and section 3 its challenges. Section 4 describes the new release (pdr-backend v0.2), focusing on its key features of CLI and YAML file, which help usability in running bots. Section 5 describes how v0.2 addresses the challenges of v0.1. Section 6 concludes.
2. About pdr-backend v0.1 Flows
We released pdr-backend when we launched Predictoor in September 2023, and have been continually improving it since then: fixing bugs, reducing onboarding friction, and adding more capabilities (eg simulation flow).
The first official release was v0.1.0 on November 20, 2023; with subsequent v0.1.x releases. It is licensed under Apache V2, a highly permissive open-source license.
In the last v0.1 predictoor bot README, the flow had you do simulation, then run a bot on testnet, then run a bot on mainnet. Let’s elaborate.
Simulation. You’d start simulation with a call like: python pdr_backend/simulation/runtrade.py. It grabs historical data, builds models, predicts, does simulated trades, then repeats, all on historical data. It logs and plots progress in real time. It would run according to default settings: what price feeds to use for AI model inputs, how much training data, etc. Those settings were hardcoded in the runtrade.py script. To change settings, you’d have to change the script itself, or support code.
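Conceptually, that flow looks roughly like the following Python sketch. This is not runtrade.py; the "model" here is a deliberately trivial stand-in and the price data is synthetic.

```python
# Minimal sketch of a walk-forward simulation loop like the one described above
# (not the actual runtrade.py): train on a rolling window, predict the next step,
# record a simulated trade, and repeat over historical data.

import numpy as np

def simulate(closes, window=50):
    pnl = 0.0
    for t in range(window, len(closes) - 1):
        hist = closes[t - window : t]
        # "model": predict up if the recent mean return is positive
        predict_up = np.mean(np.diff(hist) / hist[:-1]) > 0
        realized = closes[t + 1] - closes[t]
        pnl += realized if predict_up else -realized   # long if up, short if down
        # In the real flow, progress would be logged and plotted in real time here.
    return pnl

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    closes = 2000 * np.cumprod(1 + rng.normal(0, 0.002, 300))   # synthetic closes
    print(f"Simulated PnL over history: {simulate(closes):+.2f}")
```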

Run a bot on testnet. First, you'd specify envvars via the terminal: your private key, envvars for the network (e.g. RPC_URL), and envvars to specify feeds (PAIR_FILTER, TIMEFRAME_FILTER, SOURCE_FILTER). Then you'd run the bot with a call like: python pdr_backend/predictoor/main.py 3. It would run according to default settings. The 3 meant predictoor approach #3: dynamic model building. To change predictoor settings, you'd have to change code.
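The v0.1 configuration pattern can be pictured with a short, hypothetical Python sketch. The envvar names come from the post; the defaults and dictionary structure are assumptions for illustration only.

```python
# Illustration of the v0.1-era configuration pattern described above (hypothetical
# code, not an actual pdr-backend module): feeds and network settings came from
# environment variables set in the terminal before launching the bot.

import os

def load_v01_style_config():
    return {
        "rpc_url": os.getenv("RPC_URL", ""),                              # network endpoint
        "pair_filter": os.getenv("PAIR_FILTER", "BTC/USDT").split(","),   # hypothetical default
        "timeframe_filter": os.getenv("TIMEFRAME_FILTER", "5m").split(","),
        "source_filter": os.getenv("SOURCE_FILTER", "binance").split(","),
        # PRIVATE_KEY is read from the environment and never written to a file
        "private_key": os.getenv("PRIVATE_KEY", ""),
    }

if __name__ == "__main__":
    cfg = load_v01_style_config()
    print({k: v for k, v in cfg.items() if k != "private_key"})
```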
Run a bot on mainnet. This was like testnet, except specifying different envvars for network and perhaps private key.
Any further configurations, such as what CEX data sources to use in modeling, would be hardcoded in the script. To use different values, you’d have to modify those in your local copy of the code.
The last v0.1 trader bot README had a similar flow to the v0.1 predictoor bot README.
3. Challenges in pdr-backend v0.1 Flows
We were — and are — proud of the v0.1 predictoor bot & trader bot flows. We'd streamlined them fairly well: one could get going quickly and accomplish what they needed to. To go further and modify parameters, one would have to jump into Python code. At first glance this might seem like a problem; however, the target users (and actual users) are data scientists and developers, who have no trouble modifying code.
Yet there were a few issues. First, it was annoying to manually change code to change parameters.
We could have written higher-level code that looped and modified the parameters code at each loop iteration; however, code that changes code is error-prone and can be dangerous. Trader bots and predictoor bots had the same issue, and worse: the Python code for parameter changes was scattered in a few places. Even if the scattering were fixed, the core issue would remain.
Second, envvars didn’t have enough fidelity, and adding more would have led to an unusably high number of envvars.
Recall that we used envvars to specify feeds (PAIR_FILTER, etc.). This wasn't enough detail for all our needs. For example, in running a predictoor bot, one couldn't use envvars to specify the model output feed (what feed to predict) and the model input price feeds, let alone non-price feeds for model inputs. And putting all of that into envvars would be sloppy and error-prone; if we weren't careful, we'd end up with a crazy number of envvars.
Third, a messy CLI was organically emerging.
Recall, one would run a predictoor bot with a custom call directly to the script, such as python pdr_backend/predictoor/main.py 3, where 3 meant approach 3. Similar for the simulation or trader flows. Support for CLI-level parameters was pretty basic, only lightly tested, and was implemented on a per-script basis. Then, as we started doing basic analytics from our own runs of predictoor bots, a new ./scripts/ directory emerged, with each script having its own custom CLI call. Things were getting messier yet.
Finally, we wanted to extend pdr-backend functionality, and doing it in v0.1 code would explode complexity.
We have big plans for our “make $” experiments, and for these we saw the need to extend functionality by a lot. We wanted to build out a proper analytics app. We wanted to professionalize and operationalize the data pipeline, for use by simulation, the bots, and the analytics app. We wanted to extend simulation into a flow that supported experiments on realtime data, with the possibility of live trading. Doing this would have meant even more parameters and flows; if we kept the v0.1 parameter-setting and messy CLI, the complexity would become unwieldy. We needed a cleaner base before we could proceed.
4. Introducing pdr-backend v0.2
We’re pleased to announce the release of pdr-backend v0.2. It solves the issues above 🎉 via a good CLI, and a YAML file to set parameters. It’s live now in the pdr-backend repo.
The rest of this section describes the heavily-updated CLI, the YAML file, and changes to the pdr-backend architecture for a good data pipeline and analytics.
4.1 Updated CLI
You get started with Predictoor like before:

Then, you can type pdr to see the interface at the command-line:

There are commands to run experiments / simulation (pdr xpmt), predictoor bot (pdr predictoor), trader bot (pdr trader), and for people running predictoor bots to claim rewards (pdr claim_OCEAN, pdr claim_ROSE).
There’s a new command to fill the data lake (pdr lake), and several new analytics-related commands (pdr get_predictoors_info, …, pdr check_network). The remaining commands are typically for use by the core team.
To get help for a given command, just type the command without any argument values.
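As a rough picture of the CLI's shape, here is a minimal argparse sketch that mirrors the subcommand names listed above. The real cli_arguments.py and cli_module.py in /cli/ are more complete; the positional-argument order and the example network name here are assumptions.

```python
# Minimal argparse sketch echoing the pdr CLI described above (subcommand names
# from the post; argument order and defaults are illustrative assumptions).

import argparse

def build_parser() -> argparse.ArgumentParser:
    parser = argparse.ArgumentParser(prog="pdr", description="Predictoor backend CLI (sketch)")
    sub = parser.add_subparsers(dest="command", required=True)
    for name in ["xpmt", "predictoor", "trader", "lake",
                 "claim_OCEAN", "claim_ROSE", "get_predictoors_info", "check_network"]:
        cmd = sub.add_parser(name, help=f"run the '{name}' flow")
        cmd.add_argument("ppss_file", nargs="?", default="ppss.yaml",
                         help="YAML parameter file (PPSS_FILE)")
        cmd.add_argument("network", nargs="?", default="sapphire-testnet",
                         help="network name defined in the YAML's web3_pp section")
    return parser

if __name__ == "__main__":
    args = build_parser().parse_args(["predictoor", "ppss.yaml", "sapphire-testnet"])
    print(args.command, args.ppss_file, args.network)
```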
4.2 New: YAML file
The default file is ppss.yaml. Most CLI commands take PPSS_FILE (YAML file) as an input argument. Therefore users can make their own copy from the default ppss.yaml, and modify at will.
The YAML file holds most parameters; the CLI specifies which YAML file and network, and sometimes commonly-updated parameters.
To minimize confusion, there are no envvars. All parameters are in the YAML file or the CLI. One exception: PRIVATE_KEY envvar is retained because putting it in a file would have reduced security.
The YAML file has a sub-section for each bot: a predictoor_ss section, a trader_ss section, etc. The web3_pp section holds info for all networks.
Below is an example of the predictoor_ss section in the YAML file. Note how it specifies a feed to predict (predict_feed), as well as input feeds for AI modeling (aimodel_ss.input_feeds).
Most CLI commands take NETWORK as an input argument. The YAML file holds RPC_URL and other network parameters for each network, and the NETWORK CLI argument selects among them. Therefore, to use a different network (e.g. testnet → mainnet), one only needs to change the network name in the CLI. Compare this to v0.1, where several envvars needed changing. A bonus: the new setup allows convenient storage of many different network configurations in the YAML file.
When the whole YAML file is read, it creates a PPSS object. That object has attributes corresponding to each bot: a predictoor_ss object (of class PredictoorSS), a trader_ss object (of class TraderSS), etc. It also holds network info in its web3_pp object (of class Web3PP).
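Here is a hedged sketch of how such a YAML file might be parsed into per-bot objects. The section and field names (predictoor_ss, predict_feed, aimodel_ss.input_feeds, web3_pp) follow the post; the exact schema, feed-string format, and class internals are assumptions, not the actual pdr-backend classes.

```python
# Hedged sketch of parsing a ppss.yaml-style file into per-bot objects.
# Field values and the feed-string format below are illustrative assumptions.

import yaml   # pip install pyyaml

EXAMPLE_YAML = """
predictoor_ss:
  predict_feed: "binance BTC/USDT c 5m"          # feed to predict (assumed format)
  aimodel_ss:
    input_feeds: ["binance BTC/USDT ETH/USDT c 5m"]
trader_ss:
  buy_amt: "10 USD"                               # hypothetical field
web3_pp:
  sapphire-testnet:
    rpc_url: "https://example-rpc"                # placeholder
"""

class SimplePPSS:
    """Toy stand-in for the PPSS object: one attribute per top-level section."""
    def __init__(self, d):
        self.predictoor_ss = d.get("predictoor_ss", {})
        self.trader_ss = d.get("trader_ss", {})
        self.web3_pp = d.get("web3_pp", {})

if __name__ == "__main__":
    ppss = SimplePPSS(yaml.safe_load(EXAMPLE_YAML))
    print(ppss.predictoor_ss["predict_feed"])
    print(ppss.web3_pp["sapphire-testnet"]["rpc_url"])
```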
4.3 New: Good data pipeline
We refined pdr-backend architecture to have a proper data pipeline, in new directory /lake/. It’s centered around a data lake with tiers from raw → refined. We’ve moved from storing raw price data as csv files, to parquet files, because parquet supports querying without needing to have a special database layer on top (!), among other benefits.
In conjunction, we’ve moved from Pandas dataframes to Polars dataframes, because Polars scales better and plays well with parquet. (We are already seeing intensive data management and expect our data needs to grow by several orders of magnitude.)
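A small sketch of the parquet + Polars pattern: write a raw-tier file once, then query it lazily with no separate database layer. File and column names are illustrative, not the lake's actual schema.

```python
# Sketch of the parquet + Polars pattern described above: persist raw candles
# to a parquet file, then run a lazy filtered query directly against the file.
# The file name and columns are illustrative, not the lake's actual schema.

import polars as pl

# Write a tiny "raw tier" parquet file
raw = pl.DataFrame({
    "timestamp": [1700000000, 1700000300, 1700000600],
    "pair": ["BTC/USDT", "BTC/USDT", "ETH/USDT"],
    "close": [37000.0, 37120.5, 2050.3],
})
raw.write_parquet("raw_candles.parquet")

# Query it lazily: the filter and projection run at scan time, no database needed
btc = (
    pl.scan_parquet("raw_candles.parquet")
    .filter(pl.col("pair") == "BTC/USDT")
    .select(["timestamp", "close"])
    .collect()
)
print(btc)
```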
4.4 New: Space to grow analytics
We’ve also updated pdr-backend analytics support, in the new directory /analytics/. First, what used to be ad-hoc scripts for analytics tools now has proper CLI support: pdr get_predictoors_info, …, pdr check_network. These analytics tools now use data from the lake and continue to evolve. Furthermore, we are building towards a more powerful analytics app that uses python-style plotting in the browser, via streamlit.
5. How pdr-backend v0.2 Fixes v0.1 Issues
Here’s how v0.2 fixes each of the four issues raised above.
- Issue: Annoying to manually change code to change parameters. v0.2 fix: use the YAML file & CLI for all parameters. The YAML file holds most parameters; the CLI specifies which YAML file and network, and sometimes commonly-updated parameters. The YAML file holds parameters that were previously envvars, or somewhere in code. Here's the default YAML file.
- Issue: envvars didn't have enough fidelity. v0.2 fix: use the YAML file & CLI for all parameters. In the YAML file, each bot gets its own subsection, including which feeds to work with. The YAML has far more fidelity because it also includes variables that were previously in code.
- Issue: a messy CLI was organically emerging. v0.2 fix: now we have a clean CLI. Previous calls to scripts for simulation, the predictoor bot, the trader bot, and various analytics are all now folded into the CLI. The CLI is implemented in the new directory /cli/; its core modules cli_arguments.py and cli_module.py use argparse, the best-practices CLI library for Python. The CLI has unit tests and system tests.
- Issue: we wanted to extend pdr-backend functionality, and doing it in v0.1 code would explode complexity. v0.2 fix: YAML & a clean CLI give a less complex, more flexible foundation to build from. And we're now nicely along in our progress: as we were building v0.2, we also refined its architecture to have a proper data pipeline (in /lake/) and the beginnings of a more powerful analytics app (in /analytics/), and we are about to upgrade the simulation engine for more flexible and powerful experiments.
6. Conclusion
With Ocean Predictoor, you can run AI-powered prediction bots or trading bots on crypto price feeds to earn $. With pdr-backend v0.2, the interface for predictoor bots & trader bots just got a lot simpler, via a CLI and a YAML file for parameters. The release also refactors backend code so that we can run more powerful experiments around making $.
Get started in the pdr-backend repo: https://github.com/oceanprotocol/pdr-backend.
Notes
[1] The two 1.86M values are a coincidence. Usually the values aren't identical, though they are typically within 0.5x–2x of each other.
About Ocean Protocol
Ocean was founded to level the playing field for AI and data. Ocean tools enable people to privately & securely publish, exchange, and consume data.
Follow Ocean on https://twitter.com/oceanprotocol to keep up to date. Chat directly with the Ocean community on https://discord.gg/kwWmXxwBDY. Or, track Ocean progress directly on https://github.com/oceanprotocol.
