So far, the wealth-creation story of the crypto/blockchain industry continues, and the next important "wealth creation" area is the game track. The XAI project is currently running an Odyssey event. If you are interested, see my Square article: XAI Game Public Chain Odyssey Event Zero-Cost Beginner's Guide

In this article, I will give you a detailed explanation of the Sentry node of the XAI game public chain. This article is relatively technical, so if you are interested in making money, read it carefully: only by understanding the logic yourself and improving your understanding will you have the opportunity to profit.

If you only want to learn about the Sentry node, read Part 1 and skip the rest; if you want the full logical picture, read Parts 2, 3, and 4 as well.

What I want to emphasize is that Xai receives direct technical support from Offchain Labs, something other Orbit chains do not enjoy, and this support is a key component of Xai's strategic game plan within the Arbitrum ecosystem.

Part 1: Sentry node explanation

The Sentry node is an observation node that monitors the Xai rollup protocol. If a bad block is proposed, it raises an alert (by whatever means its operator chooses) so that others can intervene. The purpose of the Sentry node is to solve the verifier's dilemma (see Part 4 for details).

Click here to view the promotional video:

Sentinel Node Video Promotion

Run Xai nodes and obtain Xai tokens with one click!

Sentry nodes can run on community members' laptops, desktops, or even cloud instances. As long as the node is running, a probabilistic algorithm determines whether the node operator receives esXai token rewards from the network; staking Xai increases the probability of being selected. If you are not familiar with esXai, see my Square article: Interpretation of the "Token Economy" of the XAI project

1. How the Sentry node works

The Attention Challenge v2 protocol involves multiple participants: the Xai chain, a parent chain (Arbitrum One), a trusted challenger, Xai Sentries and their license keys, and a Referee contract. The challenger creates a BLS key pair, registers the public key with the Referee contract, and signs the claims made by validators in the Xai rollup protocol on Arbitrum One. These signatures are verified by the Referee contract and recorded as challenges associated with the claims.

Xai Sentries become eligible to make assertions about claims by purchasing a Sentry license key and registering with the Referee contract. For each claim, they compute the state root of the correct successor claim. If a certain condition is met, they submit an assertion about the claim by invoking the Referee contract. If the successor claim is later created and confirmed, and the Sentry's assertion was correct, the Sentry calls the Referee contract to submit a redemption transaction. The Referee verifies several conditions before paying the reward to the Sentry.

This protocol ensures that each claim must fully consume the inbox messages that existed when its predecessor claim was created. This means that once a claim is created, the state root of its correct successor claim is fully determined and can be computed by any node, which encourages each Sentry to determine the correct next state root. The Sentry's reward is determined by its license ID, the successor state root, and a challenge value that does not become known until after the successor state root is fully determined.

2. Who can run the node?

Anyone can operate a Sentry node by downloading, installing, and running the software. However, to receive token rewards, at least one Sentry license key must be purchased.

Purchasers must pass a KYC check to ensure they are:

  • Not located in the United States

  • Not subject to any US OFAC sanctions (OFAC maintains the US sanctions lists)

Sentry nodes that are not running, or whose wallets lack the funds to pay gas fees, will not accrue rewards even with a license key. Operators will therefore want to ensure their nodes are funded, online, and running.

3. Referee contract

Referee is a smart contract designed to enforce compliance with predefined rules, verify the origin of submissions, and distribute rewards to winners within the system. The referee smart contract is a key component in the Xai ecosystem, responsible for managing and validating claims made by sentinel nodes in the network. The contract has several key functions:

3.1 Submitting assertions

The Referee contract allows Sentry nodes to submit assertions to challenges. This function can only be called by the owner of a Sentry license key or an address they have approved on the contract. It checks whether the challenge is still open for submissions and whether this NodeLicense has already submitted for the challenge.

3.2 Claiming rewards

The contract contains a function that allows users to claim rewards for successful assertions. It checks that the challenge has closed for submissions and that the owner of the node key has completed KYC. If these conditions are met and the claim is eligible for a payout, the reward is sent to the user.

3.3 Creating the claim hash and checking payout eligibility

The contract has a function that hashes the Sentry license ID, the challenge ID, the challengerSignedHash of the challenge, and the successor state root. It then checks whether the hash is below a threshold calculated from the total number of Sentry licenses minted. If the hash is below the threshold, the claim is eligible for a payout.
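The eligibility check described above can be sketched as follows. This is a minimal illustration, not the real contract logic: the hash function, field packing, and threshold formula are all assumptions (the actual contract's encoding and scaling rule may differ).

```python
from hashlib import sha3_256

def claim_hash(license_id: int, challenge_id: int,
               challenger_signed_hash: bytes, state_root: bytes) -> int:
    # Illustrative stand-in for the on-chain hash; the real contract's
    # hash function and field encoding may differ.
    h = sha3_256()
    h.update(license_id.to_bytes(32, "big"))
    h.update(challenge_id.to_bytes(32, "big"))
    h.update(challenger_signed_hash)
    h.update(state_root)
    return int.from_bytes(h.digest(), "big")

def is_eligible(hash_value: int, total_licenses_minted: int) -> bool:
    # Hypothetical threshold rule: scale the cutoff down as more licenses
    # are minted, so the expected number of winners per challenge stays
    # roughly constant.
    max_hash = 2**256 - 1
    threshold = max_hash // max(total_licenses_minted, 1)
    return hash_value < threshold
```

Because the hash is unpredictable until the challenge value is known, each license has an independent, lottery-like chance of winning each challenge.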

The Referee contract ensures the integrity of the Xai network by validating claims and rewarding successful ones, thereby incentivizing Sentry nodes to monitor the network accurately and diligently.

4. Challenger component

The challenger is a trusted entity in the Xai ecosystem. It creates a BLS key pair and registers the public key with the Referee contract. When a validator makes a claim in the Xai rollup protocol, the challenger signs the claim with its private key and submits the signature to the Referee. The Referee verifies the signature and records it as a challenge associated with the claim. This process ensures the integrity of the claims made in the Xai rollup protocol.

5. The Sentry license key (NFT-based)

The Sentry license key is a unique, non-fungible token (NFT) required to operate a Sentry node in the Xai network. It serves as proof of a node's eligibility to submit assertions and receive rewards. It is minted by sending the correct amount of ETH, and the minting price is determined by an increasing price-tier system.

The node license plays a key role in the Referee contract. When a node wants to submit an assertion to a challenge, it must provide its Sentry license ID. The Referee contract checks whether the license is valid and whether the node is the owner of the license or an approved operator (see the KYC section above). If these conditions are met, the node's assertion is submitted to the challenge.

The Sentry license also comes into play when claiming rewards for successful assertions. The Referee contract checks whether the owner of the license has completed KYC and whether the claim is eligible for a payout. If these conditions are met, the reward is sent to the license owner.

In summary, the Sentry license is a key component of the Xai network, regulating node operation, assertion submission, and reward distribution.

6. Node download and run

To run a sentinel node, users only need to download a specific software package. This package can be used in a desktop application or as a command line tool on your computer. Simply put, these apps are tools that make Sentinel software easier to use. The purpose of this package is to automate all the operations required to run Sentinel, making it very simple to set up and use, even if you are not technical.

This package helps users with tasks like setup, management, and interaction with other parts, and has an easy-to-use interface that allows users to view and adjust settings easily. Using this package, users can focus more on how to run better and get more token rewards. Users can choose to run this package using a desktop application or a command line tool, both of which are very easy to use and make the operation process very smooth.

7. Sentinel wallet function

In the Xai ecosystem, the Sentry wallet plays a key role in the interaction between Sentry nodes and the Referee smart contract. The Sentry wallet acts as an intermediary agent, responsible for submitting assertions to the Referee on behalf of the associated Sentries. This is achieved through specific functions in the Referee contract that can only be called by the owner of a Sentry license key or an address they have approved on the contract.

The Sentry wallet can submit an assertion to a challenge by calling the submitAssertionToChallenge function in the Referee contract. This function checks whether the challenge is still open for submissions and whether this node key has already submitted for the challenge.

The Sentry wallet can also claim rewards for successful assertions by calling the claimReward function in the Referee contract. This function checks that the challenge has closed for submissions and that the owner of the Sentry license has completed a KYC check. If these conditions are met and the claim is eligible for a payout, the reward is sent to the Sentry's owner.

In summary, the Sentry wallet acts as a messenger, facilitating the interaction between nodes and the Referee and thereby ensuring the smooth operation of the Xai network.

8. License

The relationship between the number of licenses and a node's submission capability is fundamental. A node can have multiple licenses associated with it, but the number of licenses directly determines how much the node can submit: each license entitles the node to one submission per challenge, keeping the ratio of licenses to submission quota at 1:1. This gives the system a structured approach to licensing, submission rights, and the overall operation of nodes within the ecosystem.

9. Sentry node software and hardware requirements

The Sentry node software supports Windows, macOS, and Linux (64-bit required). The following are the current resources required to run the Sentry node software with up to 100 license keys:

  • 4GB of RAM

  • 2 CPU cores

  • 60 GB disk space

  • x64 processor (ARM processors, such as Apple M1/M2 chips, are also supported)

  • Stable internet connection

When adding additional keys to a node, ideally the hardware capabilities should increase accordingly. However, it is not mandatory to assign a separate machine to each key. The system is expected to be scalable to accommodate dozens of keys on a single machine, and possibly more.

Note: These hardware requirements are subject to change.

10. Estimated Sentry node network rewards

XAI token economic model, please refer to: Interpretation of the "Token Economy" of the XAI project

Here are three scenarios for estimating the Xai rewards you might earn from running a Sentry node. These three scenarios are based on the following assumptions:

  • The sum of XAI and esXAI will never exceed 2,500,000,000. Given that the Xai ecosystem is dynamic, it is impossible to accurately predict the monthly token rewards for each Sentry key.

  • 100% of gas fees are burned, so there is no guarantee the supply will always be inflationary; it may turn deflationary.

  • The Xai Foundation will not sell more than 50,000 Sentry keys (a single node can load multiple keys). This is expected to take 2-3 years, and Sentry keys get more expensive over time.

  • The monthly esXAI amount per Sentry key may also fluctuate based on the number of staking participants in the ecosystem.

The three tables below show, for different circulating supplies of XAI and esXAI, the number of node keys activated in the network and the corresponding expected monthly token reward per key:

Scenario A estimate: If there are a total of 750,000,000 XAI and esXAI tokens in circulation, then each Sentry key will receive esXAI rewards according to the following table:

Scenario B Estimate: If there are a total of 1,250,000,000 XAI and esXAI tokens in circulation, then each Sentry key will receive esXAI rewards according to the following table:

Scenario C Estimate: If there are a total of 2,187,500,000 XAI and esXAI tokens in circulation, then each Sentry key will receive esXAI rewards according to the following table:

Part 2: Xai is built on technology developed and maintained by Arbitrum (ARB), so we need to shed some light on Arbitrum's architecture:

1. The Nitro stack

All Arbitrum chains are built on Arbitrum Nitro, the underlying technology for every chain in the ecosystem. Nitro runs a forked version of Geth and uses WebAssembly as the underlying virtual machine for fraud proofs.

2. The AnyTrust protocol

AnyTrust is an Arbitrum protocol that manages data availability through a permissioned committee called the Data Availability Committee (DAC). The protocol reduces transaction fees by introducing an additional trust assumption about data availability, rather than relying solely on Ethereum's trustless data availability mechanism.

3. Arbitrum Layer 2 chains you may already know

Arbitrum Nova is an example of an AnyTrust chain; Arbitrum One is another chain, implementing the purely trustless (and more L1-gas-intensive) Arbitrum Rollup protocol. Both chains are built on Nitro.

4. Orbit chains

Arbitrum Orbit allows third parties to create their own self-managed Arbitrum Rollup and AnyTrust chains. Arbitrum offers Rollup and AnyTrust technologies for maximum flexibility when building Orbit chains. Like all chains in the Arbitrum ecosystem, both Arbitrum Rollups and the Arbitrum Anytrust Orbit chain are built using Nitro as the underlying technology.

5. Understand the basic situation of Xai

Let's understand Xai in the above context. Xai operates as an Arbitrum Orbit chain, leveraging AnyTrust technology for maximum speed and minimum cost. Unlike most "self-governed" Orbit chains, Xai benefits from direct technical support from Offchain Labs, support other Orbit chains do not enjoy, and a key component of Xai's strategic game plan within the Arbitrum ecosystem.

Part 3: After you have the above concepts, let’s further understand the architecture:

1. AnyTrust: Revolutionary Blockchain Infrastructure

With AnyTrust, a cutting-edge variant of Arbitrum Nitro technology, Offchain Labs tackles some of the most pressing challenges in the blockchain space. AnyTrust brings a new perspective: by incorporating a light trust assumption, it significantly reduces costs while ensuring strong data availability and security.

2. Reducing costs through a trust assumption

At the core of the Arbitrum protocol, all Arbitrum nodes (including validators, who verify the correctness of the chain and stake on the accurate results) need access to the data of every layer-two (L2) transaction in the Arbitrum chain's inbox. Traditionally, an Arbitrum rollup ensures data access by publishing the data as calldata on layer-one (L1) Ethereum, a process that incurs significant Ethereum gas fees and is a major cost component of Arbitrum.

3. Keyset flexibility

Keysets play a key role in AnyTrust's architecture. A keyset specifies the public keys of the committee members and the number of signatures required to validate a Data Availability Certificate (DACert). Keysets provide flexibility for changing committee membership and let committee members rotate their keys as needed.

4. Data Availability Certificates (DACerts)

In AnyTrust, a basic concept is the Data Availability Certificate (DACert). A DACert consists of the hash of a data block, an expiration time, and proof that N-1 committee members have signed the (hash, expiration time) pair. This proof includes the hash of the keyset used for signing, a bitmap indicating which committee members signed, and a BLS aggregate signature on the BLS12-381 curve proving that those members signed.

Thanks to the 2-of-N trust assumption, a DACert serves as proof that the block's data will be available from at least one honest committee member until the specified expiration time. This trust assumption is the basis for the reliability and security of data availability within the AnyTrust framework.
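The DACert fields and a subset of the checks described above can be sketched like this. The field names and check selection are illustrative assumptions; real verification also validates the BLS aggregate signature against the keyset registered on L1.

```python
from dataclasses import dataclass

@dataclass
class DACert:
    data_hash: bytes           # hash of the stored data block
    expiry: int                # timestamp after which availability is no longer promised
    keyset_hash: bytes         # identifies the keyset (committee keys + threshold)
    signers_bitmap: int        # bit i set => committee member i signed
    aggregate_signature: bytes # BLS12-381 aggregate signature (opaque here)

def basic_checks(cert: DACert, committee_size: int,
                 required_signers: int, now: int) -> bool:
    # Illustrative subset of the checks L2 code performs on a DACert:
    # the bitmap must not claim signers outside the committee, enough
    # members must have signed, and the certificate must not be expired.
    if cert.signers_bitmap >= 2 ** committee_size:
        return False
    signer_count = bin(cert.signers_bitmap).count("1")
    return signer_count >= required_signers and cert.expiry > now
```

For a 7-member committee with an N-1 requirement, a certificate with six set bits and a future expiry would pass these basic checks.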

5. Dual data-publishing mechanism

AnyTrust introduces a dual method of publishing data blocks on L1. In addition to the traditional method of publishing complete data blocks, it also allows posting DACerts, certificates that attest to the availability of the data. The L1 inbox contract verifies the validity of a DACert, including that it references a valid keyset.

The L2 code responsible for reading data from the inbox handles both formats seamlessly. When it encounters a DACert, it performs validity checks, including ensuring that the number of signers meets the keyset's requirement, validating the aggregate signature, and confirming that the expiration time exceeds the current L2 timestamp. A valid DACert ensures that the data block is accessible and usable by the L2 code.

6. Data Availability Server (DAS)

Committee members operate the Data Availability Server (DAS), which provides two key APIs:

(1) Sequencer API: designed for use by the Arbitrum chain's sequencer, this JSON-RPC interface lets the sequencer submit data blocks to the DAS for storage.

(2) REST API: designed for wider accessibility, this RESTful HTTP(S) protocol allows retrieval of data blocks by hash. It is fully cacheable and can be deployed behind caching proxies or CDNs to enhance scalability and protect against potential DoS attacks.

7. Sequencer-committee interaction

When the Arbitrum sequencer intends to publish a batch of data through the committee, it sends the data and an expiration time to all committee members in parallel via RPC. Each committee member stores the data, signs the (hash, expiration time) pair using its BLS key, and returns the signature with a success indicator to the sequencer. Once enough signatures are collected, the sequencer aggregates them to create a valid DACert for the (hash, expiration time) pair. This DACert is then published to the L1 inbox contract, making it accessible to the L2 AnyTrust chain software.

If the sequencer cannot collect enough signatures within the specified time frame, it adopts a "fallback to rollup" strategy and publishes the complete data directly to the L1 chain. The L2 software understands both data-publishing formats (DACert or complete data) and handles each appropriately.

In summary, AnyTrust, as a groundbreaking innovation from Offchain Labs, represents a critical advance in the data availability, security, and cost-efficiency of blockchain infrastructure. Through a sensible trust assumption and a novel approach to data publishing, AnyTrust paves the way for more scalable, accessible, and secure blockchain solutions.
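The sequencer-side flow above can be sketched as follows. The committee-member interface (a `sign` method returning a signature or None) and the return shapes are assumptions made for illustration, not the real Nitro API.

```python
import hashlib

def publish_batch(data: bytes, expiry: int, committee, required_sigs: int):
    # Gather signatures over (hash, expiry) from all committee members;
    # aggregate into a DACert-like tuple if enough respond, otherwise fall
    # back to rollup mode and post the full data on L1.
    h = hashlib.sha3_256(data).digest()
    sigs = [s for s in (m.sign(h, expiry) for m in committee) if s is not None]
    if len(sigs) >= required_sigs:
        return ("dacert", h, expiry, sigs)   # post the small certificate
    return ("full-data", data)               # fallback to rollup
```

The fallback branch is what keeps the chain live even if the committee is unresponsive: the worst case is simply paying rollup-level gas for that batch.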

Part 4: With the above concepts in mind, let's explain why Sentry nodes matter: the cheater-checking problem, why the validator's dilemma is harder than you think, and the solution!

The author of the following is Ed Felten, chief scientist of Offchain Labs, the team behind Arbitrum.

In blockchain systems, a common design pattern is to have one party do some work and escrow a deposit for correct behavior, then invite others to verify the work and take the deposit if they catch the worker cheating. You could call it the "assert-challenge" design pattern. We do this in Arbitrum, and proposals like Optimistic Rollup that use it have been in the news recently.

These systems may be affected by the validator's dilemma, which is basically the observation that there is no point in checking someone's work if you know they won't cheat; but if you don't check, they have an incentive to cheat. If you are a designer, you want to prove that your system is incentive compatible, meaning that if everyone behaves consistently with their incentives, no cheating will occur. This is an area where intuition can lead you wrong. This problem is much harder than it seems, as we'll see when we unpack the incentives of the parties below.

A super simple model

We start by building the simplest model we can. Suppose there are two players. The asserter makes a claim, which may be true or false. The checker can check the asserter's claim, or can choose to do nothing, presumably on the assumption that the asserter is probably telling the truth. We assume the checker's cost of checking is C. If the checker checks and finds that the asserter cheated, the checker receives a reward R. (R includes all benefits accruing to the checker from catching cheating, including benefits realized "outside the system" and any benefit from increased confidence in the system.) If the asserter cheats and is not caught, the checker loses L, for example because the cheating asserter can fraudulently take valuable items from the checker.

Now we have two threats to worry about: bribery and laziness. Bribery is the possibility that the asserter bribes the checker not to check, allowing the asserter to cheat undetected. We can prevent this by requiring the asserter to escrow a very large deposit, larger than the total value in the system, which is paid to the checker when cheating is detected; then the asserter cannot afford a bribe larger than the checker's reward R. This prevents bribery, but it requires the system to be fully collateralized, which can be very expensive.

Another threat is laziness: the risk that the checker decides not to check the asserter's work. (Remember, a checker may claim to be checking without actually doing so.) Let's look at the checker's incentives to see whether this is a reasonable strategy.

Suppose the asserter cheats with probability X. The checker's utility is then as follows:

  • If the checker checks: RX - C

  • If the checker does not check: -XL

Checking is worthwhile only if the utility of checking exceeds the utility of not checking, that is, only if X > C/(R+L). Here's the bad news: if the asserter cheats randomly with probability less than C/(R+L), a rational checker will never check, so the asserter will never be caught cheating.

Let's plug in some numbers. If the cost of checking each transaction is $0.10, and the checker receives a bounty of $75 for detecting cheating but loses $25 if cheating goes undetected, then the asserter can cheat with impunity one time in a thousand (C/(R+L) = 0.10/100 = 0.001). If we want this system to run thousands of transactions, we have a big problem. There is obviously nothing in this model that reduces the probability of cheating to zero; we can only over-collateralize the system so that the denominator of C/(R+L) becomes larger.
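The model and the numbers above can be checked directly; this is a plain transcription of the utilities from the text, not project code.

```python
def checker_utilities(X: float, C: float, R: float, L: float):
    # check:    catch cheating with probability X for reward R, always pay cost C
    # no check: suffer loss L whenever cheating (probability X) goes undetected
    return R * X - C, -X * L

def safe_cheating_rate(C: float, R: float, L: float) -> float:
    # Below this X, a rational checker's best response is not to check.
    return C / (R + L)

# Numbers from the text: C = $0.10, R = $75, L = $25 -> one time in a thousand.
assert abs(safe_cheating_rate(0.10, 75, 25) - 0.001) < 1e-12
```

Evaluating `checker_utilities` for X just above and below 0.001 confirms that the checking incentive flips exactly at C/(R+L).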

This is a surprisingly robust result, in a bad way. It does not depend at all on the incentives of the asserter. As long as the asserter gets a non-zero benefit from successful cheating, it can cheat with some probability, knowing that it's not worth the checker's effort to check. The result also does not depend on how much time we give the checker to do the job, or on whether we pay the (purported) checker. Maybe you're thinking the problem is that there's only one checker. Would adding more checkers reduce the likelihood of cheating? Surprisingly, it doesn't.

Adding checkers doesn't help prevent cheating

Again, let's formulate the simplest model. There are now two checkers acting independently. Each checker pays C if it checks; if someone checks and catches the asserter cheating, the reward R is paid to the successful checker, or split equally if both checked. (If you prefer, you can give the full reward R to one of them at random when both check; this doesn't affect anyone's strategy or the results.) As before, each checker loses L if the asserter cheats without getting caught.

It remains the case that if the asserter cheats less than C/(R+L) of the time, it is not worthwhile for a checker to check, since the utility of checking is less than the utility of not checking. In fact, the incentive problem is worse than before: each checker's cost of checking is still C, but the expected reward for catching cheating is less than R, because the reward must sometimes be split; the expected reward lies between R/2 and R. If the expected reward is bR, where b is between 0.5 and 1, then the asserter can cheat up to C/(bR+L) of the time, which is more undetected cheating than with a single checker! (The math gets a little complicated because the value of b depends on the checkers' strategies, and their strategies depend on b, but it should be clear that they will sometimes have to split the reward. The effective value of L is also reduced, since a checker that does not check may be spared its loss L by the other checker's checking.)
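The effect of reward-splitting on the safe cheating rate can be seen with a one-line computation. The value b = 0.6 below is an assumed illustrative number, not derived from the equilibrium analysis.

```python
def safe_cheating_rate_split(C: float, R: float, L: float, b: float) -> float:
    # With two checkers, the expected reward for a successful check is b*R
    # for some b between 0.5 and 1 (the reward is sometimes split), so the
    # asserter can cheat up to C/(b*R + L) of the time undetected.
    return C / (b * R + L)

single = safe_cheating_rate_split(0.10, 75, 25, b=1.0)  # one checker: C/(R+L)
double = safe_cheating_rate_split(0.10, 75, 25, b=0.6)  # assumed illustrative b
assert double > single
```

Any b below 1 shrinks the denominator, so the safe cheating rate can only go up as more checkers dilute the reward.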

One place where adding checkers does help is in preventing bribery. With two checkers, the asserter must pay each checker a bribe of more than R, making bribery twice as expensive and allowing 50% collateralization instead of full collateralization. But the trade-off is that the amount of cheating increases.

I won't go into all the math here, but under reasonable assumptions, increasing from one checker to two could result in a 50% increase in undetected cheating.

Adding checkers makes things worse!

You can add more checkers, and things only get worse. As the number of checkers grows, each checker must worry more about the reward being split many ways, so the expected reward for a successful checker shrinks, and the probability with which the asserter can safely cheat grows. From this perspective, the worst case is that everyone in the world becomes a checker. This isn't infinitely bad, since things degrade gradually as checkers are added, but it certainly won't help prevent cheating, even though it does effectively eliminate the risk of bribery.

Are you sure your system is incentive compatible?

If you have a system that fits this type of model, and you think it is incentive compatible, you need to think carefully about why. In particular, you need to explain why the checker would do the job of checking, even if they think the asserter is unlikely to cheat. Simply having a big cheating penalty isn't enough. Simply having a reward for catching cheaters is not enough. Simply having a lot of checkers isn't enough - in fact, it can make things worse. Why is your system immune?

This challenge applies to systems like Optimistic Rollup. When we talk about Arbitrum, it applies to us too.

Taking the above into consideration, traditional incentive-based checking does not achieve the desired result: there is a baseline cheating rate below which checkers consider checking not worthwhile. In summary:

There are two players: an asserter, who makes a claim that may be true or false, and a checker, who can check the claim at some computational cost. If the checker checks, its utility is RX - C; if it does not check, its utility is -XL, where R is the reward for catching cheating, C is the cost of checking, L is the checker's loss from undetected cheating, and X is the probability of the asserter cheating (chosen by the asserter). Some algebra shows that if X < C/(R+L), not checking is the checker's best strategy, so the asserter can cheat at any rate below C/(R+L) without ever being caught.

To solve this problem and create a situation where an incentive-driven checker will always check, we have to change the checker's incentives. The basic problem is that in the original model, the positive incentives for the checker to check are all proportional to X, the probability that the asserter cheats. If we want a checking incentive that operates regardless of X, we need to create an incentive for checking, or a penalty for not checking, that is independent of the asserter's actions.

TrueBit attempts to do this by injecting deliberately false claims into the set of assertions, essentially replacing X with X plus a constant. This approach has some problems. (The original Arbitrum paper has a section on TrueBit's incentive issues.)

Attention challenges

We use a different approach that we call attention challenges. The idea is that if the asserter is computing a value f(x), it first publishes x and a cryptographic challenge. To respond to the challenge correctly, a checker needs to know f(x). Only after the challenge period does the asserter publish f(x); by that point, the checkers have already done the hard work of computing f(x), so they have no incentive to be lazy. (More details on the protocol follow.)

To reduce the number of on-chain transactions this requires, we arrange things so that the correct response of a checker to a challenge is usually silence. In rare cases, the checker must publish a very small transaction on-chain. If the checker gives the wrong answer (posting when it should stay silent, or staying silent when it should post), it loses a small deposit.

Let's adapt the original incentive model to incorporate attention challenges. We need two new parameters (both of which we can choose): P, the probability that the checker's correct response is to post on-chain, and A, the penalty if the checker gives a wrong answer. Now, the checker's utility is:

If it checks: RX - C

If it does not check: -LX - PA

The key observation is that as long as PA > C, then checking is the optimal strategy, no matter what X (the probability of cheating) is.
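This dominance claim can be verified numerically; the parameter values below are taken from the surrounding example (P = 0.3%, A = $3 deposit, C = $0.001).

```python
def utilities_with_attention_challenge(X, C, R, L, P, A):
    # check:    reward R with probability X, minus the checking cost C
    # no check: expected loss L*X, plus expected penalty P*A for staying
    #           silent when a response was actually required
    return R * X - C, -L * X - P * A

# If P*A > C, checking is the better strategy for every cheating rate X:
P, A, C, R, L = 0.003, 3.0, 0.001, 75, 25
for X in (0.0, 1e-6, 0.001, 0.5, 1.0):
    check, idle = utilities_with_attention_challenge(X, C, R, L, P, A)
    assert check > idle
```

Algebraically, check minus no-check is (R+L)X + PA - C, which is at least PA - C even at X = 0, so the condition PA > C makes checking dominant independent of the asserter's behavior.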

Very low cost

To evaluate the cost, let's look at a concrete example. Assume there is an assertion every five minutes and the cost of checking is $0.001. If we set the probability P to 0.3%, and the checker must post a deposit of $3, then the checker's cost per assertion is $0.0003 in expected gas fees (the $0.10 gas cost of posting its non-silent response, multiplied by the 0.3% probability that it has to post), plus about $0.0003 in interest for locking up its $3 deposit for five minutes, for a total cost of about $0.0006 per assertion.
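The arithmetic of the example, spelled out (the five-minute interest figure is taken from the text as given, not derived):

```python
# Per-assertion cost for a checker, using the example's figures.
gas_to_post = 0.10      # $ gas for the rare non-silent on-chain response
p_post = 0.003          # posts on-chain 0.3% of the time (the parameter P)
interest_cost = 0.0003  # $ interest on the $3 deposit over five minutes (as given)

expected_gas = gas_to_post * p_post        # = $0.0003
total_cost = expected_gas + interest_cost  # = $0.0006 per assertion
assert abs(total_cost - 0.0006) < 1e-9
```

Note that this per-assertion cost is an order of magnitude below the $0.001 checking cost itself, so the challenge mechanism adds little overhead.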

Scaling to multiple checkers

Attention challenges scale well to multiple checkers. The protocol issues a challenge that affects each checker differently, forcing each checker to compute f(x) on its own. Each checker bears the same cost (in our example, $0.0006 per assertion).

In an open system where anyone is eligible to check the calculations, you can allow anyone to register as a checker by placing the small required deposit. This makes them eligible to receive attention challenges and potentially receive compensation from dapp developers. Anyone can challenge an asserter's incorrect claims, but only registered checkers face attention challenges.

Technical details of the protocol

Now that we understand what attention challenges can do for us, let's dive into the technical details of how they work.

Each checker has a private key k and corresponding public key gᵏ, defined in an appropriate group. The public key of each checker is known to everyone. We will also rely on a suitable hash function H.

To issue a challenge to the computation of f(x), where the function f is known in advance, the asserter generates a random value r and then issues (x, gʳ) as a challenge.

A checker possessing private key k should respond to the challenge by publishing a small transaction only if H(gʳᵏ, f(x)) < T, where T is an appropriately chosen threshold. Note that only the asserter (who knows r) and that particular checker (who knows its private key k) can compute the hash, since they are the only ones who can compute gʳᵏ. Also note that calculating the hash requires knowing f(x).
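The response rule can be sketched in a toy form. This sketch uses modular exponentiation modulo the Mersenne prime 2^127 - 1 in place of a real cryptographic group, and an arbitrary hash and generator; all three are illustrative assumptions, not the protocol's actual choices.

```python
import hashlib

# Toy group: exponentiation mod the Mersenne prime 2**127 - 1.
P_MOD = 2**127 - 1
G = 3

def should_respond(k: int, g_r: int, f_x: bytes, threshold: int) -> bool:
    # A checker holding private key k, seeing challenge value g^r, posts a
    # response on-chain only if H(g^(r*k), f(x)) < T.  Since (g^r)^k equals
    # (g^k)^r, the asserter (who knows r) can compute the same hash from the
    # checker's public key g^k -- but only these two parties can.
    shared = pow(g_r, k, P_MOD)  # g^(r*k)
    digest = hashlib.sha3_256(shared.to_bytes(16, "big") + f_x).digest()
    return int.from_bytes(digest, "big") < threshold
```

Because the hash input includes f(x), a checker cannot even decide whether to stay silent without doing the computation, which is exactly the incentive the protocol needs.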

After the checkers have had some time to post their responses to the challenge, the asserter can post its f(x); if any checker disagrees with it, a challenge proceeds as usual. At this point, the asserter can also accuse any checker of an incorrect response; the asserter must reveal r to substantiate the accusation. A miner or contract can check whether the accusation is correct and punish the violator; but if the asserter's claimed f(x) is not ultimately accepted as correct, the accusation is ignored. If any checker is fined, the asserter receives half of the forfeited funds and the other half is destroyed.

This approach gives checkers the right incentives. Knowing how a checker should respond to a challenge requires knowing that checker's private key and f(x), so each checker must compute f(x) for itself. Unless a checker computes f(x) itself, it cannot safely follow the protocol: the responses of other checkers are useless in determining its own, because they depend on those checkers' private keys. If a checker relies on someone else telling it f(x), it has no way to verify the claimed value (other than computing f(x) itself) and risks being penalized if it is wrong. There is even one party with an incentive to mislead checkers about f(x): the asserter, who profits from checkers' errors and might use those profits to bribe a checker's "friends" into feeding it wrong information.

Optimizations and conclusion

There are several tricks to make this protocol more efficient. For example, we could bundle an assertion with the next challenge into a single on-chain transaction, so that challenges do not increase the number of transactions. If P is small (e.g., 0.3% in our example) and the number of checkers is not very large, checkers rarely need to post transactions on-chain, so the protocol's overall impact on the number of on-chain transactions is minimal.

With clever implementation, the cost of this protocol should be very low compared to the up-front cost of issuing assertions on-chain. In our case, adding attention challenges to the existing assertion-challenge protocol increases the total cost by less than 1%.

And the gains are substantial: we get an incentive-compatible checking protocol that is immune to the validator's dilemma. As long as at least one checker is rational, the asserter's claims will always be checked.

For other information about the project, please refer to: Game public chain Xai: Binance Square database

#ARB #Layer3 #game #XAI #web3