Blockchain Challenges and Solutions: An Overview of Verisense
Blockchain technology has revolutionized various industries by providing a decentralized, secure, and transparent method of record-keeping and transaction processing. However, as the technology matures, several issues have emerged that hinder its broader adoption and integration with modern technologies like Artificial Intelligence (AI). This document outlines the primary challenges faced by traditional blockchain architectures and introduces Verisense as a potential solution to these problems.
Challenges Faced by Blockchain Technology
Limited to Deterministic Computation
Traditional blockchains are designed to execute deterministic computations. This design inherently excludes input/output (IO) operations, which are essential for interacting with external systems and play a crucial role in computing’s functionality. Blockchain networks rely on oracles to bridge these interactions with the external world, a mechanism that is often cumbersome and limited in scope. In the AI era, where dynamic data interactions are paramount, the rigidity of traditional blockchain structures becomes a significant barrier to innovation.
Cryptographic Fragmentation
Typically, a blockchain network implements a single digital signature cryptographic scheme, which forms the backbone of its security and integrity. This approach results in significant compatibility issues when different networks use disparate cryptographic techniques. The lack of interoperability between distinct cryptographic systems creates a substantial hurdle in developing applications that require interactions across multiple blockchain networks.
Cost-Complexity Trade-off
There is an intrinsic correlation between the cost of using a blockchain network and its degree of decentralization. More decentralized networks offer higher security and data integrity but at increased costs. Developers often face a dilemma where they must choose between the technological merits and the economic feasibility of using a particular blockchain. This challenge is further compounded by the vibrant and sometimes polarizing blockchain ecosystems, leading developers to prioritize network popularity over the application's intrinsic requirements. For instance, constructing a social media application on a highly decentralized network like Bitcoin is impractical due to cost concerns, yet decentralized finance (DeFi) applications align well with such networks given their financial focus.
Verisense is an innovative blockchain solution aiming to address the aforementioned challenges. It is designed to overcome the limitations of traditional blockchains by providing a more flexible, interoperable, and cost-effective framework. In the forthcoming sections, we will detail the core capabilities of Verisense and how it resolves these critical issues.
Introduction to Verisense Architecture
Verisense represents a distinctive approach to blockchain architecture by implementing a dual-layer network model. This configuration is specifically designed to address the limitations of traditional blockchain systems and to enable more agile and functional application development.
Hostnet
The first layer of Verisense, known as the Hostnet, is a Proof-of-Stake (PoS) network constructed using the Substrate framework. At first glance, this may seem conventional, as it lacks support for Ethereum Virtual Machine (EVM) contracts; however, this is an intentional design choice. After over a decade of blockchain innovation, Verisense recognizes that the current paradigm of smart contracts has reached an innovation plateau. Consequently, Verisense deviates from the conventional smart contract virtual machine model, directing all application operations to the second layer, the Subnet.
Subnet
An application within Verisense is referred to as a Nucleus, and each Nucleus operates on an independent Subnet. A subnet is essentially a subset of Hostnet members. This architecture allows each Verisense application to determine its unique consensus requirements, selecting only the necessary nodes for verification based on its specific characteristics and needs. This strategy is inspired by the concept of restaking but extends it further by providing a set of primitive-level Software Development Kits (SDKs) for application development.
Each Subnet functions semi-autonomously, allowing developers to tailor the network’s governance and operational model to best fit the application’s needs. This reduces unnecessary overhead and increases the efficiency and scalability of decentralized applications (dApps).
Advantages Over Traditional Smart Contracts
Unlike traditional smart contracts, Nuclei offer enhanced capabilities that empower developers to create more powerful applications in a Web2-style development workflow. Key features include:
- Active Network Requests: A Nucleus can initiate network requests, enabling it to interact with external systems and data sources such as LLMs or other blockchains.
- Subnet-Level Multi-Type Threshold Signatures: A Nucleus can hold private keys of several types, such as ECDSA over secp256k1, Ed25519, and Schnorr over secp256k1, and use them to sign arbitrary data. This enables a Nucleus to integrate naturally with specific blockchains.
- Timers: Timers are especially beneficial for applications requiring routine operations, scheduled data processing, or time-sensitive triggers.
Verisense's architecture is a forward-thinking approach that breaks away from the limitations of conventional blockchain frameworks. By eschewing the traditional smart contract model and introducing a nuanced dual-layer system, Verisense enables developers to build more robust, flexible, and efficient applications. Its innovative use of subnets and the Nucleus application model marks a significant step forward in the evolution of blockchain technology, positioning Verisense as a pivotal player in the advancement of decentralized solutions. Further technical details and implementation guidelines will be elaborated on in subsequent chapters of this documentation.
Nucleus
In Verisense, a Nucleus represents a decentralized application running within a subnet. This section delves into the capabilities of a Nucleus, which is compiled into WebAssembly (WASM) bytecode, allowing for efficient execution within the Verisense framework. As previously mentioned in the Introduction, decentralized applications should operate within a cost-effective decentralized environment. In Verisense, the degree of decentralization, determined by the number of nodes securing a Nucleus, is customizable by developers to align with the application’s security needs. The process of achieving consensus among multiple nodes is discussed in detail in the "Monadring" section. Here, we will explore the specific capabilities that Verisense provides for Nucleus applications.
Reverse Gas Mode
Traditional blockchain systems typically employ a "pay-to-write" model, where the actor modifying the ledger incurs a cost (e.g., deploying contracts, changing contract states). This model has long posed a barrier for broad user adoption beyond the realm of Web3 enthusiasts. Verisense innovates with a reverse gas mode, where the platform charges the publisher of the Nucleus for usage. This pricing model resembles that of cloud service providers like AWS. By default, users can interact with a Nucleus (both reads and writes) free of charge, unless the developer explicitly chooses otherwise. This setup aligns more closely with traditional web applications, where certain API calls may require user authentication or payment, while others remain freely accessible.
Feature-Rich SDK
Most blockchain systems primarily offer two functionalities: key-value database read/write operations and signature verification. While smart contract virtual machines introduce Turing-complete development capabilities, the user experience often falls short compared to equivalent Web2 applications. Verisense aims to bridge this gap by offering a robust SDK for Nucleus development, featuring capabilities rarely found in other blockchains:
- Proactive Network Requests: Nuclei can autonomously initiate network requests, enabling dynamic interactions with external data sources and systems.
- Timers: Developers can set timers within Nuclei to trigger events or operations at scheduled intervals, enhancing application functionality and automation.
- Multitype Public Key Access and Signature Functions: Nuclei can obtain various public key types and execute functions to sign arbitrary data.
The picture below shows some use cases of a Nucleus.
Lifecycle
The lifecycle of a Nucleus in Verisense encompasses several distinct stages, from creation through operation and potential decommissioning.
- Creation
The creation of a Nucleus is initiated through a legitimate transaction on the Verisense Hostnet. Developers can utilize the `vrx` command-line tool to facilitate this process. To install the `vrx` tool, use the following command:

```shell
cargo install --git https://github.com/verisense-network/vrs-cli.git
```

Note: Verisense is under rapid development, so `vrx` may require frequent updates. Please refer to the Developer Guides for detailed instructions.
- WASM Update
In Verisense, the code of a Nucleus is an integral part of its state. This unification implies that there is no distinction between the initial deployment of code and subsequent updates. The initial deployment of a Nucleus’s WebAssembly (WASM) code is logged as the zeroth event in the Nucleus’s lifecycle.
- Operation
Subnet member nodes assigned to a Nucleus initiate an additional WASM virtual machine (different from the Verisense Hostnet) dedicated to operating the Nucleus. These nodes expose the Nucleus's interfaces via an RPC endpoint. Verisense implements a sophisticated billing model that tracks charges based on the following activities:
- Storage usage
- Data write requests
- Invocation of system functions
Each time the state root of a Nucleus is synchronized with the Hostnet, the corresponding account address of the Nucleus is automatically debited with the accrued costs.
Should the balance of a Nucleus's account fall below a predetermined threshold, Verisense will cease to process requests associated with the Nucleus until additional funds are deposited. This mechanism ensures that network resources are allocated efficiently and that the operation of Nuclei remains financially sustainable.
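The accrue-then-debit flow described above can be sketched as follows. The per-unit rates and the freeze threshold below are invented for illustration and are not Verisense's actual pricing:

```python
class NucleusAccount:
    """Toy model of Verisense's reverse-gas billing (illustrative only;
    the rates and freeze threshold are invented for this example)."""

    def __init__(self, balance, freeze_threshold=10):
        self.balance = balance
        self.threshold = freeze_threshold
        self.accrued = 0

    def record(self, storage_bytes=0, writes=0, syscalls=0):
        # Hypothetical per-unit rates for the three billed activities.
        self.accrued += storage_bytes * 1 + writes * 2 + syscalls * 3

    def sync_state_root(self):
        """Debit accrued costs when the state root syncs to the Hostnet."""
        self.balance -= self.accrued
        self.accrued = 0

    @property
    def frozen(self):
        """Requests are rejected while the balance is below the threshold."""
        return self.balance < self.threshold

acct = NucleusAccount(balance=100)
acct.record(storage_bytes=20, writes=5, syscalls=10)  # accrues 20 + 10 + 30 = 60
acct.sync_state_root()                                # balance: 100 - 60 = 40
acct.record(writes=20)                                # accrues another 40
acct.sync_state_root()                                # balance drops to 0, below 10
```

Charges accumulate off-chain and are only settled at each state-root synchronization, so a Nucleus is frozen lazily rather than mid-request.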
Indexer
In blockchain networks, maintaining consensus requires that state updates are processed with deterministic time complexity. Consequently, blockchain storage is typically restricted to key-value (KV) databases, where queries and modifications have predictable time complexity. Verisense follows this same principle, with each Nucleus possessing its own isolated storage space implemented using RocksDB.
However, to enable advanced querying capabilities, an additional component akin to a blockchain explorer is often necessary. In the context of a Nucleus, such a component is referred to as an "Indexer," designed to facilitate complex business information queries. Unlike blockchain explorers, which serve the entire network, the Indexer for a Nucleus is a specialized off-chain component tailored by the developer for specific use cases within their application.
The implementation of an Indexer is at the discretion of the Nucleus developer, allowing for flexibility and adaptability to various business requirements. Developers can leverage a range of technologies to build their Indexers, including:
- Traditional relational databases
- Full-text search engines
- Services like AWS serverless architectures
This flexibility enables developers to optimize data indexing and querying based on the particular needs of their application.
Online Demo: Aitonomy
This Nucleus demonstrates the capabilities of Verisense, including bridgeless connections with external blockchains using TSS and AI integration using network requests.
Monadring
The Monadring protocol is an essential subprotocol within the Verisense ecosystem, designed to attain consensus for Nucleus operations. It is engineered to function effectively even in small-scale decentralized networks by leveraging an underlying blockchain network, specifically the Verisense Hostnet. Our rigorous design and analysis of the Monadring protocol are documented in a paper available on arXiv. This section offers a concise overview of its foundational principles.
Subnet Topology
Monadring defines a topological structure among network members, forming a ring where all members are connected end-to-end. This ring is established by sorting Verisense validators according to the Verifiable Random Function (VRF) proofs they submit as part of their candidacy.
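As a rough illustration, the ordering step can be sketched in Python, with a salted hash standing in for the real VRF output; the function names and parameters are illustrative, not part of the Verisense API:

```python
import hashlib

def ring_order(validators, epoch_seed):
    """Sort validators into a ring by a pseudo-random value per epoch.

    A real deployment would sort by each validator's submitted VRF proof;
    here a salted SHA-256 hash stands in for the VRF output.
    """
    def pseudo_vrf(v):
        return hashlib.sha256(f"{epoch_seed}:{v}".encode()).hexdigest()

    ordered = sorted(validators, key=pseudo_vrf)
    # Close the ring: map each node to its successor; the last wraps around.
    return {n: ordered[(i + 1) % len(ordered)] for i, n in enumerate(ordered)}

ring = ring_order(["alice", "bob", "carol", "dave"], epoch_seed="epoch-7")
```

Because every node computes the same ordering from the same proofs, the ring topology is agreed upon without any extra communication round.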
Token Circulation
Within a subnet, a token circulates periodically around the ring, granting state modification rights to the node currently holding it. Though the token structure is complex and detailed in our paper, it can be simplistically understood as containing each node's received events, the current state, and node signatures. When a node receives the token, it first executes events enclosed within, originating from other nodes, followed by its own events. Upon execution, it adds its events to the token, propagating it around the network. This ensures a globally recognized sequence of Nucleus events, defining the sequence of state modifications.
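The token-passing logic above can be sketched as a toy simulation. Events are plain integers added to a counter, each node keeps its own state replica, and the signatures and FHE encryption described in the paper are omitted; all names are illustrative:

```python
def circulate(ring, local_events):
    """Toy Monadring pass: the token carries the globally ordered event
    log; each node applies others' pending events, then appends and
    applies its own. (Signatures and FHE encryption are omitted.)"""
    token = []                       # global event order carried by the token
    state = {n: 0 for n in ring}     # each node's state replica
    applied = {n: 0 for n in ring}   # how many token events each node applied

    def catch_up(node):
        for ev in token[applied[node]:]:
            state[node] += ev
        applied[node] = len(token)

    # First trip around the ring: catch up, then append own events.
    for node in ring:
        catch_up(node)
        for ev in local_events.get(node, []):
            token.append(ev)
            state[node] += ev
        applied[node] = len(token)
    # Second trip: earlier nodes catch up on events appended after them.
    for node in ring:
        catch_up(node)
    return state, token

states, log = circulate(["n1", "n2", "n3"], {"n1": [1], "n2": [2, 3], "n3": [4]})
```

After the token completes its circuits, every replica has applied the same events in the same order, which is exactly the globally recognized sequence the protocol needs.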
Fully Homomorphic Encryption (FHE)
A network relying solely on VRF for random member selection lacks inherent security. Thus, we introduce FHE to encrypt the states contained within the token, ensuring that nodes cannot view the processing results of others for specific events. By utilizing VRF for node selection and FHE for token encryption, the subnet consensus mechanism simulates a Prisoner's Dilemma scenario. Even in small-scale networks, appropriately designed incentive strategies can enforce network security.
Consensus on Network Requests
Nucleus state changes are abstractly referred to as events, akin to ledger-modifying transactions within traditional blockchains. Verisense enhances this with network request capabilities, necessitating special consensus treatment for such requests.
Handling Network Requests
Network requests within a Nucleus are partitioned into two events: request initiation and response reception. Developers initiate a network request by calling an asynchronous function, returning a request_id. For the execution environment, nodes dispatch the request while recording TLS handshake keys as parameters. Nodes execute events only when holding the token, ensuring a fair distribution of request execution across the network.
The consensus for request events is straightforward: given the same event sequence, all nodes generate the same network request event. However, network request events already present in the token are not executed again, as their presence indicates prior execution by a token-holding node.
Handling Network Response Events
HTTPS server certificates and shared handshake keys from the request-initiating node allow deterministic session key computation through the Diffie-Hellman handshake process. Only the node that issued the original request will receive a response. This HTTP response, encrypted with the session key, is set as a response event in the token and passed to other nodes. Upon receiving this event, nodes decrypt and execute it independently.
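A toy model of this key-sharing idea can be written with textbook Diffie-Hellman over integers and an XOR stream standing in for the TLS record cipher. Real TLS uses ECDHE and AEAD ciphers; all parameters below are illustrative and insecure:

```python
import hashlib

# Toy Diffie-Hellman group; real TLS handshakes use X25519/ECDHE.
P, G = 0xFFFFFFFFFFFFFFC5, 5  # illustrative parameters, NOT secure

def session_key(peer_public, own_secret):
    """Derive a shared session key from DH public/secret values."""
    shared = pow(peer_public, own_secret, P)
    return hashlib.sha256(str(shared).encode()).digest()

def xor_cipher(key, data):
    """Stand-in for the TLS record cipher (symmetric, self-inverse)."""
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

server_secret, client_secret = 0x1234, 0x5678      # ephemeral DH secrets
server_pub = pow(G, server_secret, P)
client_pub = pow(G, client_secret, P)

# The requesting node records client_secret as a token parameter; any
# subnet node can then recompute the session key from the server's
# public handshake data and decrypt the response event deterministically.
key_requester = session_key(server_pub, client_secret)
key_verifier = session_key(server_pub, client_secret)   # recomputed by a peer
response = xor_cipher(key_requester, b'{"status":"ok"}')  # encrypted response event
plaintext = xor_cipher(key_verifier, response)            # decrypted by a peer
```

The point of the sketch is the DH symmetry: given the recorded handshake secret and the server's public value, every node derives the identical session key and therefore the identical plaintext, without re-contacting the server.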
Comparison with Other Networking Solutions

| IO solution | How it works | Limitation |
|---|---|---|
| Substrate Off-chain Worker | A validator acts as an oracle, initiates I/O, and submits results via transactions. | Unverifiable and lacks true data activeness. |
| ICP | All nodes within a subnet initiate the same request, then compare the results. | Only works with idempotent APIs; fails with dynamic sources like LLMs or external chains. |
| Regular blockchain (passive) + zkTLS oracle | A zk-prover proves the TLS session and submits the result on-chain. | The zk-prover must pay gas, which weakens its incentive. |
| Verisense | One validator sends a request; others verify it via the TLS handshake. | Overcomes all of the above limitations; suitable for most use cases (e.g., tweeting, uploading images, agent responses). Response time is already acceptable and will be optimized further. |
Consensus on Timers
Timers present similar challenges due to their reliance on local system time and scheduling, which cannot be synchronized perfectly across all nodes. Thus, timers are divided into two events: timer setup and timer trigger. Within a subnet, only one node will actually trigger a scheduled timer; the rest will recognize the trigger through token transmission and disable their local timers upon receiving the event.
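A minimal sketch of this two-event timer scheme, assuming the trigger event has already been placed in the token by the firing node; all names are illustrative:

```python
def run_timers(ring, trigger_node, timer_id="t1"):
    """Toy timer consensus: the timer-setup event armed a local timer on
    every node; one node fires it and puts a trigger event into the
    token, and every other node disarms its local copy on receipt."""
    local_timers = {n: {timer_id} for n in ring}  # armed by the setup event
    token_events = [("timer_trigger", timer_id, trigger_node)]
    fired = []
    for node in ring:
        for kind, tid, source in token_events:
            if kind == "timer_trigger" and tid in local_timers[node]:
                if node == source:
                    fired.append(node)        # only the source actually fired
                local_timers[node].discard(tid)  # everyone disarms the timer
    return fired, local_timers

fired, timers = run_timers(["n1", "n2", "n3"], trigger_node="n2")
```

Splitting setup from trigger means imperfect clock synchronization never causes a double fire: the first trigger event in the token wins, and the rest of the subnet simply follows it.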
Conclusion
The Monadring protocol enables Nucleus consensus, balancing flexibility and security for decentralized applications within Verisense. Through a strategic combination of VRF, FHE, and unique event-handling mechanisms, Monadring supports secure and efficient consensus even in small networks. A detailed exploration of the protocol's intricacies can be found in our arXiv publication, which complements this overview with a deeper theoretical and technical foundation.
Introduction
This chapter primarily elucidates how Verisense combines with FHE to implement three components: 1. auctioning of validator stakes; 2. role playing of AVS; and 3. imperfect information games based on DeFi. The chapter on the basic principles of FHE introduces the current development of FHE and how it can be integrated with Zero-Knowledge Proofs (ZKPs). Verisense Network is dedicated to providing demand-side solutions in the FHE ecosystem, offering more effective tools for specific scenarios to achieve faster homomorphic computations and privacy-preserving computations. We later introduce the application of game theory in blockchain and how FHE changes games of perfect information into games of imperfect information. Finally, we present the application of FHE in the Verisense Network.
FHE Basic
Homomorphic encryption (HE) is a method of encryption that allows computations to be carried out on encrypted data, generating an encrypted result which, when decrypted, matches the outcome of the same computations performed on the plaintext. This property enables sophisticated computations on encrypted data while maintaining data security and privacy. For example, an HE scheme might allow a user to perform operations like addition and multiplication on encrypted numbers, and these operations would have the same result as if they were performed on the original, unencrypted numbers. This technology is seen as a key component for secure cloud computing, since it allows complex data manipulations to be carried out on fully encrypted data.
Fully Homomorphic Encryption (FHE) is a more advanced form of Homomorphic Encryption. FHE allows arbitrary computations to be carried out on encrypted data, which is not the case with normal HE that might be limited in the types of computation it supports. FHE computations generate a result that, when decrypted, corresponds to the result of the same computations performed on the plaintext. This makes FHE extremely useful for cases where sensitive data must be processed or analyzed, but security and privacy considerations prevent the data from being decrypted. With FHE, you can perform unlimited calculations on this encrypted data just like you would on unencrypted data. For instance, in the field of cloud computing, FHE allows users to operate computations on encrypted data stored in the cloud, preserving data confidentiality and privacy.
We present here a few popular FHE schemes.
BGV (Brakerski-Gentry-Vaikuntanathan):
The BGV scheme is a Fully Homomorphic Encryption (FHE) method proposed by Zvika Brakerski, Craig Gentry, and Vinod Vaikuntanathan. It offers a choice of FHE schemes based on the learning with errors (LWE) or ring-LWE problems, which have substantial security against known attacks. BGV can encrypt a single bit at a time, and its encryption efficiency is a primary consideration in cloud storage models.
BFV (Brakerski/Fan-Vercauteren):
BFV is another homomorphic encryption scheme that is often considered for its practical performance alongside the BGV scheme. BFV supports a set of mathematical operations such as addition and multiplication to be performed directly on the encrypted data. It has been implemented efficiently and there have also been several optimizations proposed to enhance its performance in different applications.
The Brakerski-Gentry-Vaikuntanathan (BGV) and Brakerski/Fan-Vercauteren (BFV) schemes differ mainly in how they encode information. BGV encodes messages in the least significant digit (LSD) of the integer, while BFV encodes messages in the most significant digit (MSD) of the integer. This difference can affect how the encrypted data is handled and manipulated during computations.
Verisense utilizes the BFV scheme for its FHE functionalities.
CKKS (Cheon-Kim-Kim-Song):
The CKKS scheme is known for being a Leveled Homomorphic Encryption method that supports approximate arithmetic operations over encrypted data. The CKKS scheme is especially suitable for computations involving real or complex numbers. Its ability to perform operations on encrypted data without the necessity for decryption makes it highly useful for maintaining data security during computations.
The Cheon-Kim-Kim-Song (CKKS) scheme is particularly useful in the field of Artificial Intelligence (AI), largely due to its ability to handle computations with real or complex numbers - including floating-point numbers. In many AI applications, computations involve floating-point numbers. Especially in machine learning and deep learning scenarios, data is represented as floating-point numbers, and neural networks operate over these numbers. The CKKS scheme allows these computations to be carried out on encrypted data, thus providing a privacy-preserving solution for AI applications. Its capabilities make it a significant tool for implementing machine learning algorithms that can operate directly on encrypted data, which is critical for situations where the privacy of the data is paramount.
The encryption process of BFV can be described as \[ \mathbf{a}\cdot \mathbf{s} +\Delta m +\mathbf{e} \] where \(\mathbf{a}\) is a uniformly random polynomial ring element: \(\mathbf{a}\in R_Q\), \( R_Q=(\mathbb{Z}/Q\mathbb{Z})[X]/(X^N+1)\). Similarly, \(\mathbf{s}\) is the secret key and \(\mathbf{e}\) is Gaussian-distributed noise: \(\mathbf{s}\in R,\mathbf{e}\in R\). \(m\) is the message, \(m\in R_t\), and \(\Delta\) is the scaling factor \(\Delta = \lfloor Q/t\rfloor\). The choice of \(t\) is often a balancing act between two constraints. On one hand, we want \(t\) to be large enough to encode the desired plaintext space; on the other hand, we want \(t\) to be small enough that the scaling factor \(\Delta\) still dominates the noise after homomorphic computations (especially homomorphic multiplication), so that the noise does not lead to inaccuracies in the decrypted results. Thus, the selection of \(t\) often depends on the specific application, the security requirements, and the nature of the computations to be performed.
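To make the roles of \(\Delta\), the noise \(\mathbf{e}\), and rounding concrete, here is a toy scalar version of the formula above. Real BFV operates on polynomial ring elements with a public-key encryption procedure, so this symmetric integer sketch is purely illustrative:

```python
import random

# Toy scalar analogue of the BFV ciphertext b = a*s + Δ*m + e (mod Q).
# Real BFV works over the polynomial ring R_Q = (Z/QZ)[X]/(X^N + 1);
# scalars keep the noise/rounding behaviour visible in a few lines.
Q, t = 2**40, 256
DELTA = Q // t              # scaling factor Δ = floor(Q / t)
s = random.randrange(Q)     # secret key

def encrypt(m, noise_bound=50):
    a = random.randrange(Q)
    e = random.randrange(-noise_bound, noise_bound + 1)  # small noise
    b = (a * s + DELTA * m + e) % Q
    return (a, b)

def decrypt(ct):
    a, b = ct
    noisy = (b - a * s) % Q             # = Δ*m + e (mod Q)
    return round(noisy / DELTA) % t     # rounding strips the noise e

def add(ct1, ct2):
    """Homomorphic addition: components add, and so do the noises."""
    return ((ct1[0] + ct2[0]) % Q, (ct1[1] + ct2[1]) % Q)

c1, c2 = encrypt(7), encrypt(35)
```

Decryption succeeds as long as the accumulated noise stays well below \(\Delta\); once it doesn't, rounding lands on the wrong multiple of \(\Delta\), which is exactly the failure mode that bootstrapping (discussed next) is designed to prevent.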
When homomorphic computations (especially multiplications) are performed on the encrypted data (ciphertext), the "noise" present in the ciphertext increases. This increase in noise can interfere with the decryption process, leading to an inaccurate output. This is where the bootstrapping technique comes into play in Fully Homomorphic Encryption (FHE) systems. Bootstrapping is a process designed to "refresh" the ciphertext by reducing this accumulated noise while still preserving the computed result in encrypted form. It essentially involves applying the FHE decryption circuit homomorphically to the "noisy" ciphertext to yield a "cleaner" version of it, one that embodies the same output but with significantly reduced noise. In this way, bootstrapping ensures that the resulting ciphertext can be decrypted correctly and accurately, despite the many computations it underwent. Some FHE schemes, such as FHEW and TFHE (the latter used by Zama), don't easily support CRT packing schemes, making it challenging to perform parallel homomorphic computations efficiently. On the bright side, TFHE offers swift bootstrapping, which aids in managing noise effectively and enhances overall computational efficiency.
Homomorphic operations, such as multiplication, tend to create ciphertexts that are no longer associated with the original linear secret key but some higher-degree variant of the key (for instance, a degree-2 key or squared key after a multiplication operation). This higher degree key form can disrupt further computations and complicate the decryption process. This is where key switching steps in. Key switching is a technique that allows the transformation of the ciphertext from being associated with a higher-degree key back to a simpler form associated with the linear key. This technique ensures that the ciphertext can be either decrypted correctly with the simple secret key or subjected to further homomorphic operations.
Differences between FHE and ZKPs and integration solutions
- FHE allows arbitrary computations on encrypted data without needing to decrypt it. The results, once decrypted, are the same as if they were performed on the original plaintext data. This makes FHE highly valuable for preserving confidentiality in situations where sensitive data must be analyzed or manipulated. ZKPs, on the other hand, allow one party to prove to another that they know a value or a secret, without conveying any information apart from the truth of the claim. This makes ZKPs essential in contexts where you need to confirm information without revealing it, thereby maintaining privacy.
- The security of most FHE schemes, including the popular ones like BGV, BFV, and CKKS, is based on the hardness of lattice problems. Lattice-based cryptography is believed to be resistant to attacks from quantum computers, which makes FHE schemes potentially useful for post-quantum cryptography. Their resilience to quantum attacks is due to the fact that no efficient quantum algorithm is known for solving the hard lattice problems that underpin these cryptographic systems. Many ZKPs, including some of the most efficient ones like zk-SNARKs (Zero-Knowledge Succinct Non-Interactive Argument of Knowledge), rely on the hardness of problems in pairing-based elliptic curve cryptography. These approaches offer powerful privacy-preserving properties and have been used in various cryptographic system constructions. However, it should be noted that these schemes are not necessarily resistant to quantum computing attacks. The security of the elliptic curve depends on the difficulty of the elliptic curve discrete logarithm problem, which can be solved using Shor's algorithm on a sufficiently powerful quantum computer.
- In many practical scenarios, Fully Homomorphic Encryption (FHE) is used in combination with Zero-Knowledge Proofs (ZKPs) to achieve secure data processing and validation. The reason behind this is that while FHE allows computations on encrypted data without revealing the original data, it does not provide a means of independently verifying the correctness of these computations without access to the secret key. This is where ZKPs come into play. By using ZKPs along with FHE, a system can offer proofs that computations were performed correctly without revealing any sensitive data, including the secret key. This is highly valuable in blockchain or distributed ledger technologies where trustless validation is necessary. For example, when an entity performs computations on encrypted data using their private key, they can generate a ZKP to attest that the computation was performed correctly according to the rules of the specific protocol, without revealing any information about the private key or the original data. When this ZKP is submitted to the network (or 'chain'), other participants can verify the computation's correctness without accessing the encrypted data or the private key. Therefore, the combination of FHE and ZKPs can create a powerful cryptographic toolset capable of both preserving data privacy and ensuring computational integrity, particularly in decentralized environments where trustless verification is required.
Threshold Key Sharing based on Shamir Secret Sharing
Shamir's Secret Sharing is an algorithm in cryptography devised by Adi Shamir. It's a form of secret sharing, where a secret is divided into parts, giving each participant its own unique part. The unique feature of the algorithm is the minimal amount of parts, or shares, needed to reconstruct the secret. Here's a simple walkthrough of how Shamir's Secret Sharing can be used for threshold private key sharing:
- Choose the Threshold: Define the threshold number \(t\): knowing \(t-1\) or fewer shares gives no information about the secret, while any \(t\) shares yield the secret.
- Generate a Polynomial: Generate a random polynomial of degree \(t-1\) with the constant term being the secret (private key) to be shared. i.e. \[ f(x)=a_0+a_1x+a_2x^2+\ldots+a_{t-1}x^{t-1} \]
- Create Shares: Evaluate the polynomial at \(n\) distinct points to get \(n\) shares, where \(n\) is the total number of participants. Each participant is given one share, which is a point on the polynomial, i.e. \[ s_i=f(x_i) \]
- Distribute the Shares: The shares of the private key are then distributed among the participants. The key property here is that any \(t\) shares (points) are enough to reconstruct the polynomial (and hence discover the secret), whereas \(t-1\) or fewer shares reveal no information about the secret.
- Reconstruct the Secret: When the need arises to use the private key (secret), any \(t\) participants come together and combine their shares using polynomial interpolation (for example, via Lagrange interpolation) to reconstruct the polynomial and recover the constant term, which is the secret: \[ f(x)=\sum^{t-1}_{i=0} s_i \prod_{j\neq i} \frac{x-x_j}{x_i-x_j} \] Evaluating \(f\) at \(x=0\) yields the secret \(a_0\). It is worth noting that all arithmetic is performed over the finite field \(\mathbb{Z}/p\mathbb{Z}\) (with polynomials of degree less than \(t\)), where Lagrange interpolation still holds.
This way, the private key (secret) is never explicitly revealed to any single party and no single party can access the secret alone. This is particularly useful in managing the risks associated with key management in cryptographic systems. It provides a balance between accessibility (through the threshold number of participants) and security (no single point of failure).
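The scheme above can be sketched in a few lines of Python over a prime field. The parameters are illustrative; a production implementation would use audited libraries and side-channel-safe arithmetic:

```python
import random

P = 2**127 - 1  # a Mersenne prime; all arithmetic is in the field Z/PZ

def make_shares(secret, t, n):
    """Split `secret` into n shares, any t of which reconstruct it."""
    # Random polynomial of degree t-1 with constant term = secret.
    coeffs = [secret] + [random.randrange(P) for _ in range(t - 1)]

    def f(x):
        return sum(c * pow(x, k, P) for k, c in enumerate(coeffs)) % P

    return [(x, f(x)) for x in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation at x = 0 recovers f(0) = secret."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P          # numerator of L_i(0)
                den = den * (xi - xj) % P      # denominator of L_i(0)
        secret = (secret + yi * num * pow(den, -1, P)) % P
    return secret

shares = make_shares(123456789, t=3, n=5)
```

Any three of the five shares suffice, and which three does not matter, since three points determine the degree-2 polynomial uniquely over the field.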
Game Theory
Game theory is a mathematical theory that studies the interaction behavior among decision-makers. In game theory, decision-makers, known as "players", make choices (strategies) according to rules (the form of the game) and obtain certain outcomes (payoffs or utilities) based on the choices of all players. Game theory covers the rationality assumptions of players, players' expectations, and players' strategy choices. Game theory can be divided into cooperative and non-cooperative games. Cooperative games focus on teamwork and the formation of alliances, with a particular emphasis on how to distribute payoffs. Non-cooperative games presume that players will selfishly pursue their interests, with the classic example being the prisoner's dilemma.
There are many game-theoretic application scenarios in blockchain:
- Voting or Election: Consensus mechanisms in blockchain include forms of election or voting, such as choosing which block to add next or deciding the canonical chain in the case of forks. Game theory provides a model to understand the strategic interactions and potential behaviors of participants in these decisions.
- Auctions: Auctions, as in the case of token sales or gas fees bidding in Ethereum, play a significant role in blockchain. A strategic analysis using game theory can optimize the auction designs and predict bidding behaviors, potentially increasing the overall efficiency of such systems.
- DeFi (Decentralized Finance) Models: Complex mechanisms like Miner Extractable Value (MEV) and models like ve(3,3), which builds on Curve Finance's vote-escrow design, are analyzed using game theory. It helps understand how rational users behave in various market conditions, considering the different incentives provided by these models.
Nash Equilibrium
Nash equilibrium is a significant concept in game theory, introduced by mathematician John Nash in 1950, which describes a stable state of a game. In this state, each player selects the optimal strategy according to the strategies of all other players, and under the premise of knowing all other players' choices, no player can increase their own payoffs by unilaterally changing strategies. Simply put, a Nash equilibrium is the intersection of each player's best responses - when the game reaches a Nash equilibrium, no player wants to change their strategy. While Nash equilibrium provides a theoretical framework to understand how decision-makers make rational choices, there are relatively few examples in reality that can reach a Nash equilibrium. This is mainly because players' rational selections can be influenced by various factors, such as limited information and bounded rationality. Nevertheless, Nash equilibrium remains an important tool for understanding and analyzing strategic decision interactions.
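The definition can be made concrete with a few lines of code. The sketch below brute-forces the pure-strategy Nash equilibria of a 2x2 game; the payoff numbers are the classic prisoner's dilemma and are purely illustrative:

```rust
// Enumerate the strategy profiles of a 2x2 game and report those where no
// player can gain by unilaterally deviating, i.e. the pure Nash equilibria.
// Convention: index 0 = Cooperate, 1 = Defect;
// payoff[r][c] = (row player's payoff, column player's payoff).
fn pure_nash(payoff: [[(i32, i32); 2]; 2]) -> Vec<(usize, usize)> {
    let mut eqs = Vec::new();
    for r in 0..2 {
        for c in 0..2 {
            // Row's strategy r is a best response to the column's c...
            let row_best = (0..2).all(|r2| payoff[r2][c].0 <= payoff[r][c].0);
            // ...and the column's c is a best response to the row's r.
            let col_best = (0..2).all(|c2| payoff[r][c2].1 <= payoff[r][c].1);
            if row_best && col_best {
                eqs.push((r, c));
            }
        }
    }
    eqs
}

fn main() {
    // Classic prisoner's dilemma payoffs (illustrative numbers).
    let pd = [
        [(3, 3), (0, 5)], // row cooperates
        [(5, 0), (1, 1)], // row defects
    ];
    // The only pure equilibrium is mutual defection, profile (1, 1),
    // even though mutual cooperation would pay both players more.
    assert_eq!(pure_nash(pd), vec![(1, 1)]);
    println!("pure Nash equilibria: {:?}", pure_nash(pd));
}
```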
Perfect Information Games vs Imperfect Information Games
Perfect Information Games
In games of perfect information, every player has complete knowledge about the game and the actions of other players. That is, each player is aware of all the previous moves made by all the players. Classic examples of such games are chess and go, where each player observes the entire history of the game and can make the optimal decision at each step. In the context of blockchain, perfect information might apply to consensus mechanisms where the actions of all validators are transparent and open to the network, such as Proof-of-Work or a public ledger’s transaction history.
In games of perfect information, a Nash equilibrium represents a state of the game where no player can unilaterally deviate from their current strategy to improve their payoff, given the strategies of all other players. The process of finding a Nash equilibrium in such games is relatively straightforward because all players have complete knowledge about the game structure and the past actions of all players. The concept of Nash equilibrium in this setting aligns with the idea of strategy profiles that are stable under mutual best response dynamics.
Imperfect Information Games
Contrastingly, in games of imperfect information, some aspects of the game are not fully known to all players. Players may have private knowledge or there may be uncertainty about the past actions of other players. This can lead to strategic behavior where players infer from incomplete information or signal to other players. Poker is an example of such a game, where each player does not know the cards that others hold. In blockchain applications, imperfect information often arises. An example would be a privacy-preserving blockchain where transaction data is hidden, or in peer-to-peer trading where one does not know the reservation price of a counterparty.
In games of imperfect information, a Nash equilibrium is a more complex concept. It's still defined as a state of the game where no player can profit from unilaterally deviating from their current strategy, given the strategies of others. However, because players now have private information, a player's strategy must specify what to do for every possible private information they can have. The Nash equilibrium concept in this case extends to the notion of a Bayesian Nash equilibrium, which considers players' beliefs about each other's private information, and each player's strategy is the best response to their beliefs about others' strategies. Therefore, in imperfect information games, Nash equilibria can involve complex strategic behavior like randomization (mixing between different actions) and sophisticated beliefs about others' private information.
In summary, while the basic intuition of Nash equilibrium - no profitable unilateral deviation - applies in both perfect and imperfect information games, the nature of strategies and the process to derive equilibria can be significantly more complex in games of imperfect information.
An Example: the (3,3) Model
A very typical example is the (3,3) staking model proposed by OlympusDAO (OHM).
The Nash equilibrium of this game model is located at the point (3,3), which in turn maximizes the Total Value Locked (TVL) for the whole ecosystem. Within just five months, the TVL of OHM rapidly escalated to $800M. Nonetheless, due to the inherent data transparency of the blockchain, this is a game of perfect information: all participants know each other's information. As a result, the game only stays at the Nash equilibrium briefly, because once you see other participants starting to exit the system, you, too, will leave the game.
With the help of Fully Homomorphic Encryption (FHE), we can transform the perfect information game into an imperfect information game, allowing the system to remain at the Nash equilibrium point.
Quick Start Guide: Developing Nucleus on Verisense
This guide will help you quickly get started with developing your first Nucleus using Rust. By the end, you'll have deployed a simple Nucleus and be able to interact with it.
1. Set Up Your Rust Environment
First, install Rust and configure it for WebAssembly (Wasm) compilation:
Install Rust:
curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh
Add the WebAssembly target:
rustup target add wasm32-unknown-unknown
2. Create and Compile a Rust Project
Create a new Rust library:
cargo new --lib hello-avs
cd hello-avs
Update Cargo.toml
[package]
name = "hello-avs"
version = "0.1.0"
edition = "2021"
[lib]
crate-type = ["cdylib"]
[dependencies]
vrs-core-sdk = { version = "0.0.2" }
parity-scale-codec = { version = "3.6", features = ["derive"] }
Write your first Nucleus code:
use parity_scale_codec::{Decode, Encode};
use vrs_core_sdk::{get, post, storage};

#[derive(Debug, Decode, Encode)]
pub struct User {
    pub id: u64,
    pub name: String,
}

#[post]
pub fn add_user(user: User) -> Result<u64, String> {
    let max_id_key = [&b"user:"[..], &u64::MAX.to_be_bytes()[..]].concat();
    let max_id = match storage::search(&max_id_key, storage::Direction::Reverse)
        .map_err(|e| e.to_string())?
    {
        Some((id, _)) => u64::from_be_bytes(id[5..].try_into().unwrap()) + 1,
        None => 1u64,
    };
    let key = [&b"user:"[..], &max_id.to_be_bytes()[..]].concat();
    storage::put(&key, user.encode()).map_err(|e| e.to_string())?;
    Ok(max_id)
}

#[get]
pub fn get_user(id: u64) -> Result<Option<User>, String> {
    let key = [&b"user:"[..], &id.to_be_bytes()[..]].concat();
    let r = storage::get(&key).map_err(|e| e.to_string())?;
    let user = r.map(|d| User::decode(&mut &d[..]).unwrap());
    Ok(user)
}
Build the project for WebAssembly:
cargo build --release --target wasm32-unknown-unknown
3. Install Command-Line Tools and Get Free Gas
Install the Verisense CLI:
cargo install --git https://github.com/verisense-network/vrs-cli.git
Generate an account:
vrx account generate --save
This command will generate an account and save the private key to ~/.vrx/default-key. Example output:
Phrase: exercise pipe nerve daring census inflict cousin exhaust valve legend ancient gather
Seed: 0x35929b4e23d26c5ba94d22d32222128e56f5a7dce35f9b36b467ac2be2b4d29b
Public key: 0x9cdaa67b771a2ae3b5e93b3a5463fc00e6811ed4f2bd31a745aa32f29541150d
Account Id: kGj5epfCkuae7DJpezu5Qx6mp96gHmLv2kDPHHTdJaEVNptRt
Request free gas:
Chat with the Verisense Faucet Bot and provide your account ID to request free $VRS.
4. Create and Deploy a Nucleus
Create a Nucleus:
vrx nucleus --devnet create --name hello_avs --capacity 1
Example output:
Nucleus created.
id: kGieDqL1fX8J7n1vRbXri7DVphwnZJpkDcoMoQZWo9XkTt1Sv
name: hello_avs
capacity: 1
Deploy the compiled Wasm:
vrx install --wasm target/wasm32-unknown-unknown/release/hello_avs.wasm --id kGieDqL1fX8J7n1vRbXri7DVphwnZJpkDcoMoQZWo9XkTt1Sv
If successful, you will see output like this:
Digest: 0xff878e546806da8b13f02765ea84f616963abcfdcac196ba3ea9f3f5d94b661e
Peer ID: 12D3KooWCz46orfkSfaahJqkph1bQqXU9t7ct98YQKTaDepNE6du
Transaction submitted: "0x0bc7d23b900a880e5274582755fc1a6c17df9453b0aa43f4cc382efc0bf1ec39"
5. Test Your Nucleus
Call add_user:
curl https://alpha-devnet.verisense.network -H 'Content-Type: application/json' -XPOST -d '{"jsonrpc":"2.0", "id":"whatever", "method":"nucleus_post", "params": ["kGieDqL1fX8J7n1vRbXri7DVphwnZJpkDcoMoQZWo9XkTt1Sv", "add_user", "000000000000000014416c696365"]}'
The networking component follows the standard JSON-RPC specification. All post methods share a single endpoint and JSON-RPC method, and so do all get methods: in the add_user case the method is nucleus_post, as it is for every other post method.
The first parameter is the nucleus_id we just deployed; the second is the function name in the source code, add_user; the third is the parity-scale-encoded bytes of User { id: 0, name: "Alice" }. You can find SCALE codec implementations for various programming languages.
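As a sanity check, the parameter hex above can be reproduced by hand-rolling the SCALE layout with only the standard library (the helper name here is our own; in real code you would use the parity-scale-codec crate, whose derive produces this same byte layout):

```rust
// Hand-rolled SCALE encoding of User { id, name } for illustration only.
fn scale_encode_user(id: u64, name: &str) -> Vec<u8> {
    let mut out = Vec::new();
    // u64 fields are encoded as 8 little-endian bytes.
    out.extend_from_slice(&id.to_le_bytes());
    // Strings are a compact-encoded length followed by the UTF-8 bytes.
    // For lengths < 64, the compact encoding is one byte: len << 2.
    assert!(name.len() < 64);
    out.push((name.len() as u8) << 2);
    out.extend_from_slice(name.as_bytes());
    out
}

fn main() {
    let bytes = scale_encode_user(0, "Alice");
    let hex: String = bytes.iter().map(|b| format!("{:02x}", b)).collect();
    // Matches the params value in the curl request above:
    // 8 zero bytes for id 0, 0x14 (= 5 << 2) for the length, then "Alice".
    assert_eq!(hex, "000000000000000014416c696365");
    println!("{}", hex);
}
```

Similarly, the get_user parameter below, 0100000000000000, is simply the u64 id 1 encoded as 8 little-endian bytes.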
Call get_user:
Calling get_user is similar; we just need to change the method and parameters:
curl https://alpha-devnet.verisense.network -H 'Content-Type: application/json' -XPOST -d '{"jsonrpc":"2.0", "id":"whatever", "method":"nucleus_get", "params": ["kGieDqL1fX8J7n1vRbXri7DVphwnZJpkDcoMoQZWo9XkTt1Sv", "get_user", "0100000000000000"]}'
Conclusion
Congratulations! You’ve created, deployed, and interacted with your first Nucleus on Verisense. You can now expand your AVS functionality and explore advanced features of the platform.
For more information, check the Verisense documentation.
What's next
For more advanced topics, see:
- Making http requests
- Setting a timer
- Deriving external addresses to hold external assets
- Signing an external signature
- Demo: developing a decentralized forum
Making a Request
To make an HTTP request in a Verisense nucleus, you have to split the process into two parts:
- make the request: issue the HTTP request and get a request_id back immediately;
- handle the callback: a #[callback] function will be called with the request_id when the response is ready.
For example, let's request https://www.google.com.
use vrs_core_sdk::{CallResult, http::{*, self}, callback, post};

#[post]
pub fn request_google() {
    let id = http::request(HttpRequest {
        head: RequestHead {
            method: HttpMethod::Get,
            uri: "https://www.google.com".to_string(),
            headers: Default::default(),
        },
        body: vec![],
    })
    .unwrap();
    vrs_core_sdk::println!("http request {} enqueued", id);
}

#[callback]
pub fn on_response(id: u64, response: CallResult<HttpResponse>) {
    match response {
        Ok(response) => {
            let body = String::from_utf8_lossy(&response.body);
            vrs_core_sdk::println!("id = {}, response: {}", id, body);
        }
        Err(e) => {
            vrs_core_sdk::eprintln!("id = {}, error: {:?}", id, e);
        }
    }
}
You have to keep track of the request ids yourself, for example in a global structure such as a HashMap.
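One stdlib-only sketch of such a structure is below; the helper names and the stored context are illustrative assumptions (in a nucleus you could also persist the pending ids via the storage API instead):

```rust
use std::collections::HashMap;
use std::sync::Mutex;

// Map each in-flight request id to whatever context the callback will need
// (here, the requested URL). Wrapped in Option so the static is const-init.
static PENDING: Mutex<Option<HashMap<u64, String>>> = Mutex::new(None);

// Record a request id when http::request returns it.
fn remember(id: u64, context: &str) {
    PENDING
        .lock()
        .unwrap()
        .get_or_insert_with(HashMap::new)
        .insert(id, context.to_string());
}

// Consume the context for an id inside the callback; each id is used once.
fn take(id: u64) -> Option<String> {
    PENDING.lock().unwrap().as_mut()?.remove(&id)
}

fn main() {
    remember(1, "https://www.google.com");
    // ... later, inside on_response(id, ...):
    assert_eq!(take(1).as_deref(), Some("https://www.google.com"));
    assert_eq!(take(1), None); // already consumed
}
```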
Timer
Verisense has a powerful timer module. This module consists of:
- #[init]
- #[timer]
- set_timer!()
set_timer! and #[timer]
The set_timer! macro sets a new timer, which will be triggered after the given delay. Its signature is as follows:
set_timer!(Duration, timer_handler(params));
Let's look at how to use it.
#[post]
pub fn test_set_timer() {
    storage::put(b"delay", b"init").unwrap();
    let a = "abc".to_string();
    let b = 123;
    set_timer!(std::time::Duration::from_secs(4), test_delay(a, b));
}

#[timer]
pub fn test_delay(a: String, b: i32) {
    storage::put(b"delay", format!("delay_complete {} {}", a, b).as_bytes()).unwrap();
}
In the above example, in the post function test_set_timer, we use set_timer! to create a new timer that will be triggered after 4 seconds; the triggered function is test_delay. You can write the parameters of the handler directly in the set_timer! macro.
Let's inspect the timer handler further. The handler function test_delay must be decorated with #[timer], which assigns the decorated function the timer handler role. test_delay will be called 4 seconds after test_set_timer is called.
So set_timer! only schedules a one-shot delay; how do we implement intervals?
An interval executes a function periodically. We can implement it by combining set_timer! with a recursive call. For example:
#[post]
pub fn test_set_timer() {
    set_timer!(std::time::Duration::from_secs(2), run_interval());
}

#[timer]
pub fn run_interval() {
    // do something
    set_timer!(std::time::Duration::from_secs(1), run_interval());
}
In this example, we set a timer that fires after 2 seconds. When run_interval is triggered, you can run your business logic inside it; on the last line of the function, it sets a new timer that fires 1 second later and invokes the same run_interval() again, like tail recursion. The handler thus re-arms itself on every call, and the program keeps running at 1-second intervals indefinitely. This is how intervals are implemented.
#[init]
A Rust function decorated with this attribute macro is a special handler: it is triggered whenever a new version of the Wasm code is upgraded.
For example:
#[init]
pub fn timer_init() {
    storage::put(b"delay", b"init").unwrap();
}
timer_init() will be called automatically whenever a new version of the AVS Wasm code is upgraded on Verisense.
You can refer to a more complex example here.
Key Value Storage
Verisense provides a full set of APIs for key-value storage. Let's look at them.
APIs
put
Put a value into the database under a key.
pub fn put(key: impl AsRef<[u8]>, value: impl AsRef<[u8]>) -> CallResult<()>
Example:
use vrs_core_sdk::{get, post, storage};

#[post]
pub fn add_user(u: User) -> Result<(), String> {
    let key = b"user:001";
    let val: Vec<u8> = u.encode();
    storage::put(&key, &val).map_err(|e| e.to_string())?;
    Ok(())
}
Note: storage::put() can only be used in functions decorated with #[post].
del
Delete a value from the database by its key.
pub fn del(key: impl AsRef<[u8]>) -> CallResult<()>
Example:
use vrs_core_sdk::{get, post, storage};

#[post]
pub fn delete_user() -> Result<(), String> {
    let key = b"user:001";
    storage::del(&key).map_err(|e| e.to_string())?;
    Ok(())
}
Note: storage::del() can only be used in functions decorated with #[post].
get
Get a value from the database by its key.
pub fn get(key: impl AsRef<[u8]>) -> CallResult<Option<Vec<u8>>>
Example:
#[get]
pub fn get_user() -> Result<Option<User>, String> {
    let key = b"user:001";
    let r = storage::get(&key).map_err(|e| e.to_string())?;
    let instance = r.map(|d| User::decode(&mut &d[..]).unwrap());
    Ok(instance)
}
get_range
Get a batch of entries from the database, starting at start_key and moving in the given direction; the maximum limit is 1000.
pub fn get_range(
    start_key: impl AsRef<[u8]>,
    direction: Direction,
    limit: usize,
) -> CallResult<Vec<(Vec<u8>, Vec<u8>)>>
Example:
#[get]
pub fn get_user_range() -> Result<(), String> {
    let prefix_key = b"user:";
    let r = storage::get_range(&prefix_key, Direction::Forward, 100).map_err(|e| e.to_string())?;
    ...
}
delete_range
Removes the database entries in the range [start_key, end_key).
pub fn delete_range(start_key: impl AsRef<[u8]>, end_key: impl AsRef<[u8]>) -> CallResult<()>
Example:
#[post]
pub fn delete_user_range() -> Result<(), String> {
    let start_key = b"user:001";
    let end_key = b"user:100";
    let r = storage::delete_range(&start_key, &end_key).map_err(|e| e.to_string())?;
    ...
}
Note: storage::delete_range() can only be used in functions decorated with #[post].
search
Search a value with a key prefix and direction.
pub fn search(
    key_prefix: impl AsRef<[u8]>,
    direction: Direction,
) -> CallResult<Option<(Vec<u8>, Vec<u8>)>>
Example:
use vrs_core_sdk::storage::Direction;

pub fn search_blog_id() {
    let key = [&b"blog:"[..], &0u64.to_be_bytes()[..]].concat();
    let first_blog = storage::search(&key, Direction::Forward).unwrap();
    let key = [&b"blog:"[..], &u64::MAX.to_be_bytes()[..]].concat();
    let last_blog = storage::search(&key, Direction::Reverse).unwrap();
    assert!(first_blog.is_some());
    assert!(last_blog.is_some());
}
Demo: A Decentralized Forum
In this tutorial, I will guide you through the process of building a decentralized forum on Verisense.
To develop a decentralized application (dApp) on Verisense, you'll need to implement four main components:
- AVS: a compiled Wasm file created using the vrs-core-sdk. The front-end interacts with this component to write data.
- Surrogate: a proxy program responsible for syncing the latest data from the AVS and pushing it to MeiliSearch for efficient searching.
- MeiliSearch: a fast and powerful search engine that handles data queries from the front-end, enabling a seamless search experience.
- Front-end app: the user-facing interface that allows interaction with the forum.
The AVS will be deployed on Verisense, while the Surrogate and MeiliSearch instances will be deployed on the same server node where Verisense is running.
Before we begin, please ensure that you have all the required tools installed.
AVS
Create an empty project
First, create a new Rust project:
cargo new --lib veavs
Put the following into Cargo.toml:
[package]
name = "veavs"
version = "0.1.0"
edition = "2021"
[lib]
crate-type = ["cdylib"]
[dependencies]
vrs-core-sdk = { git = "https://github.com/verisense-network/verisense.git", package = "vrs-core-sdk" }
parity-scale-codec = { version = "3.6", features = ["derive"] }
vemodel = { path = "../vemodel" }
You can refer to the original file content here.
Define models
For a decentralized forum, we need to define the following models:
use parity_scale_codec::{Decode, Encode};
use serde::{Deserialize, Serialize};

#[derive(Debug, Decode, Encode, Deserialize, Serialize)]
pub enum Method {
    Create,
    Update,
    Delete,
}

#[derive(Debug, Decode, Encode, Deserialize, Serialize)]
pub struct VeSubspace {
    pub id: u64,
    pub title: String,
    pub slug: String,
    pub description: String,
    pub banner: String,
    pub status: i16,
    pub weight: i16,
    pub created_time: i64,
}

#[derive(Debug, Decode, Encode, Deserialize, Serialize)]
pub struct VeArticle {
    pub id: u64,
    pub title: String,
    pub content: String,
    pub author_id: u64,
    pub author_nickname: String,
    pub subspace_id: u64,
    pub ext_link: String,
    pub status: i16,
    pub weight: i16,
    pub created_time: i64,
    pub updated_time: i64,
}

#[derive(Debug, Decode, Encode, Deserialize, Serialize)]
pub struct VeComment {
    pub id: u64,
    pub content: String,
    pub author_id: u64,
    pub author_nickname: String,
    pub post_id: u64,
    pub status: i16,
    pub weight: i16,
    pub created_time: i64,
}
You can refer to the original file content here.
Implement business
We will implement CRUD actions on each model.
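The CRUD functions below call two helpers, build_key and get_max_id, whose definitions live in the linked repository. A plausible sketch of the key layout (the prefix value and helper names here are assumptions, mirroring the quick-start example): each key is a short prefix plus the big-endian id, and get_max_id can be implemented with a reverse storage::search from the maximum possible key, exactly as add_user did in the quick start. The big-endian encoding matters because it makes lexicographic key order match numeric id order:

```rust
// Hypothetical key helpers for illustration; the prefix "sb:" is assumed.
fn build_key(prefix: &[u8], id: u64) -> Vec<u8> {
    // Prefix followed by the id as 8 big-endian bytes.
    [prefix, &id.to_be_bytes()[..]].concat()
}

// Recover the id from a stored key: the inverse of build_key.
fn id_from_key(prefix: &[u8], key: &[u8]) -> u64 {
    u64::from_be_bytes(key[prefix.len()..].try_into().unwrap())
}

fn main() {
    let key = build_key(b"sb:", 7);
    assert_eq!(id_from_key(b"sb:", &key), 7);
    // Big-endian ids preserve numeric ordering under byte-wise comparison,
    // which is what makes range scans and max-id searches work.
    assert!(build_key(b"sb:", 2) < build_key(b"sb:", 10));
}
```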
// subspace
#[post]
pub fn add_subspace(mut sb: VeSubspace) -> Result<(), String> {
    let max_id = get_max_id(PREFIX_SUBSPACE_KEY);
    // update the id field from the avs
    sb.id = max_id;
    let key = build_key(PREFIX_SUBSPACE_KEY, max_id);
    storage::put(&key, sb.encode()).map_err(|e| e.to_string())?;
    add_to_common_key(Method::Create, key)?;
    Ok(())
}

#[post]
pub fn update_subspace(sb: VeSubspace) -> Result<(), String> {
    let id = sb.id;
    let key = build_key(PREFIX_SUBSPACE_KEY, id);
    storage::put(&key, sb.encode()).map_err(|e| e.to_string())?;
    add_to_common_key(Method::Update, key)?;
    Ok(())
}

#[post]
pub fn delete_subspace(id: u64) -> Result<(), String> {
    let key = build_key(PREFIX_SUBSPACE_KEY, id);
    storage::del(&key).map_err(|e| e.to_string())?;
    add_to_common_key(Method::Delete, key)?;
    Ok(())
}

#[get]
pub fn get_subspace(id: u64) -> Result<Option<VeSubspace>, String> {
    let key = build_key(PREFIX_SUBSPACE_KEY, id);
    let r = storage::get(&key).map_err(|e| e.to_string())?;
    let instance = r.map(|d| VeSubspace::decode(&mut &d[..]).unwrap());
    Ok(instance)
}
// article
#[post]
pub fn add_article(mut sb: VeArticle) -> Result<(), String> {
    let max_id = get_max_id(PREFIX_ARTICLE_KEY);
    // update the id field from the avs
    sb.id = max_id;
    let key = build_key(PREFIX_ARTICLE_KEY, max_id);
    storage::put(&key, sb.encode()).map_err(|e| e.to_string())?;
    add_to_common_key(Method::Create, key)?;
    Ok(())
}

#[post]
pub fn update_article(sb: VeArticle) -> Result<(), String> {
    let id = sb.id;
    let key = build_key(PREFIX_ARTICLE_KEY, id);
    storage::put(&key, sb.encode()).map_err(|e| e.to_string())?;
    add_to_common_key(Method::Update, key)?;
    Ok(())
}

#[post]
pub fn delete_article(id: u64) -> Result<(), String> {
    let key = build_key(PREFIX_ARTICLE_KEY, id);
    storage::del(&key).map_err(|e| e.to_string())?;
    add_to_common_key(Method::Delete, key)?;
    Ok(())
}

#[get]
pub fn get_article(id: u64) -> Result<Option<VeArticle>, String> {
    let key = build_key(PREFIX_ARTICLE_KEY, id);
    let r = storage::get(&key).map_err(|e| e.to_string())?;
    let instance = r.map(|d| VeArticle::decode(&mut &d[..]).unwrap());
    Ok(instance)
}
// comment
#[post]
pub fn add_comment(mut sb: VeComment) -> Result<(), String> {
    let max_id = get_max_id(PREFIX_COMMENT_KEY);
    // update the id field from the avs
    sb.id = max_id;
    let key = build_key(PREFIX_COMMENT_KEY, max_id);
    storage::put(&key, sb.encode()).map_err(|e| e.to_string())?;
    add_to_common_key(Method::Create, key)?;
    Ok(())
}

#[post]
pub fn update_comment(sb: VeComment) -> Result<(), String> {
    let id = sb.id;
    let key = build_key(PREFIX_COMMENT_KEY, id);
    storage::put(&key, sb.encode()).map_err(|e| e.to_string())?;
    add_to_common_key(Method::Update, key)?;
    Ok(())
}

#[post]
pub fn delete_comment(id: u64) -> Result<(), String> {
    let key = build_key(PREFIX_COMMENT_KEY, id);
    storage::del(&key).map_err(|e| e.to_string())?;
    add_to_common_key(Method::Delete, key)?;
    Ok(())
}

#[get]
pub fn get_comment(id: u64) -> Result<Option<VeComment>, String> {
    let key = build_key(PREFIX_COMMENT_KEY, id);
    let r = storage::get(&key).map_err(|e| e.to_string())?;
    let instance = r.map(|d| VeComment::decode(&mut &d[..]).unwrap());
    Ok(instance)
}
You can find the full code here.
Compile to wasm
In the root of this project, run:
cargo build --release --target wasm32-unknown-unknown
You can find the compiled wasm file located at target/wasm32-unknown-unknown/release/veavs.wasm
.
Deploy it to Verisense
Register a new AVS protocol on Verisense.
vrx nucleus create veavs --capacity 1
This command will return the registered AVS (nucleus) ID like:
Nucleus created.
id: 5FsXfPrUDqq6abYccExCTUxyzjYaaYTr5utLx2wwdBv1m8R8
name: veavs
capacity: 1
Deploy the generated wasm file to Verisense using the generated Nucleus ID.
vrx deploy --name veavs --wasm-path ../target/wasm32-unknown-unknown/release/veavs.wasm --nucleus-id 5FsXfPrUDqq6abYccExCTUxyzjYaaYTr5utLx2wwdBv1m8R8 --version 1
Wait for the process to complete successfully.
At this point, we have successfully deployed a new AVS onto Verisense.
Surrogate
The AVS functions as a raw database, but to make use of this data, we need to create a proxy that will index the data into the MeiliSearch engine.
You can check out the surrogate implementation here.
The basic concept behind the surrogate is to retrieve data from the AVS and inject it into MeiliSearch for efficient indexing and searching.
MeiliSearch
MeiliSearch provides a standardized approach for handling data queries.
You can find the API documentation here.
Front-end
You can find the reference code for the front-end here.
In the front-end application, the logic involves writing data to the AVS and querying data from MeiliSearch to display the results.
What It Looks Like
For a preview of the app in action, check out this video.