What

In this section, we will outline the challenges faced by real-world applications and their practical use cases.

Next, we will introduce an innovative concept: the light app.

Finally, we will present Verisense, exploring how it addresses these challenges effectively.

The Problems

Modern applications often face significant challenges that hinder their efficiency, scalability, and affordability. These challenges can limit their ability to meet the demands of real-world use cases. Below, we outline three key problems that need to be addressed:

Expensive Computation Cost

Applications today often rely on computationally intensive processes that require high-performance hardware or cloud infrastructure. This dependency drives up operational costs, particularly for decentralized applications or systems processing large amounts of data in real time. Inefficient algorithms and unoptimized system designs further exacerbate these expenses, making it difficult for smaller organizations or projects to compete.

Expensive Storage Cost

As data continues to grow at an unprecedented rate, the cost of storage becomes a significant burden. Many applications need to store large datasets for prolonged periods, often in distributed systems or the cloud. While storage technologies have evolved, the associated costs for maintaining accessibility, redundancy, and security can quickly escalate, especially for systems that prioritize decentralization or real-time data replication.

EVM, SVM, MoveVM

The rise of blockchain and decentralized applications has introduced various execution environments like the Ethereum Virtual Machine (EVM), Solana Virtual Machine (SVM), and Move Virtual Machine (MoveVM). While these environments enable decentralized computation and programmability, they also present challenges. For example, the EVM often suffers from high gas fees, limited scalability, and inefficiencies in execution. SVM and MoveVM, while innovative, still face hurdles in achieving widespread adoption and addressing performance bottlenecks, compatibility issues, and developer accessibility. These challenges highlight the need for more efficient and developer-friendly solutions.

Light App

The Light App is a novel application paradigm that lies between the smart contract app and the application chain. It seeks to blend the best aspects of both models, offering a flexible, efficient, and scalable solution for real-world use cases.

Advantages of Smart Contracts

Smart contracts have several key advantages, including:

  • Simplicity: Smart contracts are straightforward to deploy on existing platforms, requiring minimal setup compared to dedicated chains.
  • Interoperability: They operate seamlessly within the ecosystems they are deployed in, leveraging the shared infrastructure of the platform.
  • Security: Built-in security features, such as sandboxing and deterministic execution, ensure robust operation.

Advantages of Application Chains

On the other hand, application chains offer unique benefits:

  • Customizability: They allow complete control over the consensus mechanism, economic model, and governance.
  • Scalability: With dedicated resources, application chains can achieve higher performance and throughput.
  • Isolation: They provide a self-contained environment, minimizing dependencies on external systems.

Advantages of Light Apps

The Light App combines the strengths of both smart contracts and application chains:

  • Efficient Resource Utilization: Light apps inherit the simplicity of smart contracts while avoiding the overhead of maintaining an entire blockchain.
  • Customizability Without Complexity: They allow more customization than a typical smart contract, enabling tailored logic and optimizations.
  • Enhanced Performance: Running in a dedicated WASM-based container ensures lightweight and efficient execution.

Technical Foundation

A light app is fundamentally a piece of WASM code executed in a separate, secure container within the Verisense ecosystem. This design ensures that each light app operates in isolation, with the flexibility to define its behavior while leveraging the shared infrastructure of Verisense. By bridging the gap between smart contracts and application chains, light apps offer an optimal balance of performance, scalability, and ease of use.
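
As a concrete (and deliberately minimal) illustration, the sketch below shows what a light app might look like as a Rust library compiled to wasm32. The exported function name and calling convention are assumptions made for this sketch, not the actual Verisense host interface.

```rust
// Illustrative sketch only: a minimal light app written as a Rust library
// targeting wasm32. The exported symbol name and calling convention are
// hypothetical; the real Verisense host interface may differ.

/// Pure application logic: compute a fee for a transfer amount.
/// The 0.3% rule with a floor of 1 unit is an arbitrary example.
fn compute_fee(amount: u64) -> u64 {
    (amount * 3 / 1000).max(1)
}

/// Entry point the host container could call. Because the module is
/// sandboxed WASM, it has no ambient access to the host: inputs arrive
/// through arguments and results leave through return values.
#[no_mangle]
pub extern "C" fn handle_transfer(amount: u64) -> u64 {
    compute_fee(amount)
}
```

Because each light app is just such a WASM module, the host can meter, sandbox, and terminate it independently of the others, which is what makes the per-app container isolation described above practical.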

What is Verisense

The Verisense Network is an innovative startup dedicated to addressing the challenges of smart contract blockchains and application blockchains. By identifying the limitations of these traditional approaches, Verisense aims to provide a more efficient, scalable, and flexible solution for modern application needs.

At the core of the Verisense Network is its groundbreaking Light App technology. This concept offers developers the ability to deploy their light apps seamlessly onto the Verisense blockchain. Unlike conventional smart contracts or application chains, light apps combine the simplicity of smart contracts with the performance and customization capabilities of application-specific chains.

Key Features of Verisense

  1. Scalability and Efficiency
    Verisense is designed to support the deployment and execution of hundreds of light apps simultaneously. Each light app operates independently within its own secure container and task environment, ensuring that:

    • Resource Isolation: Light apps do not interfere with each other, enhancing reliability and stability.
    • Optimized Performance: Independent execution allows for efficient allocation of computational resources.
  2. Developer-Centric Design
    Verisense offers an intuitive and developer-friendly platform, enabling:

    • Quick and seamless deployment of light apps.
    • Support for WASM (WebAssembly), a high-performance standard widely embraced by developers.
    • Flexible customization options to meet diverse application requirements.
  3. Interoperability and Flexibility
    By bridging the gap between smart contract and application blockchains, Verisense creates a versatile ecosystem that supports:

    • Cross-app interactions within the network.
    • Modular and scalable application development without the need to maintain a full blockchain for every app.

"The Matrix" of Blockchain Technology

Verisense functions like "The Matrix"—a dynamic, interconnected system that adapts to the needs of its users. Just as the Matrix provides an expansive digital environment where entities coexist and interact, Verisense offers a shared blockchain infrastructure where light apps can thrive. This analogy highlights Verisense's ability to handle complexity while maintaining an underlying simplicity that empowers developers to build without constraints.

The Vision of Verisense

Verisense envisions a future where decentralized applications are no longer constrained by the inefficiencies and limitations of current blockchain systems. With its innovative Light App technology, Verisense provides a foundation for scalable, cost-effective, and user-focused applications, setting a new standard for blockchain-powered ecosystems.

How Verisense Solves Problems

Verisense tackles the challenges faced by traditional blockchain systems through an innovative two-level hierarchical structure. This design ensures scalability, efficiency, and security while optimizing resource utilization for real-world applications.

The Two-Level Hierarchy

  1. First Level: Traditional PoS Blockchain
    At the top of the hierarchy, Verisense employs a Proof-of-Stake (PoS) blockchain. This layer serves as the backbone of the system, providing:

    • Global Security: Ensuring the integrity and immutability of the network.
    • Coordination: Managing the interactions and operations of the second-level networks.
    • Consensus and Governance: Acting as the decision-making hub, maintaining a trustless environment for all participants.
  2. Second Level: Nucleus Networks
    Below the PoS blockchain, Verisense introduces nucleus networks—specialized, isolated environments where each runs a single light app. These nucleus networks offer:

    • Isolation: Each light app operates independently, ensuring that issues in one do not affect others.
    • Efficient Execution: By delegating computation and storage to these dedicated environments, Verisense significantly reduces the burden on the main blockchain.
    • Customizability: Developers can optimize their nucleus networks to meet the specific needs of their applications.

Hierarchical Workflow

  • The top-level PoS blockchain oversees and controls the operations of the nucleus networks. It handles key responsibilities such as verifying light app deployments, logging state transitions, and ensuring consistency across the ecosystem (a sketch of this division of labor follows this list).
  • The actual execution of light app code takes place within the second-level nucleus networks. This approach decentralizes resource-intensive tasks, improving scalability and reducing operational costs.
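
As a rough sketch of this division of labor, the code below models the top level with entirely hypothetical types and names (they are not Verisense's actual data structures): the PoS layer only records deployments and state-root transitions, while the execution that produces those state roots happens inside the nucleus networks.

```rust
// Hypothetical sketch of the two-level hierarchy; not Verisense's actual
// data model. The PoS layer keeps only lightweight commitments, while the
// nucleus networks execute the light app code off the main chain.

use std::collections::HashMap;

type NucleusId = u64;
type StateRoot = [u8; 32];

/// What the top-level PoS chain stores per light app: a code hash and the
/// latest committed state root, not the app's data or execution trace.
struct NucleusRecord {
    wasm_code_hash: [u8; 32],
    latest_state_root: StateRoot,
}

/// Minimal model of the coordination role of the first level.
struct PosChain {
    registry: HashMap<NucleusId, NucleusRecord>,
}

impl PosChain {
    /// Record a light app deployment on the top-level chain.
    fn register_nucleus(&mut self, id: NucleusId, wasm_code_hash: [u8; 32]) {
        self.registry.insert(
            id,
            NucleusRecord { wasm_code_hash, latest_state_root: [0u8; 32] },
        );
    }

    /// Log a state transition reported by a nucleus network. The execution
    /// that produced `new_root` happened off the main chain, inside the nucleus.
    fn log_state_transition(&mut self, id: NucleusId, new_root: StateRoot) -> bool {
        match self.registry.get_mut(&id) {
            Some(record) => {
                record.latest_state_root = new_root;
                true
            }
            None => false, // unknown nucleus: reject
        }
    }
}
```

In the real system, logging a state transition would of course involve verifying signatures or proofs from the nucleus network's validators rather than blindly recording the reported root.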

Solving Key Problems

The hierarchical design of Verisense directly addresses critical issues in blockchain systems:

  1. Computation Burden
    By offloading complex computational tasks to the nucleus networks, Verisense avoids overloading the main blockchain. This approach ensures smoother operations, even under heavy workloads, while providing high performance for individual apps.

  2. Storage Burden
    Verisense reduces storage costs by delegating data-heavy tasks to off-chain environments within nucleus networks. These networks can utilize flexible storage strategies without compromising the integrity of the broader system.

  3. Security Assurance
    Despite offloading tasks, the main PoS blockchain guarantees security and trust across the network. This top-level oversight ensures that the nucleus networks operate reliably and without malicious interference.

The Verisense Advantage

This layered architecture combines the best of centralized and decentralized approaches. The result is a system that balances scalability, performance, and security, making Verisense uniquely equipped to support a new generation of blockchain applications.

Why

In this section, we will explore the key design decisions behind Verisense and the rationale for its innovative approach.

First, we will discuss why Verisense avoids adopting the traditional forms of smart contract chains and application chains, highlighting their limitations and how they fall short of meeting modern demands.

Next, we will introduce Monadring, a new protocol designed to enhance efficiency and functionality within the Verisense network. We will also explain the role of Fully Homomorphic Encryption (FHE) in ensuring robust security and privacy for computations, setting Verisense apart from conventional systems.

Finally, we will examine the adoption of the restaking protocol, explaining how it strengthens the network's security, incentivizes participation, and aligns with Verisense's overarching goals.

Why Not a Smart Contract Chain

A smart contract chain centralizes computation and storage on the same blockchain, which fundamentally limits its scalability and practicality for real-world applications. This model faces significant challenges that make it unsuitable for supporting internet-scale decentralized applications.

The Problem with Centralized Computation and Storage

In a smart contract chain, every computation and data storage operation occurs directly on-chain. While this ensures security and transparency, it comes at a significant cost:

  • High Computation Costs: The on-chain execution of smart contracts requires every node in the network to process the same computations, resulting in inefficiency and high resource consumption.
  • Expensive Storage Costs: On-chain storage is inherently limited and costly due to the need for replication across all nodes for security and redundancy.

The Limitations of Layer 2 (L2) Solutions

While Layer 2 scaling solutions attempt to alleviate these issues by offloading some operations from the main chain, they still inherit the fundamental architecture of smart contract chains. This leads to:

  • Persistent Cost Issues: L2 solutions reduce, but do not eliminate, the high computation and storage costs.
  • Scalability Constraints: They struggle to meet the demands of internet-scale applications due to their reliance on the underlying L1 infrastructure.

L2 solutions may improve throughput but fall short of addressing the needs of real-world, high-demand use cases where performance, cost-efficiency, and scalability are paramount.

Modular Blockchains: An Incomplete Answer

Modular blockchain designs—where layers are separated into distinct components like execution, settlement, and data availability—offer improvements to L1/L2 capabilities. However, they cannot fully resolve the challenges of:

  • Application-Specific Requirements: Modular chains still lack the flexibility to tailor their architecture for diverse, real-world application needs.
  • Scalable Decentralization: They often require complex interoperability and additional layers of abstraction, adding operational overhead.

Why Smart Contract Chains Fall Short

The inherent structure of smart contract chains, whether augmented by L2 solutions or modular enhancements, is fundamentally unsuited for real-world applications that demand:

  • Cost-Effective Scalability: Efficient handling of large-scale operations without prohibitive costs.
  • Customizability: The ability to adapt the architecture to application-specific requirements.
  • True Decentralization at Scale: Maintaining security and decentralization while supporting massive user bases.

Verisense’s Approach

Recognizing these limitations, Verisense takes a different path by introducing the Light App model. This design moves beyond the constraints of smart contract chains, leveraging a two-level hierarchy to separate computation and storage into independent environments. By doing so, Verisense ensures a scalable, cost-effective, and secure infrastructure capable of supporting the next generation of decentralized applications.

Why Not Appchain

In contrast to the smart contract chain, where all applications run on a shared blockchain, the appchain model dedicates a separate blockchain to each decentralized application (dApp). This concept offers isolation and customizability for applications but comes with significant challenges that make it impractical for most Web3 projects.

The Concept of Appchains

An appchain provides a self-contained blockchain environment for a single application. Each appchain operates independently, giving developers full control over the blockchain's governance, consensus mechanism, and economic model. While this independence can be beneficial, it also introduces substantial complexities.

The Challenges of Appchains

  1. High Maintenance Costs
    Bootstrapping and maintaining an appchain is a resource-intensive process. Establishing a standalone blockchain requires:

    • Technical Expertise: Developing and launching a blockchain infrastructure demands significant technical knowledge.
    • Infrastructure Management: Continuous updates, security monitoring, and optimizations are necessary to keep the chain operational.
  2. Validator and Staking Requirements
    A blockchain’s success depends on attracting a robust network of validators and sufficient staking to secure the chain. However:

    • Resource Barriers: Most dApps lack the resources to incentivize and sustain a validator network.
    • Difficulty in Bootstrapping: Convincing validators to join a new chain, especially one with uncertain prospects, is a significant challenge.
    • Sustainability Issues: Even if a chain successfully launches, consistently rewarding validators and maintaining network security over time can be prohibitively expensive.
  3. Economic Viability
    Running an appchain requires a self-sustaining economic model. Many projects fail to generate enough value or transaction volume to justify the costs of maintaining a dedicated blockchain. This leads to reliance on external funding, which is often unsustainable in the long term.

Example: Axie Infinity

Axie Infinity is a notable example of an appchain. It successfully launched its own blockchain to support its ecosystem. However, the project also highlights the challenges of the appchain model, including the need for significant upfront investment, a dedicated validator network, and ongoing rewards to maintain network security. While Axie Infinity has demonstrated the potential of appchains, its success is not easily replicable for most Web3 projects.

Why Appchains Are Not the Ideal Solution

For the majority of Web3 applications, appchains are not a viable solution due to their:

  • High Entry Barriers: The resources needed to launch and maintain an appchain are beyond the reach of most projects.
  • Sustainability Challenges: Building a long-term validator incentive system is difficult and risky.
  • Lack of Scalability for Ecosystems: Managing many isolated chains introduces fragmentation and interoperability issues.

Verisense’s Alternative

Recognizing these limitations, Verisense introduces the Light App model. By enabling applications to run independently within dedicated environments (nucleus networks) on a shared blockchain, Verisense offers:

  • Lower Costs: Applications can operate without the overhead of maintaining a standalone blockchain.
  • Enhanced Security: The shared PoS blockchain secures all light apps without requiring separate validator networks.
  • Scalability: Applications can focus on growth and functionality without the complexities of managing their own chain.

This approach bridges the gap between the benefits of appchains and the practicality of shared blockchain systems, providing a more accessible and efficient solution for Web3 projects.

Why Monadring

While the Verisense network model offers many advantages, if it continues to rely on traditional Byzantine Fault Tolerant (BFT)-like consensus protocols, it faces several key challenges that can undermine its effectiveness for decentralized applications at scale. These challenges are:

Challenges with Traditional Consensus Protocols

  1. Difficulty in Reducing Network Overload
    Traditional BFT protocols, which are commonly used in decentralized systems, often struggle with network congestion as the number of participants increases. As more validators and nodes join the network, the overhead for reaching consensus grows, making it harder to maintain high throughput and low latency. This increases the cost of computation and limits the scalability of applications running on the network.

  2. Difficulty in Minimizing Network Size While Maintaining Security
    In conventional consensus protocols, ensuring security requires increasing the number of nodes in the network. However, as the network grows, so does the complexity of consensus operations and the potential for communication bottlenecks. Balancing network size with robust security becomes increasingly difficult, particularly when striving to maintain a small and efficient network while still ensuring decentralized trust and resistance to attacks.

Introducing Monadring

To address these problems, the Verisense team introduced Monadring, a new consensus protocol specifically designed to overcome the limitations of traditional BFT protocols in the context of the nucleus networks.

Monadring is tailored to provide both high security and high throughput even in smaller networks, solving the two major challenges outlined above.

Key Features of Monadring

  1. High Security in Small Networks
    Unlike traditional BFT protocols, which require large networks to achieve security, Monadring is optimized to provide strong security guarantees even with a limited number of nodes. This is accomplished through innovative mechanisms that reduce the need for excessive node participation while still ensuring that the network can resist malicious attacks, maintain data integrity, and prevent double-spending or fraud.

    By focusing on cryptographic principles and a streamlined consensus process, Monadring enables the nucleus networks to remain secure and reliable, even as they operate with fewer resources than traditional systems.

  2. High Throughput in Small Networks
    Monadring allows for high throughput by minimizing communication overhead and optimizing consensus protocols for efficiency. The protocol can handle a large number of transactions per second (TPS) even with a small set of validators or nodes, making it highly suitable for applications that require fast, real-time processing without the latency and bottlenecks typically seen in larger, traditional networks.

    This efficiency is critical for decentralized applications that need to scale effectively while minimizing infrastructure costs.

Advantages of Monadring for Verisense

The adoption of Monadring within the Verisense ecosystem brings several significant advantages:

  • Lower Operational Costs: With smaller, more efficient networks, Verisense can reduce the computational and storage costs of maintaining large validator sets.
  • Improved Network Flexibility: Monadring allows Verisense’s nucleus networks to support diverse application needs without the traditional overhead associated with larger blockchain networks.
  • Enhanced Decentralization: The protocol enables a high level of decentralization with fewer validators, meaning that Verisense can maintain trust and security without sacrificing performance.

In summary, Monadring is a tailored consensus solution that addresses the unique needs of nucleus networks within Verisense. By providing high security and throughput in smaller, more efficient networks, Monadring empowers Verisense to overcome the limitations of traditional consensus models and deliver a more scalable and cost-effective blockchain platform for real-world applications.

Why FHE

Fully Homomorphic Encryption (FHE) is a cutting-edge cryptographic technique that extends the concept of traditional Homomorphic Encryption (HE) by enabling arbitrary computations on encrypted data. While standard HE only supports a limited set of operations (such as addition and multiplication), FHE allows for the execution of complex algorithms directly on encrypted data without ever decrypting it. This feature makes FHE especially valuable in scenarios where data privacy and security are paramount.
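
To state this property precisely (in standard textbook notation, not notation specific to Verisense): an FHE scheme \((\mathrm{KeyGen},\mathrm{Enc},\mathrm{Dec},\mathrm{Eval})\) guarantees that for any supported function \(f\), \[ \mathrm{Dec}\big(sk,\ \mathrm{Eval}\big(pk,\ f,\ \mathrm{Enc}(pk,m_1),\ldots,\mathrm{Enc}(pk,m_k)\big)\big) = f(m_1,\ldots,m_k), \] where a standard HE scheme restricts \(f\) to a limited class (for example, low-degree combinations of additions and multiplications), while FHE allows \(f\) to be an arbitrary circuit.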

The Need for FHE in Verisense

While the Monadring consensus protocol provides high security and throughput in small networks, it is not sufficient by itself to guarantee the level of privacy and security required for real-world applications. The core challenge is ensuring that even with a small network of participants, sensitive information can be kept private and the integrity of the system can be maintained without relying on a large set of validators or exposing data to any single entity.

To address this, Verisense employs FHE as a critical component of its security architecture. FHE is used in conjunction with Monadring to form a more robust and privacy-preserving solution that enhances the overall security model.

Key Benefits of FHE in Verisense

  1. Enhanced Security through Data Privacy
    FHE ensures that all operations on the network, including consensus mechanisms and data validations, can be performed on encrypted data without revealing any sensitive information. This means that even if an attacker gains access to the network, they cannot extract meaningful data from the encrypted transactions.

    This level of privacy is crucial for applications in industries such as finance, healthcare, and other sectors that handle sensitive personal or business data. By using FHE, Verisense can guarantee that user information and transaction details remain confidential, even during the validation process.

  2. Prevention of Second-Order Advantages
    FHE also plays a key role in preventing second-order advantages. In a traditional consensus model, information about previous participants can be exploited, leading to unfair advantages or manipulation. However, FHE ensures that the information about the participants (such as their votes, stakes, or actions) remains hidden throughout the process, preventing any participant from gaining an unfair advantage based on knowledge of others' activities.

    This ability to “blind” the data ensures fairness and reduces the risk of collusion or manipulation by any single participant or group of participants, even in a small, decentralized network.

  3. Core Blind Voting Mechanism
    At the heart of Verisense's security strategy is the use of blind voting, which is made possible by FHE. In traditional systems, voting or decision-making processes may expose participants' choices to the entire network, creating potential for coercion or tampering. FHE allows for secure, blind voting, where the actual votes or decisions are kept hidden from all parties; only the final result is revealed. This ensures that participants can make choices without fear of retribution or influence from other parties. (A simple illustration follows this list.)

  4. Scalability and Efficiency
    While FHE introduces additional computational complexity, its integration into the Verisense ecosystem ensures that data privacy and security are maintained, even as the network scales. With the ability to perform computations directly on encrypted data, Verisense can scale to support a large number of users and applications without compromising the confidentiality of user data.
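
As a simple illustration of the blind voting mentioned above (a generic additively homomorphic tally, not the exact Verisense protocol): each participant \(i\) encrypts a vote \(v_i \in \{0,1\}\), the ciphertexts are aggregated homomorphically, and only the aggregate is ever decrypted, \[ c_i = \mathrm{Enc}(pk, v_i), \qquad c_{\text{tally}} = c_1 \oplus c_2 \oplus \cdots \oplus c_n, \qquad \mathrm{Dec}(sk, c_{\text{tally}}) = \sum_{i=1}^{n} v_i, \] where \(\oplus\) denotes homomorphic addition of ciphertexts. No individual \(v_i\) is revealed at any point, and decrypting \(c_{\text{tally}}\) can itself be gated behind a threshold key-sharing scheme such as the Shamir-based approach described later in this document.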

Conclusion: The Role of FHE in Verisense

FHE is an essential component of Verisense’s security infrastructure, complementing the Monadring consensus protocol by providing an additional layer of privacy and ensuring that sensitive data remains confidential throughout the entire network. By leveraging FHE for secure computations, blind voting, and protection against second-order advantages, Verisense offers a highly secure and scalable solution for decentralized applications, making it suitable for industries that demand the highest standards of data privacy and integrity.

In summary, FHE ensures that Verisense can support real-world use cases in a secure, private, and efficient manner, providing a future-proof solution for decentralized applications.

Why Restaking

The nucleus networks within Verisense can operate autonomously within the Verisense matrix, but there are two critical challenges to ensuring the network's security and scalability:

  1. Securely Bootstrapping the Verisense Network
    To launch and maintain a secure network, Verisense needs a strong foundation of validators that can reliably protect the system. Bootstrapping a new Proof of Stake (PoS) network from scratch is an inherently difficult task. Without an established validator base or a trustworthy way to incentivize participation, a PoS network risks vulnerabilities such as low decentralization, potential attacks, or weak security guarantees.

  2. Maximizing the Number of Nodes
    The security and decentralization of a blockchain network heavily depend on the number of nodes (validators) actively participating in consensus. A larger number of nodes ensures that no single actor or group of actors can control or manipulate the network. However, attracting a sufficient number of validators to a new network is a significant challenge, particularly in the early stages when the network may lack sufficient trust or incentives.

The Role of Restaking in Verisense

To address these challenges, Verisense introduces restaking as a key strategy for securing its network. Restaking allows Verisense to leverage existing PoS networks to bootstrap its own security and validator network efficiently. By integrating with established restaking protocols, Verisense can tap into an already-validated pool of validators, providing immediate security and decentralization without having to wait for its own validators to stake and secure the network from scratch.

How Restaking Solves the Challenges

  1. Secure Bootstrapping with Restaking
    With restaking, Verisense can rely on trusted protocols to provide the initial security for its network. Rather than starting from zero, Verisense can use existing restaking protocols to secure the network through validators who are already participating in other PoS networks. These protocols enable validators to "restake" their existing stakes across multiple blockchains, effectively providing Verisense with the security it needs while minimizing the risks of a fresh, unproven network.

    Restaking also enables Verisense to establish a more diverse and robust validator set right from the start, allowing for faster and more secure growth compared to building a network of validators from scratch.

  2. Maximizing Validator Participation
    A crucial aspect of maintaining security and decentralization is ensuring that as many validators as possible are incentivized to participate in the network. By supporting restaking protocols like EigenLayer, Symbiotic, Karak, and others, Verisense can attract validators who are already staked in other blockchain ecosystems. These protocols allow validators to earn rewards not just from their primary network but from multiple networks simultaneously, providing them with additional incentives to participate in the Verisense ecosystem.

    This maximizes the number of nodes contributing to Verisense's security without requiring validators to fully commit to a single blockchain. As a result, Verisense can scale rapidly without the long wait times associated with traditional PoS network growth.

Benefits of Restaking for Verisense

  • Faster Network Growth: By leveraging existing validator networks through restaking, Verisense can grow its validator base more quickly, ensuring robust security from the outset.
  • Enhanced Decentralization: Restaking enables a diverse group of validators from different ecosystems to participate in securing Verisense, further enhancing decentralization.
  • Increased Incentives for Validators: The ability to restake across multiple networks allows validators to earn rewards from multiple sources, increasing their motivation to participate in Verisense.
  • Secure and Efficient Bootstrapping: Verisense doesn’t have to rely on a slow, uncertain process to attract validators. By utilizing established restaking protocols, the network can bootstrap its security and functionality more efficiently.

Supported Restaking Protocols

Verisense integrates with a range of restaking protocols to provide a broad and flexible security model. These include:

  • EigenLayer: A protocol that allows Ethereum stakers to restake their ETH to secure additional networks, such as Verisense.
  • Symbiotic: A restaking solution that enables validators from various PoS networks to secure other chains and participate in the ecosystem’s security.
  • Karak: Another promising restaking protocol that enables validators to extend their stake across different networks, providing additional liquidity and security.

By using these protocols, Verisense can access a large pool of experienced validators and rapidly scale the security and decentralization of its network.

Conclusion: The Strategic Advantage of Restaking

Restaking plays a pivotal role in Verisense’s strategy for securing its network and scaling its validator base quickly. By leveraging established restaking protocols, Verisense is able to mitigate the challenges of bootstrapping a new PoS network and maximize validator participation from day one. This not only ensures a secure and decentralized network but also enables Verisense to scale efficiently and rapidly, positioning itself for success in the highly competitive Web3 ecosystem.

Overview of Verisense

While leading restaking infrastructures like EigenLayer, Karak, and Symbiotic compete fiercely on TVL (the supply side of security), less attention has been given to AVS (Actively Validated Services) solutions, the demand side of security. It is demand that drives supply, not the other way around. EigenLayer recognizes this critical aspect but struggles to address it due to the absence of essential components in its underlying design, components necessary to form a rapid and secure consensus and a fair and functional slashing mechanism. Verisense Network seizes this market opportunity by proposing an innovative mechanism to fill the gap, addressing the AVS market's needs effectively.

What is Verisense Network?

Verisense is the world’s first FHE-enabled (Fully Homomorphic Encryption) VaaS (Validation-as-a-Service) network designed to plug and play with any restaking layer. Our goal is to serve AVSs of every variety (chain-natured, non-chain-natured, and hybrid) and to onboard a diverse set of paying AVS clients, a sector that is currently underserved yet ultimately critical to winning in restaking. Here’s what this entails:

Serving Diverse AVS Clients:

  • AVS clients come in many formats, broadly categorized into i) chain-natured (e.g. sequencers, side chains, oracles), ii) non-chain-natured (e.g. keeper networks, trusted execution environments, threshold cryptography schemes, new virtual machines, decentralized Web2-style social apps, etc.), and iii) hybrid (e.g. a bridge can be implemented in either a chain or non-chain format).
  • Chain-based AVS clients are easier to onboard but less motivated to pay for decentralized security.
  • The true paying demand lies with non-chain-based AVS clients as they don’t have an existing ready-to-use AVS-based infrastructure solution, a sector that leading restaking infrastructures all struggle to serve now.

Standardizing AVS Demand:

  • The lack of standardization and productization in the AVS demand side hinders the growth of an AVS marketplace.
  • Verisense aims to offer a standard onboarding process, an easy plug-and-play interface, flexible and reasonable pricing, and value-added functions such as on-duty-operator dashboards and cluster analytics tools.

Innovating the Consensus Mechanism:

  • Forming consensus inexpensively and securely is a challenge. Faster consensus often means less security, and vice versa.
  • Verisense proposes an innovative mechanism that decouples runtimes from consensus, which is crucial both for serving non-chain-based AVS clients and for implementing the reward/slashing mechanism.

Elevating Security and Resilience with FHE Enablement:

  • FHE technology allows computations and operations to be performed on encrypted data as if it were plaintext, elevating blockchain security to a new level by enabling flash auctions of block production and facilitating private transactions.
  • FHE enablement at the underlying protocol level (node level) turns a perfect-information game (which a blockchain is, given its data transparency) into an imperfect-information game, making it possible to reach and maintain a Nash equilibrium. This is particularly useful for preventing malicious behavior, which often arises from AVS nodes.
  • Moreover, for a fair and resilient slashing mechanism, Verisense builds role-based game models into the fundamental node-level design. The three-party game includes the Restaker (TVL supplier), the Operator (Verisense AVS node), and the Resolver (slasher), and is structured to prevent the “second-mover advantage.” FHE enablement allows Verisense to sidestep many problems and build a functional slashing mechanism while maintaining fairness and resilience. Key issues addressed include:
    • The convergence of bribe values that results when an Operator's bribery strategy toward Restakers is exposed.
    • The Resolver's need to keep its veto hidden during slashing votes.
    • Front-running problems related to MEV.

Considering recent developments in restaking, EigenLayer, as the pioneer in implementing and productizing the restaking concept, broke new ground and established LRT/restaking as a widely accepted method of enhancing the trust of off-chain components. Verisense's innovation with FHE enablement and VaaS technology marks another significant step forward, essential for realizing the true vision of decentralized security services.

Architecture

The overall architecture of Verisense is illustrated in the accompanying architecture diagram.

The following subsections examine its main components in more detail.

Validation-as-a-Service (VaaS)

Moreover, the Verisense chain, serving as a VaaS module (another important component of the Verisense Network), works as an aggregator of the restaking components (Verisense layers) of the various participating L1s, offering natural atomic-level interoperability and superfluid value transfer via the native VRSN token. This is particularly appealing to dApps deployed on multiple chains, such as DeFi protocols (e.g. the DEX GMX on Arbitrum and Avalanche, or Uniswap on Polygon and Avalanche), that are looking for a multichain-based AVS.

VaaS is a new concept corresponding to the popular concept of PaaS (Platform as a service) or SaaS (Software as a service).

This opens an even broader space for imagining how the Verisense Network can raise the bar for security across the crypto industry and disrupt the current landscape.

AVS

The concept of DApp development can refer to both smart contract development and application chain development (a dedicated blockchain system built for a specific large-scale application). Over the past decade, smart contract development has brought significant progress to blockchain applications, but people have also gradually realized the many limitations of the smart contract model: the cost is determined by the host chain, the speed is relatively low, and the programming model is restrictive. The barrier to entry for application chain development is higher still, and it requires recruiting independent validator nodes, so application chains are not cheap either. Moreover, due to their relatively low degree of decentralization and their dependence on costly cross-chain infrastructure, application chains find it difficult to achieve natural compatibility with mainstream public chains.

The term AVS (Actively Validated Services) was originally proposed by EigenLayer. It is a new paradigm of DApp development, but EigenLayer does not provide a standard template or framework for developing an AVS.

Verisense provides developers with a complete SDK and toolset, making it easier and more efficient to build distributed applications based on AVS. By using Verisense, developers can focus on the core functionality of their applications without having to worry about the underlying verification and monitoring mechanisms.

The key concepts of Verisense AVS include:

  1. Actor Model: In Verisense, the AVS system adopts the Actor model, a common concept in distributed systems. However, Verisense uses the term "Nucleus" instead of "Actor" since Verisense is a decentralized system. Each Nucleus corresponds to a DApp, and developers use the nucleus-core library and other tools to develop the Nucleus and compile it into a WebAssembly (WASM) binary file. As a decentralized Actor model, a Nucleus has the ability to actively send messages to other Nuclei or even to external systems. This allows the Verisense AVS to break free from the limitations of blockchain systems, which are passive in nature. (A hypothetical sketch of this model follows the list.)

  2. Verisense as a Blockchain: Verisense itself is a blockchain network. In addition to executing the consensus of the Verisense network, each Verisense node can also register as a member of a specific AVS and download the corresponding Nucleus WASM binary code to execute. In other words, Verisense's node operators don't have to run an extra independent AVS node.

  3. FHE-enhanced Lightweight Consensus Protocol: Verisense has developed a lightweight consensus protocol called Monadring specifically for the AVS system. In this consensus protocol, data does not reach consensus in blocks, allowing the AVS to respond to data write requests more quickly. In addition to the security benefits of restaking, Verisense has also incorporated Fully Homomorphic Encryption (FHE) into its Monadring consensus protocol. The unique properties of FHE help to eliminate the potential for gaming in the voting process, even in small-scale decentralized systems, allowing Verisense to run small-scale decentralized applications with the same level of theoretical security.
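
The sketch below gives a feel for the actor-style shape described in item 1. The trait, message types, and names are invented for this illustration; the real nucleus-core API may differ substantially.

```rust
// Hypothetical illustration of a Nucleus as a decentralized actor.
// Trait names, message types, and the outbox mechanism are invented for
// this sketch; the real nucleus-core library may look quite different.

/// Messages a Nucleus can receive from users, other Nuclei, or timers.
enum Inbound {
    UserCall { method: String, payload: Vec<u8> },
    FromNucleus { sender: u64, payload: Vec<u8> },
    Timer { id: u32 },
}

/// Messages a Nucleus can actively emit, which is what distinguishes the
/// actor model from the passive, call-driven smart contract model.
enum Outbound {
    ToNucleus { target: u64, payload: Vec<u8> },
    HttpRequest { url: String, body: Vec<u8> },
}

/// A Nucleus owns its state and reacts to messages one at a time.
trait Nucleus {
    type State;
    fn handle(&mut self, state: &mut Self::State, msg: Inbound) -> Vec<Outbound>;
}
```

A concrete Nucleus would implement this trait over its own state type; the surrounding runtime, not the app, would be responsible for delivering inbound messages and dispatching the returned outbound ones.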

Source of Security

Lightweight Consensus Protocol for AVS

Current AVS designs still cannot escape the form of a blockchain. Having recognized the limitations of the traditional blockchain programming paradigm for developing and using decentralized applications, the AVS service provided by Verisense requires a new consensus mechanism.

The Verisense team has proposed a new lightweight consensus protocol called Monadring, a gadget that runs inside each Verisense node to organize an AVS.

For more details, see our paper.

What's FHE

FHE stands for Fully Homomorphic Encryption. It is a form of encryption that allows computations to be carried out on ciphertexts, generating an encrypted result which, when decrypted, matches the result of operations performed on the plaintext. This technology enables data to be processed while still encrypted, preserving privacy and security.

Key Features of FHE:

  • Privacy and Security: FHE ensures that sensitive data can be processed without being exposed, thus maintaining confidentiality.
  • Versatility: It supports arbitrary computation on encrypted data, making it suitable for a wide range of applications.
  • Encryption Lifecycle: Data remains encrypted throughout its entire lifecycle—from storage to processing to output.

Introduction

This chapter primarily elucidates how Verisense is combined with FHE to implement three components: 1. Auctioning of Validator Stakes; 2. Role Playing of AVS; and 3. Imperfect Information Games based on DeFi. The chapter on the basic principles of FHE introduces the current development of FHE and how it can be integrated with Zero-Knowledge Proofs (ZKPs). Verisense Network is dedicated to providing demand-side solutions in the FHE ecosystem; within the FHE space, it offers more effective tools for specific scenarios to achieve faster homomorphic computations and privacy-preserving computations. We then introduce the application of game theory in blockchain and how FHE turns games of perfect information into games of imperfect information. Finally, we present the application of FHE in the Verisense Network.

FHE Basic

Homomorphic encryption (HE) is a method of encryption that allows computations to be carried out on encrypted data, generating an encrypted result which, when decrypted, matches the outcome of the same computations performed on the plaintext. This property enables sophisticated computations on encrypted data while maintaining data security, and it protects data privacy because the data never needs to be decrypted to be processed. For example, an HE scheme might allow a user to perform operations like addition and multiplication on encrypted numbers, and these operations would have the same result as if they were performed on the original, unencrypted numbers. This technology is seen as a key component for secure cloud computing, since it allows complex data manipulations to be carried out on fully encrypted data.

Fully Homomorphic Encryption (FHE) is a more advanced form of Homomorphic Encryption. FHE allows arbitrary computations to be carried out on encrypted data, which is not the case with normal HE that might be limited in the types of computation it supports. FHE computations generate a result that, when decrypted, corresponds to the result of the same computations performed on the plaintext. This makes FHE extremely useful for cases where sensitive data must be processed or analyzed, but security and privacy considerations prevent the data from being decrypted. With FHE, you can perform unlimited calculations on this encrypted data just like you would on unencrypted data. For instance, in the field of cloud computing, FHE allows users to operate computations on encrypted data stored in the cloud, preserving data confidentiality and privacy.

We present here a few popular FHE schemes.

BGV (Brakerski-Gentry-Vaikuntanathan):

The BGV scheme is a Fully Homomorphic Encryption (FHE) method proposed by Zvika Brakerski, Craig Gentry, and Vinod Vaikuntanathan. It offers a choice of FHE schemes based on the learning with errors (LWE) or ring-LWE problems, which have substantial security against known attacks. BGV allows the encryption of a single bit at a time, and the efficiency of the encryption is largely considered in cloud storage models.

BFV (Brakerski/Fan-Vercauteren):

BFV is another homomorphic encryption scheme that is often considered for its practical performance alongside the BGV scheme. BFV supports a set of mathematical operations such as addition and multiplication to be performed directly on the encrypted data. It has been implemented efficiently and there have also been several optimizations proposed to enhance its performance in different applications.

The Brakerski-Gentry-Vaikuntanathan (BGV) and Brakerski/Fan-Vercauteren (BFV) schemes differ mainly in how they encode information. BGV encodes messages in the least significant digit (LSD) of the integer, while BFV encodes messages in the most significant digit (MSD) of the integer. This difference can affect how the encrypted data is handled and manipulated during computations.

Verisense utilizes the BFV scheme for its FHE functionalities.

CKKS (Cheon-Kim-Kim-Song):

The CKKS scheme is known for being a Leveled Homomorphic Encryption method that supports approximate arithmetic operations over encrypted data. The CKKS scheme is especially suitable for computations involving real or complex numbers. Its ability to perform operations on encrypted data without the necessity for decryption makes it highly useful for maintaining data security during computations.

The Cheon-Kim-Kim-Song (CKKS) scheme is particularly useful in the field of Artificial Intelligence (AI), largely due to its ability to handle computations with real or complex numbers - including floating-point numbers. In many AI applications, computations involve floating-point numbers. Especially in machine learning and deep learning scenarios, data is represented as floating-point numbers, and neural networks operate over these numbers. The CKKS scheme allows these computations to be carried out on encrypted data, thus providing a privacy-preserving solution for AI applications. Its capabilities make it a significant tool for implementing machine learning algorithms that can operate directly on encrypted data, which is critical for situations where the privacy of the data is paramount.

The encryption process of BFV can be described as \[ \mathbf{a}\cdot \mathbf{s} +\Delta \mathbf{m} +\mathbf{e} \] where \(\mathbf{a}\) is a uniformly random polynomial ring element: \(\mathbf{a}\in R_Q\), \( R_Q=(\mathbb{Z}/Q\mathbb{Z})[X]/(X^N+1)\). Similarly, \(\mathbf{s}\) is the secret key and \(\mathbf{e}\) is Gaussian-distributed noise: \(\mathbf{s}\in R,\ \mathbf{e}\in R\). \(\mathbf{m}\) is the message, \(\mathbf{m}\in R_t\), and \(\Delta\) is the scaling factor \(\Delta = \lfloor Q/t\rfloor\). The choice of \( t \) is a balancing act between two constraints. On one hand, we want \(t\) to be large enough to provide sufficient plaintext space for the values being encrypted; on the other hand, we want \( t \) to be small enough that the scaling factor \(\Delta = \lfloor Q/t\rfloor\) leaves room for the noise growth caused by homomorphic computations (especially homomorphic multiplication), so that this noise does not lead to inaccuracies in the decrypted results. Thus, the selection of \( t \) often depends on the specific application, the security requirements, and the nature of the computations to be performed.
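
To see why this form decrypts correctly, and why noise growth matters, here is a short derivation using the same symbols. Writing \(c = \mathbf{a}\cdot\mathbf{s} + \Delta \mathbf{m} + \mathbf{e}\) for the ciphertext above, the holder of \(\mathbf{s}\) removes the mask and rescales: \[ c - \mathbf{a}\cdot\mathbf{s} = \Delta \mathbf{m} + \mathbf{e}, \qquad \left\lfloor \frac{t}{Q}\,\big(\Delta \mathbf{m} + \mathbf{e}\big) \right\rceil = \mathbf{m} \quad \text{whenever } \lVert \mathbf{e} \rVert < \Delta/2 \text{ (approximately)}, \] because \(\frac{t}{Q}\Delta \approx 1\) and the rounding absorbs the scaled noise. Homomorphic operations make \(\mathbf{e}\) grow; once it exceeds roughly \(\Delta/2\), the rounding returns the wrong message, which is exactly the failure mode that the bootstrapping technique discussed below is designed to prevent.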

When homomorphic computations (especially multiplications) are performed on the encrypted data (ciphertext), it leads to an increase in the "noise" present in the ciphertext. This increase in the noise can interfere with the decryption process, leading to an inaccurate output. This is where the bootstrapping technique comes into play in Fully Homomorphic Encryption (FHE) systems. Bootstrapping is a unique process designed to "refresh" the ciphertext by reducing this increased noise while still preserving the computed result in an encrypted form. It essentially involves applying the FHE decryption circuit homomorphically to the "noisy" ciphertext to yield a "cleaner" version of it - one that embodies the same output but with significantly reduced noise. In this way, bootstrapping ensures that the resulting ciphertext can be decrypted correctly and accurately, despite the many computations that it underwent. Some Fully Homomorphic Encryption (FHE) schemes, such as FHEW and TFHE - which is used by Zama, don't easily support CRT packing schemes, making it challenging to perform parallel homomorphic computations efficiently. On the brighter side, TFHE offers swift bootstrapping, which aids in managing noise effectively and enhances overall computational efficiency.

Homomorphic operations, such as multiplication, tend to create ciphertexts that are no longer associated with the original linear secret key but some higher-degree variant of the key (for instance, a degree-2 key or squared key after a multiplication operation). This higher degree key form can disrupt further computations and complicate the decryption process. This is where key switching steps in. Key switching is a technique that allows the transformation of the ciphertext from being associated with a higher-degree key back to a simpler form associated with the linear key. This technique ensures that the ciphertext can be either decrypted correctly with the simple secret key or subjected to further homomorphic operations.

Differences between FHE and ZKPs and integration solutions

  1. FHE allows arbitrary computations on encrypted data without needing to decrypt it. The results, once decrypted, are the same as if they were performed on the original plaintext data. This makes FHE highly valuable for preserving confidentiality in situations where sensitive data must be analyzed or manipulated. ZKPs, on the other hand, allow one party to prove to another that they know a value or a secret, without conveying any information apart from the truth of the claim. This makes ZKPs essential in contexts where you need to confirm information without revealing it, thereby maintaining privacy.
  2. The security of most FHE schemes, including the popular ones like BGV, BFV, and CKKS, is based on the hardness of lattice problems. Lattice-based cryptography is believed to be resistant to attacks from quantum computers, which makes FHE schemes potentially useful for post-quantum cryptography. Their resilience to quantum attacks is due to the fact that no efficient quantum algorithm is known for solving the hard lattice problems that underpin these cryptographic systems. Many ZKPs, including some of the most efficient ones like zk-SNARKs (Zero-Knowledge Succinct Non-Interactive Argument of Knowledge), rely on the hardness of problems in pairing-based elliptic curve cryptography. These approaches offer powerful privacy-preserving properties and have been used in various cryptographic system constructions. However, it should be noted that these schemes are not necessarily resistant to quantum computing attacks. The security of the elliptic curve depends on the difficulty of the elliptic curve discrete logarithm problem, which can be solved using Shor's algorithm on a sufficiently powerful quantum computer.
  3. In many practical scenarios, Fully Homomorphic Encryption (FHE) is used in combination with Zero-Knowledge Proofs (ZKPs) to achieve secure data processing and validation. The reason behind this is that while FHE allows computations on encrypted data without revealing the original data, it does not provide a means of independently verifying the correctness of these computations without access to the secret key. This is where ZKPs come into play. By using ZKPs along with FHE, a system can offer proofs that computations were performed correctly without revealing any sensitive data, including the secret key. This is highly valuable in blockchain or distributed ledger technologies where trustless validation is necessary. For example, when an entity performs computations on encrypted data using their private key, they can generate a ZKP to attest that the computation was performed correctly according to the rules of the specific protocol, without revealing any information about the private key or the original data. When this ZKP is submitted to the network (or 'chain'), other participants can verify the computation's correctness without accessing the encrypted data or the private key. Therefore, the combination of FHE and ZKPs can create a powerful cryptographic toolset capable of both preserving data privacy and ensuring computational integrity, particularly in decentralized environments where trustless verification is required.

Threshold Key Sharing based on Shamir Secret Sharing

Shamir's Secret Sharing is an algorithm in cryptography devised by Adi Shamir. It is a form of secret sharing in which a secret is divided into parts, giving each participant its own unique part. The defining feature of the algorithm is the minimum number of parts, or shares, needed to reconstruct the secret. Here is a simple walkthrough of how Shamir's Secret Sharing can be used for threshold private key sharing:

  1. Choose the Threshold: Define the threshold number \(t\): any \(t\) shares are enough to reconstruct the secret, while knowing \(t-1\) or fewer shares gives no information about it.
  2. Generate a Polynomial: Generate a random polynomial of degree \(t-1\) with the constant term being the secret (private key) to be shared. i.e. \[ f(x)=a_0+a_1x+a_2x^2+\ldots+a_{t-1}x^{t-1} \]
  3. Create Shares: Evaluate the polynomial at \(n\) distinct nonzero points to get \(n\) shares, where \(n\) is the total number of participants. Each participant is given one share, which is a point on the polynomial, i.e. \[ s_i=f(x_i) \]
  4. Distribute the Shares: The shares of the private key are then distributed among the participants. The key property here is that any \(t\) shares (points) are enough to reconstruct the polynomial (and hence discover the secret), whereas \(t-1\) or fewer shares reveal no information about the secret.
  5. Reconstruct the Secret: When the need arises to use the private key (secret), any \(t\) participants come together and combine their shares using polynomial interpolation (for example, via Lagrange interpolation) to reconstruct the polynomial and recover the constant term \(f(0)\), which is the secret. \[ f(x)=\sum^{t}_{i=1} s_i \prod_{j\neq i} \frac{x-x_j}{x_i-x_j}, \qquad \text{secret}=f(0)=\sum^{t}_{i=1} s_i \prod_{j\neq i} \frac{x_j}{x_j-x_i} \] It is worth noting that all arithmetic is performed in the finite field \(\mathbb{Z}/p\mathbb{Z}\) (with \(p\) prime and all \(x_i\) distinct and nonzero), where Lagrange interpolation still holds.

This way, the private key (secret) is never explicitly revealed to any single party and no single party can access the secret alone. This is particularly useful in managing the risks associated with key management in cryptographic systems. It provides a balance between accessibility (through the threshold number of participants) and security (no single point of failure).
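
To make the scheme concrete, below is a minimal, illustrative Rust sketch of splitting and reconstructing a secret over the prime field \(\mathbb{Z}/p\mathbb{Z}\). The prime, the fixed demo coefficients, and the function names are assumptions for illustration only; a real implementation must draw the coefficients \(a_1,\ldots,a_{t-1}\) from a cryptographically secure random number generator.

// Minimal Shamir Secret Sharing sketch over Z/pZ (illustrative only).
const P: u128 = 2_305_843_009_213_693_951; // 2^61 - 1, a Mersenne prime

fn mod_pow(mut base: u128, mut exp: u128, m: u128) -> u128 {
    let mut acc = 1;
    base %= m;
    while exp > 0 {
        if exp & 1 == 1 {
            acc = acc * base % m;
        }
        base = base * base % m;
        exp >>= 1;
    }
    acc
}

// Modular inverse via Fermat's little theorem (m is prime).
fn mod_inv(a: u128, m: u128) -> u128 {
    mod_pow(a, m - 2, m)
}

// Horner evaluation of f(x) = a_0 + a_1 x + ... + a_{t-1} x^{t-1} mod P.
fn eval_poly(coeffs: &[u128], x: u128) -> u128 {
    coeffs.iter().rev().fold(0, |acc, &c| (acc * x + c) % P)
}

// Split `secret` into n shares (x_i, f(x_i)) with threshold t.
// WARNING: the coefficients here are fixed demo values; use a CSPRNG in practice.
fn split(secret: u128, t: usize, n: usize) -> Vec<(u128, u128)> {
    let mut coeffs = vec![secret % P];
    coeffs.extend((1..t as u128).map(|i| (i * 7919 + 12345) % P));
    (1..=n as u128).map(|x| (x, eval_poly(&coeffs, x))).collect()
}

// Lagrange interpolation at x = 0 recovers the constant term (the secret).
fn reconstruct(shares: &[(u128, u128)]) -> u128 {
    let mut secret = 0;
    for &(xi, yi) in shares {
        let mut num = 1;
        let mut den = 1;
        for &(xj, _) in shares {
            if xi == xj {
                continue;
            }
            num = num * ((P - xj) % P) % P; // (0 - x_j) mod P
            den = den * ((xi + P - xj) % P) % P; // (x_i - x_j) mod P
        }
        secret = (secret + yi * num % P * mod_inv(den, P)) % P;
    }
    secret
}

fn main() {
    let shares = split(123_456_789, 3, 5);
    assert_eq!(reconstruct(&shares[..3]), 123_456_789);
    println!("reconstructed: {}", reconstruct(&shares[2..5]));
}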

Game Theory

Game theory is a mathematical theory that studies the interaction behavior among decision-makers. In game theory, decision-makers, known as "players", make choices (strategies) according to rules (the form of the game) and obtain certain outcomes (payoffs or utilities) based on the choices of all players. Game theory covers the rationality assumptions of players, players' expectations, and players' strategy choices. Game theory can be divided into cooperative and non-cooperative games. Cooperative games focus on teamwork and the formation of alliances, with a particular emphasis on how to distribute payoffs. Non-cooperative games presume that players will selfishly pursue their interests, with the classic example being the prisoner's dilemma.

Game theory has many application scenarios in blockchain:

  1. Voting or Election: Consensus mechanisms in blockchain include forms of election or voting, such as which block to add next or deciding the truthful chain in case of forks. Game theory provides a model to understand the strategic interactions and potential behaviors of participants in these decisions.
  2. Auctions: Auctions, as in the case of token sales or gas fees bidding in Ethereum, play a significant role in blockchain. A strategic analysis using game theory can optimize the auction designs and predict bidding behaviors, potentially increasing the overall efficiency of such systems.
  3. DeFi (Decentralized Finance) Models: Complex mechanisms like Miner Extractable Value (MEV) and models like Ve(3,3) in Curve Finance are analyzed using game theory. It helps understand how rational users behave in various market conditions, considering different incentives provided by these models.

Nash Equilibrium

Nash equilibrium is a significant concept in game theory, introduced by mathematician John Nash in 1950, which describes a stable state of a game. In this state, each player selects the optimal strategy according to the strategies of all other players, and under the premise of knowing all other players' choices, no player can increase their own payoffs by unilaterally changing strategies. Simply put, a Nash equilibrium is the intersection of each player's best responses - when the game reaches a Nash equilibrium, no player wants to change their strategy. While Nash equilibrium provides a theoretical framework to understand how decision-makers make rational choices, there are relatively few examples in reality that can reach a Nash equilibrium. This is mainly because players' rational selections can be influenced by various factors, such as limited information and bounded rationality. Nevertheless, Nash equilibrium remains an important tool for understanding and analyzing strategic decision interactions.
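
As a concrete illustration, consider the classic prisoner's dilemma with illustrative payoffs (negative numbers denote years in prison; each cell lists the row player's payoff first):

\[ \begin{array}{c|cc} & \text{Cooperate} & \text{Defect} \\ \hline \text{Cooperate} & (-1,\,-1) & (-3,\,0) \\ \text{Defect} & (0,\,-3) & (-2,\,-2) \end{array} \]

Whatever the other player does, Defect yields a strictly higher payoff than Cooperate, so (Defect, Defect) is the unique Nash equilibrium: neither player can improve their payoff by unilaterally deviating, even though both would be better off at (Cooperate, Cooperate).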

Perfect Information Games vs Imperfect Information Games

Perfect Information Games

In games of perfect information, every player has complete knowledge about the game and the actions of other players. That is, each player is aware of all the previous moves made by all the players. Classic examples of such games are chess and go, where each player observes the entire history of the game and can make the optimal decision at each step. In the context of blockchain, perfect information might apply to consensus mechanisms where the actions of all validators are transparent and open to the network, such as Proof-of-Work or a public ledger’s transaction history.

In games of perfect information, a Nash equilibrium represents a state of the game where no player can unilaterally deviate from their current strategy to improve their payoff, given the strategies of all other players. The process of finding a Nash equilibrium in such games is relatively straightforward because all players have complete knowledge about the game structure and the past actions of all players. The concept of Nash equilibrium in this setting aligns with the idea of strategy profiles that are stable under mutual best response dynamics.

Imperfect Information Games

Contrastingly, in games of imperfect information, some aspects of the game are not fully known to all players. Players may have private knowledge or there may be uncertainty about the past actions of other players. This can lead to strategic behavior where players infer from incomplete information or signal to other players. Poker is an example of such a game, where each player does not know the cards that others hold. In blockchain applications, imperfect information often arises. An example would be a privacy-preserving blockchain where transaction data is hidden, or in peer-to-peer trading where one does not know the reservation price of a counterparty.

In games of imperfect information, a Nash equilibrium is a more complex concept. It's still defined as a state of the game where no player can profit from unilaterally deviating from their current strategy, given the strategies of others. However, because players now have private information, a player's strategy must specify what to do for every possible private information they can have. The Nash equilibrium concept in this case extends to the notion of a Bayesian Nash equilibrium, which considers players' beliefs about each other's private information, and each player's strategy is the best response to their beliefs about others' strategies. Therefore, in imperfect information games, Nash equilibria can involve complex strategic behavior like randomization (mixing between different actions) and sophisticated beliefs about others' private information.

In summary, while the basic intuition of Nash equilibrium - no profitable unilateral deviation - applies in both perfect and imperfect information games, the nature of strategies and the process to derive equilibria can be significantly more complex in games of imperfect information.

An example in Ve(3,3)

A typical example is the Ve(3,3) model proposed by OHM.

The Nash equilibrium of this game model is located at the point (3,3), which in turn maximizes the Total Value Locked (TVL) for the whole ecosystem. Within just five months, the TVL of OHM rapidly escalated to $800M. Nonetheless, due to the inherent data transparency of the blockchain, this is a game of perfect information: all participants know each other's information. As a result, the game reaches Nash equilibrium only briefly, because once you become aware that other participants are starting to exit the system, you, too, will leave the game.

With the help of Fully Homomorphic Encryption (FHE), we can transform the perfect information game into an imperfect information game, thereby making it possible to reach and sustain the Nash equilibrium.

FHE for Verisense

Overview of Verisense

While leading restaking infrastructures like EigenLayer, Karak, and Symbiotic compete fiercely on TVL, the supply side of security, less attention has been given to AVS (Actively Validated Services) solutions, the demand side of security. It is demand that drives supply, not the other way around. EigenLayer recognizes this critical aspect but struggles to address it due to the absence of essential components in its underlying design, components necessary to form a rapid and secure consensus and a fair, functional slashing mechanism. Verisense Network seizes this market opportunity by proposing an innovative mechanism that fills the gap and addresses the AVS market's needs effectively.

What is Verisense Network?

Verisense is the world’s first FHE-enabled (Fully Homomorphic Encryption) VaaS (Validation-as-a-Service) network designed to plug and play with any restaking layer. Our goal is to serve AVS of every variety (chain-natured, non-chain-natured, and hybrid) and to onboard diversified paying AVS clients, a sector currently underserved yet ultimately critical to winning in restaking. Here’s what this entails:

Serving Diverse AVS Clients:

  • AVS clients come in many formats, broadly categorized into i) chain-natured (e.g. sequencers, side chains, oracles), ii) non-chain-natured (e.g. keeper networks, trusted execution environments, threshold cryptography schemes, new virtual machines, decentralized web2 social apps, etc.), and iii) hybrid (e.g. a bridge can be implemented in either a chain or a non-chain format).
  • Chain-based AVS clients are easier to onboard but less motivated to pay for decentralized security.
  • The true paying demand lies with non-chain-based AVS clients, as they do not have an existing ready-to-use infrastructure solution; this is the sector that leading restaking infrastructures all struggle to serve today.

Standardizing AVS Demand:

  • The lack of standardization and productization in the AVS demand side hinders the growth of an AVS marketplace.
  • Verisense aims to offer a standard onboarding process, an easy plug-and-play interface, flexible and reasonable pricing, and value-added functions such as on-duty-operator dashboards and cluster analytics tools.

Innovating the Consensus Mechanism:

  • Forming consensus inexpensively and securely is a challenge. Faster consensus often means less security, and vice versa.
  • Verisense proposes an innovative mechanism that decouples runtimes from consensus, which is crucial both for serving non-chain-based AVS clients and for implementing the reward/slashing mechanism.

Elevating Security and Resilience with FHE Enablement:

  • FHE technology allows computations and operations on encrypted data as if it were plain data, elevating security in blockchain to a new level by enabling flash auctions of block production and facilitating private transactions.
  • FHE enablement at the underlying protocol level (node level) turns a perfect information game (which a blockchain is, due to its data transparency) into an imperfect information game, making it possible to reach and maintain the Nash equilibrium. This is particularly useful for preventing malicious behaviors, which often arise from AVS nodes.
  • Moreover, for a fair and resilient slashing mechanism, Verisense builds role-based game models into its fundamental node-level design. The three-party game involves the Restaker (TVL supplier), the Operator (Verisense AVS node), and the Resolver (slasher), and is designed to prevent the “second-mover advantage.” FHE enablement allows Verisense to avoid many problems and build a functional slashing mechanism while maintaining fairness and resilience. Key issues addressed include:
    • The convergence of bribe values caused by the exposure of an Operator's bribery strategy toward Restakers.
    • A Resolver's need to keep its veto vote on a slash hidden.
    • Front-running problems arising from MEV.

Considering the recent developments in restaking, EigenLayer, as the pioneer in implementing and productizing the restaking concept, broke new ground and established LRT/restaking as a widely accepted method to enhance the trust of off-chain components. Now, Verisense's innovation with FHE enablement and VaaS technology marks another significant step forward, essential for realizing the true vision of decentralized security services.

Quick Start Guide: Developing Nucleus on Verisense

Welcome to Verisense! This guide will help you quickly get started with developing your first Nucleus using Rust. By the end, you’ll have a simple deployed Nucleus and be able to interact with it.

1. Set Up Your Rust Environment

First, install Rust and configure it for WebAssembly (Wasm) compilation:

Install Rust:

curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh

Add the WebAssembly target:

rustup target add wasm32-unknown-unknown

2. Create and Compile a Rust Project

Create a new Rust library:

cargo new --lib hello-avs
cd hello-avs

Update Cargo.toml

[package]
name = "hello-avs"
version = "0.1.0"
edition = "2021"

[lib]
crate-type = ["cdylib"]

[dependencies]
vrs-core-sdk = { version = "0.0.2" }
parity-scale-codec = { version = "3.6", features = ["derive"] }

Write your first Nucleus code:

use parity_scale_codec::{Decode, Encode};
use vrs_core_sdk::{get, post, storage};

#[derive(Debug, Decode, Encode)]
pub struct User {
    pub id: u64,
    pub name: String,
}

#[post]
pub fn add_user(user: User) -> Result<u64, String> {
    // Keys are "user:" + big-endian id, so they sort by id; searching backwards
    // from the largest possible key finds the current maximum id.
    let max_id_key = [&b"user:"[..], &u64::MAX.to_be_bytes()[..]].concat();
    let max_id = match storage::search(&max_id_key, storage::Direction::Reverse)
        .map_err(|e| e.to_string())?
    {
        Some((id, _)) => u64::from_be_bytes(id[5..].try_into().unwrap()) + 1,
        None => 1u64,
    };
    let key = [&b"user:"[..], &max_id.to_be_bytes()[..]].concat();
    storage::put(&key, user.encode()).map_err(|e| e.to_string())?;
    Ok(max_id)
}

#[get]
pub fn get_user(id: u64) -> Result<Option<User>, String> {
    let key = [&b"user:"[..], &id.to_be_bytes()[..]].concat();
    let r = storage::get(&key).map_err(|e| e.to_string())?;
    let user = r.map(|d| User::decode(&mut &d[..]).unwrap());
    Ok(user)
}

Build the project for WebAssembly:

cargo build --release --target wasm32-unknown-unknown

3. Install Command-Line Tools and Get Free Gas

Install the Verisense CLI:

cargo install --git https://github.com/verisense-network/vrs-cli.git

Generate an account:

vrx account generate --save

This command will generate an account and save the private key to ~/.vrx/default-key. Example output:

Phrase: exercise pipe nerve daring census inflict cousin exhaust valve legend ancient gather
Seed: 0x35929b4e23d26c5ba94d22d32222128e56f5a7dce35f9b36b467ac2be2b4d29b
Public key: 0x9cdaa67b771a2ae3b5e93b3a5463fc00e6811ed4f2bd31a745aa32f29541150d
Account Id: kGj5epfCkuae7DJpezu5Qx6mp96gHmLv2kDPHHTdJaEVNptRt

Request free gas:

Chat with the Verisense Faucet Bot and provide your account ID to request free $VRS.

4. Create and Deploy a Nucleus

Create a Nucleus:

vrx nucleus --devnet create --name hello_avs --capacity 1

Example output:

Nucleus created.
  id: kGieDqL1fX8J7n1vRbXri7DVphwnZJpkDcoMoQZWo9XkTt1Sv
  name: hello_avs
  capacity: 1

Deploy the compiled Wasm:

vrx install --wasm target/wasm32-unknown-unknown/release/hello_avs.wasm --id kGieDqL1fX8J7n1vRbXri7DVphwnZJpkDcoMoQZWo9XkTt1Sv

If successful, you will see output like this:

Digest: 0xff878e546806da8b13f02765ea84f616963abcfdcac196ba3ea9f3f5d94b661e
Peer ID: 12D3KooWCz46orfkSfaahJqkph1bQqXU9t7ct98YQKTaDepNE6du
Transaction submitted: "0x0bc7d23b900a880e5274582755fc1a6c17df9453b0aa43f4cc382efc0bf1ec39"

5. Test Your Nucleus

Call add_user:

curl https://alpha-devnet.verisense.network -H 'Content-Type: application/json' -XPOST -d '{"jsonrpc":"2.0", "id":"whatever", "method":"nucleus_post", "params": ["kGieDqL1fX8J7n1vRbXri7DVphwnZJpkDcoMoQZWo9XkTt1Sv", "add_user", "000000000000000014416c696365"]}'

The networking component follows the standard JSON-RPC specification. All post methods share the nucleus_post endpoint, and all get methods share the nucleus_get endpoint. In the add_user case, the method is nucleus_post, as it is for all other post methods.

The first parameter is the nucleus_id we just deployed, and the second is the name of the function in the source code, add_user. The third is the parity-scale-encoded (SCALE) bytes of User { id: 0, name: "Alice" }. SCALE codec implementations are available for various programming languages.
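
If you want to produce this hex payload yourself, a minimal sketch using the parity-scale-codec crate (with manual hex formatting) might look like the following; the struct mirrors the User type from the Nucleus code above:

use parity_scale_codec::Encode;

#[derive(Encode)]
struct User {
    id: u64,
    name: String,
}

fn main() {
    let user = User { id: 0, name: "Alice".to_string() };
    // SCALE: 8 little-endian bytes for the u64, then a compact length prefix and the UTF-8 bytes.
    let hex: String = user.encode().iter().map(|b| format!("{:02x}", b)).collect();
    println!("{}", hex); // 000000000000000014416c696365
}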

Call get_user:

Calling get_user is similar; we just change the method to nucleus_get and pass the SCALE-encoded u64 id as the parameter (here 0100000000000000, i.e. id = 1):

curl https://alpha-devnet.verisense.network -H 'Content-Type: application/json' -XPOST -d '{"jsonrpc":"2.0", "id":"whatever", "method":"nucleus_get", "params": ["kGieDqL1fX8J7n1vRbXri7DVphwnZJpkDcoMoQZWo9XkTt1Sv", "get_user", "0100000000000000"]}'

Conclusion

Congratulations! You’ve created, deployed, and interacted with your first Nucleus on Verisense. You can now expand your AVS functionality and explore advanced features of the platform.

For more information, check the Verisense documentation.

What's next

For more advanced topics, see the following sections.

Making a Request

To make an HTTP request from a Verisense nucleus, you split the process into two parts:

  1. make the request: issue the HTTP request; the call returns a request_id immediately;
  2. get the callback: a #[callback] function is invoked with that request_id once the response is ready.

For example, let's request https://www.google.com.

use vrs_core_sdk::{CallResult, http::{*, self}, callback, post};

#[post]
pub fn request_google() {
    let id = http::request(HttpRequest {
        head: RequestHead {
            method: HttpMethod::Get,
            uri: "https://www.google.com".to_string(),
            headers: Default::default(),
        },
        body: vec![],
    })
    .unwrap();
    vrs_core_sdk::println!("http request {} enqueued", id);
}

#[callback]
pub fn on_response(id: u64, response: CallResult<HttpResponse>) {
    match response {
        Ok(response) => {
            let body = String::from_utf8_lossy(&response.body);
            vrs_core_sdk::println!("id = {}, response: {}", id, body);
        }
        Err(e) => {
            vrs_core_sdk::eprintln!("id = {}, error: {:?}", id, e);
        }
    }
}

You have to keep track of request IDs yourself, using a global structure such as a HashMap, so that the callback can associate each response with the request that produced it.
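
A minimal sketch of such a registry, assuming a single-process runtime where a global Mutex-guarded HashMap is acceptable (the names remember and take are hypothetical, not part of the SDK):

use std::collections::HashMap;
use std::sync::{LazyLock, Mutex};

// Global registry mapping each request id to some application-specific context.
static PENDING: LazyLock<Mutex<HashMap<u64, String>>> =
    LazyLock::new(|| Mutex::new(HashMap::new()));

// Call this right after http::request returns the id.
pub fn remember(id: u64, context: String) {
    PENDING.lock().unwrap().insert(id, context);
}

// Call this inside the #[callback] handler to recover the context.
pub fn take(id: u64) -> Option<String> {
    PENDING.lock().unwrap().remove(&id)
}

In request_google you would call remember(id, "google".to_string()) after enqueuing the request, and in on_response you would call take(id) to find out which request a response belongs to.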

Timer

Verisense has a powerful timer module. This module consists of

  • #[init]
  • #[timer]
  • set_timer!()

set_timer! and #[timer]

The set_timer! macro sets a new timer that will be triggered after the given delay. Its signature is as follows:

set_timer!(Duration, timer_handler(params));

Let's look at how to use it.

#[post]
pub fn test_set_timer() {
    storage::put(b"delay", format!("init").as_bytes());

    let a = "abc".to_string();
    let b = 123;
    set_timer!(std::time::Duration::from_secs(4), test_delay(a, b));
}

#[timer]
pub fn test_delay(a: String, b: i32) {
    storage::put(b"delay", format!("delay_complete {} {}", a, b).as_bytes()).unwrap();
}

In the example above, the post function test_set_timer uses set_timer! to create a new timer that fires after 4 seconds; the triggered function is test_delay. You can write the handler's parameters directly inside the set_timer! macro.

Let's inspect the timer handler further. The handler function test_delay must be decorated with #[timer], which marks it as a timer handler. test_delay will be called 4 seconds after test_set_timer is invoked.

So set_timer! is just a one-shot delay. How do we implement intervals?

An interval means executing a function periodically. We can implement it by having the timer handler re-arm itself with set_timer!. For example:

#[post]
pub fn test_set_timer() {
    set_timer!(std::time::Duration::from_secs(2), run_interval());
}

#[timer]
pub fn run_interval(){
    // do something
    set_timer!(std::time::Duration::from_secs(1), run_interval());
}

In this example, we set a timer that fires after 2 seconds. When run_interval is triggered, it does its work and, as its last step, sets a new timer that fires 1 second later and calls run_interval again. Through this tail-recursive re-scheduling, the work keeps running at 1-second intervals indefinitely. This is how intervals are implemented.

#[init]

A Rust function decorated with this attribute macro is a special timer handler: it is triggered whenever a new version of the wasm code is deployed (upgraded).

For example:

#[init]
pub fn timer_init() {
    storage::put(b"delay", format!("init").as_bytes());
}

timer_init() will be called automatically whenever a new version of the AVS wasm code is upgraded on Verisense.

You can refer to a more complex example here.

Key Value Storage

Verisense provides a full set of KV storage APIs. Let's look at them.

APIs

put

Put a value into the database via a key.

pub fn put(key: impl AsRef<[u8]>, value: impl AsRef<[u8]>) -> CallResult<()> 

Example:

use vrs_core_sdk::{get, post, storage};

#[post]
pub fn add_user(mut u: User) -> Result<(), String> {
    let key = b"user:001";
    let val: Vec<u8> = u.encode();     
    storage::put(&key, &val).map_err(|e| e.to_string())?;

    Ok(())
}

Note: storage::put() can only be used in the function decorated by #[post].

del

Delete a value from the database via a key.

pub fn del(key: impl AsRef<[u8]>) -> CallResult<()>

Example:

use vrs_core_sdk::{get, post, storage};

#[post]
pub fn delete_user() -> Result<(), String> {
    let key = b"user:001";

    storage::del(&key).map_err(|e| e.to_string())?;

    Ok(())
}

Note: storage::del() can only be used in the function decorated by #[post].

get

Get a value from the database via a key.

pub fn get(key: impl AsRef<[u8]>) -> CallResult<Option<Vec<u8>>>

Example:

#[get]
pub fn get_user() -> Result<Option<User>, String> {
    let key = b"user:001";
    let r = storage::get(&key).map_err(|e| e.to_string())?;
    let instance = r.map(|d| User::decode(&mut &d[..]).unwrap());
    Ok(instance)
}

get_range

Get a batch of entries from the database, starting at start_key and scanning in the given direction; the maximum limit is 1000.

pub fn get_range(
    start_key: impl AsRef<[u8]>,
    direction: Direction,
    limit: usize,
) -> CallResult<Vec<(Vec<u8>, Vec<u8>)>> 

Example:

#[get]
pub fn get_user_range() -> Result<(), String>  {
    let prefix_key = b"user:";
    let r = storage::get_range(&prefix_key, Direction::Forward, 100).map_err(|e| e.to_string())?;
    ...
}
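
As a slightly fuller, hypothetical sketch of get_range in use, here is a list_users function that decodes each value back into the User type from the Quick Start:

use parity_scale_codec::Decode;
use vrs_core_sdk::{get, storage::{self, Direction}};

#[get]
pub fn list_users() -> Result<Vec<User>, String> {
    let prefix = b"user:";
    let entries = storage::get_range(prefix, Direction::Forward, 100)
        .map_err(|e| e.to_string())?;
    // get_range scans from the start key onward, so drop any keys
    // that no longer carry the "user:" prefix.
    entries
        .into_iter()
        .filter(|(k, _)| k.starts_with(prefix))
        .map(|(_, v)| User::decode(&mut &v[..]).map_err(|e| e.to_string()))
        .collect()
}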

delete_range

Removes the database entries in the range [start_key, end_key)

pub fn delete_range(start_key: impl AsRef<[u8]>, end_key: impl AsRef<[u8]>) -> CallResult<()> 

Example:

#[post]
pub fn delete_user_range() -> Result<(), String>  {
    let start_key = b"user:001";
    let end_key = b"user:100";
    let r = storage::delete_range(&start_key, &end_key).map_err(|e| e.to_string())?;
    ...
}

Note: storage::delete_range() can only be used in the function decorated by #[post].

search

Search for a value with a key prefix and direction.

pub fn search(
    key_prefix: impl AsRef<[u8]>,
    direction: Direction,
) -> CallResult<Option<(Vec<u8>, Vec<u8>)>> 

Example:

use vrs_core_sdk::storage::{self, Direction};

pub fn search_blog_id() {
   let key = [&b"blog:"[..], &0u64.to_be_bytes()[..]].concat();
   let first_blog = storage::search(&key, Direction::Forward).unwrap();
   let key = [&b"blog:"[..], &u64::MAX.to_be_bytes()[..]].concat();
   let last_blog = storage::search(&key, Direction::Reverse).unwrap();
   assert!(first_blog.is_some());
   assert!(last_blog.is_some());
}

Demo: A Decentralized Forum

In this tutorial, I will guide you through the process of building a decentralized forum on Verisense.

To develop a decentralized application (dApp) on Verisense, you'll need to implement four main components:

  1. AVS: This is a compiled WASM file created using the vrs-core-sdk. The front-end interacts with this component to write data.

  2. Surrogate: This is a proxy program responsible for syncing the latest data from the AVS and pushing it to MeiliSearch for efficient searching.

  3. MeiliSearch: A fast and powerful search engine that will handle data queries from the front-end, enabling a seamless search experience.

  4. Front-end app: This is the user-facing interface that allows interaction with the forum.

The AVS will be deployed on Verisense, while the Surrogate and MeiliSearch instances will be deployed on the same server node where Verisense is running.

Before we begin, please ensure that you have all the required tools installed.

AVS

Create an empty project

First, create a new Rust project:

cargo new --lib veavs

Put the following into Cargo.toml:

[package]
name = "veavs"
version = "0.1.0"
edition = "2021"

[lib]
crate-type = ["cdylib"]

[dependencies]
vrs-core-sdk = { git = "https://github.com/verisense-network/verisense.git", package = "vrs-core-sdk" }
parity-scale-codec = { version = "3.6", features = ["derive"] }

vemodel = { path = "../vemodel" }

You can refer to the original file content here.

Define models

For a decentralized forum, we need to define the following models:

use parity_scale_codec::{Decode, Encode};
use serde::{Deserialize, Serialize};

#[derive(Debug, Decode, Encode, Deserialize, Serialize)]
pub enum Method {
    Create,
    Update,
    Delete,
}

#[derive(Debug, Decode, Encode, Deserialize, Serialize)]
pub struct VeSubspace {
    pub id: u64,
    pub title: String,
    pub slug: String,
    pub description: String,
    pub banner: String,
    pub status: i16,
    pub weight: i16,
    pub created_time: i64,
}

#[derive(Debug, Decode, Encode, Deserialize, Serialize)]
pub struct VeArticle {
    pub id: u64,
    pub title: String,
    pub content: String,
    pub author_id: u64,
    pub author_nickname: String,
    pub subspace_id: u64,
    pub ext_link: String,
    pub status: i16,
    pub weight: i16,
    pub created_time: i64,
    pub updated_time: i64,
}

#[derive(Debug, Decode, Encode, Deserialize, Serialize)]
pub struct VeComment {
    pub id: u64,
    pub content: String,
    pub author_id: u64,
    pub author_nickname: String,
    pub post_id: u64,
    pub status: i16,
    pub weight: i16,
    pub created_time: i64,
}

You can refer to the original file content here.

Implement the business logic

We will implement CRUD (create, read, update, delete) actions on each model. The handlers below rely on a few key-prefix constants and helper functions (build_key, get_max_id, add_to_common_key) defined in the full source; a sketch of how they might look is shown below.
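
A minimal sketch of these helpers, assuming the models live in the vemodel crate referenced in Cargo.toml and the same "prefix + big-endian id" key layout as the Quick Start (the exact constants and error handling in the real source may differ):

use parity_scale_codec::Encode;
use vemodel::Method;
use vrs_core_sdk::storage;

pub const PREFIX_SUBSPACE_KEY: &[u8] = b"subspace:";
pub const PREFIX_ARTICLE_KEY: &[u8] = b"article:";
pub const PREFIX_COMMENT_KEY: &[u8] = b"comment:";
pub const PREFIX_COMMON_KEY: &[u8] = b"common:";

// Build a storage key as prefix + big-endian id, so keys sort by id.
fn build_key(prefix: &[u8], id: u64) -> Vec<u8> {
    [prefix, &id.to_be_bytes()[..]].concat()
}

// Find the next free id under a prefix by scanning backwards from the
// largest possible key, as in the Quick Start add_user example.
fn get_max_id(prefix: &[u8]) -> u64 {
    let upper = build_key(prefix, u64::MAX);
    match storage::search(&upper, storage::Direction::Reverse) {
        Ok(Some((key, _))) if key.starts_with(prefix) => {
            let id_bytes: [u8; 8] = key[prefix.len()..].try_into().unwrap();
            u64::from_be_bytes(id_bytes) + 1
        }
        _ => 1,
    }
}

// Record the change so the Surrogate can later pick it up and sync it to MeiliSearch.
fn add_to_common_key(method: Method, key: Vec<u8>) -> Result<(), String> {
    let seq = get_max_id(PREFIX_COMMON_KEY);
    let entry = (method, key).encode();
    storage::put(build_key(PREFIX_COMMON_KEY, seq), entry).map_err(|e| e.to_string())
}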

// subspace
#[post]
pub fn add_subspace(mut sb: VeSubspace) -> Result<(), String> {
    let max_id = get_max_id(PREFIX_SUBSPACE_KEY);
    // update the id field from the avs
    sb.id = max_id;
    let key = build_key(PREFIX_SUBSPACE_KEY, max_id);
    storage::put(&key, sb.encode()).map_err(|e| e.to_string())?;

    add_to_common_key(Method::Create, key)?;

    Ok(())
}

#[post]
pub fn update_subspace(sb: VeSubspace) -> Result<(), String> {
    let id = sb.id;
    let key = build_key(PREFIX_SUBSPACE_KEY, id);
    storage::put(&key, sb.encode()).map_err(|e| e.to_string())?;

    add_to_common_key(Method::Update, key)?;

    Ok(())
}

#[post]
pub fn delete_subspace(id: u64) -> Result<(), String> {
    let key = build_key(PREFIX_SUBSPACE_KEY, id);
    storage::del(&key).map_err(|e| e.to_string())?;

    add_to_common_key(Method::Delete, key)?;

    Ok(())
}

#[get]
pub fn get_subspace(id: u64) -> Result<Option<VeSubspace>, String> {
    let key = build_key(PREFIX_SUBSPACE_KEY, id);
    let r = storage::get(&key).map_err(|e| e.to_string())?;
    let instance = r.map(|d| VeSubspace::decode(&mut &d[..]).unwrap());
    Ok(instance)
}

// article
#[post]
pub fn add_article(mut sb: VeArticle) -> Result<(), String> {
    let max_id = get_max_id(PREFIX_ARTICLE_KEY);
    // update the id field from the avs
    sb.id = max_id;
    let key = build_key(PREFIX_ARTICLE_KEY, max_id);
    storage::put(&key, sb.encode()).map_err(|e| e.to_string())?;
    add_to_common_key(Method::Create, key)?;

    Ok(())
}

#[post]
pub fn update_article(sb: VeArticle) -> Result<(), String> {
    let id = sb.id;
    let key = build_key(PREFIX_ARTICLE_KEY, id);
    storage::put(&key, sb.encode()).map_err(|e| e.to_string())?;
    add_to_common_key(Method::Update, key)?;

    Ok(())
}

#[post]
pub fn delete_article(id: u64) -> Result<(), String> {
    let key = build_key(PREFIX_ARTICLE_KEY, id);
    storage::del(&key).map_err(|e| e.to_string())?;
    add_to_common_key(Method::Delete, key)?;

    Ok(())
}

#[get]
pub fn get_article(id: u64) -> Result<Option<VeArticle>, String> {
    let key = build_key(PREFIX_ARTICLE_KEY, id);
    let r = storage::get(&key).map_err(|e| e.to_string())?;
    let instance = r.map(|d| VeArticle::decode(&mut &d[..]).unwrap());
    Ok(instance)
}

// comment
#[post]
pub fn add_comment(mut sb: VeComment) -> Result<(), String> {
    let max_id = get_max_id(PREFIX_COMMENT_KEY);
    // update the id field from the avs
    sb.id = max_id;
    let key = build_key(PREFIX_COMMENT_KEY, max_id);
    storage::put(&key, sb.encode()).map_err(|e| e.to_string())?;
    add_to_common_key(Method::Create, key)?;

    Ok(())
}

#[post]
pub fn update_comment(sb: VeComment) -> Result<(), String> {
    let id = sb.id;
    let key = build_key(PREFIX_COMMENT_KEY, id);
    storage::put(&key, sb.encode()).map_err(|e| e.to_string())?;
    add_to_common_key(Method::Update, key)?;

    Ok(())
}

#[post]
pub fn delete_comment(id: u64) -> Result<(), String> {
    let key = build_key(PREFIX_COMMENT_KEY, id);
    storage::del(&key).map_err(|e| e.to_string())?;
    add_to_common_key(Method::Delete, key)?;

    Ok(())
}

#[get]
pub fn get_comment(id: u64) -> Result<Option<VeComment>, String> {
    let key = build_key(PREFIX_COMMENT_KEY, id);
    let r = storage::get(&key).map_err(|e| e.to_string())?;
    let instance = r.map(|d| VeComment::decode(&mut &d[..]).unwrap());
    Ok(instance)
}

You can find the full code here.

Compile to wasm

In the root of this project, run:

cargo build --release --target wasm32-unknown-unknown

You can find the compiled wasm file located at target/wasm32-unknown-unknown/release/veavs.wasm.

Deploy it to Verisense

Register a new AVS (nucleus) on Verisense.

vrx create-nucleus --name veavs --capacity 1

This command will return the registered AVS (nucleus) ID like:

Nucleus created.
  id: 5FsXfPrUDqq6abYccExCTUxyzjYaaYTr5utLx2wwdBv1m8R8
  name: veavs
  capacity: 1

Deploy the generated wasm file to Verisense using the generated Nucleus ID.

vrx deploy --name veavs --wasm-path ../target/wasm32-unknown-unknown/release/veavs.wasm --nucleus-id 5FsXfPrUDqq6abYccExCTUxyzjYaaYTr5utLx2wwdBv1m8R8  --version 1

Wait for the process to complete successfully.

At this point, we have successfully deployed a new AVS onto Verisense.

Surrogate

The AVS functions as a raw database, but to make use of this data, we need to create a proxy that will index the data into the MeiliSearch engine.

You can check out the surrogate implementation here.

The basic concept behind the surrogate is to retrieve data from the AVS and inject it into MeiliSearch for efficient indexing and searching.
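
As a rough, hypothetical sketch of that loop (assuming the reqwest crate with its blocking and json features, the serde_json crate, a local MeiliSearch instance at http://127.0.0.1:7700, and placeholder values in angle brackets; the real surrogate linked above additionally decodes the SCALE payloads and tracks a sync cursor):

use serde_json::json;

fn main() -> Result<(), Box<dyn std::error::Error>> {
    let http = reqwest::blocking::Client::new();

    // 1. Read data from the AVS via the nucleus_get JSON-RPC endpoint.
    let rpc = json!({
        "jsonrpc": "2.0",
        "id": "surrogate",
        "method": "nucleus_get",
        "params": ["<nucleus-id>", "get_subspace", "0100000000000000"]
    });
    let raw: serde_json::Value = http
        .post("https://alpha-devnet.verisense.network")
        .json(&rpc)
        .send()?
        .json()?;
    // The `result` field holds SCALE-encoded bytes that must be decoded into a
    // VeSubspace before indexing; decoding is omitted in this sketch.
    println!("AVS returned: {}", raw);

    // 2. Push the decoded document into MeiliSearch for indexing.
    let doc = json!([{ "id": 1, "title": "decoded title goes here" }]);
    http.post("http://127.0.0.1:7700/indexes/subspaces/documents")
        .header("Authorization", "Bearer <meili-master-key>")
        .json(&doc)
        .send()?;
    Ok(())
}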

MeiliSearch

MeiliSearch exposes a standardized HTTP API for handling the front-end's data queries.

You can find the API documentation here.

Front-end

You can find the reference code for the front-end here.

In the front-end application, the logic involves writing data to the AVS and querying data from MeiliSearch to display the results.

What It Looks Like

For a preview of the app in action, check out this video.