NOMOS-BLEND-PROTOCOL

  • Name: Nomos Blend Protocol
  • Slug: 95
  • Status: raw
  • Category: Standards Track
  • Editor: Marcin Pawlowski
  • Contributors: Alexander Mozeika, Youngjoon Lee, Frederico Teixeira, Mehmet Gonen, Daniel Sanchez Quiros, Álvaro Castro-Castilla, Daniel Kashepava, Thomas Lavaur, Antonio Antonino, Filip Dimitrijevic

Timeline

  • 2026-01-19 (f24e567) — Chore/updates mdbook (#262)
  • 2026-01-16 (89f2ea8) — Chore/mdbook updates (#258)

Abstract

The Blend Protocol is an anonymous broadcasting protocol for the Nomos network that provides network-level privacy for block proposers. It addresses network-based de-anonymization by making it difficult and costly to link block proposals to their proposers through network analysis. The protocol increases the time needed to link a sender to a proposal by a factor of at least 300, making stake inference highly impractical.

The protocol achieves probabilistic unlinkability in a highly decentralized environment with low bandwidth cost but high latency. It hides the sender of a block proposal through cryptographic obfuscation and timing delays, routing encrypted messages through multiple blend nodes before revelation.

Keywords: Blend, anonymous broadcasting, privacy, mix network, unlinkability, stake privacy, encryption

Motivation

All Proof of Stake (PoS) systems have an inherent privacy problem where stake determines node behavior. By observing node behavior, one can infer the node's stake. The Blend Protocol addresses network-based de-anonymization where an adversary observes network activity to link nodes to their proposals and estimate stake.

The protocol achieves:

  1. Unlinkability: Block proposers cannot be linked to their proposals through network analysis
  2. Stake privacy: Inferring relative stake takes more than 10 years for an adversary controlling 10% of stake targeting a node with 0.1% of stake

The Blend Protocol is one of the Nomos Bedrock Services, providing censorship resistance and network-level privacy for block producers. It must be used alongside mempool protections (like NomosDA) to achieve a truly privacy-preserving system.

Semantics

The keywords "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT", "SHOULD", "SHOULD NOT", "RECOMMENDED", "MAY", and "OPTIONAL" in this document are to be interpreted as described in RFC 2119.

Definitions

  • Data message: A message generated by a consensus leader containing a block proposal. Indistinguishable from other messages until fully processed.
  • Cover message: A message with meaningless content that creates noise for data messages to hide in. Indistinguishable from data messages.
  • Core node: A Nomos node that declared willingness to participate in the Blend Network through SDP. Responsible for message generation, relaying, processing, and broadcasting.
  • Edge node: A Nomos node that is not a core node. Connects to core nodes to send messages.
  • Block proposer node: A core or edge node generating a new data message.
  • Blend node: A core node that processes a data or cover message.
  • Blending: Cryptographically transforming and randomly delaying messages to shuffle their temporal order.
  • Broadcasting: Sending a data message payload (block proposal) to all Nomos nodes.
  • Disseminating: Relaying messages by core nodes through the network.
  • Epoch: 648,000 slots (each 1 second), with an average of 21,600 blocks per epoch.
  • Session: Time period with the same set of core nodes executing the protocol. Length follows epoch length (21,600 blocks on average).
  • Round: Primitive time measure (1 second) during which a node can emit a new message.
  • Interval: 30 rounds, approximating the time between two consecutive block production events.
  • Blending token: Information extracted from processed messages, used as proof of processing for rewards.

Node Types

  • Honest node: Follows the protocol fully.
  • Lazy node: Does not follow the protocol due to lack of incentives; only participates when directly beneficial.
  • Spammy node: Emits more messages than the protocol expects.
  • Unhealthy node: Emits fewer messages than expected (may be under attack).
  • Malicious node: Does not follow the protocol regardless of incentives.
  • Unresponsive node: Does not follow the protocol due to technical reasons.

Adversary Types

  • Passive adversary: Can only observe; cannot modify node behavior.
  • Active adversary: Can modify node behavior and observe the network.
  • Local observer: Passive adversary with a limited network view and the ability to observe the internals of a limited set of nodes.

Document Structure

This specification is organized into two distinct parts to serve different audiences and use cases:

Part I: Protocol Specification contains the normative requirements necessary for implementing an interoperable Blend Protocol node. This section defines the cryptographic primitives, message formats, network protocols, and behavioral requirements that all implementations MUST follow to ensure compatibility and maintain the protocol's privacy guarantees. Protocol designers, auditors, and those seeking to understand the core mechanisms should focus on this part.

Part II: Implementation Details provides non-normative guidance for implementers. This section offers practical recommendations, optimization strategies, and detailed examples that help developers build efficient and robust implementations. While these details are not required for interoperability, they represent best practices learned from reference implementations and can significantly improve performance and reliability.

This separation provides several benefits:

  1. Clarity of Requirements: Implementers can clearly distinguish between mandatory requirements for interoperability (Part I) and optional optimizations (Part II)
  2. Protocol Evolution: The core protocol specification (Part I) can remain stable while implementation guidance (Part II) evolves with new techniques and optimizations
  3. Multiple Implementations: Different implementations can make different trade-offs in Part II while maintaining full compatibility through adherence to Part I
  4. Audit Focus: Security auditors can concentrate on the normative requirements in Part I that are critical for the protocol's privacy guarantees
  5. Accessibility: Protocol researchers can understand the essential mechanisms without being overwhelmed by implementation details, while developers get the practical guidance they need

Part I: Protocol Specification


Protocol Overview

The Blend Protocol works as follows:

  1. Core nodes form a network by establishing encrypted connections with other core nodes at random
  2. A block proposer node selects several core nodes and creates a data message containing a block proposal that can only be processed by selected nodes in specified order
  3. The block proposer sends the data message to its neighbors (or connects to random core nodes if edge node)
  4. Core nodes disseminate (relay) the message to the rest of the network
  5. Core nodes generate new cover messages every round, blended with other messages
  6. When a data message reaches a designated blend node:
    • Message is cryptographically transformed (incoming/outgoing messages unlinkable by content)
    • Message is randomly delayed (unlinkable by timing observation)
  7. The blend node disseminates the processed message so next blend node can process it
  8. When message reaches the last blend node:
    • Node processes (decrypts and delays) the message
    • Extracts the block proposal payload
    • Broadcasts block proposal to Nomos Network

Note: The current protocol version is optimized for the privacy of core nodes. Edge nodes receive a lower privacy level, which is acceptable because they are assumed to be mobile, to lack static long-term network identifiers, and to hold lower stake.

Network Protocol

Network Formation

Core nodes form a peer-to-peer network at the beginning of each session:

  1. All core nodes retrieve the set of participating core nodes from SDP protocol
  2. Each core node establishes encrypted connections with randomly selected core nodes
  3. Network is considered formed when nodes reach minimum connectivity requirements

Edge nodes connect to core nodes on-demand when they need to send messages.

Minimal Network Size

The protocol requires a minimum number of core nodes to operate safely. If this minimum is not met, nodes MUST NOT use the Blend protocol and MUST broadcast data messages directly.

Network Maintenance

Nodes monitor connection quality and adjust their connections based on:

  • Message frequency and correctness
  • Network health indicators
  • Protocol compliance of peers

Nodes may close connections with misbehaving peers and establish new connections to maintain network quality.

Session Transitions

When a new session or epoch begins, the network implements a transition period to allow messages generated with old credentials to safely complete their journey through the network.

Quota Protocol

The protocol limits the number of messages that can be generated during a session through a quota system. Two types of quota exist:

  1. Core Quota: Limits cover message generation and blending operations for core nodes during a session
  2. Leadership Quota: Limits blending operations a block proposer can perform per proof of leadership

Nodes generate session-specific key pools, where each key is associated with a proof of quota. This ensures messages are properly rate-limited and nodes cannot exceed their allowed message generation capacity.

Message Protocol

Message Structure

Messages consist of three components:

  1. Public Header (H): Contains public key, proof of quota, and signature
  2. Encrypted Private Header (h): Contains blending headers for each hop, with proofs of selection
  3. Payload (P): The actual content (block proposal or cover message data)

Message Lifecycle

Messages follow a defined lifecycle through the network:

  1. Generation: Triggered by consensus lottery (data) or schedule (cover)
  2. Relaying: Nodes validate and forward messages to neighbors
  3. Processing: Designated nodes decrypt and extract next-hop information
  4. Delaying: Random delays hide timing correlations
  5. Releasing: Messages released according to delay schedule
  6. Broadcasting: Final nodes extract and broadcast block proposals

Proof Mechanisms

Proof of Quota (PoQ)

Guarantees that honestly generated messages use valid quota allocation. Two types exist:

  • Core Quota Proof: Validated message is within core node's session quota
  • Leadership Quota Proof: Validated message is within leader's quota per won slot

Combined proof uses logical OR of both proof types.

Proof of Selection (PoSel)

Makes node selection for message processing random and verifiable. Prevents:

  • Targeting specific nodes
  • Selfish behavior (sending all messages to self)
  • Predictable routing patterns

Rewarding Protocol

Nodes are rewarded for participating in the protocol:

  1. Message Processing: Nodes collect blending tokens as proof of processing
  2. Activity Proof: Probabilistic attestation using Hamming distance
  3. Two-Tier Rewards: Base reward for all active nodes, premium reward for nodes with minimal Hamming distance

Security Considerations

DoS Protection

Multiple mechanisms prevent denial-of-service attacks:

  • Quota system limits message generation
  • Connection monitoring detects spammy/malicious nodes
  • Minimal network size requirement
  • Message uniqueness verification

Privacy Properties

The protocol provides probabilistic unlinkability with quantifiable privacy guarantees. Time to link sender to proposal and time to infer stake increase significantly with each additional hop in the blending path.

Attack Resistance

Protection against various attack vectors:

  • Grinding attacks: Prevented by unpredictable session randomness
  • Tagging attacks: Addressed by mempool protections (NomosDA)
  • Timing attacks: Mitigated by random delays
  • Content inspection: Prevented by layered encryption
  • Replay attacks: Prevented by TLS and key uniqueness verification

Rationale

Design Decisions

Blending vs Mixing: Protocol uses blending (spatial anonymity through multiple nodes) rather than mixing (temporal anonymity through single node) for higher decentralization and censorship resistance.

Two-tier reward system: Base reward ensures fairness; premium reward continues motivating nodes through lottery mechanism.

Edge node privacy trade-off: Lower privacy acceptable as edge nodes are assumed mobile, without static identifiers, with lower stake.

Cover traffic motivation: Nodes must generate cover messages for own privacy; protocol enforces statistical indistinguishability.

Statistical bias: Modulo operation for node selection introduces negligible bias (< 2^{-128}) for expected network sizes.


Part II: Implementation Details


Network Implementation

Core Network Bootstrapping

At the beginning of a session:

  1. All core nodes retrieve fresh set of core nodes' connectivity information from SDP protocol
  2. Each core node selects at random a set of other core nodes and connects through fully encrypted connections
  3. After all core nodes connect, a new network is formed

Detailed Bootstrapping Procedure

  1. Core node retrieves set of core nodes' information from SDP protocol at session start
  2. If number of core nodes is below minimum (32), stop and use regular broadcasting
  3. Start opening new connections:
    • Select at random (without replacement) a node from set of core nodes
    • Establish secure TLS 1.3 connection using ephemeral Ed25519 keys
    • Identify neighbor using Neighbor Distinction Process (NDP)
    • Stop connecting after reaching maximum retries (3 by default)
  4. Repeat until connected to minimal core peering degree (4 by default, both incoming and outgoing count)
  5. Start accepting incoming connections and maintaining all connections:
    • Can maintain up to maximum connections with core nodes (8 by default)
    • Can receive up to maximum connections with edge nodes (300 by default)
  6. If two nodes open connections to each other:
    • Node with lower public key value (provider_id from SDP, compared via Base58 encoding) closes outgoing connection
    • Node with higher public key value closes incoming connection
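
For illustration, the step 6 tie-break can be expressed as a short non-normative helper. The function name is hypothetical, and equal-length Base58 encodings are assumed so that lexicographic string comparison matches value comparison:

```python
# Non-normative sketch of the duplicate-connection tie-break (step 6).
# Assumes provider_id values arrive as Base58-encoded strings of equal
# length, so string comparison agrees with value comparison.

def resolve_duplicate_connection(my_id: str, peer_id: str) -> str:
    """Return which leg of a duplicate connection pair this node closes."""
    if my_id == peer_id:
        raise ValueError("provider_id values must be distinct")
    # Lower provider_id closes its outgoing leg; higher closes incoming.
    return "outgoing" if my_id < peer_id else "incoming"
```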

Connection Details

  • Protocol: libp2p with TLS 1.3 (not older)
  • Cryptographic scheme: Ed25519 with ephemeral keys
  • libp2p protocol name:
    • Mainnet: /nomos/blend/1.0.0
    • Testnet: /nomos-testnet/blend/1.0.0

Connectivity Maintenance Implementation

Core nodes monitor connection quality by verifying message correctness and frequency:

  1. Count messages after successful connection-level decryption during observation window (30 rounds)
  2. If frequency exceeds maximum: mark neighbor as spammy, close connection, establish new one
  3. If frequency below minimum: mark connection as unhealthy, establish additional connection
  4. Unhealthy connections are monitored continuously and may recover
  5. If maximum connections exceeded: log situation, pause new connections until below maximum
  6. Edge nodes MUST send message immediately after connection then close; otherwise core node closes connection
  7. Messages with invalid proof of quota or signature from core node: mark as malicious, close connection
  8. Messages with duplicate identifier: close connection with neighbor (with grace period for network delay)
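
A non-normative sketch of the frequency check in steps 1 through 3 follows; the minimum and maximum bounds are illustrative parameters, not values fixed by this specification:

```python
# Non-normative sketch of the per-neighbor frequency check. Only the
# 30-round observation window comes from this section; min/max message
# bounds are illustrative inputs.

OBSERVATION_WINDOW = 30  # rounds

def classify_neighbor(msg_count: int, min_msgs: int, max_msgs: int) -> str:
    """Classify a neighbor from messages counted over one observation window."""
    if msg_count > max_msgs:
        return "spammy"      # close the connection, establish a new one
    if msg_count < min_msgs:
        return "unhealthy"   # keep monitoring, open an additional connection
    return "healthy"
```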

Edge Network Bootstrapping Implementation

Edge nodes connect to core nodes when needing to send messages:

  1. Retrieve set of core nodes from SDP at session start
  2. If below minimum size (32), stop and use regular broadcasting
  3. When needing to send message, select random core node
  4. Establish secure TLS connection
  5. Identify and authenticate using NDP
  6. Send message and close connection
  7. Repeat for communication redundancy number (4 by default)

Transition Period Implementation

When new session or epoch begins, protocol implements Transition Period (30 rounds) to allow messages generated with old keys to safely exit the network:

New session:

  • Validate message proofs against both new and past session-related public input for TP duration
  • Open new connections for new session
  • Maintain old connections and process messages for TP duration

New epoch:

  • Validate message proofs against both new and past epoch-related public info for TP duration

Quota Implementation

Quota limits the number of messages that can be generated during a session for network health and fair reward calculation.

Core Quota

Core quota (Q_C) defines messaging allowance for a core node during single session:

Q_C = ⌈(C · (β_C + R_C · β_C)) / N⌉

Where:

  • C = S · F_C = expected number of cover messages per session by all core nodes
  • β_C = 3 = expected blending operations per cover message
  • R_C = redundancy parameter for cover messages
  • N = number of core nodes from SDP

Total core quota (all nodes): Q^Total_C = N · Q_C = C · (β_C + R_C · β_C)

Leadership Quota

Leadership quota (Q_L) defines blending operations a block proposer can perform. Single quota used per proof of leadership:

Q_L = β_D + β_D · R_D

Where:

  • β_D = 3 = expected blending operations per data message
  • R_D = redundancy parameter for data messages

Average data messages per session: D_Avg = L_Avg · Q_L, where L_Avg = 21,600 (average leaders per session)
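
Both quota formulas can be checked with a short non-normative sketch; only β_C = β_D = 3 and L_Avg = 21,600 come from this section, while C, R_C, R_D, and N below are illustrative inputs:

```python
import math

# Non-normative sketch of the quota formulas above.

def core_quota(C: int, beta_C: int, R_C: int, N: int) -> int:
    """Q_C = ceil(C * (beta_C + R_C * beta_C) / N)."""
    return math.ceil(C * (beta_C + R_C * beta_C) / N)

def leadership_quota(beta_D: int, R_D: int) -> int:
    """Q_L = beta_D + beta_D * R_D."""
    return beta_D + beta_D * R_D

Q_L = leadership_quota(beta_D=3, R_D=1)              # 6 with redundancy 1
D_avg = 21_600 * Q_L                                 # average data messages per session
Q_C = core_quota(C=100_000, beta_C=3, R_C=1, N=64)   # illustrative inputs
```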

Quota Application Details

Nodes create session-specific key pools:

K^{n,s}_q = {(K^n_0, k^n_0, π_Q^{K^n_0}), ..., (K^n_{q-1}, k^n_{q-1}, π_Q^{K^n_{q-1}})}

Where:

  • q = Q_C + Q^n_L = sum of core quota and leadership quota for node n
  • K^n_i = i-th public key
  • k^n_i = corresponding private key
  • π_Q^{K^n_i} = proof of quota (confirms i < h without disclosing node identity)

Message Structure Implementation

A node n constructs message M = (H, h, P):

Public Header (H)

  • K^n_i: public key from set K^n_h
  • π^{K^n_i}_Q: proof of quota for key (contains key nullifier)
  • σ_{K^n_i}(P_i): signature of i-th encapsulation, verifiable by K^n_i

Encrypted Private Header (h)

Contains β_max blending headers (b_1, ..., b_{β_max}), each with:

  • K^n_l: public key from set K^n_h
  • π^{K^n_l}_Q: proof of quota for key
  • σ_{K^n_l}(P_l): signature of l-th encapsulation
  • π^{K^n_{l+1}, m_{l+1}}_S: proof of selection of node index m_{l+1}
  • Ω: flag indicating last blending header

Payload (P)

Message content (block proposal or random data for cover messages)

Encapsulation Overhead: Using Groth16 SNARKs, total overhead is ~1123 bytes for 3 hops (~3% increase for typical block proposal of 33,129 bytes).
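
As a non-normative reading aid, the message components map onto containers such as the following. Field names mirror the notation above; the concrete byte layout is defined by the Message Formatting Specification, so treat these types purely as illustration:

```python
from dataclasses import dataclass

# Non-normative containers for M = (H, h, P); byte layout is defined
# elsewhere in the spec.

@dataclass
class PublicHeader:            # H
    public_key: bytes          # K^n_i
    proof_of_quota: bytes      # pi_Q (contains the key nullifier)
    signature: bytes           # sigma over the i-th encapsulation

@dataclass
class BlendingHeader:          # one of b_1 .. b_{beta_max} inside h
    public_key: bytes          # K^n_l
    proof_of_quota: bytes
    signature: bytes
    proof_of_selection: bytes  # pi_S for node index m_{l+1}
    is_last: bool              # Omega flag

@dataclass
class Message:                 # M = (H, h, P)
    public_header: PublicHeader
    private_header: list[BlendingHeader]  # encrypted on the wire
    payload: bytes
```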

Message Lifecycle Implementation

Generation Details

Message generation is triggered by:

  1. Data message: Core/edge node won consensus lottery and has proof of leadership
  2. Cover message: Released at random by core node per Cover Message Schedule

Generation process:

  1. Generate keys according to Key Types and Generation Specification
    • Each key uses message-type-specific allowance (quota)
    • Correct usage proven by Proof of Quota
  2. Format payload according to Payload Formatting Specification
  3. Encapsulate payload using Message Encapsulation Mechanism
    • Each key for single encapsulation, processable by single node
    • Node selection is random and deterministic, provable by Proof of Selection
  4. Format message according to Message Formatting Specification
  5. Release message according to Releasing logic

Relaying Details

When node receives message from neighbor:

  1. Check public header:
    • Version MUST equal 0x01
    • Proof of quota MUST be valid
    • Signature MUST be valid
    • Public key MUST be unique
  2. Release message to network (Releasing section)
  3. Concurrently, add message to processing queue (Processing section)

Duplicate Detection: Node MUST cache public key for every relayed message for duration of session plus safety buffer and transition period (~65 MB).
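
A minimal non-normative sketch of such a cache, keyed by session so entries can be dropped once the safety buffer and transition period lapse (class and method names are illustrative):

```python
# Non-normative sketch of the duplicate-detection cache: remember every
# relayed message's public key for the session plus safety buffer and
# transition period, and reject repeats.

class SeenKeyCache:
    def __init__(self) -> None:
        self.seen: dict[int, set[bytes]] = {}   # session -> public keys

    def check_and_add(self, session: int, public_key: bytes) -> bool:
        """Return False if the key was already relayed (duplicate)."""
        keys = self.seen.setdefault(session, set())
        if public_key in keys:
            return False
        keys.add(public_key)
        return True

    def end_session(self, old_session: int) -> None:
        # Call once the safety buffer and transition period have lapsed.
        self.seen.pop(old_session, None)
```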

Processing Details

When message M is received with correct public header:

  1. Decapsulate message per Message Encapsulation Mechanism
  2. If decapsulation succeeds:
    • Validate proof of selection (points to node index in SDP list)
    • Store blending token: τ = (π^{K^n_l}_Q, π^{K^n_l,l}_S)
    • If last flag set (Ω == 1):
      • If payload is block proposal: verify structure and broadcast
      • If payload is cover message: discard
    • Else:
      • Validate decapsulated public header (key uniqueness, signature, proof of quota)
      • Format message per Message Formatting Specification
      • Attempt subsequent decapsulation recursively
      • If decapsulation fails: randomly delay and release to neighbors
  3. If decapsulation fails: return failure message

Delaying Details

Purpose: Hide timing correlations between incoming/outgoing messages.

Maximum delay between release attempts: Δ_max = 3 rounds

Delaying logic:

  1. Select random delay: δ ∈ {1, ..., Δ_max}
  2. Start counting rounds from r_s
  3. Every round check if r_c == r_s + δ:
    • Release messages from queue (Releasing logic)
    • Select new random delay
    • Restart round counting

Release round selection works independently of queue state.
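
The delaying logic admits a compact non-normative sketch; the class is illustrative, and round ticking is simplified to a per-round callback:

```python
import secrets

# Non-normative sketch of the delaying logic: draw delta uniformly from
# {1, ..., DELTA_MAX}, release queued messages when the release round
# arrives, then re-draw.

DELTA_MAX = 3  # rounds

class DelayScheduler:
    def __init__(self) -> None:
        self.queue: list[bytes] = []
        self.rounds_until_release = 1 + secrets.randbelow(DELTA_MAX)

    def on_round(self) -> list[bytes]:
        """Call once per round; returns messages to release this round."""
        self.rounds_until_release -= 1
        if self.rounds_until_release > 0:
            return []
        released, self.queue = self.queue, []
        # Release rounds are drawn independently of queue state.
        self.rounds_until_release = 1 + secrets.randbelow(DELTA_MAX)
        return released
```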

Releasing Details

Release process:

  • Received messages: Immediately released to all neighbors (except sender) after header validation
  • Processed messages: Queued and released at next release round (per Delaying logic)
  • Generated messages: Released at beginning of next round after generation
  • Statistical indistinguishability: When data message generated, one random unreleased cover message MUST be removed from schedule
  • Multiple messages: If multiple messages scheduled for same round, randomly shuffle before release

Expected messages per release round for single node:

μ = ⌈(Δ_max · β_C · α) / N⌉

Where:

  • Δ_max = 3 (maximal delay)
  • β_C = 3 (blending operations per cover message)
  • α ≈ 1.03 (normalization constant for data messages)
  • N = number of core nodes

Results:

  • N=16: μ=1 message per round
  • N=8: μ=2 messages per round
  • N=4: μ=3 messages per round
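
The table above can be reproduced directly from the formula (non-normative):

```python
import math

# mu = ceil(DELTA_MAX * beta_C * alpha / N), using the parameter values
# stated above.

def expected_messages_per_release_round(N: int, delta_max: int = 3,
                                        beta_C: int = 3,
                                        alpha: float = 1.03) -> int:
    return math.ceil(delta_max * beta_C * alpha / N)

for N in (16, 8, 4):
    print(N, expected_messages_per_release_round(N))  # N=16 -> 1, N=8 -> 2, N=4 -> 3
```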

Broadcasting Implementation

When payload added to broadcasting queue:

  1. Verify payload contains valid block proposal structure (proposal not validated yet)
  2. Extract block proposal
  3. Broadcast to Nomos broadcasting channel after random delay

Cover Message Schedule Implementation

Core nodes generate cover messages in fully random manner to maintain privacy. Messages evenly distributed across session duration.

Safety Buffer Implementation

Problem: Session length in rounds is non-deterministic due to random block production. Safety buffer (100 intervals) reserves cover messages for when session lasts longer than expected.

Cover Message Generation Algorithm

Given:

  • Core quota Q_C
  • Expected blending operations β_C = 3
  • Last interval I_end = 21,600
  • Last interval of safety buffer I_max > I_end

For every session:

  1. Calculate maximum cover messages: c = ⌈Q_C / β_C⌉
  2. For i ∈ {1, ..., c}:
    • Select random interval I ∈ {1, ..., I_max}
    • Select random round r ∈ {1, ..., |I|}
    • If (I, r) already selected, repeat; else add to winning slots W
  3. During each interval I and round r: if (I, r) ∈ W, generate cover message

Important: Number of generated cover messages MUST be reduced by number of data messages node generates during session (for statistical indistinguishability).
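
A non-normative sketch of the schedule generation follows; the interval length, I_end, and the safety buffer come from the Parameters Summary, while Q_C is an illustrative input:

```python
import math
import secrets

# Non-normative sketch of the cover-message schedule: draw c distinct
# (interval, round) winning slots uniformly at random.

INTERVAL_LEN = 30          # rounds per interval
I_END = 21_600             # last regular interval
I_MAX = I_END + 100        # last interval including the safety buffer

def cover_schedule(Q_C: int, beta_C: int = 3) -> set[tuple[int, int]]:
    c = math.ceil(Q_C / beta_C)            # maximum cover messages
    winning: set[tuple[int, int]] = set()
    while len(winning) < c:
        interval = 1 + secrets.randbelow(I_MAX)
        rnd = 1 + secrets.randbelow(INTERVAL_LEN)
        winning.add((interval, rnd))       # set membership re-draws collisions
    return winning

# During operation: generate a cover message whenever the current
# (interval, round) is in the returned set, and remove one unreleased
# slot for each data message generated during the session.
```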

Proof Mechanisms Implementation

Proof of Quota Implementation (PoQ)

Guarantees honestly generated messages are relayed and disseminated. Two parts:

Core Quota Proof (π^{K^n_a}_{Q_C}) is true when:

  • n ∈ N = SDP(s): node is in registered set (identity hidden)
  • K^n_a ∈ K^{n,s}_h: key generated by node for session
  • a < Q_C: index limits proof nullifiers per session

Public input: s, K^n_a, Q_C
Private input: n, a
Public output: ν_s (key nullifier uniquely identifying the PoQ)

Leadership Quota Proof (π^{K^n_b}_{Q_L}) is true when:

  • ∃ π^{n,e}_L: valid proof of leadership for node n in epoch e
  • K^n_b ∈ K^{n,s}_h: key generated by node for session
  • b < Q^n_L: index limits proof nullifiers per won slot

Public input: e, s, K^n_b, Q^n_L
Private input: π^{n,e}_L, n, b
Public output: ν_s (key nullifier)

Combined Proof: π^{K^n_i}_Q = π^{K^n_i}_{Q_C} ∨ π^{K^n_i}_{Q_L} (logical OR)

Proof of Selection Implementation (PoSel)

Makes node selection for message processing random and verifiable. Prevents targeting specific nodes and selfish behavior.

PoSel (π^{K^n_i, m_i}_S) is true when:

  • m_i = CSPRBG(H_N(ρ))_8 mod N, where:
    • ρ = secret selection randomness (little-endian)
    • m_i = recipient node index (little-endian)
    • CSPRBG()_8 = cryptographically secure pseudo-random bytes generator (8 bytes, little-endian)
    • H_N() = domain separated blake2b hash
    • N = number of core nodes
  • v == v', where:
    • v = key nullifier of π^{K^n_i}_Q
    • v' = H_Ψ(b"KEY_NULLIFIER\V1", ρ)
    • H_Ψ() = Poseidon2 hash function

PoSel MUST be used alongside PoQ as they are tightly coupled.
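
A non-normative sketch of the index derivation follows. hashlib's blake2b personalization stands in for the domain separation of H_N, and the digest bytes stand in for the CSPRBG output, so both the tag value and that collapsing are assumptions of this sketch:

```python
import hashlib

# Non-normative sketch of m_i = CSPRBG(H_N(rho))_8 mod N. The digest
# bytes stand in for the CSPRBG output; the personalization tag is an
# assumption, not the normative constant.

def node_index(rho: bytes, N: int) -> int:
    digest = hashlib.blake2b(rho, person=b"POSEL_NODE_IDX").digest()
    # Take 8 bytes, interpret little-endian, reduce modulo N.
    return int.from_bytes(digest[:8], "little") % N
```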

Rewarding Implementation

Rewarding Motivation Details

Nodes must be rewarded for protocol actions:

  1. Message generation: Especially for cover messages (data messages rewarded through consensus)
  2. Message relaying: Motivated by connection quality monitoring (fear of losing reward)
  3. Message processing: Motivated by collecting blending tokens (activity-based reward)
  4. Message broadcasting: Motivated by increasing service income pool

Blending Tokens Implementation

When node processes message, it stores blending token:

τ = (π^{K^n_l}_Q, π^{K^n_l,l}_S)

Tokens stored with context (session number) in set Τ^{l,s}.

Session Randomness Implementation

Rewarding requires common unbiased randomness provided by consensus:

R_s = H('BLEND_SESSION_RANDOMNESS\V1' || R_e(s) || s)_512

Where:

  • H()_512 = blake2b hash (512 bits output)
  • R_e(s) = epoch nonce from consensus for epoch corresponding to session s
  • s = session number
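
A non-normative sketch of this derivation; the 8-byte little-endian encoding of s is an assumption, as this section does not fix it:

```python
import hashlib

# Non-normative sketch of the session randomness derivation using
# blake2b with a 512-bit (64-byte) digest. The encoding of s is an
# assumption of this sketch.

def session_randomness(epoch_nonce: bytes, s: int) -> bytes:
    data = (b"BLEND_SESSION_RANDOMNESS\\V1"
            + epoch_nonce
            + s.to_bytes(8, "little"))
    return hashlib.blake2b(data, digest_size=64).digest()
```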

Activity Proof Implementation

Node activity proof (π^{l,τ,s}_A) attests in probabilistic manner that node l was active during session s by presenting blending token τ.

Activity proof is true when:

  • Node l has blending token τ ∈ Τ^{l,s} collected during session s where:
    • Proof of Quota π^{K^n_l}_Q ∈ τ is true for session s
    • Proof of Selection π^{K^n_l,l}_S ∈ τ is true for session s
  • Hamming distance between token and next session randomness is below activity threshold:
Δ_H(H(τ)_ε, H(R_{s+1})_ε) < A_ε

Where:

  • H() = blake2b hash
  • ε = ⌈log_2(Q^Total_C + 1) / 8⌉ · 8 (bits, rounded to full bytes)

Activity Threshold:

A_ε = χ - ν - θ

Where:

  • ν = ⌈log_2(N + 1)⌉ (bits needed for number of nodes)
  • χ = ⌈log_2(Q^Total_C + 1)⌉ (bits needed for all blending tokens)
  • θ = 1 (sensitivity parameter)
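
A non-normative sketch of the activity check follows; taking the first ε bits of each hash is this sketch's interpretation of the H(·)_ε notation:

```python
import math

# Non-normative sketch of the activity condition
# Delta_H(H(tau)_eps, H(R_{s+1})_eps) < A_eps.

def epsilon_bits(Q_total_C: int) -> int:
    """eps = ceil(log2(Q_total_C + 1) / 8) * 8, i.e. rounded to full bytes."""
    return math.ceil(math.log2(Q_total_C + 1) / 8) * 8

def activity_threshold(Q_total_C: int, N: int, theta: int = 1) -> int:
    chi = math.ceil(math.log2(Q_total_C + 1))  # bits for all blending tokens
    nu = math.ceil(math.log2(N + 1))           # bits for the number of nodes
    return chi - nu - theta

def is_active(token_hash: bytes, randomness_hash: bytes,
              Q_total_C: int, N: int) -> bool:
    nbytes = epsilon_bits(Q_total_C) // 8
    x = (int.from_bytes(token_hash[:nbytes], "big")
         ^ int.from_bytes(randomness_hash[:nbytes], "big"))
    return bin(x).count("1") < activity_threshold(Q_total_C, N)
```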

Active Message Implementation

Node l constructs an active message M_A = {l, τ, s, π^{l,τ,s}_A} for every session, following the Active Message format.

The active message metadata field MUST start with a one-byte version field (fixed to 0x01), followed by the Activity Proof.

Node l selects activity proof minimizing Hamming distance to new randomness:

π^{l,τ,s}_A = min_{Δ_H}(true(π^{i,τ,s}_A))

An active message for session s MUST only be sent during session s+1; otherwise it is rejected.

The ledger MUST accept only a single active message per node per session. Duplicates are rejected.

Reward Calculation Details

Rewards for session s calculated as:

  1. No calculation if number of nodes from SDP below Minimal Network Size (32)
  2. Count base proofs: B = number of true activity proofs
  3. Count premium proofs: P = number of true activity proofs with minimal Hamming distance
  4. Calculate base reward: R = I / (B + P), where I = service income for session s
  5. Calculate node reward:
R(n) = R · [true(π^{i,τ,s}_A) + min_{Δ_H}(true(π^{i,τ,s}_A))]

Base reward (R) paid to all nodes with true activity proof; reward doubled for nodes with minimal Hamming distance proof.
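
A non-normative sketch of this split; function names are illustrative:

```python
# Non-normative sketch of the per-session reward split. B and P count
# valid activity proofs; nodes holding a minimal-Hamming-distance proof
# earn both the base and the premium share, i.e. 2 * R.

def base_reward(income: float, B: int, P: int) -> float:
    """R = I / (B + P)."""
    return income / (B + P)

def node_reward(income: float, B: int, P: int,
                is_active: bool, is_premium: bool) -> float:
    R = base_reward(income, B, P)
    return R * (int(is_active) + int(is_active and is_premium))
```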

Rewarding Distribution Logic Details

  1. Node sends Active Message with activity proof in metadata field
    • Must point to single declaration (declaration_id) and single provider identity (provider_id)
    • Any reuse of provider_id makes Active Message invalid
  2. Active Message sent after end of session s (during s+1), after transition period
    • Delay allows including tokens from transition period
  3. When session s+2 begins, Mantle distributes rewards per Service Reward Distribution Protocol
    • Delay required to calculate reward partition
  4. No Active Message on time = no reward

Security Considerations Implementation

DoS Protection Details

The protocol includes multiple DoS mitigation mechanisms:

  • Quota system limits message generation
  • Connectivity maintenance monitors and drops spammy/malicious nodes
  • Minimal network size requirement (32 nodes)
  • Connection limits prevent resource exhaustion
  • Message uniqueness verification prevents replay attacks

Privacy Properties Details

Unlinkability: For adversary controlling 10% stake targeting 0.1% stake node with 3-hop blending:

  • Time to Link (TTL): > 9 epochs
  • Time to Infer (TTI): > 10 years (487 epochs)

Trade-offs:

  • Each additional hop increases TTL/TTI by ~10x
  • Latency penalty: ~1.5s per hop
  • Optimal configuration: 3-hop blending (4.5s average latency increase)

Attack Resistance Details

  • Grinding attacks: Prevented by unpredictable session randomness
  • Tagging attacks: Addressed by NomosDA (separate mempool protection)
  • Timing attacks: Mitigated by random delays (Δ_max = 3 rounds)
  • Content inspection: Prevented by layered encryption
  • Replay attacks: Prevented by TLS and public key uniqueness verification

Rationale Implementation

Design Decisions Details

Blending vs Mixing: Anonymity in blending comes from processing same message by multiple nodes (spatial anonymity), while mixing processes multiple messages by same node (temporal anonymity). Blending chosen for higher decentralization and censorship resistance.

Two-tier reward system: Base reward ensures fairness (all active nodes receive it); premium reward continues motivating lazy nodes through lottery mechanism.

Edge node privacy trade-off: Lower privacy acceptable as edge nodes assumed mobile, without static identifiers, with lower stake, and sporadic connections.

Cover traffic motivation: Nodes must generate cover messages for own privacy protection; protocol enforces indistinguishability by requiring cover message removal when data message generated.

Statistical bias: Modulo operation for node selection introduces negligible bias (< 2^{-128} for N < 2^{128}), acceptable for expected network sizes (< 10 million nodes).

Parameters Summary

Global Parameters

  • Session length (S): 648,000 rounds (average 21,600 blocks)
  • Interval length: 30 rounds
  • Maximum delay (Δ_max): 3 rounds
  • Maximum blending operations (β_max): 3
  • Expected blending operations (β_C, β_D): 3
  • Observation window (W): 30 rounds
  • Safety buffer: 100 intervals
  • Transition period: 30 rounds
  • Minimal network size: 32 nodes

Core Node Parameters

  • Minimal core peering degree (Φ_{CC}^{Min}): 4
  • Maximum core peering degree (Φ_{CC}^{Max}): 8
  • Maximum edge connections (Φ_{CE}^{Max}): 300
  • Maximum connection retries (Ω_C): 3

Edge Node Parameters

  • Connection redundancy (Φ_{EC}): 4
  • Maximum connection retries (Ω_E): 3

References

Normative

Informative

Copyright and related rights waived via CC0.