LOGOS-CAPABILITY-DISCOVERY
| Field | Value |
|---|---|
| Name | Logos Capability Discovery Protocol |
| Slug | 107 |
| Status | raw |
| Category | Standards Track |
| Editor | Arunima Chaudhuri [email protected] |
| Contributors | Ugur Sen [email protected], Hanno Cornelius [email protected] |
Timeline
- 2026-01-27 — ab9337e — Add recipe for algorithms (#232)
- 2026-01-19 — f24e567 — Chore/updates mdbook (#262)
- 2026-01-16 — 89f2ea8 — Chore/mdbook updates (#258)
- 2025-12-22 — 0f1855e — Chore/fix headers (#239)
- 2025-12-22 — b1a5783 — Chore/mdbook updates (#237)
- 2025-12-18 — d03e699 — ci: add mdBook configuration (#233)
- 2025-12-09 — aaf158a — VAC/RAW/LOGOS-DISCOVERY-CAPABILITY RFC (#212)
Note: This specification is currently a WIP and undergoing a high rate of changes.
Abstract
This RFC defines the Logos Capability Discovery protocol, a discovery mechanism inspired by DISC-NG service discovery and built on top of Kad-DHT.
The protocol enables nodes to:
- Advertise their participation in specific services
- Efficiently discover other peers participating in those services
In this RFC, the terms "capability" and "service" are used interchangeably. Within Logos, a node’s “capabilities” map directly to the “services” it participates in. Similarly, "peer" and "node" refer to the same entity: a participant in the Logos Discovery network.
Logos discovery extends Kad-DHT toward a multi-service, resilient discovery layer, enhancing reliability while maintaining compatibility with existing Kad-DHT behaviour. For anything not explicitly specified herein, behaviour identical to Kad-DHT may be assumed.
Motivation
In decentralized networks supporting multiple services, efficient peer discovery for specific services is critical. Traditional approaches face several challenges:
- Inefficiency: Random-walk–based discovery is inefficient for unpopular services.
- Load imbalance: A naive approach where nodes advertise their service at DHT peers whose IDs are closest to the service ID leads to hotspots and overload at popular services.
- Scalability: Discovery must scale logarithmically across many distinct services.
Logos discovery addresses these through:
- Service-specific routing tables
- Adaptive advertisement placement with admission control
- Improved lookup operations balancing efficiency and resilience
Format Specification
The key words “MUST”, “MUST NOT”, “REQUIRED”, “SHALL”, “SHALL NOT”, “SHOULD”, “SHOULD NOT”, “RECOMMENDED”, “MAY”, and “OPTIONAL” in this document are to be interpreted as described in RFC 2119.
Protocol Roles
The Logos capability discovery protocol defines three roles that nodes can perform:
Advertiser
Advertisers are nodes that participate in a service and want to be discovered by other peers.
Responsibilities:
- Advertisers SHOULD register advertisements for their service across registrars
- Advertisers SHOULD handle registration responses
Discoverer
Discoverers are nodes attempting to find peers that provide a specific service.
Responsibilities:
- Discoverers SHOULD query registrars for advertisements of a service
Registrar
Registrars are nodes that store and serve advertisements.
Responsibilities:
- Registrars MUST use a waiting-time-based admission control mechanism to decide whether to store an advertisement received from an advertiser.
- Registrars SHOULD respond to advertisement queries from discoverers.
Definitions
DHT Routing Table
Every participant in the Kad-DHT peer discovery layer
maintains the peer routing table KadDHT(peerID).
It is a distributed key-value store mapping
peer IDs (keys) to their matching
signed peer records (values).
It is centered on the node's own peerID.
“Centered on” means the table is organized using that ID as the reference point for computing distances to other peers and assigning peers to buckets.
Bucket
Each routing table is organized into buckets. A bucket is a logical container that stores information about peers at a particular distance range from a reference ID.
Conceptually:
- Each bucket corresponds to a specific range in the XOR distance metric
- Peers are assigned to buckets based on their XOR distance from the table's center ID
- Buckets enable logarithmic routing by organizing the keyspace into manageable segments
The number of buckets and their organization is described in the Distance section. Implementation considerations for bucket management are detailed in the Bucket Management section.
Service
A service is a logical sub-network within the larger peer-to-peer network. It represents a specific capability a node supports — for example, a particular protocol or functionality it offers. A service MUST be identified by a libp2p protocol ID via the identify protocol.
Service ID
The service ID service_id_hash MUST be the SHA-256 hash of the protocol ID.
For example:
| Service Identifier | Service ID |
|---|---|
| /waku/store/1.0.0 | 313a14f48b3617b0ac87daabd61c1f1f1bf6a59126da455909b7b11155e0eb8e |
| /libp2p/mix/1.2.0 | 9c55878d86e575916b267195b34125336c83056dffc9a184069bcb126a78115d |
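For illustration, a minimal Python sketch of this derivation (assuming the protocol ID is hashed as a UTF-8 string); the printed digests should match the table above:

```python
import hashlib

def service_id(protocol_id: str) -> str:
    """service_id_hash = SHA-256 of the libp2p protocol ID."""
    return hashlib.sha256(protocol_id.encode("utf-8")).hexdigest()

print(service_id("/waku/store/1.0.0"))   # expected to match the first table row
print(service_id("/libp2p/mix/1.2.0"))   # expected to match the second table row
```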
Advertisement Cache
An advertisement cache ad_cache is a bounded storage structure
maintained by registrars to store accepted advertisements.
Advertise Table
For every service it participates in, an advertiser node MUST maintain an
advertise table AdvT(service_id_hash) centered on service_id_hash.
The table MAY be initialized using
peers from the advertiser’s KadDHT(peerID) routing table.
It SHOULD be updated opportunistically through interactions with
registrars during the advertisement process.
Each bucket in the advertise table contains a list of registrar peers at a
particular XOR distance range from service_id_hash, which are candidates for placing
advertisements.
Search Table
For every service it attempts to discover, a discoverer node MUST maintain a
search table DiscT(service_id_hash) centered on service_id_hash.
The table MAY be initialized using
peers from the discoverer’s KadDHT(peerID) routing table.
It SHOULD be updated through interactions with
registrars during lookup operations.
Each bucket in the search table contains a list of registrar peers at a
particular XOR distance range from service_id_hash, which discoverers can query to
retrieve advertisements for that service.
Registrar Table
For every service for which it stores and serves advertisements,
a registrar node SHOULD maintain a registrar table RegT(service_id_hash)
centered on service_id_hash.
The table MAY be initialized using peers from the
registrar’s KadDHT(peerID) routing table.
It SHOULD be updated opportunistically
through interactions with advertisers and discoverers.
Each bucket in the registrar table contains a list of peer nodes
at a particular XOR distance range from service_id_hash.
Registrars use this table to return
closerPeers in REGISTER and GET_ADS responses,
enabling advertisers and discoverers to
refine their service-specific routing tables.
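The three service-specific tables share the same shape: m buckets centered on service_id_hash. A minimal Python sketch of such a table is shown below; ServiceTable is an illustrative name, and it relies on the bucket_index helper sketched later in the Distance section:

```python
import random

class ServiceTable:
    """Shared shape of AdvT/DiscT/RegT: m buckets centered on service_id_hash."""
    def __init__(self, service_id_hash: bytes, m: int = 16):
        self.center = service_id_hash
        self.buckets = [[] for _ in range(m)]  # bucket i: peers in XOR-distance range i

    def add(self, peer_id_hash: bytes) -> None:
        # bucket_index() is the CLZ-based helper sketched in the Distance section
        i = bucket_index(self.center, peer_id_hash)
        if peer_id_hash not in self.buckets[i]:  # buckets hold unique peers
            self.buckets[i].append(peer_id_hash)

    def get_random_node(self, i: int):
        """Random peer from bucket i, or None if the bucket is empty.

        Note: the specification's getRandomNode() additionally remembers
        already returned nodes; that bookkeeping is omitted in this sketch."""
        return random.choice(self.buckets[i]) if self.buckets[i] else None
```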
Address
A multiaddress is a standardized way used in libp2p to represent network addresses. Implementations SHOULD use multiaddresses for peer connectivity. However, implementations MAY use alternative address representations if they:
- Remain interoperable
- Convey sufficient information (IP + port) to establish connections
Signature
Refer to Peer Ids and Keys
for the supported signature schemes.
In the base Kad-DHT specification, signatures are optional,
typically implemented as a PKI signature over the tuple (key || value || author).
In this RFC, digital signatures MUST be used to
authenticate advertisements and tickets.
Expiry Time E
E is advertisement expiry time in seconds.
The expiry time E is a system-wide parameter,
not an individual advertisement field or parameter of an individual registrar.
Data Structures
Advertisement
An advertisement indicates that a specific node participates in a service.
In this RFC we refer to advertisement objects as ads.
For a single advertisement object we use ad.
An advertisement logically represents:
- Service identification: Which service the node participates in (via service_id_hash)
- Peer identification: The advertiser's unique peer ID
- Network addresses: How to reach the advertiser (multiaddrs)
- Authentication: Cryptographic proof that the advertiser controls the peer ID
Implementations are RECOMMENDED to use ExtensiblePeerRecord (XPR) encoding for advertisements. See the Advertisement Encoding section in the wire protocol specification for transmission format details.
Ticket
Tickets are digitally signed objects issued by registrars to advertisers to track accumulated waiting time for admission into the advertisement cache.
A ticket logically represents:
- Advertisement reference: The advertisement this ticket is associated with
- Time tracking: When the ticket was created (t_init) and last modified (t_mod)
- Waiting time: How long the advertiser must wait before retrying (t_wait_for)
- Authentication: Cryptographic proof that the registrar issued this ticket
Tickets enable stateless admission control at registrars. Advertisers accumulate waiting time across registration attempts by presenting tickets from previous attempts.
See the RPC Messages section for the wire format specification of tickets and the Registration Flow section for how tickets are used in the admission protocol.
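As a hedged sketch of ticket authentication (Python with the cryptography package; the signature covers (ad || t_init || t_mod || t_wait_for) per the protobuf comment in RPC Messages, but the integer widths used in this concatenation are an assumption — the wire format is authoritative):

```python
from cryptography.hazmat.primitives.asymmetric import ed25519

def ticket_payload(ad: bytes, t_init: int, t_mod: int, t_wait_for: int) -> bytes:
    # (ad || t_init || t_mod || t_wait_for); widths mirror the protobuf field
    # types (uint64, uint64, uint32) -- an assumption, not normative.
    return (ad + t_init.to_bytes(8, "big")
               + t_mod.to_bytes(8, "big")
               + t_wait_for.to_bytes(4, "big"))

registrar_key = ed25519.Ed25519PrivateKey.generate()
payload = ticket_payload(b"<encoded-ad>", 1234567890, 1234567900, 300)
signature = registrar_key.sign(payload)
# Verification raises cryptography.exceptions.InvalidSignature on failure:
registrar_key.public_key().verify(signature, payload)
```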
Protocol Specifications
System Parameters
The system parameters are derived directly from the DISC-NG paper. Implementations SHOULD modify them as needed based on specific requirements.
| Parameter | Default Value | Description |
|---|---|---|
| K_register | 3 | Max number of active (i.e. unexpired) registrations plus ongoing registration attempts per bucket |
| K_lookup | 5 | Number of random registrar nodes queried by discoverers for each bucket in the search table |
| F_lookup | 30 | Number of advertisers to find in the lookup process; the lookup stops once this many advertisers have been found |
| F_return | 10 | Max number of service-specific peers returned from a single registrar |
| E | 900 seconds | Advertisement expiry time (15 minutes) |
| C | 1,000 | Advertisement cache capacity |
| P_occ | 10 | Occupancy exponent for waiting time calculation |
| G | 10⁻⁷ | Safety parameter for waiting time calculation |
| δ | 1 second | Registration window time |
| m | 16 | Number of buckets for service-specific tables |
Distance
The distance d between any two keys in Logos Capability Discovery
MUST be calculated using the bitwise XOR applied to their 256-bit SHA-256 representations.
This provides a deterministic, uniform, and symmetric way to measure proximity in the keyspace.
The keyspace is the entire numerical range of possible peerID and service_id_hash values
— the 256-bit space in which all SHA-256–derived IDs exist.
XOR MUST be used to measure distances between them in the keyspace.
For every node in the network, the peerID is unique.
In this system, both peerID and the service_id_hash are 256-bit SHA-256 hashes.
Thus both belong to the same keyspace.
KadDHT(peerID) table is centered on peerID.
AdvT(service_id_hash), DiscT(service_id_hash)
and RegT(service_id_hash) are service-specific tables
and MUST be centered on service_id_hash.
Bootstrapping of service-specific tables
MAY be done from the KadDHT(peerID) table.
When inserting a peer into the service-specific tables, the bucket index into which the peer will be inserted MUST be determined by:
- x = reference ID, which is the service_id_hash
- y = target peer ID peerID
- L = 256 = bit length of IDs
- m = 16 = number of buckets in the advertise/search table
- d = x ⊕ y = service_id_hash ⊕ peerID = bitwise XOR distance (interpreted as an unsigned integer)
The bucket index i where y is placed in x's table is calculated as:
- Let lz = CLZ(d) = number of leading zeros in the 256-bit representation of d
- i = min( ⌊ (lz × m) / 256 ⌋ , m − 1 )
- Special case: for d = 0 (same ID), i = m − 1
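A minimal Python sketch of this computation (bucket_index is an illustrative helper name, reused by the table sketch earlier in this document):

```python
def bucket_index(x: bytes, y: bytes, m: int = 16, L: int = 256) -> int:
    """Bucket index of peer ID y in a table centered on x, per the formula above."""
    d = int.from_bytes(x, "big") ^ int.from_bytes(y, "big")  # XOR distance
    if d == 0:
        return m - 1  # special case: same ID
    lz = L - d.bit_length()  # CLZ(d): leading zeros in the 256-bit representation
    return min((lz * m) // L, m - 1)
```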
Lower-index buckets represent peers far away from service_id_hash in the keyspace,
while higher-index buckets contain peers closer to service_id_hash.
This property allows efficient logarithmic routing:
each hop moves to a peer that shares a longer prefix of bits
with the target service_id_hash.
Service-specific tables may initially have low peer density,
especially when service_id_hash and peerID are distant in the keyspace.
Buckets SHOULD be filled opportunistically through response.closerPeers
during interactions (see Peer Table Updates)
using the same formula.
RPC Messages
All RPC messages use the libp2p Kad-DHT message format with extensions for Logos discovery operations.
Base Message Structure
syntax = "proto3";
message Register {
// Used in response to indicate the status of the registration
enum RegistrationStatus {
CONFIRMED = 0;
WAIT = 1;
REJECTED = 2;
}
// Ticket protobuf definition
message Ticket {
// Copy of the original advertisement
bytes advertisement = 1;
// Ticket creation timestamp (Unix time in seconds)
uint64 t_init = 2;
// Last modification timestamp (Unix time in seconds)
uint64 t_mod = 3;
// Remaining wait time in seconds
uint32 t_wait_for = 4;
// Ed25519 signature over (ad || t_init || t_mod || t_wait_for)
bytes signature = 5;
}
// Used to indicate the encoded advertisement against the key (service ID)
bytes advertisement = 1;
// Used in response to indicate the status of the registration
RegistrationStatus status = 2;
// Used in response to provide a ticket if status is WAIT
// Used in request to provide a ticket if previously received
optional Ticket ticket = 3;
}
message GetAds {
// Used in response to provide a list of encoded advertisements
repeated bytes advertisements = 1;
}
// Record represents a dht record that contains a value
// for a key value pair
message Record {
// The key that references this record
bytes key = 1;
// The actual value this record is storing
bytes value = 2;
// Note: These fields were removed from the Record message
//
// Hash of the authors public key
// optional string author = 3;
// A PKI signature for the key+value+author
// optional bytes signature = 4;
// Time the record was received, set by receiver
// Formatted according to https://datatracker.ietf.org/doc/html/rfc3339
string timeReceived = 5;
};
message Message {
enum MessageType {
PUT_VALUE = 0;
GET_VALUE = 1;
ADD_PROVIDER = 2;
GET_PROVIDERS = 3;
FIND_NODE = 4;
PING = 5;
REGISTER = 6; // New DISC-NG capability discovery type
GET_ADS = 7; // New DISC-NG capability discovery type
}
enum ConnectionType {
// sender does not have a connection to peer, and no extra information (default)
NOT_CONNECTED = 0;
// sender has a live connection to peer
CONNECTED = 1;
// sender recently connected to peer
CAN_CONNECT = 2;
// sender recently tried to connect to peer repeatedly but failed to connect
// ("try" here is loose, but this should signal "made strong effort, failed")
CANNOT_CONNECT = 3;
}
message Peer {
// ID of a given peer.
bytes id = 1;
// multiaddrs for a given peer
repeated bytes addrs = 2;
// used to signal the sender's connection capabilities to the peer
ConnectionType connection = 3;
}
// defines what type of message it is.
MessageType type = 1;
// defines what coral cluster level this query/response belongs to.
// in case we want to implement coral's cluster rings in the future.
int32 clusterLevelRaw = 10; // NOT USED
// Used to specify the key associated with this message.
// PUT_VALUE, GET_VALUE, ADD_PROVIDER, GET_PROVIDERS
// New DISC-NG capability discovery contains service ID hash for types REGISTER, GET_ADS
bytes key = 2;
// Used to return a value
// PUT_VALUE, GET_VALUE
Record record = 3;
// Used to return peers closer to a key in a query
// GET_VALUE, GET_PROVIDERS, FIND_NODE
repeated Peer closerPeers = 8;
// Used to return Providers
// GET_VALUE, ADD_PROVIDER, GET_PROVIDERS
repeated Peer providerPeers = 9;
// Used in REGISTER request and response
Register register = 21;
// Used in GET_ADS response
GetAds getAds = 22;
}
Key Design Principles:
- Standard Kad-DHT message types (PUT_VALUE, GET_VALUE, ADD_PROVIDER, GET_PROVIDERS, FIND_NODE, PING) remain completely unchanged
- The Record message is preserved as-is for Kad-DHT routing table operations
- Logos adds two new message types (REGISTER, GET_ADS) without modifying existing types
- Advertisements are encoded as generic bytes (RECOMMENDED: ExtensiblePeerRecord/XPR) to avoid coupling the protocol to specific formats
- The existing key field is reused for service_id_hash in Logos operations
- Nodes without Logos Capability Discovery support will ignore REGISTER and GET_ADS messages
Advertisement Encoding
Advertisements in the Register.advertisement and GetAds.advertisements fields are encoded as bytes. Implementations are RECOMMENDED to use ExtensiblePeerRecord (XPR):
ExtensiblePeerRecord {
peer_id: <advertiser_peer_id>
seq: <monotonic_sequence>
addresses: [
AddressInfo { multiaddr: <addr1> },
AddressInfo { multiaddr: <addr2> }
]
services: [
ServiceInfo {
id: "/waku/store/1.0.0" // service protocol identifier
data: <optional_metadata>
}
]
}
Size constraints:
- Each ServiceInfo.data field SHOULD be ≤ 33 bytes
- Total encoded XPR SHOULD be ≤ 1024 bytes
The XPR MUST be wrapped in a signed envelope with:
- Domain: libp2p-routing-state
- Payload type: /libp2p/extensible-peer-record/
Alternative encodings MAY be used if they provide equivalent functionality and can be verified by discoverers.
REGISTER Message
Used by advertisers to register their advertisements with registrars.
REGISTER Request
| Field | Usage | Value |
|---|---|---|
| type | REQUIRED | REGISTER (6) |
| key | REQUIRED | service_id_hash (32 bytes) |
| register.advertisement | REQUIRED | Encoded advertisement as bytes (RECOMMENDED: XPR) |
| register.ticket | OPTIONAL | Ticket from previous registration attempt |
| All other fields | UNUSED | Empty/not set |
Example (First attempt):
Message {
type: REGISTER
key: <service_id_hash>
register: {
advertisement: <encoded_xpr_bytes>
}
}
Example (Retry with ticket):
Message {
type: REGISTER
key: <service_id_hash>
register: {
advertisement: <encoded_xpr_bytes>
ticket: {
ad: <encoded_xpr_bytes>
t_init: 1234567890
t_mod: 1234567900
t_wait_for: 300
signature: <registrar_signature>
}
}
}
REGISTER Response
| Field | Usage | Value |
|---|---|---|
| type | REQUIRED | REGISTER (6) |
| register.status | REQUIRED | CONFIRMED, WAIT, or REJECTED |
| closerPeers | REQUIRED | List of peers for advertise table |
| register.ticket | CONDITIONAL | Present if status = WAIT |
| All other fields | UNUSED | Empty/not set |
Status values:
- CONFIRMED: Advertisement stored in cache
- WAIT: Not yet accepted; wait and retry with the ticket
- REJECTED: Invalid signature, duplicate, or error
Example (WAIT):
Message {
type: REGISTER
register: {
status: WAIT
ticket: {
ad: <encoded_xpr_bytes>
t_init: 1234567890
t_mod: 1234567905
t_wait_for: 295
signature: <registrar_signature>
}
}
closerPeers: [
{id: <peer1_id>, addrs: [<addr1>]},
{id: <peer2_id>, addrs: [<addr2>]}
]
}
Example (CONFIRMED):
Message {
type: REGISTER
register: {
status: CONFIRMED
}
closerPeers: [
{id: <peer1_id>, addrs: [<addr1>]},
{id: <peer2_id>, addrs: [<addr2>]}
]
}
Example (REJECTED):
Message {
type: REGISTER
register: {
status: REJECTED
}
}
GET_ADS Message
Used by discoverers to retrieve advertisements from registrars.
GET_ADS Request
| Field | Usage | Value |
|---|---|---|
| type | REQUIRED | GET_ADS (7) |
| key | REQUIRED | service_id_hash (32 bytes) |
| All other fields | UNUSED | Empty/not set |
Example:
Message {
type: GET_ADS
key: <service_id_hash>
}
GET_ADS Response
| Field | Usage | Value |
|---|---|---|
| type | REQUIRED | GET_ADS (7) |
| getAds.advertisements | REQUIRED | List of encoded advertisements (up to F_return = 10) |
| closerPeers | REQUIRED | List of peers for search table |
| All other fields | UNUSED | Empty/not set |
Each advertisement in getAds.advertisements is encoded as bytes (RECOMMENDED: XPR).
Discoverers MUST verify signatures before accepting.
Example:
Message {
type: GET_ADS
getAds: {
advertisements: [
<encoded_xpr_bytes_1>,
<encoded_xpr_bytes_2>,
... // up to F_return
]
}
closerPeers: [
{id: <peer1_id>, addrs: [<addr1>]},
{id: <peer2_id>, addrs: [<addr2>]}
]
}
Message Validation
REGISTER Request Validation
Registrars MUST validate:
- type = REGISTER (6)
- key = 32 bytes (valid SHA-256)
- register.advertisement is present and non-empty
- If register.ticket is present:
  - Valid signature issued by this registrar
  - ticket.ad matches register.advertisement
  - Retry within window: ticket.t_mod + ticket.t_wait_for ≤ NOW() ≤ ticket.t_mod + ticket.t_wait_for + δ
- Advertisement signature is valid (see Advertisement Signature Verification)
- Advertisement not already in ad_cache
Respond with register.status = REJECTED if validation fails.
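The retry-window condition above can be checked directly; a minimal Python sketch (within_registration_window is an illustrative helper name):

```python
def within_registration_window(t_mod: int, t_wait_for: int,
                               now: int, delta: int = 1) -> bool:
    """ticket.t_mod + ticket.t_wait_for <= NOW() <= ... + delta"""
    scheduled = t_mod + t_wait_for
    return scheduled <= now <= scheduled + delta
```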
GET_ADS Request Validation
Registrars MUST validate:
- type = GET_ADS (7)
- key = 32 bytes (valid SHA-256)
Return empty getAds.advertisements list or close stream if validation fails.
Advertisement Signature Verification
Both registrars (on REGISTER) and discoverers (on GET_ADS response) MUST verify advertisement signatures. For XPR-encoded advertisements:
VERIFY_ADVERTISEMENT(encoded_ad_bytes, service_id_hash):
envelope = DECODE_SIGNED_ENVELOPE(encoded_ad_bytes)
assert(envelope.domain == "libp2p-routing-state")
assert(envelope.payload_type == "/libp2p/extensible-peer-record/")
xpr = DECODE_XPR(envelope.payload)
public_key = DERIVE_PUBLIC_KEY(xpr.peer_id)
assert(VERIFY_ENVELOPE_SIGNATURE(envelope, public_key))
// Verify service advertised
service_found = false
for service in xpr.services:
if SHA256(service.id) == service_id_hash:
service_found = true
assert(service_found)
Discard advertisements with invalid signatures or that don't advertise the requested service.
Note: ExtensiblePeerRecord uses protocol strings (e.g., /waku/store/1.0.0) in ServiceInfo.id. Logos discovery uses service_id_hash = SHA256(ServiceInfo.id) for routing. When verifying, implementations MUST hash the protocol string and compare it with the key field (service_id_hash).
Stream Management
Following standard Kad-DHT behavior:
- Implementations MAY reuse streams for sequential requests
- Implementations MUST handle multiple requests per stream
- Reset stream on protocol errors or validation failures
- Prefix messages with length as unsigned varint per multiformats spec
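For reference, a minimal Python sketch of the multiformats unsigned-varint length prefix used to frame messages (frame is an illustrative helper name):

```python
def uvarint(n: int) -> bytes:
    """Encode n as a multiformats unsigned varint (LEB128)."""
    out = bytearray()
    while True:
        byte = n & 0x7F
        n >>= 7
        out.append(byte | (0x80 if n else 0))
        if n == 0:
            return bytes(out)

def frame(message: bytes) -> bytes:
    """Length-prefix an encoded message for the wire."""
    return uvarint(len(message)) + message
```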
Error Handling
| Error | Handling |
|---|---|
| Invalid message format | Close stream |
| Signature verification failure | REJECTED for REGISTER; discard invalid ads for GET_ADS |
| Timeout | Close stream, retry with exponential backoff |
| Cache full (registrar) | Issue ticket with waiting time |
| Unknown service_id_hash | Empty advertisements list but include closerPeers |
| Missing required fields | Close stream |
Protocol identifier: /logos/capability-discovery/1.0.0
Implementations SHOULD support both standard Kad-DHT operations
and Logos discovery operations simultaneously.
Nodes operating in Kad-DHT-only mode will
simply not respond to REGISTER or GET_ADS requests.
Sequence Diagram
Advertisement Placement
Overview
For each service, identified by service_id_hash,
that an advertiser wants to advertise,
the advertiser MUST instantiate a
new AdvT(service_id_hash),
centered on that service_id_hash.
The advertiser MAY bootstrap AdvT(service_id_hash)
from KadDHT(peerID) using the formula
described in the Distance section.
The advertiser SHOULD try to maintain up to K_register
active registrations per bucket.
It does so by selecting random registrars
from each bucket of AdvT(service_id_hash)
and following the registration maintenance procedure.
These ongoing registrations MAY be tracked in a separate data structure.
Ongoing registrations comprise registrars
that hold an active ad, as well as registrars
to which the advertiser is currently trying to register its ad.
Registration Maintenance Requirements
To maintain each registration, the advertiser:
- MUST send a REGISTER message to the registrar. If there is already a cached ticket from a previous registration attempt for the same ad at the same registrar, the ticket MUST also be included in the REGISTER message.
- On receipt of a response, SHOULD add the closer peers indicated in the response to AdvT(service_id_hash) using the formula described in the Distance section.
- MUST interpret the response status field and schedule actions accordingly:
  - If the status is Confirmed, the registration is maintained in the registrar's ad_cache for E seconds. After E seconds the advertiser MUST remove the registration from the ongoing registrations for that bucket.
  - If the status is Wait, the advertiser MUST schedule the next registration attempt to the same registrar based on the ticket.t_wait_for value included in the response. The response contains a ticket, which MUST be included in the next registration attempt to this registrar.
  - If the status is Rejected, the advertiser MUST remove the registrar from the ongoing registrations for that bucket and SHOULD NOT attempt further registrations with this registrar for this advertisement.
Advertisement Algorithm
Advertisers place advertisements across multiple registrars
using the ADVERTISE() algorithm.
The advertisers run ADVERTISE() periodically.
We RECOMMEND that the following algorithms be used
to implement the advertisement placement requirements specified above.
procedure ADVERTISE(service_id_hash):
ongoing ← MAP<bucketIndex; LIST<registrars>>
AdvT(service_id_hash) ← KadDHT(peerID)
for i in 0, 1, ..., m-1:
while ongoing[i].size < K_register:
registrar ← AdvT(service_id_hash).getBucket(i).getRandomNode()
if registrar = None:
break
end if
ongoing[i].add(registrar)
ad.service_id_hash ← service_id_hash
ad.peerID ← peerID
ad.addrs ← node.addrs
SIGN(ad)
async(ADVERTISE_SINGLE(registrar, ad, i, service_id_hash))
end while
end for
end procedure
procedure ADVERTISE_SINGLE(registrar, ad, i, service_id_hash):
ticket ← None
while True:
response ← registrar.Register(ad, ticket)
AdvT(service_id_hash).add(response.closerPeers)
if response.status = Confirmed:
SLEEP(E)
break
else if response.status = Wait:
SLEEP(min(E, response.ticket.t_wait_for))
ticket ← response.ticket
else:
break
end if
end while
ongoing[i].remove(registrar)
end procedure
Refer to the Advertiser Algorithms Explanation section for a detailed explanation.
Service Discovery
Overview
Discoverers are nodes attempting to find peers that
provide a specific service identified by service_id_hash.
Discovery Table DiscT(service_id_hash) Requirements
For each service that a discoverer wants to find,
it MUST instantiate a search table DiscT(service_id_hash),
centered on that service_id_hash.
Lookup Requirements
The LOOKUP(service_id_hash) procedure is carried out by discoverer nodes to query registrar nodes
for advertisements of a particular service_id_hash.
The LOOKUP(service_id_hash) procedure MUST work as a gradual search
on the search table DiscT(service_id_hash) of the service whose advertisements it wants.
The LOOKUP(service_id_hash) MUST start from the far buckets (b_0),
which contain registrar nodes sharing fewer bits with service_id_hash,
and move toward the buckets (b_(m-1)) containing registrar nodes
sharing more bits with, i.e. closer to, service_id_hash.
To perform a lookup, discoverers:
- SHOULD query K_lookup random registrar nodes from every bucket of DiscT(service_id_hash).
- MUST verify the signature of each advertisement received before accepting it.
- SHOULD add closer peers returned by registrars in the response to DiscT(service_id_hash) to improve future lookups.
- SHOULD retrieve at most F_return advertisement peers from a single registrar.
- SHOULD run the lookup process periodically. Implementations can choose the interval based on their requirements.
Lookup Algorithm
We RECOMMEND that the following algorithm be used to implement the service discovery requirements specified above. Implementations MAY use alternative algorithms as long as they satisfy requirements specified in the previous section.
procedure LOOKUP(service_id_hash):
DiscT(service_id_hash) ← KadDHT(peerID)
foundPeers ← SET<Peers>
for i in 0, 1, ..., m-1:
for j in 0, ..., K_lookup - 1:
peer ← DiscT(service_id_hash).getBucket(i).getRandomNode()
if peer = None:
break
end if
response ← peer.GetAds(service_id_hash)
for ad in response.ads:
assert(ad.hasValidSignature())
foundPeers.add(ad.peerID)
if foundPeers.size ≥ F_lookup:
break
end if
end for
DiscT(service_id_hash).add(response.closerPeers)
if foundPeers.size ≥ F_lookup:
return foundPeers
end if
end for
end for
return foundPeers
end procedure
Refer to the Lookup Algorithm Explanation section for the detailed explanation.
Admission Protocol
Overview
Registrars are nodes that store and serve advertisements. They play a critical role in the Logos discovery network by acting as intermediaries between advertisers and discoverers.
Admission Control Requirements
Registrars MUST use a waiting-time-based admission protocol
to admit advertisements into their ad_cache.
The mechanism does not require registrars to maintain
any state for ongoing requests, which prevents DoS attacks.
When a registrar node receives a REGISTER request from an advertiser node
to admit its ad for a service into the ad_cache,
the registrar MUST process the request according to the following requirements:
- The registrar MUST NOT admit an advertisement if an identical ad already exists in the ad_cache.
- The registrar MUST calculate the waiting time using the formula in Waiting Time Calculation.
- If no ticket is provided in the REGISTER request, this is the advertiser's first registration attempt for the ad. The registrar MUST create a new ticket and return the signed ticket to the advertiser with status Wait.
- If a ticket is provided, the registrar MUST verify:
  - valid signature issued by this registrar
  - ticket.ad matches the current ad
  - ad is not in the ad_cache
  - retry is within the registration window
  - Reject if any verification fails
- The registrar MUST recalculate the waiting time based on the current cache state.
- The registrar MUST calculate the remaining wait time: t_remaining = t_wait - (NOW() - ticket.t_init). This ensures advertisers accumulate waiting time across retries.
- If t_remaining ≤ 0, the registrar MUST add the ad to the ad_cache. The registrar MUST track the admission time internally for expiry management. The registrar SHOULD return a response with status = Confirmed.
- If t_remaining > 0, the advertiser SHOULD continue waiting. The registrar MUST issue a new ticket with updated ticket.t_mod and ticket.t_wait_for = MIN(E, t_remaining). The registrar MUST sign the new ticket. The registrar SHOULD return a response with status Wait and the new signed ticket.
- The registrar SHOULD include a list of closer peers (response.closerPeers) using the algorithm described in the Peer Table Updates section.
ad_cache Maintenance:
- Size limited by capacity C
- Ads expire after time E from admission and are removed
- No duplicate ads allowed
Registration Flow
We RECOMMEND that the following algorithm be used by registrars to implement the admission control requirements specified above.
Refer to the Register Message section
for the request and response structure of REGISTER.
procedure REGISTER(ad, ticket):
assert(ad not in ad_cache)
response.ticket.ad ← ad
t_wait ← CALCULATE_WAITING_TIME(ad)
if ticket.empty():
t_remaining ← t_wait
response.ticket.t_init ← NOW()
response.ticket.t_mod ← NOW()
else:
assert(ticket.hasValidSignature())
assert(ticket.ad = ad)
assert(ad.notInAdCache())
t_scheduled ← ticket.t_mod + ticket.t_wait_for
assert(t_scheduled ≤ NOW() ≤ t_scheduled + δ)
t_remaining ← t_wait - (NOW() - ticket.t_init)
end if
if t_remaining ≤ 0:
ad_cache.add(ad)
response.status ← Confirmed
else:
response.status ← Wait
response.ticket.t_wait_for ← MIN(E, t_remaining)
response.ticket.t_mod ← NOW()
SIGN(response.ticket)
end if
response.closerPeers ← GETPEERS(ad.service_id_hash)
return response
end procedure
Refer to the Register Algorithm Explanation section for detailed explanation.
Lookup Response Algorithm
Overview
Registrars SHOULD respond to GET_ADS requests from discoverers.
When responding, registrars:
- SHOULD return up to F_return advertisements from their ad_cache for the requested service_id_hash.
- SHOULD include a list of closer peers to help discoverers populate their search table, using the algorithm described in the Peer Table Updates section.
Recommended Lookup Response Algorithm
Registrars respond to GET_ADS requests from discoverers
using the LOOKUP_RESPONSE() algorithm.
We RECOMMEND using the following algorithm.
procedure LOOKUP_RESPONSE(service_id_hash):
response.ads ← ad_cache.getAdvertisements(service_id_hash)[:F_return]
response.closerPeers ← GETPEERS(service_id_hash)
return response
end procedure
Peer Table Updates
Overview
While responding to both REGISTER requests from advertisers
and GET_ADS requests from discoverers,
registrars play an important role in helping nodes discover the network topology.
The registrar table RegT(service_id_hash) is a routing structure
that SHOULD be maintained by registrars
to provide better peer suggestions to advertisers and discoverers.
Registrars SHOULD use the formula specified in the Distance section
to add peers to RegT(service_id_hash).
Peers are added under the following circumstances:
- Registrars MAY initialize their registrar table RegT(service_id_hash) from their KadDHT(peerID) using the formula described in the Distance section.
- When an advertiser sends a REGISTER request, the registrar SHOULD add the advertiser's peerID to RegT(service_id_hash).
- When a discoverer sends a GET_ADS request, the registrar SHOULD add the discoverer's peerID to RegT(service_id_hash).
- When registrars receive responses from other registrars (if acting as advertisers or discoverers themselves), they SHOULD add peers from closerPeers fields to the relevant RegT(service_id_hash) tables.

Note: The ad_cache and RegT(service_id_hash) are completely different data structures that serve different purposes and are independent of each other.
When responding to requests, registrars:
- SHOULD return a list of peers to help advertisers populate their AdvT(service_id_hash) tables and discoverers populate their DiscT(service_id_hash) tables.
- SHOULD return peers that are diverse and distributed across different buckets to prevent malicious registrars from polluting routing tables.
Recommended Peer Selection Algorithm
We RECOMMEND that the following algorithm be used to select peers to return in responses.
procedure GETPEERS(service_id_hash):
peers ← SET<peers>
RegT(service_id_hash) ← KadDHT(peerID)
for i in 0, 1, ..., m-1:
peer ← b_i(service_id_hash).getRandomNode()
if peer ≠ None:
peers.add(peer)
end if
end for
return peers
end procedure
- peers is initialized as an empty set
- RegT(service_id_hash) is initialized from the node’s KadDHT(peerID)
- Go through all m buckets in the registrar’s table:
  - Pick one random peer from bucket i. The getRandomNode() function remembers already returned nodes and never returns the same one twice.
  - If the returned peer is not null, move on to the next bucket; otherwise, try to get another peer from the same bucket.
- Return peers, which contains one peer from every bucket of RegT(service_id_hash).
The algorithm returns one random peer per bucket to provide diverse suggestions
and prevent malicious registrars from polluting routing tables.
Contacting registrars in consecutive buckets divides the search space by a constant factor,
and allows learning new peers from more densely-populated routing tables towards the destination.
The procedure mitigates the risk of having malicious peers polluting the table
while still learning rare peers in buckets close to service_id_hash.
Waiting Time Calculation
Formula
The waiting time is the time advertisers have to
wait before their ad is admitted to the ad_cache.
It is computed from the ad itself
and the current state of the registrar’s ad_cache.
The waiting time for an advertisement MUST be calculated using:
w(ad) = E × (1/(1 - c/C)^P_occ) × (c(ad.service_id_hash)/C + score(getIP(ad.addrs)) + G)
- c: Current cache occupancy
- c(ad.service_id_hash): Number of advertisements for service_id_hash in the cache
- getIP(ad.addrs): Function that extracts the IP address from the advertisement's multiaddress
- score(getIP(ad.addrs)): IP similarity score (0 to 1); refer to the IP Similarity Score section
See the System Parameters section for the definitions of the remaining parameters in the formula.
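A direct Python transcription of the formula with the default system parameters (a sketch; input validation and the c < C precondition are left to implementations):

```python
def waiting_time(c: int, c_service: int, ip_score: float,
                 E: float = 900.0, C: int = 1000,
                 P_occ: int = 10, G: float = 1e-7) -> float:
    """w(ad) = E * (1/(1 - c/C)^P_occ) * (c(service)/C + score(IP) + G), for c < C."""
    occupancy_score = 1.0 / (1.0 - c / C) ** P_occ
    return E * occupancy_score * (c_service / C + ip_score + G)
```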
Issuing waiting times promotes diversity in the ad_cache.
It results in high waiting times and slower admission for malicious advertisers
using Sybil identities from a limited number of IP addresses.
It also promotes less popular services through fast admission,
ensuring fairness and robustness against failures of single registrars.
Scaling
The waiting time is normalized by the ad’s expiry time E.
It binds waiting time to E and allows us to reason about the number of incoming requests
regardless of the time each ad spends in the ad_cache.
Occupancy Score
occupancy_score = 1 / (1 - c/C)^P_occ
The occupancy score increases progressively as the cache fills:
- When c << C: score ≈ 1 (minimal impact)
- As the ad_cache fills up, the score is amplified by the denominator of the equation. The higher the value of P_occ, the faster the increase. Implementations should consider this when setting the value for P_occ.
- As c → C: score → ∞ (prevents overflow)
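For intuition, with the default C = 1000 and P_occ = 10, the occupancy score grows roughly like this:

```python
>>> C, P_occ = 1000, 10
>>> [round(1 / (1 - c / C) ** P_occ, 1) for c in (100, 500, 900)]
[2.9, 1024.0, 10000000000.0]
```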
Service Similarity
service_similarity = c(ad.service_id_hash) / C
The service similarity score promotes diversity:
- Low when service_id_hash has few advertisements in the cache, yielding a lower waiting time.
- High when service_id_hash dominates the cache, yielding a higher waiting time.
IP Similarity Score
The IP similarity score is used to detect and limit Sybil attacks where malicious actors create multiple advertisements from the same network or IP prefix.
Registrars MUST use an IP similarity score to
limit the number of ads coming from the same subnetwork
by increasing their waiting time.
The IP similarity mechanism MUST:
- Calculate a score (0-1): higher scores indicate similar IP prefixes (potential Sybil attacks)
- Track IP addresses of ads currently in the ad_cache
- Update its tracking structure when:
  - A new ad is admitted to the ad_cache: MUST add the IP
  - An ad expires after time E: MUST remove the IP if there are no other active ads from the same IP
- Recalculate the score for each registration attempt
Tree Structure
We RECOMMEND using an IP tree data structure
to efficiently track and calculate IP similarity scores.
An IP tree is a binary tree that stores
IPs used by ads currently present in the ad_cache.
This data structure provides logarithmic time complexity
for insertion, deletion, and score calculation.
Implementations MAY use alternative data structures
as long as they satisfy the requirements specified above.
Apart from the root, the IP tree is a 32-level binary tree where:
- Each vertex stores IP_counter (the number of IPs passing through it), initially set to 0
- Edges represent bits (0/1) in the IPv4 binary representation
- When an ad is admitted to the ad_cache, its IPv4 address is added to the IP tracking structure using the ADD_IP_TO_TREE() algorithm
- Every time a waiting time is calculated for a registration attempt, the registrar calculates the IP similarity score for the advertiser's IP address using the CALCULATE_IP_SCORE() algorithm
- When an ad is removed from the ad_cache after E, the registrar also removes the IP from the IP tracking structure using the REMOVE_FROM_IP_TREE() algorithm, provided there are no other active ads in the ad_cache from the same IP
- All the algorithms run with O(32) time complexity
- The root IP_counter tracks the total number of IPs currently in the ad_cache
ADD_IP_TO_TREE() algorithm
The algorithm traverses the tree following the IP's binary representation, incrementing counters at each visited node.
procedure ADD_IP_TO_TREE(tree, IP):
v ← tree.root
bits ← IP.toBinary()
for i in 0, 1, ..., 31:
v.IP_counter ← v.IP_counter + 1
if bits[i] = 0:
v ← v.left
else:
v ← v.right
end if
end for
end procedure
CALCULATE_IP_SCORE() algorithm
The algorithm traverses the tree following the IP's binary representation
to detect how many IPs share common prefixes, providing a Sybil attack measure.
At each node, if the IP_counter is larger than expected in a perfectly balanced tree,
it indicates too many IPs share that prefix, incrementing the similarity score.
procedure CALCULATE_IP_SCORE(tree, IP):
v ← tree.root
score ← 0
bits ← IP.toBinary()
for i in 0, 1, ..., 31:
if bits[i] = 0:
v ← v.left
else:
v ← v.right
end if
if v.IP_counter > tree.root.IP_counter / 2^i:
score ← score + 1
end if
end for
return score / 32
end procedure
REMOVE_FROM_IP_TREE() algorithm
This algorithm traverses the tree following the IP's binary representation, decrementing counters at each visited node.
procedure REMOVE_FROM_IP_TREE(tree, IP):
v ← tree.root
bits ← IP.toBinary()
for i in 0, 1, ..., 31:
v.IP_counter ← v.IP_counter - 1
if bits[i] = 0:
v ← v.left
else:
v ← v.right
end if
end for
end procedure
Implementations can extend the IP tree algorithms to IPv6 by using a 128-level binary tree, corresponding to the 128-bit length of IPv6 addresses.
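A compact Python sketch of the three IP-tree procedures (IPTree is an illustrative name; counters are stored in a dict keyed by bit-prefix rather than explicit vertices, and the score comparison mirrors the pseudocode above):

```python
class IPTree:
    """32-level binary counter tree over the bits of IPv4 addresses."""
    def __init__(self):
        self.counters = {}  # bit-prefix (tuple) -> IP_counter at that vertex
        self.total = 0      # root IP_counter: total IPs currently tracked

    @staticmethod
    def _bits(ip: str):
        n = int.from_bytes(bytes(int(octet) for octet in ip.split(".")), "big")
        return [(n >> (31 - i)) & 1 for i in range(32)]

    def add(self, ip: str) -> None:          # ADD_IP_TO_TREE
        self.total += 1
        prefix = ()
        for bit in self._bits(ip):
            prefix += (bit,)
            self.counters[prefix] = self.counters.get(prefix, 0) + 1

    def remove(self, ip: str) -> None:       # REMOVE_FROM_IP_TREE
        self.total -= 1
        prefix = ()
        for bit in self._bits(ip):
            prefix += (bit,)
            self.counters[prefix] -= 1

    def score(self, ip: str) -> float:       # CALCULATE_IP_SCORE
        s, prefix = 0, ()
        for i, bit in enumerate(self._bits(ip)):
            prefix += (bit,)
            # more IPs share this prefix than the balanced-tree expectation
            if self.counters.get(prefix, 0) > self.total / 2 ** i:
                s += 1
        return s / 32
```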
Safety Parameter
The safety parameter G ensures waiting times never reach zero even when:
- Service similarity is zero (new service).
- IP similarity is zero (completely distinct IP)
It prevents ad_cache overflow in cases when attackers try to
send ads for random services or from diverse IPs.
Lower Bound Enforcement
To prevent "ticket grinding" attacks where advertisers repeatedly request new tickets hoping for better waiting times, registrars MUST enforce lower bounds:
Invariant: A new waiting time w_2 at time t_2
cannot be smaller than a previous waiting time w_1 at time t_1
(where t_1 < t_2) by more than the elapsed time:
w_2 ≥ w_1 - (t_2 - t_1)
Thus registrars MUST maintain lower bound state for:
- Each service in the cache: bound(service_id_hash) and timestamp(service_id_hash)
- Each IP prefix in the IP tree: bound(IP) and timestamp(IP)
The total waiting time respects the lower bound if the bound is enforced on these components.
These two sets have bounded size, as the number of ads present in the ad_cache
at any time is bounded by the cache capacity C.
How the lower bound SHOULD be calculated for service IDs:
When a new service_id_hash enters the cache, bound(service_id_hash) is set to 0,
and timestamp(service_id_hash) is set to the current time.
When a new ticket request arrives for the same service_id_hash,
the registrar calculates the service waiting time w_s and then applies the lower-bound rule:
w_s = max(w_s, bound(service_id_hash) - (NOW() - timestamp(service_id_hash)))
The values bound(service_id_hash) and timestamp(service_id_hash)
are updated whenever a new ticket is issued
and the condition w_s > bound(service_id_hash) - (NOW() - timestamp(service_id_hash)) is satisfied.
How the lower bound SHOULD be calculated for IPs: Registrars enforce lower-bound state for the advertiser’s IP address using the IP tree (refer to the IP Similarity Score section).
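A minimal Python sketch of the lower-bound rule for the service-ID case (enforce_lower_bound and the bounds dict are illustrative; the analogous per-IP state and persistence are left to implementations):

```python
import time

bounds = {}  # service_id_hash -> (bound, timestamp)

def enforce_lower_bound(service_id_hash: bytes, w_s: float) -> float:
    """Apply w_2 >= w_1 - (t_2 - t_1) to the freshly calculated waiting time w_s."""
    now = time.time()
    bound, ts = bounds.get(service_id_hash, (0.0, now))
    floor = bound - (now - ts)   # previous bound, reduced by elapsed time
    if w_s > floor:              # update state when a new ticket is issued
        bounds[service_id_hash] = (w_s, now)
    return max(w_s, floor)
```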
Implementation Notes
Client and Server Mode
Logos discovery respects the client/server mode distinction from the base Kad-DHT specification:
- Server mode nodes: MAY act as Discoverer, Advertiser, or Registrar
- Client mode nodes: MUST act only as Discoverer
Implementations MAY include incentivization mechanisms to encourage peers to participate as advertisers or registrars, rather than operating solely in client mode. This helps prevent free-riding behavior, ensures a fair distribution of network load, and maintains the overall resilience and availability of the discovery layer. Incentivization mechanisms are beyond the scope of this RFC.
Bucket Management
Bucket Representation
For simplicity in this RFC, we represent each bucket as a list of peer IDs. However, in a full implementation, each entry in the bucket MUST store complete peer information necessary to enable communication.
Bucket Size
The number of entries a bucket can hold is implementation-dependent:
- Smaller buckets → lower memory usage but reduced resilience to churn
- Larger buckets → better redundancy but increased maintenance overhead
Implementations SHOULD ensure that each bucket contains only unique peers. If the peer to be added is already present in the bucket, the implementation SHOULD NOT create a duplicate entry and SHOULD instead update the existing entry.
Bucket Overflow Handling
When a bucket reaches its maximum capacity and a new peer needs to be added, implementations SHOULD decide how to handle the overflow. The specific strategy is implementation-dependent, but implementations MAY consider one of the following approaches:
- Least Recently Used (LRU) Eviction: Replace the peer that was least recently contacted or updated. This keeps more active and responsive peers in the routing table.
- Least Recently Seen (LRS) Eviction: Replace the peer that was seen (added to the bucket) earliest. This provides a time-based rotation of peers.
- Ping-based Eviction: When the bucket is full, ping the least recently contacted peer. If the ping fails, replace it with the new peer. If the ping succeeds, keep the existing peer and discard the new one. This prioritizes responsive, reachable peers.
- Reject New Peer: Keep existing peers and reject the new peer. This strategy assumes existing peers are more stable or valuable.
- Bucket Extension: Dynamically increase bucket capacity (within reasonable limits) when overflow occurs, especially for buckets closer to the center ID.
Implementations MAY combine these strategies or use alternative approaches based on their specific requirements for performance, security, and resilience.
References
[0] Kademlia: A Peer-to-Peer Information System Based on the XOR Metric
[1] DISC-NG: Robust Service Discovery in the Ethereum Global Network
[2] libp2p Kademlia DHT specification
Appendix
This appendix provides detailed explanations of some algorithms and helper procedures referenced throughout this RFC. To maintain clarity and readability in the main specification, the body contains only the concise pseudocode and high-level descriptions.
Advertiser Algorithms Explanation
Refer to the Advertisement Algorithm section for the pseudocode.
ADVERTISE() algorithm explanation
- Initialize a map ongoing for tracking which registrars are currently being advertised to.
- Initialize AdvT(service_id_hash) by bootstrapping peers from the advertiser’s KadDHT(peerID) using the formula described in the Distance section.
- Iterate over all buckets (i = 0 through m-1), where m is the number of buckets in AdvT(service_id_hash) and in the ongoing map. Each bucket corresponds to a particular distance from the service_id_hash.
  - ongoing[i] contains the list of registrars with active (unexpired) registrations or ongoing registration attempts at distance i from the service_id_hash of the service the advertiser is advertising.
  - Advertisers continuously maintain up to K_register active (unexpired) registrations or ongoing registration attempts in every bucket of the ongoing map for their service. Increasing K_register makes the advertiser easier to find, at the cost of increased communication and storage costs.
  - Pick a random registrar from bucket i of AdvT(service_id_hash) to advertise to.
    - AdvT(service_id_hash).getBucket(i) returns the list of registrars in bucket i of AdvT(service_id_hash).
    - getRandomNode() returns a random registrar node; the advertiser tries to place its advertisement into that registrar. The function remembers already returned nodes and never returns the same one twice during the same ad placement process. If there are no peers, it returns None.
  - If we get a peer, add it to ongoing[i].
  - Build the advertisement object ad containing service_id_hash, peerID, and addrs (refer to the Advertisement section). It is then signed by the advertiser using the node’s private key (Ed25519 signature).
  - Send this ad asynchronously to the selected registrar. The helper ADVERTISE_SINGLE() handles registration to a single registrar. Asynchronous execution allows multiple ads (to multiple registrars) to proceed in parallel.
ADVERTISE_SINGLE() algorithm explanation
ADVERTISE_SINGLE() handles registration to one registrar at a time:
- Initialize ticket to None, as no ticket has yet been received from the registrar.
- Keep trying until the registrar confirms or rejects the ad:
  - Send the ad to the registrar using a Register request. The request structure is described in the REGISTER Message section. If we already have a ticket, include it in the request.
  - The registrar replies with a response. Refer to the REGISTER Message section for the response structure.
  - Add the list of peers returned by the registrar in response.closerPeers to AdvT(service_id_hash). Refer to the Distance section for how to add them.
  - If the registrar accepted the advertisement successfully, wait for E seconds, then stop retrying, because the ad is already registered.
  - If the registrar says “wait” (its cache is full or overloaded), sleep for the time written in the ticket, ticket.t_wait_for (but not more than E). Then update ticket with the new one from the registrar, and try again.
  - If the registrar rejects the ad, stop trying with this registrar.
- Remove this registrar from the ongoing map in bucket i (ongoing[i]), since we have finished trying with it.
Discoverer Algorithms
LOOKUP(service_id_hash) algorithm explanation
Refer to the Lookup Algorithm section for the pseudocode.
- DiscT(service_id_hash) is initialized by bootstrapping peers from the discoverer’s KadDHT(peerID) using the formula described in the Distance section.
- Create an empty set foundPeers to store the unique advertiser peer IDs discovered during the lookup.
- Go through each bucket of DiscT(service_id_hash), from farthest (b_0) to closest (b_(m-1)) to the service ID service_id_hash. For each bucket, query up to K_lookup random peers:
  - Pick a random registrar node from bucket i of DiscT(service_id_hash) to query.
    - DiscT(service_id_hash).getBucket(i) returns the list of registrars in bucket i of DiscT(service_id_hash).
    - getRandomNode() returns a random registrar node. The discoverer queries this node to get ads for a particular service ID service_id_hash. The function remembers already returned nodes and never returns the same one twice. If there are no peers, it returns None.
  - A GET_ADS request is sent to the selected registrar peer. Refer to the GET_ADS Message section for the request and response structure of GET_ADS. The response returned by the registrar node is stored in response.
  - The response contains a list of advertisements response.ads. A queried registrar returns at most F_return advertisements; if it returns more, we can randomly keep F_return of them. For each advertisement returned:
    - Verify its digital signature for authenticity.
    - Add the advertiser’s peer ID ad.peerID to the set foundPeers.
  - The response also contains a list of peers response.closerPeers, which is inserted into DiscT(service_id_hash) using the formula described in the Distance section.
  - Stop early if enough advertiser peers (F_lookup) have been found; there is no need to continue searching. For popular services, F_lookup advertisers are generally found in the initial phase from the farther buckets, and the search terminates. For unpopular services it may take longer, but not more than O(log N), where N is the number of nodes participating in the network as registrars.
  - If the early stop does not happen, the search stops when no unqueried registrars remain in any of the buckets.
- Return foundPeers, the final set of discovered advertisers that provide the service service_id_hash.
Making the advertisers and discoverers walk towards service_id_hash in a similar fashion
guarantees that the two processes overlap and contact a similar set of registrars that relay the ads.
At the same time, contacting random registrars in each encountered bucket using getRandomNode()
makes it difficult for an attacker to strategically place malicious registrars in the network.
The first bucket b_0(service_id_hash) covers the largest fraction of the key space
as it corresponds to peers with no common prefix to service_id_hash (i.e. 50% of all the registrars).
Placing malicious registrars in this fraction of the key space
to impact service discovery process would require considerable resources.
Subsequent buckets cover smaller fractions of the key space,
making it easier for the attacker to place Sybils
but also increasing the chance of advertisers already gathering enough ads in previous buckets.
Parameters F_return and F_lookup play an important role in setting a compromise between security and efficiency.
A small value of F_return << F_lookup increases the diversity of the source of ads received by the discoverer
but increases search time, and requires reaching buckets covering smaller key ranges where eclipse risks are higher.
On the other hand, similar values for F_lookup and F_return reduce overheads
but increase the danger of a discoverer receiving ads uniquely from malicious nodes.
Finally, low values of F_lookup stop the search operation early,
before reaching registrars close to the service hash, contributing to a more balanced load distribution.
Implementations should consider these trade-offs carefully when selecting appropriate values.
Registrar Algorithms
REGISTER() algorithm explanation
Refer to the Registration Flow section for the pseudocode
- Make sure this advertisement ad is not already in the registrar’s advertisement cache ad_cache.
- Prepare a response ticket response.ticket linked to this ad.
- Calculate how long the advertiser should wait (t_wait) before being admitted. Refer to the Waiting Time Calculation section for details.
- Check whether this is the first registration attempt (no ticket yet):
  - If yes, it is the first try. The advertiser must wait for the full waiting time t_wait. The ticket’s creation time t_init and last-modified time t_mod are both set to NOW().
  - If no, this is a retry, so a previous ticket exists:
    - Validate that the ticket is properly signed by the registrar, belongs to this same advertisement, and that the ad is still not already in the ad_cache.
    - Ensure the retry is happening within the allowed time window δ after the scheduled time. If the advertiser waits too long or retries too early, the ticket is invalid.
    - Calculate how much waiting time is left (t_remaining) by subtracting how long the advertiser has already waited (NOW() - ticket.t_init) from t_wait.
- Check whether the remaining waiting time t_remaining is less than or equal to 0, which means the waiting time is over. t_remaining can also be 0 when the registrar decides that the advertiser does not have to wait for admission to the ad_cache (waiting time t_wait is 0):
  - If yes, add the ad to the ad_cache and confirm the registration. The advertisement is now officially registered.
  - If no, there is still time to wait. In this case the registrar does not store the ad but instead issues a ticket:
    - Set response.status to Wait
    - Update the ticket with the new remaining waiting time t_wait_for
    - Update the ticket’s last modification time t_mod
    - Sign the ticket again. The advertiser will retry later using this new ticket.
- Add a list of peers closer to ad.service_id_hash, obtained via the GETPEERS() function, to the response (the advertiser uses this to update AdvT(service_id_hash)).
- Send the full response back to the advertiser.
Upon receiving a ticket, the advertiser waits for the specified t_wait_for time
before trying to register again with the same registrar.
Tickets can only be used within the registration window δ,
preventing attackers from accumulating and batch-submitting tickets.
Clock synchronization is not required, as advertisers only use t_wait_for values.
Waiting times are recalculated on each retry based on current cache state, ensuring advertisers accumulate waiting time and are eventually admitted. This stateless design protects registrars from memory exhaustion and DoS attacks.
The waiting time t_wait is not fixed.
Each time an advertiser tries to register,
the registrar recalculates a new waiting time.
The remaining time t_remaining is then computed as the difference between
the new waiting time and the time the advertiser has already waited,
as recorded in the ticket.
With every retry, the advertiser accumulates waiting time and will eventually be admitted.
However, if the advertiser misses its registration window or fails to include the last ticket,
it loses all accumulated waiting time and
must restart the registration process from the beginning.
Implementations must consider these factors
when choosing the registration window δ.
This design lets registrars prioritize advertisers that have waited longer
without keeping any per-request state before the ad is admitted to the cache.
Because waiting times are recalculated and tickets are stored only on the advertiser’s side,
the registrar is protected from memory exhaustion
and DoS attacks caused by inactive or malicious advertisers.