
February 24, 2026

PopNet project: history layer

The core purpose of this architectural layer is digital preservation: to maintain a highly resilient, tamper-resistant, and transparent record, or history, of events. Events recorded in the history may be of all kinds, but the primary emphasis is on recording human-initiated events and processes: the organization of in-person meetups, typically including PoP parties, along with the asynchronous posts, discussions, votes, and other participatory processes occurring sporadically or continuously in the meantime.

The key goals of the history layer are as follows.

This history layer is intended to be first and foremost a human process, grounded in a commitment by its human participants to the best-effort, good-faith preservation of human history. The associated technical specifications and tools below are secondary, intended only to support this human-centric process.

The central goal is to maintain a record of whatever the human participants consider important, for both the short and the long term, including for their future generations. Over time, the accumulation of evidence in this history and its decentralized redundancy should leave no reasonable doubt that the historical record is reliable: that events actually occurred as recorded, that human participants (whether identified or anonymous) were real humans, and that the actions they took and artifacts they created in public (or otherwise via some verifiable process) were genuinely human actions and products.

The history necessarily has a partial-order or directed acyclic graph (DAG) structure, avoiding reliance on strong global consensus (agreement) as is common in blockchains. This property is necessary to ensure the inclusion of those with poor, intermittent, or expensive global connectivity. Consensus may nevertheless be useful and relevant in local contexts, such as within a particular local group or well-connected region. Globally, the history architecture (as well as the layers above it) must tolerate arbitrary communication delays, including those created by successful long-term attacks on connectivity (see Iran's history of government-imposed outages), or those resulting from an economic need to handle bulk data updates by slow methods including “sneakernet” (e.g., humans physically ferrying data between locations on USB sticks).
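To make the partial-order idea concrete, here is a minimal sketch in Python of how updates referencing prior updates by hash form a DAG, and how a "happened-before" relation can be computed without any global consensus. All field names (`prev`, `site`, `seq`) are illustrative assumptions, not part of any published PopNet format.

```python
import hashlib
import json

def update_id(update: dict) -> str:
    """Content-address an update by hashing its canonical JSON encoding."""
    data = json.dumps(update, sort_keys=True).encode()
    return hashlib.sha256(data).hexdigest()

def happened_before(log: dict, a: str, b: str) -> bool:
    """True if update `a` is a strict ancestor of `b`, reachable via the
    `prev` hash-edges; concurrent updates are ordered in neither direction."""
    stack = list(log.get(b, {}).get("prev", []))
    seen: set[str] = set()
    while stack:
        cur = stack.pop()
        if cur == a:
            return True
        if cur in seen:
            continue
        seen.add(cur)
        stack.extend(log.get(cur, {}).get("prev", []))
    return False
```

Two updates on different sites that reference only a common ancestor are simply concurrent: neither happened before the other, and no global agreement on their order is ever needed.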

Web-based protocols augmented for history resilience

For reasons of transparency, backwards compatibility, and deployability, the history layer builds on standard Web protocols and tooling. Participating individuals and groups create and maintain sites, which to a first approximation are nothing but conventional web sites. Sites may as usual represent individuals, pseudonyms of individuals, ad-hoc groups, organized institutions, open-source projects, etc. As a starting point, the history layer merely augments ordinary web sites with additional disciplines and supporting technical tooling, again grounded ultimately in human users' commitment to digital preservation.

All content on a site is logically mutable and ultimately under the site owner's control, as usual. By adopting the history-layer process, however, the site owner makes a social (human) commitment to include some (not necessarily all) site content in a subsequently-immutable historical record – first locally on the site itself, then globally via communication with other interconnected sites. Including content in this history-layer log represents a good-faith, best-effort promise both to preserve the logged content and all changes to it locally, and to assist other sites in the decentralized preservation of content.

Each site's history-layer metadata is a local append-only log consisting of a normally-linear series of updates. Each update is represented by a Web object (file) in a format specified below. The site owner creates and publishes a new update after any set of content changes logged by the history layer. The latest update is always available at a standard URL prefix in the site, with other updates available by traversing from it. As such, history-layer metadata effectively augments a site's existing web content, “declaring” the owner's commitment to digital preservation of the logged content, but otherwise does not inherently constrain or affect the site's content when retrieved and viewed by conventional browsers. The history-layer tooling is designed to be agnostic to the precise web authoring and publishing workflow in use by a particular site, although optional plug-ins supporting particular workflows may be useful.
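One possible shape for such a local append-only log is sketched below: each update is an immutable JSON file named by its own hash, linking by hash to the previous update and to the content files it logs, with a single mutable `latest.json` pointer at a standard location. The directory layout, the `history` path, and all field names are assumptions for illustration only; the actual update format is specified below.

```python
import hashlib
import json
import os

WELL_KNOWN = "history"  # assumed path, e.g. https://site/history/latest.json

def file_hash(path: str) -> str:
    with open(path, "rb") as f:
        return hashlib.sha256(f.read()).hexdigest()

def append_update(site_dir: str, logged_paths: list[str]) -> str:
    """Append one update logging the given site-relative content paths;
    returns the new update's hash identifier."""
    hist = os.path.join(site_dir, WELL_KNOWN)
    os.makedirs(hist, exist_ok=True)
    latest = os.path.join(hist, "latest.json")
    # The previous update's identifier is simply the hash of its bytes.
    prev = file_hash(latest) if os.path.exists(latest) else None
    update = {
        "prev": prev,  # hash-link forming the site's local linear log
        "content": {p: file_hash(os.path.join(site_dir, p))
                    for p in logged_paths},
    }
    data = json.dumps(update, sort_keys=True, indent=2)
    uid = hashlib.sha256(data.encode()).hexdigest()
    # Every update stays immutable under its hash; only "latest.json"
    # is rewritten, always naming the newest update.
    with open(os.path.join(hist, uid + ".json"), "w") as f:
        f.write(data)
    with open(latest, "w") as f:
        f.write(data)
    return uid
```

Any conventional web server can publish such a directory unchanged; browsers simply ignore it, while history-layer tooling can traverse the hash chain from `latest.json` backwards.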

Observations and cooperative preservation

History-layer communication occurs when some observer – such as (most importantly) the maintainer of another site – retrieves or observes the site's latest logged update(s). At least initially, the expectation is that observations will normally occur highly asynchronously, at long time-scales initiated by human activity, such as an update to another site that links to the site in question. Tooling to support regular periodic observations of peer sites, or even proactive “watching” of other sites of interest via low-latency notification channels, is worth considering and potentially developing but strictly optional. All communication is and must be latency-tolerant by default anyway, on principle.

Beyond the traditional web practice of passively browsing a site, a history-layer observation is an active event that normally produces observer activity, including the inclusion of observation metadata in the observer's own local log. This observation metadata most importantly contains the directed edges that globally form the directed acyclic graph (DAG) of recorded history. Updating or refreshing an observer's site provides a natural human-driven opportunity for the observer's tooling to check that link targets still exist and, if logged in the target site's history, that the target site is following its promised digital-preservation disciplines – such as not changing logged content other than via properly-recorded updates.
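The checking and edge-recording step might look roughly like the following sketch, which assumes the hypothetical update fields used earlier (`content` mapping paths to hashes); the observation record's field names are likewise illustrative, pending the observation record format below.

```python
import hashlib

def check_target(update: dict, fetch) -> list[str]:
    """Return the paths whose current content no longer matches the hash
    the target site logged for them; `fetch(path)` returns current bytes.
    A non-empty result means the target changed logged content outside
    of a properly-recorded update."""
    broken = []
    for path, logged_hash in update["content"].items():
        if hashlib.sha256(fetch(path)).hexdigest() != logged_hash:
            broken.append(path)
    return broken

def make_observation(observer_site: str, target_site: str,
                     target_update_id: str, broken: list[str]) -> dict:
    """Build the metadata the observer appends to its own local log:
    a directed cross-site edge in the global history DAG."""
    return {
        "type": "observation",
        "observer": observer_site,
        "target": target_site,
        "observed": target_update_id,  # the cross-site DAG edge
        "violations": broken,          # empty if the target kept its promises
    }
```

Because each observation is itself logged and hash-linked, later observers can see not only what a site published but who checked it and when.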

The observer-target relationship also provides a natural opportunity to increase the resilience of logged data by making mirroring an automatic and standard practice. A key part of the human-level “social contract” of the history layer is that observers commit to limited and policy-controlled archiving of content that the observer site links to, and target sites in turn – at least for content logged in the history layer – consent to and commit to assisting in this decentralized content preservation. The expectation is that whenever an observer updates her site to include a new link, the observer's tooling normally makes and keeps a local archival snapshot of at least the target page that the link refers to, preferably including content embedded on that page (e.g., images) and ideally including content “under” the target page (transitively reachable via URLs having the original target URL as a prefix) – provided of course the target content is not “too large” by suitable observer-controlled metrics.
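A minimal sketch of this policy-controlled mirroring follows: archive the target page and pages “under” it (the URL-prefix rule), stopping when an observer-set byte budget runs out. The `fetch` and `discover` callables abstract the transport and link extraction; a fuller version would also archive embedded resources (e.g., images) that live outside the prefix.

```python
def snapshot(target_url: str, fetch, discover, max_bytes: int) -> dict:
    """Archive target_url and pages transitively reachable from it whose
    URLs share its prefix, staying within max_bytes total.
    `fetch(url)` returns page bytes; `discover(url, body)` yields links."""
    archive: dict[str, bytes] = {}
    budget = max_bytes
    queue = [target_url]
    while queue and budget > 0:
        url = queue.pop(0)
        if url in archive:
            continue
        body = fetch(url)
        if len(body) > budget:
            break  # "too large" by this observer's policy; stop here
        archive[url] = body
        budget -= len(body)
        for link in discover(url, body):
            # only archive content "under" the original target URL
            if link.startswith(target_url):
                queue.append(link)
    return archive
```

The important design point is that the size budget and prefix rule are purely observer-controlled policy; nothing about the protocol forces an observer to mirror more than it is willing to store.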

The observer's tooling should ideally make these automatic archival snapshots available to third parties who might be unable to reach the target content immediately themselves, whether due to the target site's failure, a short- or long-term communication outage or asymmetry, or perhaps just because it is faster or cheaper to obtain the content from a “nearby neighbor” than from the original target site. The conditions under which an observer site makes snapshots available will necessarily need to vary depending on factors such as target-site policy. But the basic goal is to establish decentralized archiving and preservation as an automatic, standard, and expected practice rather than a rare and difficult exception case.

The history layer is conceptually agnostic to the specific transport by which an observer discovers and retrieves an update from a site of interest. While the initial starting point and default case will be ordinary web-style retrieval over HTTP and TLS, the expectation is to incorporate other “plug-in” transports into the architecture. For example, for privacy or connectivity reasons some sites might sometimes or always be available “peer-to-peer” over a particular local-area network, and rarely or never via the global Internet. Some sites may be reachable (perhaps only) as a Tor hidden service or via other specialized anonymous and/or censorship-resistant channels. Some sites may be reachable (sometimes, only, or most cost-effectively) via manual, periodic, human-driven “sneakernet” ferrying of bulk data (for example, on USB sticks).
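One way to express this transport agnosticism is a narrow interface that the rest of the tooling programs against: “fetch this site's latest update, however you can reach it.” The class names, the well-known URL path, and the sneakernet directory layout below are all illustrative assumptions.

```python
from abc import ABC, abstractmethod

class Transport(ABC):
    """Pluggable transport: the history layer only needs latest-update
    retrieval; discovery, latency, and reachability are transport details."""

    @abstractmethod
    def fetch_latest(self, site: str) -> bytes:
        """Return the raw bytes of the site's latest update record."""

class HttpsTransport(Transport):
    """Default case: ordinary web retrieval over HTTPS."""
    def fetch_latest(self, site: str) -> bytes:
        import urllib.request
        url = f"https://{site}/history/latest.json"  # assumed standard path
        with urllib.request.urlopen(url) as resp:
            return resp.read()

class SneakernetTransport(Transport):
    """Reads updates from a mounted USB stick's bulk-export directory,
    organized as one subdirectory per site."""
    def __init__(self, mount_point: str):
        self.mount = mount_point

    def fetch_latest(self, site: str) -> bytes:
        import os
        path = os.path.join(self.mount, site, "latest.json")
        with open(path, "rb") as f:
            return f.read()
```

A Tor or LAN peer-to-peer transport would slot in the same way, and because all communication is latency-tolerant by design, a transport that takes days to deliver an update is just as valid as one that takes milliseconds.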

Update record format

To be filled in.

Observation record format

To be filled in.

History content indexing

To be filled in.


Bryan Ford