biRT is a complete computing platform written from scratch in C. It runs directly on hardware — no operating system, no kernel, no filesystem, no external libraries. One binary. From power-on to thinking in under a millisecond.
Because the runtime is pure machine code and small enough to fit beside the silicon — not on top of a half-gigabyte of operating-system overhead — the hardware does what it was built for. The OS isn't in the way.
biRT lands on any hardware that doesn't lock the boot chain after BIOS. Production today:
- c2-standard-8 — bulk translation, repo import, mesh hubs. Three regions: Virginia, London, Singapore.
- e2-micro — encrypted-triple serving at the edge. Three regions: Oregon, Belgium, Taiwan.

Cryptographic keys are generated on the device and never transmitted off it. Local storage is sealed with passphrase-derived keys (PBKDF2). Brute-force lockout is built into the vault.
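To make the lockout behavior concrete, here is a minimal sketch of a failed-attempt counter with exponential backoff. This is an illustration, not biRT's vault code: the type and function names (`vault_t`, `vault_try_unlock`), the attempt budget, and the plain `strcmp` check (standing in for a PBKDF2-derived key comparison) are all assumptions.

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

#define MAX_ATTEMPTS 5 /* assumed budget before lockout kicks in */

typedef struct {
    const char *passphrase;   /* real vault: compare a PBKDF2-derived key, not plaintext */
    int failures;             /* consecutive failed unlock attempts */
    uint64_t locked_until_ms; /* lockout deadline on the device clock */
} vault_t;

/* Returns 1 on success, 0 on a wrong passphrase, -1 while locked out. */
int vault_try_unlock(vault_t *v, const char *guess, uint64_t now_ms) {
    if (now_ms < v->locked_until_ms)
        return -1; /* still inside the lockout window */
    if (strcmp(guess, v->passphrase) == 0) {
        v->failures = 0; /* success clears the counter */
        return 1;
    }
    v->failures++;
    if (v->failures >= MAX_ATTEMPTS) {
        /* double the lockout window with each failure past the budget */
        uint64_t backoff_ms = 1000ULL << (v->failures - MAX_ATTEMPTS);
        v->locked_until_ms = now_ms + backoff_ms;
    }
    return 0;
}
```

Doubling the window per extra failure keeps honest typos cheap while making an online brute-force attempt exponentially slow.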
This is what we have shipped, in the codebase, today. The conversational privacy layer for AI interactions — the part that mediates between operators and external models — is being designed now. We will describe it here when we can point at running code.
Devices find each other and talk securely without cloud infrastructure. Store-and-forward across multiple physical layers. The system keeps working when the network doesn't.
Offline-first stance: the mesh assumes the network may not be there. Nodes carry their own truth and reconcile when they reach each other.
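One way to picture "carry their own truth and reconcile" is a per-node journal merged last-writer-wins when two nodes finally reach each other. The sketch below assumes that policy and a logical timestamp; the struct and function names are illustrative, not the NOTW wire protocol.

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

#define MAX_ENTRIES 16 /* toy capacity for the sketch */

typedef struct {
    char key[32];
    char value[64];
    uint64_t stamp; /* logical clock: the higher stamp wins on merge */
} entry_t;

typedef struct {
    entry_t entries[MAX_ENTRIES];
    int count;
} node_store_t;

/* Upsert one entry, keeping whichever copy has the newer stamp. */
static void store_put(node_store_t *s, const entry_t *e) {
    for (int i = 0; i < s->count; i++) {
        if (strcmp(s->entries[i].key, e->key) == 0) {
            if (e->stamp > s->entries[i].stamp)
                s->entries[i] = *e;
            return;
        }
    }
    if (s->count < MAX_ENTRIES)
        s->entries[s->count++] = *e;
}

/* When contact is re-established, replay the peer's journal into ours;
 * each key independently keeps its most recent writer. */
void store_reconcile(node_store_t *dst, const node_store_t *src) {
    for (int i = 0; i < src->count; i++)
        store_put(dst, &src->entries[i]);
}
```

Because the merge is per-key and order-independent, either node can initiate reconciliation and both converge to the same state.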
The memory layer is memory-mapped (zero-copy), journaled for crash safety, encrypted at rest with AES-256-GCM, and indexed by up to 16 keys per store. Binary serialization compresses roughly 79x versus raw text. Knowledge and code share the same representation, which means the system reasons about its own behavior the same way it reasons about anything else.
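The zero-copy part can be demonstrated with POSIX `mmap` on a development host (biRT itself has no OS, so its mapping works against physical memory directly). The record layout and function name below are assumptions for illustration, and the real store layers journaling and AES-256-GCM encryption on top of the mapping.

```c
#include <assert.h>
#include <fcntl.h>
#include <stdint.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

typedef struct {
    uint32_t id;
    uint32_t len;
    char payload[56]; /* fixed 64-byte record, no padding */
} record_t;

/* Write one record, map the file, and read the struct in place —
 * no read() call, no intermediate buffer. Returns 1 on a clean match. */
int record_roundtrip(const char *path) {
    record_t out = { 42, 5, "hello" };
    int ok = 0;

    int fd = open(path, O_CREAT | O_TRUNC | O_RDWR, 0600);
    if (fd < 0) return 0;
    if (write(fd, &out, sizeof out) == (ssize_t)sizeof out) {
        record_t *r = mmap(NULL, sizeof *r, PROT_READ, MAP_SHARED, fd, 0);
        if (r != MAP_FAILED) {
            /* r points straight at the mapped pages: a zero-copy read */
            ok = (r->id == 42 && strcmp(r->payload, "hello") == 0);
            munmap(r, sizeof *r);
        }
    }
    close(fd);
    unlink(path);
    return ok;
}
```

The fixed binary layout is what makes this possible: the bytes on disk are already the in-memory representation, so "deserialization" is just a pointer cast.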
Internal seek-speed measurements are very promising. We will not claim "fastest on Earth" until we have published the numbers. We are running the architecture against Wikipedia and other public datasets next, and the benchmarks — methodology, hardware, raw numbers, comparisons — will be published in full and reproducible by anyone who wants to verify them.
One linear flow from operator's voice to runtime tick:
The runtime never sees the conversation; the conversation never has to know about the runtime. Two worlds joined: machine-level code that runs as fast as the silicon allows, and human thought translated into runtime instructions.
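The separation described above can be sketched as a translator that emits opaque instructions and a tick loop that executes them with no knowledge of the text that produced them. The opcode set, types, and function name here are purely hypothetical, not biRT's manifest format.

```c
#include <assert.h>
#include <stdint.h>

typedef enum { OP_NOP, OP_ADD, OP_HALT } opcode_t;

typedef struct {
    opcode_t op;
    int32_t operand;
} instr_t;

/* The runtime side of the boundary: it consumes instructions and an
 * accumulator, and never sees the conversation that generated them. */
int32_t run_ticks(const instr_t *program, int32_t acc) {
    for (const instr_t *ip = program; ip->op != OP_HALT; ip++) {
        switch (ip->op) {
        case OP_ADD: acc += ip->operand; break;
        default:     break; /* OP_NOP */
        }
    }
    return acc;
}
```

Everything conversational lives on the far side of the instruction boundary; the tick loop's whole world is the opcode stream.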
For the current state of each link in the chain, see the capability ledger for the full status surface.