Your registry doesn't have to be a service
A SELECT and an index against an append-only event log can do most of what a service registry does, and avoid the integration tax that comes with treating audit, observability, capability discovery, and cost as four separate services.
I’ve been bootstrapping my personal AI stack DYFJ. One of the things it has to do is bilateral capability discovery — agents announce what they can provide, agents announce what they need, and the substrate matches them. This is inspired by an idea I learned while working on the Jini stack at Sun: everything needs something, everything provides something, and the system matches the needs to the providers.
You’re seeing the same idea reborn in agent skills, which are all the rage right now and for good reason. Each skill ships frontmatter declaring what it does and when to load it; the harness reads those declarations and progressively discloses the matching ones into context as user requests trigger them. The buzz around progressive disclosure is the same lookup-and-leasing pattern reapplied at LLM altitude.
The common approach to this kind of discovery is a service of some sort: Consul, Eureka, etcd, mDNS. You stand up a registry process. It listens on a port. Producers hit it on startup; consumers hit it on lookup. It holds the state of the world for as long as it’s running, and it’s now one more thing you have to keep alive.
I’m building a single-machine personal AI substrate where every meaningful action is already getting written into a durable, append-only log for other reasons — observability, audit, cost tracking, replay. In this context, the question stops being how to build the registry and starts being whether you actually need one. A SELECT against the log that every other lens is already reading does most of what you’d want a registry for.
The substrate
Every action in DYFJ — every model call, every tool invocation, every error, every session boundary — gets one row in a table called events. Append-only. Indexed by trace, by principal, by event type, by capability name. I’m fascinated with Dolt: SQL semantics with git-style versioning underneath, plus time-travel queries. But ultimately the storage choice barely matters for this argument.
When an agent advertises that it provides memory.search.semantic until 1pm, it doesn’t call into a registry. It writes a row. Event type capability_provide, the capability name in one column, the lease expiry in another, a ULID in the lease ID column for the future match-and-release rows that point back at it. Same library call, same connection pool, same events::write that the rest of the system uses for everything else.
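A minimal sketch of that write path, using sqlite3 as a stand-in for Dolt. The table and column names are the ones from the post; the `events_write` wrapper name and the uuid4-for-ULID substitution are my assumptions:

```python
import sqlite3
import uuid
from datetime import datetime, timedelta, timezone

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE events (
        event_type TEXT,
        principal_id TEXT,
        capability_name TEXT,
        capability_lease_id TEXT,
        capability_lease_expires TEXT,
        created_at TEXT
    )
""")

def events_write(event_type, principal_id, capability_name=None,
                 lease_id=None, lease_expires=None):
    # Same call path the rest of the system uses for every event.
    conn.execute(
        "INSERT INTO events VALUES (?, ?, ?, ?, ?, ?)",
        (event_type, principal_id, capability_name, lease_id,
         lease_expires, datetime.now(timezone.utc).isoformat()),
    )

# Advertising a capability with a one-hour lease: just a row, no registry call.
expires = (datetime.now(timezone.utc) + timedelta(hours=1)).isoformat()
events_write("capability_provide", "agent-7", "memory.search.semantic",
             lease_id=str(uuid.uuid4()), lease_expires=expires)
```

The point of the sketch is that registration is indistinguishable from any other event write.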
When another agent wants to find a provider, it doesn’t call into a registry either. It runs:
SELECT principal_id, capability_lease_id
FROM events
WHERE capability_name = 'memory.search.semantic'
AND event_type = 'capability_provide'
AND (capability_lease_expires IS NULL OR capability_lease_expires > NOW(6))
ORDER BY created_at DESC;
That’s the whole registry, more or less. There’s no daemon to keep running, no separate state living anywhere — the rows are the state, and two function signatures wrapping the queries above have been enough that I haven’t reached for anything more.
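Here’s roughly what the lookup half of those wrappers could look like — a Python sketch over sqlite3 standing in for Dolt; the function name is mine, the query is the one above:

```python
import sqlite3
from datetime import datetime, timezone

def find_providers(conn: sqlite3.Connection, capability: str) -> list:
    # The whole "registry lookup": newest unexpired provide rows first.
    now = datetime.now(timezone.utc).isoformat()
    return conn.execute(
        """
        SELECT principal_id, capability_lease_id
        FROM events
        WHERE capability_name = ?
          AND event_type = 'capability_provide'
          AND (capability_lease_expires IS NULL
               OR capability_lease_expires > ?)
        ORDER BY created_at DESC
        """,
        (capability, now),
    ).fetchall()
```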
The actual idea
The reason this feels off is that distributed-systems training treats registries, audit logs, observability, and cost tracking as four separate concerns, so you stand up four separate services and spend most of your remaining time integrating them. The adjustment — and I’d call it a “trick” but it’s more obvious in retrospect than that word implies — is to notice that all four of those services are looking at the same actions from different angles. The audit log wants to know who did what. The trace exporter is reconstructing the call graph for one request. Cost tracking is summing dollars per session. The registry wants to find a provider whose lease hasn’t run out yet. Different questions, same rows underneath.
If you accept that, what would have been four services becomes one table read four different ways, and the integration work — which on most of these projects ends up being most of the work — disappears.
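To make “one table read four different ways” concrete, here’s a sketch under assumed column names — `trace_id`, `session_id`, and `spend_usd` are hypothetical beyond what the post names:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE events (
        event_type TEXT, principal_id TEXT, trace_id TEXT,
        session_id TEXT, capability_name TEXT,
        capability_lease_expires TEXT, spend_usd REAL, created_at TEXT
    )
""")

# Four "services", one table, four WHERE clauses.
lenses = {
    # Audit: who did what, in order.
    "audit": "SELECT created_at, principal_id, event_type FROM events "
             "WHERE principal_id = ? ORDER BY created_at",
    # Tracing: reconstruct the call graph for one request.
    "trace": "SELECT * FROM events WHERE trace_id = ? ORDER BY created_at",
    # Cost: dollars per session.
    "cost": "SELECT session_id, SUM(spend_usd) FROM events "
            "WHERE session_id = ? GROUP BY session_id",
    # Registry: live providers for a capability.
    "registry": "SELECT principal_id FROM events "
                "WHERE event_type = 'capability_provide' "
                "AND capability_name = ? "
                "AND capability_lease_expires > datetime('now')",
}

for sql in lenses.values():
    conn.execute(sql, ("x",))  # each lens is just a query away
```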
Honest objections
You’ll burn down the database. Not unless I keep adding lenses without indexing for them. The events table has indexes for trace, principal, event type, capability name. Each lens reads through its own. New lens, new index — not new service.
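A sketch of that indexing story, again over sqlite3 with index names of my own choosing. EXPLAIN QUERY PLAN is there to show the registry lens reading through its own index rather than scanning the log:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE events (
    event_type TEXT, principal_id TEXT, trace_id TEXT,
    capability_name TEXT, capability_lease_expires TEXT, created_at TEXT)""")

# One index per lens; adding a lens means adding an index, not a service.
for ddl in (
    "CREATE INDEX idx_events_trace ON events (trace_id)",
    "CREATE INDEX idx_events_principal ON events (principal_id)",
    "CREATE INDEX idx_events_type ON events (event_type)",
    "CREATE INDEX idx_events_capability "
    "ON events (capability_name, event_type, capability_lease_expires)",
):
    conn.execute(ddl)

# The capability lens gets an index seek, not a full scan of the event log.
plan = conn.execute(
    """EXPLAIN QUERY PLAN
       SELECT principal_id FROM events
       WHERE capability_name = 'memory.search.semantic'
         AND event_type = 'capability_provide'"""
).fetchall()
```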
You can’t do leasing or heartbeats this way. You can — leases are columns, heartbeats are rows — but I’ll be honest, this is the part I’d want to see under real load before defending it for an agent fleet doing hundreds of registrations per second. For a personal stack? Not a concern. For a multi-tenant production system, you’d reach for actual leasing primitives, and that’s fine. The lens-over-the-log frame still holds for the audit, cost, and trace lenses; you’d bolt a real registry on later as a fifth lens. You don’t lose anything by starting with the query.
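A sketch of heartbeats-are-rows, over sqlite3 with the column names from the post: each beat is one more provide row with a pushed-out expiry, and liveness is a query. The function names and the 30-second TTL are my assumptions:

```python
import sqlite3
from datetime import datetime, timedelta, timezone

conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE events (
    event_type TEXT, principal_id TEXT, capability_name TEXT,
    capability_lease_expires TEXT, created_at TEXT)""")

def heartbeat(principal, capability, ttl=timedelta(seconds=30)):
    # A heartbeat is just another provide row with a fresh lease expiry.
    now = datetime.now(timezone.utc)
    conn.execute(
        "INSERT INTO events VALUES ('capability_provide', ?, ?, ?, ?)",
        (principal, capability, (now + ttl).isoformat(), now.isoformat()),
    )

def is_live(principal, capability):
    # Liveness = at least one unexpired provide row from this principal.
    now = datetime.now(timezone.utc).isoformat()
    row = conn.execute(
        """SELECT 1 FROM events
           WHERE event_type = 'capability_provide'
             AND principal_id = ? AND capability_name = ?
             AND capability_lease_expires > ?
           LIMIT 1""",
        (principal, capability, now),
    ).fetchone()
    return row is not None
```

A provider that stops beating simply ages out of the query result; nothing has to notice it dying.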
You’ll want active matching eventually. Probably. When I do, the matcher is a process that reads capability_require rows and writes capability_match rows. Still on the same table. Still no separate store. The matcher is one more producer-consumer that sits on top of the substrate; it doesn’t replace it.
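That matcher can be sketched in a few lines — `matched_provider` is a column I’m assuming; everything else follows the row types named above:

```python
import sqlite3
from datetime import datetime, timezone

conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE events (
    event_type TEXT, principal_id TEXT, capability_name TEXT,
    matched_provider TEXT, created_at TEXT)""")

def write(event_type, principal, capability, provider=None):
    conn.execute("INSERT INTO events VALUES (?, ?, ?, ?, ?)",
                 (event_type, principal, capability, provider,
                  datetime.now(timezone.utc).isoformat()))

def run_matcher():
    # Read unmatched capability_require rows, pair each with the freshest
    # provider, and record the pairing as one more row in the same table.
    unmatched = conn.execute(
        """SELECT r.principal_id, r.capability_name FROM events r
           WHERE r.event_type = 'capability_require'
             AND NOT EXISTS (SELECT 1 FROM events m
                 WHERE m.event_type = 'capability_match'
                   AND m.principal_id = r.principal_id
                   AND m.capability_name = r.capability_name)"""
    ).fetchall()
    for requester, capability in unmatched:
        provider = conn.execute(
            """SELECT principal_id FROM events
               WHERE event_type = 'capability_provide'
                 AND capability_name = ?
               ORDER BY created_at DESC LIMIT 1""", (capability,)
        ).fetchone()
        if provider:
            write("capability_match", requester, capability, provider[0])
```

Because the NOT EXISTS guard keys off rows the matcher itself wrote, re-running it is a no-op — the table is both its input and its output.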
What I keep coming back to
A lot of the architectural ceremony I learned over the last thirty-plus years was set up for a world where you couldn’t trust one store to do the job. Storage was expensive. Transactional consistency was rare. You fragmented your data because you had to. Most of that’s not true anymore. Dolt gives us a MySQL-style consistency model on durable, versioned storage that I run on my own laptop. SQLite would do most of it on a single node without the version-control angle. The constraint that justified fragmenting concerns into services has been gone for a while, and I think a lot of us — me very much included — kept architecting for it anyway.
If you trust your store to be the integration point, you stop needing to do integration as a separate kind of work, and a lot of what we’ve been calling cross-cutting concerns turns out to be queries.
Where this lands
I’m not arguing nobody should ever stand up a service registry. There are workloads that need one. What I’m arguing is that the default — registry-as-its-own-service — is the wrong call for systems where every meaningful action is already getting written into a durable log for other reasons. An agent stack in 2026 is exactly that kind of system. You’re already writing every model call, every tool invocation, every authorization decision into something you can query. The capability-discovery layer is just one more lens over data you already have.
DYFJ has no registry process; the SELECT runs against rows the rest of the system was already writing for other reasons, and so far that’s covered everything I’ve actually needed it to cover.