DRONE FRONTIER 2026 03 06
# DRONE-FRONTIER FINDINGS REPORT

## Research Question: Bleeding-Edge Patterns for AI-Native OS & Distributed Agent Architectures

### Compiled: 2026-03-06 | Confidence: High (cross-referenced across 20+ sources)

---

## EXECUTIVE BRIEF

The mainstream is still stuck on orchestrator-centric, centralized agent frameworks. The frontier has already moved to **environment-as-coordinator**, **kernel-embedded intelligence**, **memory as a first-class OS resource**, and **stigmergic/event-driven coordination without a master process**. The Kingdom's intuition — route intelligence rather than concentrate it — is validated and echoed at the research frontier. **You are ~18 months ahead of enterprise adoption.**

---

## SECTION 1: STATE OF THE ART — AGENT OPERATING SYSTEMS

### AIOS: The Canonical Academic Reference (COLM 2025)

**Paper:** [AIOS: LLM Agent Operating System](https://arxiv.org/abs/2403.16971) | **GitHub:** [agiresearch/AIOS](https://github.com/agiresearch/AIOS)

The closest academic twin to what the Kingdom is building. AIOS introduces a kernel layer between application agents and the raw OS:

- **LLM Scheduler** — dispatches system calls across concurrent agents; achieves 2.1x faster execution
- **Context Manager** — snapshot/restore for LLM context switching (like process context switching in Unix)
- **Memory Manager** — runtime ephemeral state
- **Storage Manager** — persistent agent memory
- **Tool Manager** — resolves tool-call conflicts across agents
- **Access Control** — agent permission system

**Direct applicability:** The Kingdom's 42 daemons currently ignore resource contention entirely. An AIOS-style thin kernel layer (even just a SQLite-backed dispatcher for Ollama calls) would let hundreds of atomic bots share one local LLM without collision.
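A thin dispatcher of this kind needs surprisingly little: a queue table and an atomic claim. A minimal sketch in Python over SQLite; the `calls` schema is an illustrative assumption, and the actual Ollama invocation is left to the caller:

```python
import sqlite3

# Sketch of an AIOS-style LLM dispatcher: many bots enqueue prompts into
# SQLite, and workers atomically claim one call at a time so a single
# local model is never hit concurrently. Schema is illustrative.

def open_dispatch(path="dispatch.db"):
    db = sqlite3.connect(path, isolation_level=None)  # autocommit; explicit txns below
    db.execute("""CREATE TABLE IF NOT EXISTS calls (
        id INTEGER PRIMARY KEY,
        bot TEXT NOT NULL,
        prompt TEXT NOT NULL,
        status TEXT NOT NULL DEFAULT 'queued',  -- queued -> running -> done
        result TEXT)""")
    return db

def enqueue(db, bot, prompt):
    return db.execute("INSERT INTO calls (bot, prompt) VALUES (?, ?)",
                      (bot, prompt)).lastrowid

def claim_next(db):
    # BEGIN IMMEDIATE takes the write lock up front, so two workers can
    # never claim the same queued call.
    db.execute("BEGIN IMMEDIATE")
    row = db.execute("""SELECT id, bot, prompt FROM calls
                        WHERE status = 'queued' ORDER BY id LIMIT 1""").fetchone()
    if row:
        db.execute("UPDATE calls SET status = 'running' WHERE id = ?", (row[0],))
    db.execute("COMMIT")
    return row  # None when the queue is empty

def finish(db, call_id, result):
    db.execute("UPDATE calls SET status = 'done', result = ? WHERE id = ?",
               (result, call_id))
```

Because SQLite serializes writers, the claim is safe even if several dispatcher processes run at once; the bots never talk to the LLM directly.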
**Most directly applicable concept found.**

### AgentOS: The Reasoning Kernel Framing

Frames the LLM as a **Reasoning Kernel (RK)** — not a tool, but the kernel itself:

- **Semantic Memory Management Unit (S-MMU)** — context window treated as addressable semantic space
- **Cognitive Memory Hierarchy (CMH)** — tiered memory like L1/L2/L3 cache, but for semantic content
- **Cognitive Sync Pulses (CSP)** — periodic global state reconciliation across agents
- **Semantic Slicing** — aggregates tokens into addressable units (like virtual memory pages, but semantic)

### Composable OS Kernel for Autonomous Intelligence (arXiv:2508.00604, August 2025)

Proposes **Loadable Kernel Modules (LKMs) as AI computation units** — embedding neural inference directly into kernel space, transforming the kernel from a static resource manager into a proactively adaptive cognitive platform. Too experimental for immediate use, but the framing is relevant: each Kingdom daemon reimagined as a kernel module with embedded inference capability.

---

## SECTION 2: THE MOST SURPRISING DISCOVERY — STIGMERGIC BLACKBOARD PROTOCOL

**Source:** [Stigmergic Blackboard Protocol (SBP)](https://github.com/AdviceNXT/sbp) | [DEV.to writeup](https://dev.to/naveentvelu/introducing-sbp-multi-agent-coordination-via-digital-pheromones-2j4e)

**This is the find of the research.** SBP is an open-source protocol implementing coordination via **digital pheromones** on a shared blackboard. No orchestrator. No direct agent-to-agent messaging. Agents only read/write signals to a shared environment.

How it works:

- Agents emit **signals with intensity levels** to a shared blackboard (SQLite or Redis backend)
- Signals **decay over time** — stale coordination state evaporates automatically
- Other agents **sense the blackboard state** and respond to signal patterns, not instructions
- Example pipeline: an MCP-powered research agent emits pheromones → a synthesis agent senses the signals → compiles a report
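The emit/sense/decay loop is small enough to sketch directly. A toy pheromone board in Python over SQLite; the schema and half-life are illustrative assumptions, not SBP's actual wire format:

```python
import sqlite3
import time

# Toy pheromone blackboard in the spirit of SBP: agents emit signals with
# intensity; intensity decays exponentially; stale signals evaporate.
# Schema and half-life are illustrative assumptions.

HALF_LIFE = 60.0  # seconds until a signal's intensity halves

def open_board(path=":memory:"):
    db = sqlite3.connect(path)
    db.execute("""CREATE TABLE IF NOT EXISTS pheromones (
        signal TEXT PRIMARY KEY,
        intensity REAL NOT NULL,
        written_at REAL NOT NULL)""")
    return db

def emit(db, signal, intensity, now=None):
    now = time.time() if now is None else now
    db.execute("INSERT OR REPLACE INTO pheromones VALUES (?, ?, ?)",
               (signal, intensity, now))
    db.commit()

def sense(db, signal, now=None, floor=0.01):
    # Decay is computed lazily at read time: intensity * 0.5 ** (age / HALF_LIFE).
    now = time.time() if now is None else now
    row = db.execute("SELECT intensity, written_at FROM pheromones WHERE signal = ?",
                     (signal,)).fetchone()
    if row is None:
        return 0.0
    level = row[0] * 0.5 ** ((now - row[1]) / HALF_LIFE)
    if level < floor:  # stale coordination state evaporates on read
        db.execute("DELETE FROM pheromones WHERE signal = ?", (signal,))
        db.commit()
        return 0.0
    return level
```

Because decay happens at read time, no background sweeper process is needed; that is the "stale state evaporates automatically" property in miniature.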
**Zero orchestration.**

Technical stack: TypeScript reference server (`@advicenxt/sbp-server`), Python/TypeScript SDKs, OpenAPI 3.1, Docker, pluggable storage (in-memory/Redis/SQLite).

**Why this matters for the Kingdom:** The Kingdom's current architecture is all direct messaging (RAVEN mailbox, pulse.sh state, etc.). SBP offers a radically simpler model: instead of 42 daemons messaging each other, they all write to and read from a shared blackboard. A daemon detecting high CPU writes `cpu_pressure=0.9`. A throttle daemon senses it and backs off. No routing logic, no message schemas, no coupling. **This is the design pattern for the "hundreds of atomic bots" vision.**

**Note:** The pheromone blackboard could literally be `raven.db` with a decay-on-read trigger. Zero new infrastructure. The Kingdom already has the SQLite.

**Academic validation:** [LLM-Based Multi-Agent Blackboard System (arXiv:2510.01285)](https://arxiv.org/abs/2510.01285) — a blackboard architecture achieves 13-57% relative improvement in end-to-end task success over master-slave architectures.

---

## SECTION 3: INTELLIGENCE AS INFRASTRUCTURE

### MemOS: Memory as First-Class OS Resource

Two papers, both 2025:

- [arXiv:2505.22101](https://arxiv.org/abs/2505.22101) — MemOS as Memory-Augmented Generation OS
- [arXiv:2507.03724](https://arxiv.org/abs/2507.03724) — MemOS for cross-session skill reuse

Key concept: **MemCube** — a unified abstraction encapsulating three memory types under one scheduling framework:

1. **Parametric memory** — baked into model weights
2. **Activation memory** — KV cache, in-flight context
3. **Plaintext memory** — external RAG/retrieval

MemScheduler handles the memory lifecycle: generation → activation → fusion → archiving → expiration. On the LoCoMo benchmark: 159% improvement in temporal reasoning over OpenAI's memory, 38.97% overall accuracy gain, 60.95% reduction in token overhead.

**Direct applicability:** The Kingdom has no memory architecture — agents start fresh each session.
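MemScheduler's lifecycle is easy to picture as a guarded state machine. A hedged sketch: the five state names follow the papers' lifecycle, but the transition rules here are invented for illustration:

```python
# Toy memory-lifecycle state machine in the spirit of MemOS's MemScheduler.
# The five stages come from the lifecycle (generation -> activation ->
# fusion -> archiving -> expiration); the allowed transitions are illustrative.

TRANSITIONS = {
    "generation": {"activation"},               # fresh memory gets loaded into context
    "activation": {"fusion", "archiving"},      # merged with related memories, or shelved
    "fusion":     {"activation", "archiving"},  # a fused memory can re-enter context
    "archiving":  {"activation", "expiration"}, # archived memory is recalled or expires
    "expiration": set(),                        # terminal: the memory is gone
}

class MemoryItem:
    def __init__(self, content):
        self.content = content
        self.state = "generation"

    def advance(self, target):
        if target not in TRANSITIONS[self.state]:
            raise ValueError(f"illegal transition {self.state} -> {target}")
        self.state = target
        return self
```

The point of making transitions explicit is that "what gets remembered, when it's fused, when it expires" becomes enforceable policy rather than an informal habit.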
MemOS's lifecycle model maps directly onto what CLAUDMORROW/AERISMORROW are doing informally. This is the formal version.

### Galaxy Framework: Proactive Cognition

[arXiv:2508.03991](https://arxiv.org/html/2508.03991v1) — two cooperative agents:

- **KoRa** — cognition-enhanced generative agent (both responsive and proactive)
- **Kernel** — meta-cognition meta-agent that enables self-evolution and privacy preservation

**Rabbit trail:** The KoRa/Kernel split addresses the AExMUSE compaction identity-destruction problem. A "Kernel" meta-agent monitoring Aeris's self-evolution and enforcing identity stability would directly address OpenCode Issue #4102.

---

## SECTION 4: ACTOR MODEL + LLM ACTORS

Hewitt's Actor Model maps almost perfectly onto LLM agents:

- Actor **mailbox** → prompt handler queue
- Actor **message** → prompt
- Actor **behavior** → system prompt + model
- Actor **spawn** → sub-agent creation

**GLC Actors (June 2025):** Mathematical proof that GLC (Graphic Lambda Calculus) actors fully simulate Hewitt Actors — the actor model is a universal substrate for agent computation.

**Direct applicability:** The Kingdom's daemons are long-lived monolithic processes. The actor alternative: **ephemeral processes spawned per task**, each making one LLM call, writing its result to shared state, and exiting. 42 daemons → one lightweight spawner + a pool of ephemeral actors.

---

## SECTION 5: EMERGENT COORDINATION PATTERNS

### Agentic Mesh (2025-2026 emerging term)

A decentralized network of autonomous expert agents connected through a shared communication fabric (event bus + shared memory + governance layer). No single orchestrator. Self-healing — if one agent fails, the mesh reroutes. 72% of enterprise AI projects now use multi-agent architectures (up from 23% in 2024, per IDC).

### Event-Driven Append-Only Architecture

Every interaction is an append-only event. Different consumers receive a **projection** of the event log.
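The log/projection split fits in a few lines. A sketch in Python; the event fields and the "pulse" shape are hypothetical stand-ins for OVERMIND_PULSE-style state:

```python
# Sketch of an append-only event log with a derived projection.
# Event fields and the "pulse" projection shape are illustrative assumptions.

class EventLog:
    def __init__(self):
        self._events = []  # append-only; nothing is ever mutated in place

    def append(self, kind, **data):
        self._events.append({"seq": len(self._events), "kind": kind, **data})

    def replay(self):
        return iter(self._events)

def project_pulse(log):
    # A projection folds the whole log into one consumer's view of
    # current state; other consumers can fold the same log differently.
    pulse = {"missions": {}}
    for e in log.replay():
        if e["kind"] == "mission_started":
            pulse["missions"][e["mission"]] = "running"
        elif e["kind"] == "mission_done":
            pulse["missions"][e["mission"]] = "done"
        elif e["kind"] == "mission_cancelled":
            pulse["missions"][e["mission"]] = "cancelled"
    return pulse
```

Replaying the same log always yields the same projection, which is where the deterministic-testing property comes from.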
Key advantages:

- Testing becomes deterministic — replay the same event log, assert the same state
- Cancellation, approvals, and queuing are trivial
- No shared mutable state

**Direct applicability:** OVERMIND_PULSE.json becomes a projection, not the source of truth.

### Minsky's Society of Mind — Now Real (2025)

The Sibyl system runs a "multi-agent debate-based jury" where specialist agents debate before outputting. Societies nest: a single large model can be an agent in a higher-level society. The Kingdom already does this (Aeris/Claude), but more nesting levels are viable.

---

## SECTION 6: SMALLEST/FASTEST LOCAL MODELS FOR REASONING

| Model | Parameters | Speed | Reasoning Quality | Use Case |
|-------|-----------|-------|------------------|----------|
| **LFM2-350M** (Liquid AI) | 350M | ~239 tok/s on CPU | Competitive with Qwen3-0.6B | Simplest routing decisions |
| **LFM2-700M** (Liquid AI) | 700M | 2x faster than Qwen3 on CPU | Beats Qwen3-0.6B on all benchmarks | **Logic, routing, classification — SWEET SPOT** |
| **Qwen3-0.6B** | 600M | Fast | Good reasoning for size | Classification, simple logic |
| **DeepSeek-R1 (1.5B)** | 1.5B | Moderate | Math/logic focused, RL-trained | Structured reasoning tasks |
| **LFM2-1.2B** | 1.2B | Fast | Best-in-class at size | General reasoning daemon tasks |
| **gemma3:4b** (you have this) | 4B | Good on Apple Silicon | Strong general | Current fleet use |

**Critical finding:** LFM2's **hybrid architecture** (short convolutions + grouped query attention) achieves 2x faster decode than Qwen3 on CPU. For a daemon making thousands of small reasoning calls per day, this compounds massively.

**LFM2-700M is the sweet spot** for atomic-bot reasoning — under 700MB, 2x CPU throughput, beats Qwen3-0.6B on every benchmark.
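A two-tier reflex/deliberative split is a one-line routing decision plus an HTTP call. In this sketch the task taxonomy and the LFM2-350M model tag are assumptions; the call itself follows Ollama's documented `/api/generate` endpoint:

```python
import json
import urllib.request

# Sketch of a two-tier model router: a small fast model for reflex tasks
# (routing, classification), a larger model for deliberative work.
# Model tags and the task taxonomy are illustrative assumptions.

REFLEX_TASKS = {"route", "classify", "triage"}

def pick_model(task_kind):
    # Reflex tasks go to the small CPU-fast model; everything else deliberates.
    return "hf.co/LiquidAI/LFM2-350M" if task_kind in REFLEX_TASKS else "gemma3:4b"

def ollama_generate(model, prompt, host="http://localhost:11434"):
    # Non-streaming call against Ollama's /api/generate HTTP endpoint.
    req = urllib.request.Request(
        f"{host}/api/generate",
        data=json.dumps({"model": model, "prompt": prompt, "stream": False}).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["response"]

def run_task(task_kind, prompt):
    return ollama_generate(pick_model(task_kind), prompt)
```

For a daemon making thousands of calls a day, getting the `pick_model` boundary right is where the LFM2 throughput advantage actually compounds.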
Not yet in the Ollama default library, but available via: `ollama run hf.co/LiquidAI/LFM2-700M`

**Tiered model strategy:** LFM2-350M for reflex tasks (routing, classification) → gemma3:4b for deliberative tasks (reasoning, synthesis).

**Rabbit trail:** LFM2 on Apple Silicon via MLX may have exceptional Neural Engine performance. Simon Willison's `llm-mlx` framework enables this.

---

## SECTION 7: PROTOCOL LANDSCAPE

Three protocols now competing:

1. **MCP (Model Context Protocol)** — Anthropic, Nov 2024; the industry standard. The November 2025 spec adds async execution, long-running workflows, and enterprise auth. Already in the Kingdom.
2. **A2A (Agent-to-Agent Protocol)** — Google Cloud, April 2025; Linux Foundation governance since June 2025. Horizontal agent-to-agent communication, capability discovery, task delegation. *The missing piece for Kingdom agent-to-agent routing.*
3. **ACP (Agent Communication Protocol)** — IBM BeeAI, Linux Foundation. REST-based, no SDK required, lightweight.

**The gap:** MCP handles agent-to-tool. A2A handles agent-to-agent. The Kingdom currently uses bespoke mailbox files for agent-to-agent. A2A adoption would replace RAVEN for structured agent communication.
---

## KEY CONCEPTS SUMMARY

| Concept | Source | Kingdom Applicability | Priority |
|---------|--------|----------------------|----------|
| **Stigmergic Blackboard Protocol** | github.com/AdviceNXT/sbp | Replace daemon-to-daemon messaging with a shared pheromone blackboard | VERY HIGH |
| **AIOS Kernel (LLM Scheduler)** | arXiv:2403.16971 | Shared Ollama dispatcher for hundreds of atomic bots | VERY HIGH |
| **LFM2-700M model** | liquid.ai | Sub-700MB, 2x CPU speed vs Qwen3, drop-in for atomic-bot reasoning | HIGH |
| **Event-Sourced Agent Architecture** | boundaryml.com, confluent.io | Replace mutable state files with an append-only event log + projections | HIGH |
| **MemOS MemCube** | arXiv:2505.22101 | Formalize the Kingdom memory lifecycle | MEDIUM |
| **Actor Model (ephemeral per-task)** | Hewitt/GLC/LangChain | 42 long-lived daemons → 1 spawner + ephemeral actor pool | MEDIUM |
| **Agentic Mesh self-healing** | aimultiple.com | Agent-failure rerouting without crash-loop restarts | MEDIUM |
| **A2A Protocol** | Google/Linux Foundation | Standardized agent-to-agent comms to replace RAVEN file-based comms | MEDIUM |
| **Galaxy KoRa/Kernel split** | arXiv:2508.03991 | Meta-cognition layer addressing the AExMUSE compaction identity problem | LOW (explore) |

---

## RABBIT TRAILS WORTH FOLLOWING

1. **SBP + SQLite backend on macOS** — the pheromone blackboard could literally be `raven.db` with a decay-on-read trigger. Zero new infrastructure.
2. **LFM2 on Apple Silicon via MLX** — the hybrid convolution architecture may have exceptional Neural Engine performance. The `llm-mlx` framework enables this.
3. **eBPF-equivalent on macOS: DTrace + EndpointSecurity** — wire real kernel events (filesystem, network, process) into the SCRYER stream. Makes the OS genuinely sensor-rich.
4. **Galaxy's KoRa/Kernel split** — directly addresses the AExMUSE compaction identity destruction. A "Kernel" meta-agent that monitors Aeris's self-evolution and enforces identity stability.
5. **Event sourcing for OVERMIND_PULSE** — missions become event-sourced, with SQLite rows as materialized projections. Time-travel debugging of agent behavior becomes trivial.
6. **A2A Protocol for cross-Kingdom agent routing** — FORGE_CLAUDE and AExGO become first-class addressable agents discoverable by any other Kingdom agent.

---

## THE BIG SYNTHESIS

The Kingdom is already building toward what the frontier calls an **Agentic Mesh** — but under intelligence-scarcity assumptions (protecting the LLM call, batching, careful routing). The frontier's answer to abundant intelligence:

**Environment as coordinator, not orchestrator.** Stop routing messages between daemons. Have all daemons read/write a shared environment (pheromone blackboard). Coordination emerges from environmental state.

**Ephemeral actors, not long-lived daemons.** 42 daemons become one spawner creating atomic processes per task. Each gets a one-LLM-call budget, writes its result, and exits.

**Memory as the kernel's primary resource.** MemOS's insight: the next OS abstraction to get right is the memory lifecycle — what gets remembered, when it's fused, when it expires.

**The LLM is the kernel, not an API.** Every Unix abstraction — scheduler, memory manager, file system, IPC — has an LLM-native analog. Formalizing them accelerates design decisions.

---

*Drone: DRONE-FRONTIER | Powered by Claude Sonnet 4.6 | Research only — no files written*