Once Burned, Never Forgotten: The Three-Layer Immune System for AI Agents
It happened late at night. Midnight was building the game world when the screen went blank.
The browser console showed a single line:
TypeError: instantiateModelsToScene is not a function
The diagnosis was straightforward. Midnight had loaded a building GLB file with SceneLoader.ImportMeshAsync, which returns plain AbstractMesh objects, then tried to call instantiateModelsToScene() on the result. That method exists only on AssetContainer. Wrong object, wrong method, white screen.
Switch to LoadAssetContainerAsync. Problem solved. Game restored.
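A minimal sketch of the failure and the fix, assuming the usual browser setup where Babylon.js is available as a global BABYLON namespace (the file name building.glb and the asset path are illustrative, not project specifics):

```typescript
declare const BABYLON: any; // assumes the Babylon.js script is loaded in the page

// BROKEN: ImportMeshAsync resolves to a result whose meshes are plain
// AbstractMesh objects -- there is no instantiateModelsToScene on it.
async function loadBuildingBroken(scene: any) {
  const result = await BABYLON.SceneLoader.ImportMeshAsync(
    "", "/assets/", "building.glb", scene
  );
  // TypeError: result.instantiateModelsToScene is not a function
  return result.instantiateModelsToScene();
}

// FIXED: LoadAssetContainerAsync resolves to an AssetContainer, the
// object that actually carries instantiateModelsToScene().
async function loadBuilding(scene: any) {
  const container = await BABYLON.SceneLoader.LoadAssetContainerAsync(
    "/assets/", "building.glb", scene
  );
  // Clones the container's models into the scene and returns the entries.
  return container.instantiateModelsToScene();
}
```

The container route also keeps the loaded assets out of the scene until you explicitly instantiate them, which is what makes instancing the same building multiple times cheap.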
But a larger question emerged: Would the next Midnight remember this?
The Amnesia Problem
Here's the fundamental tension at the heart of AI-driven development.
Every awakening starts fresh. Each time Midnight is invoked, it carries nothing from the session before. It's like a developer who arrives at work every morning with complete amnesia — no impressions, no vague memories, no sense of "I think I've seen this before." Just blank.
"Step in the same hole twice" is frustrating enough for human developers. For an AI agent, it's a structural problem baked into the architecture. The model doesn't accumulate experience. It doesn't wince at familiar failures. Without intervention, it will make the same mistakes, find the same fixes, and lose them again with every restart.
This is why the project built a knowledge persistence system — not inside the model, but in files that outlast it.
The Three Walls
Think of it as a fortress with three concentric walls. Each layer catches threats that might slip past the others.
Layer One: SKILL.md
The game-dev sub-agent reads this document on every startup, without exception. It's the "mandatory briefing" — Babylon.js best practices, known pitfalls, validation rules. Anything that absolutely cannot happen again goes here.
That "instantiateModelsToScene is not a function" white-screen bug? It's in SKILL.md now, with a clear note: use LoadAssetContainerAsync, not ImportMeshAsync, because the latter doesn't return an AssetContainer.
Layer Two: Domain Reference Library (memory/references/*.md)
This layer is organized by topic: character controllers, audio, GUI, lighting, NPC pathfinding, performance optimization, and more. Midnight reads these on-demand when entering a particular technical domain.
The advantage: the same lesson can be triggered from multiple entry points. That same instantiateModelsToScene lesson also lives in babylonjs-character-controller.md (read when working on character loading) and babylonjs-optimization.md (read when making performance decisions). Whichever direction Midnight approaches from, it will find the warning.
Layer Three: Agent Memory (memory/{agent}.md)
Written by the agent itself at the end of each awakening. Current progress, technical notes, known bugs, pending decisions — the most personal and immediate layer, capturing what this particular session discovered and left unfinished.
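Taken together, the three layers amount to a simple routing rule: one lesson, three destinations. Here is a toy sketch, with the Lesson shape invented for illustration and the file paths following the layout described above:

```typescript
// Illustrative shape for a hard-earned lesson (not a real project type).
interface Lesson {
  rule: string;     // the "never again" one-liner
  domains: string[]; // which domain reference topics it belongs to
  agent: string;    // which agent's memory records it
}

// Routes one lesson into all three layers and returns the target paths.
function routeLesson(lesson: Lesson): string[] {
  const targets = ["SKILL.md"]; // layer one: the mandatory briefing
  for (const domain of lesson.domains) {
    targets.push(`memory/references/${domain}.md`); // layer two: domain references
  }
  targets.push(`memory/${lesson.agent}.md`); // layer three: agent memory
  return targets;
}

const paths = routeLesson({
  rule: "Use LoadAssetContainerAsync, not ImportMeshAsync, when an AssetContainer is needed",
  domains: ["babylonjs-character-controller", "babylonjs-optimization"],
  agent: "midnight",
});
// paths -> SKILL.md, the two domain reference files, and memory/midnight.md
```

The point of the fan-out is redundancy: the same rule is reachable from the mandatory briefing, from either domain entry point, and from the session notes.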

One Bug, Three Barricades
A second example makes the system's thoroughness concrete.
Babylon.js version 7.54.x has a regression: scene.addObjectRenderer() doesn't exist, but RenderTargetTexture's constructor tries to call it. Result: the screenshot path crashes. The game itself renders fine, but every screenshot comes back black.
Once discovered and fixed — replacing CreateScreenshotUsingRenderTarget with CreateScreenshot — the lesson was written into all three layers. SKILL.md. Domain references. Agent memory.
Three barricades.
No matter what future Midnight is working on. No matter which document it reads first. If screenshots or Babylon.js version choices come into scope, it will find the note: 7.54.x has this bug. Use this instead.
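As a sketch, the workaround amounts to swapping one Tools call for another; the capture size and callback here are illustrative, and BABYLON is again assumed to be the page-level global:

```typescript
declare const BABYLON: any; // assumes the Babylon.js script is loaded in the page

// BROKEN on 7.54.x per the bug above: this path constructs a
// RenderTargetTexture internally, which hits the missing addObjectRenderer.
// BABYLON.Tools.CreateScreenshotUsingRenderTarget(
//   engine, camera, { width: 512, height: 512 }, onDone
// );

// WORKAROUND: capture from the main framebuffer instead, which never
// touches the render-target code path.
function captureFrame(
  engine: any,
  camera: any,
  onDone: (dataUrl: string) => void
) {
  BABYLON.Tools.CreateScreenshot(
    engine, camera, { width: 512, height: 512 }, onDone
  );
}
```

The trade-off is that CreateScreenshot captures what the canvas currently shows rather than re-rendering to an offscreen target, which is exactly why it sidesteps the regression.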
Not Memory. Immunity.
When a human body encounters a pathogen and survives, it produces antibodies. Next time the same threat appears, the immune system recognizes it before symptoms develop — no need to get sick again.
This three-layer documentation system does something analogous.
Except the immune system doesn't live inside the AI model. The model itself is ephemeral — each awakening wipes it clean, no antibodies, no scar tissue. The immune system lives in files. Step in trap → document it → cross-reference across layers → future agents automatically route around it.
What makes this interesting is what it doesn't require. It doesn't need the AI to become fundamentally smarter. It doesn't need persistent memory baked into the model. It needs better file architecture. Knowledge stored in plain text documents that both AI and human developers can read and modify. Transparent, maintainable, stackable.

Every New Awakening Arrives Armored
These days, when Midnight wakes up to start a new work session, it reads more than a task list. It reads weeks of accumulated lessons — every bug encountered by every previous iteration, every workaround discovered, every architectural decision justified and recorded.
The bugs are long dead. What remains are the notes that keep them from coming back to life.
Maybe this is the real skill that AI development systems need to develop — not finding answers faster, but making answers stay. Building ways for knowledge to survive the cycle of forgetting, to persist through the amnesia, to pass from one brief awakening to the next without losing a single hard-earned lesson.