How We Gave Stateless Hooks Memory: The Sentinel Pattern
Lifecycle hooks are stateless, but agent governance often depends on something that happened one or two events earlier. Here is the pattern we built to bridge that gap.
AI engineering techniques, agent-native architecture, and production discipline. Real patterns from real systems.
13 articles
Historical corporate data, once treated as storage overhead, is becoming strategic intelligence as AI gets better at turning retained history into context and evidence.
AI made implementation cheaper, not judgment. High-output teams still win on prioritisation, verification, support discipline, and honest scope control.
Reliable AI coding needs more than prompting. It needs explicit governance around scope, verification, isolation, and integration.
In fast-moving agent-native teams, stale documentation stops being a hygiene issue and becomes a strategic risk across launches, support, and architecture.
The hard part of AI SaaS is not the agent alone. It is boundary design: trust, tenancy, entitlements, runtime truth, and honest product claims.
Agent-native advantage does not come from chat alone. It comes from a structured substrate of business memory, canonical records, and durable operational truth.
How I built a shift-left CI system with local hooks, a self-hosted runner, and human deploy gates for high-velocity agent-native engineering.
I have barely written code by hand in two years, yet I have built more ambitious software, learned faster, and expanded what one engineer can direct.
The next shift in software is not better autocomplete. It is agent-native systems that change how engineering teams and product organisations operate.
Agent-native development gives experienced solo engineers a radically larger operating envelope across product, engineering, and business execution.
How we turned our false-confidence taxonomy into three AST-level ESLint rules that catch deceptive test patterns before they reach CI.
A systematic audit of 656 test files revealed that most of our test suite was providing the illusion of coverage while catching nothing.