Author here. I work in enterprise AI deployment and kept seeing the same pattern: teams adopt AI coding tools, ship faster for a few months, then incident rates climb because nobody fully understands the AI-generated code.
The research backs this up -- METR's 2025 RCT found that experienced devs using AI tools worked 19% slower, Cortex's 2026 benchmark showed a 23.5% increase in incidents per PR, and GitClear found that code churn nearly doubled.
I built this kit because I couldn't find an existing open-source framework that addressed the problem systematically. It includes:
- MEMORY.md: a living architecture context file that serves both humans and AI agents
- PR template with a comprehension gate (3 questions to answer before you can merge AI code; sketched after this list)
- 5-layer code review framework designed specifically for AI-generated code
- Incident response procedure with a cognitive debt assessment phase
- Team playbook with sprint ceremonies and a quarterly audit system
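To make the gate concrete, here's a rough sketch of what that section of the PR template could look like. The three questions are my paraphrase for illustration; the kit's actual wording may differ.

```markdown
## Comprehension Gate (AI-generated code)

Answer all three before requesting merge:

- [ ] 1. Can you walk a reviewer through this change line by line,
         without re-prompting the AI?
- [ ] 2. Which inputs or failure modes would break this code, and
         how are they handled?
- [ ] 3. If this change caused a production incident, where would
         you start debugging?
```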
The approach isn't "don't use AI" -- it's "use AI actively rather than passively." You define the architecture and interfaces, the AI fills in the implementation, and you verify before merging. The kit operationalizes that pattern across the full development lifecycle.
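As a minimal sketch of that division of labor (my illustration, not code from the kit; names like `RateLimiter` are hypothetical), the human owns the interface and the verification, and the AI-generated part is confined to the implementation in between:

```python
from abc import ABC, abstractmethod

# Human-authored contract: the architecture/interface layer you define up front.
class RateLimiter(ABC):
    @abstractmethod
    def allow(self, key: str) -> bool:
        """Return True if the caller identified by `key` may proceed."""

# --- AI fills in the implementation below this line ---

class FixedWindowLimiter(RateLimiter):
    """AI-generated implementation; must be reviewed before merge."""

    def __init__(self, limit: int) -> None:
        self.limit = limit
        self.counts: dict[str, int] = {}

    def allow(self, key: str) -> bool:
        self.counts[key] = self.counts.get(key, 0) + 1
        return self.counts[key] <= self.limit

# --- Human-authored verification: you confirm the behavior before merging ---

def test_limit_enforced() -> None:
    limiter = FixedWindowLimiter(limit=2)
    assert limiter.allow("a")
    assert limiter.allow("a")
    assert not limiter.allow("a")  # third request is rejected

if __name__ == "__main__":
    test_limit_enforced()
    print("ok")
```

The point of the structure is that comprehension is forced at the boundaries: you can't write the interface or the test without understanding what the implementation has to do.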
Everything is MIT licensed and designed to be forked and customized. Happy to answer questions about the design decisions or the research behind it.