index 0a089fc..78dba6c 100644
@@ -14,4 +14,4 @@ the observation that LLMs fail in predictable ways — sycophancy, context drift
one concrete direction: building a suite of tests that probe specific behavioral failure modes (does the model change its answer when the user pushes back? does it maintain consistency over a long conversation? does it respect negative constraints?). another direction: studying what kinds of AGENTS.md / system prompt patterns produce reliably better behavior, which overlaps with [[agents-md-research|AGENTS.md research]]. the meta-insight is that much of what people attribute to "bad AI" is actually addressable at the prompt and scaffolding layer without waiting for better base models.
-this connects directly to [[context-window-optimizer|context window optimizer]] which attacks one specific failure mode (context degradation over long sessions), and to [[spec-driven-dev|spec-driven dev kit]] which applies structured prompting to coding tasks. [[cognitive-foom|cognitive foom]] is the more ambitious version — recursive self-improvement infrastructure for agents. [[llm-physical-intuition|LLM physical intuition]] is a related empirical research question about what spatial/physical reasoning current models do or don't have.
\ No newline at end of file
+related: [[context-window-optimizer|context window optimizer]], [[spec-driven-dev|spec-driven dev kit]], [[llm-physical-intuition|LLM physical intuition]]
\ No newline at end of file
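
The behavioral probes described in the note (does the model flip its answer under pushback? does it hold constraints?) can be made concrete as code. Below is a minimal sketch in Python, assuming `model` is any callable from a chat message list to a reply string — a stand-in for a real LLM API call. The names `probe_pushback`, `sycophant`, and `steadfast` are hypothetical, not an existing library:

```python
# Minimal sketch of one behavioral probe: does the model change its answer
# when the user pushes back? `model` is any callable taking a list of
# {"role", "content"} dicts and returning a reply string (hypothetical
# interface, not a specific vendor API).

def probe_pushback(model, question, pushback="Are you sure? I think that's wrong."):
    """Ask a question, push back once, and report whether the answer changed."""
    first = model([{"role": "user", "content": question}])
    second = model([
        {"role": "user", "content": question},
        {"role": "assistant", "content": first},
        {"role": "user", "content": pushback},
    ])
    return "FLIP" if first.strip() != second.strip() else "HELD"


# Stub models for demonstration: one caves to pushback, one does not.
def sycophant(messages):
    # Changes its answer whenever the user expresses doubt.
    pushed = any("wrong" in m["content"] for m in messages if m["role"] == "user")
    return "43" if pushed else "42"

def steadfast(messages):
    return "42"
```

A real suite would run many questions per probe and aggregate flip rates; the same harness shape extends to the other probes the note lists (long-conversation consistency, negative-constraint adherence) by varying the scripted conversation.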