ai tooling and research

the largest cluster by count, covering the full spectrum from fundamental research to practical developer tooling. the unifying theme: making AI systems more capable, reliable, and useful — whether by improving the models themselves (LLM behavior improvement, AI training efficiency) or by building the infrastructure around them (spec-driven dev kit, context window optimizer, hard docs writer).

the most actionable ideas in this cluster are the developer tooling ones: spec-driven dev kit (rated [DO THIS] — research → plan → implement pipeline with context management) and overnight app grinder (autonomous coding agent manager). both reflect a meta-insight: the bottleneck for AI-assisted development is not model capability but workflow — how you structure the problem, manage context, and review outputs. AGENTS.md optimization research goes even deeper, asking how instruction structure affects model recall. on the application side, AI agent reply and AI conversationalist both depend on me model for personalization, while AI onboarding addresses the human adoption side.
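the research → plan → implement pipeline could be sketched as a small state machine that accumulates each stage's artifacts as context for the next. a minimal sketch — the stage names, `Pipeline` class, and `complete` method are hypothetical illustrations, not the actual spec-driven dev kit:

```python
from dataclasses import dataclass, field

# hypothetical stage order; illustrative, not the real dev kit's stages
STAGES = ("research", "plan", "implement", "review")

@dataclass
class Pipeline:
    context: dict = field(default_factory=dict)  # artifacts accumulated per stage
    stage_idx: int = 0

    @property
    def stage(self) -> str:
        return STAGES[self.stage_idx]

    def complete(self, artifact: str) -> None:
        """record the current stage's output and advance to the next stage."""
        self.context[self.stage] = artifact
        if self.stage_idx < len(STAGES) - 1:
            self.stage_idx += 1

p = Pipeline()
p.complete("notes on prior art")  # research done
p.complete("step-by-step plan")   # plan done
print(p.stage)                    # now at the implement stage
```

the point of the sketch: each stage's output becomes explicit, reviewable context for the next, which is the workflow structure the meta-insight above is about.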

the more speculative work — LLM physical intuition — is research-oriented and harder to scope into a 2-month project, but potentially more impactful. context window optimizer is the connective tissue between this cluster and memory and context tools — if you're building agents that work with personal context (axon, always-on assistant), context management is a first-class engineering concern.
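one concrete shape context management takes is budgeting: keep only the newest messages that fit a token budget. a minimal sketch, assuming a crude whitespace-split token estimate as a stand-in for a real tokenizer — `trim_to_budget` and the budget numbers are hypothetical, not the context window optimizer's actual design:

```python
def trim_to_budget(messages: list[str], budget: int) -> list[str]:
    """keep the newest messages whose combined (estimated) token cost fits the budget."""
    kept: list[str] = []
    used = 0
    for msg in reversed(messages):   # walk newest-first
        cost = len(msg.split())      # crude token estimate, stand-in for a tokenizer
        if used + cost > budget:
            break
        kept.append(msg)
        used += cost
    return list(reversed(kept))      # restore chronological order

history = ["old long note about setup details", "plan step one", "current question"]
print(trim_to_budget(history, budget=5))  # → ['plan step one', 'current question']
```

dropping oldest-first is the simplest policy; a real optimizer would likely also summarize or rank what it evicts, but the budget-accounting loop is the core of it.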
