index ef752b4..a50b612 100644
@@ -14,4 +14,4 @@ an empirical research question: can LLMs reason usefully about physical space an
the research direction would be: construct a benchmark of spatial/physical reasoning tasks that range from trivial to genuinely hard, evaluate current models, and analyze the failure modes. do models fail because they lack a 3D spatial representation, because they have no sense of physical scale, or because physical reasoning requires composing multiple steps whose errors compound? the findings could inform both how to prompt models for physical reasoning tasks (robotics control, architecture, simulation) and what data or training modifications would help.
-this connects to [[llm-behavior-improvement|LLM behavior improvement]] as one specific class of reasoning failure, and to [[flapping-airplanes|flapping airplanes]] which explores AI training directions more broadly. the robotics context is relevant: [[robotic-arm-assistant|robotic arm assistant]] and [[emg-bracelet|EMG bracelet]] both require physical world reasoning that would benefit from better LLM spatial models. [[symbolic-regression|symbolic regression]] is another angle on grounding AI in formal physical relationships.
\ No newline at end of file
+related: [[llm-behavior-improvement|LLM behavior improvement]], [[flapping-airplanes|flapping airplanes]], [[robotic-arm-assistant|robotic arm assistant]], [[emg-bracelet|EMG bracelet]], [[symbolic-regression|symbolic regression]]
\ No newline at end of file
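
the benchmark-and-evaluate loop described in the note could be sketched as below. this is a minimal sketch, not a real harness: the task items, the difficulty tags, and the `model_fn` argument (any callable that maps a prompt string to an answer string) are all hypothetical placeholders, and real items would need more robust answer matching than exact string comparison.

```python
# hypothetical benchmark items: spatial-reasoning prompts graded by
# exact-match answers and bucketed by difficulty. content is placeholder.
TASKS = [
    {"id": "t1", "difficulty": "trivial",
     "prompt": ("a ball is 2 m left of a box; the box is 3 m left of a "
                "wall. how far is the ball from the wall, in meters?"),
     "answer": "5"},
    {"id": "t2", "difficulty": "hard",
     "prompt": "...",   # a genuinely hard multi-step item would go here
     "answer": "..."},
]

def evaluate(model_fn, tasks):
    """run model_fn on each task, return accuracy per difficulty bucket."""
    by_difficulty = {}
    for task in tasks:
        correct = model_fn(task["prompt"]).strip() == task["answer"]
        bucket = by_difficulty.setdefault(task["difficulty"], [0, 0])
        bucket[0] += int(correct)   # hits
        bucket[1] += 1              # total
    return {d: hits / total for d, (hits, total) in by_difficulty.items()}
```

the per-difficulty breakdown is the point: comparing accuracy on trivial vs. hard items is what would separate "no spatial representation at all" from "errors compound over multi-step composition."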