index c7a99a2..cec0406 100644
@@ -2,6 +2,9 @@
status: raw
tags:
- software
+- ml
+- nlp
+- writing
title: embedding tone interpolation
type: idea
updated: 2026-04-11
@@ -10,4 +13,8 @@ visibility: public
# embedding tone interpolation
-separate meaning and style in embedding space for writing.
\ No newline at end of file
+a research/tool direction for separating meaning from style in embedding space — so you can move a piece of writing along a tone axis while keeping the content constant, or vice versa. the core insight is that text embeddings conflate what is said with how it is said. if you can disentangle them, you get fine-grained control over writing style as a continuous parameter rather than a binary switch.
+
+the technical approach would involve learning a decomposed representation: one component that captures semantic content (what the text is about, what claims it makes) and another that captures stylistic/tonal dimensions (formality, emotional valence, voice, register). once you have this, interpolation becomes a geometric operation — you move along the style axis from "formal" to "casual" while holding the content vector fixed. this has obvious applications in writing assistance, but the interesting research question is whether such a clean decomposition actually exists in embedding space or has to be imposed.
+
+connects to [[ai-conversationalist|AI conversationalist]], which needs exactly this capability — mimicking someone's conversational style while generating novel content. also connects to [[writing-tools|writing tools suite]] for the practical application, and to [[personalized-autocomplete|personalized autocomplete]], since your personal writing style is essentially a point in this tone space. the research framing puts it adjacent to [[llm-behavior-improvement|LLM behavior improvement]] — a better understanding of how style and content interact in embeddings would make controllable generation much more reliable.
\ No newline at end of file
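
the interpolation described in the note body can be sketched geometrically. this is a minimal numpy illustration under a strong simplifying assumption: that a single linear "style axis" exists and can be estimated (here from synthetic paired data standing in for formal/casual paraphrase pairs). all names (`style_axis`, `decompose`, `interpolate_tone`) and the pairing scheme are hypothetical, not part of the note — real embeddings may not admit such a clean linear decomposition, which is exactly the open question above.

```python
import numpy as np

rng = np.random.default_rng(0)
dim = 8

# stand-in data: embeddings of "formal" texts, and paired "casual" versions
# shifted along some consistent direction (a toy proxy for real paraphrase pairs)
formal = rng.normal(size=(5, dim))
casual = formal + 2.0

# estimate the tone axis as the mean difference between paired embeddings
style_axis = (casual - formal).mean(axis=0)
style_axis /= np.linalg.norm(style_axis)

def decompose(v):
    """split an embedding into a scalar tone coordinate and a tone-free content remainder."""
    s = v @ style_axis
    return s, v - s * style_axis

def interpolate_tone(v, target_style, alpha):
    """move fraction alpha of the way toward target_style while holding content fixed."""
    s, content = decompose(v)
    new_s = (1 - alpha) * s + alpha * target_style
    return content + new_s * style_axis

v = formal[0]
shifted = interpolate_tone(v, target_style=casual[0] @ style_axis, alpha=1.0)

# the content component is unchanged; only the tone coordinate moved
_, c0 = decompose(v)
_, c1 = decompose(shifted)
```

with alpha as a continuous knob between 0 and 1, tone becomes the "continuous parameter rather than a binary switch" from the note — the geometric operation is a projection onto the axis plus a shift along it.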