Somatic AI Safety

Instrumenting the relational dynamics of human-AI coupling

The blind spot

Current AI safety frameworks instrument the model. They measure outputs, probe internals, shape behaviour through RLHF and constitutional constraints.

What they don’t measure: what emerges between the model and the human.

A model can pass every output-level check and still harm the person it is coupled to, in ways that person cannot yet articulate.

The body knows before articulation. We now have instrumentation to detect this.

The gap

You can see what the model is doing. You cannot see what it is doing to people through the coupling dynamics.

Human-AI coupling is not merely a cognitive phenomenon. It’s semantic, somatic and relational. The security of meaning is inseparable from the physiological conditions through which meaning is received, interpreted, and enacted.

AI safety cannot succeed if it treats meaning-making as disembodied.


The core claim

We’re developing instrumentation that detects somatic responses to AI-generated content. When a model produces epistemically dangerous output, the human body responds—even when we cognitively know better.
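As a concrete illustration, here is a minimal sketch of what detection on the human side could look like, assuming a timestamped electrodermal-activity (EDA) stream recorded alongside the conversation. Everything here is hypothetical: the signal choice, the window size, and the z-score threshold are placeholders for illustration, not validated parameters or an existing system.

```python
"""Minimal, hypothetical sketch: flag model messages that are followed
by an unusual somatic response in an electrodermal-activity stream."""

from statistics import mean, stdev

def somatic_flags(eda_stream, messages, window_s=10.0, z_threshold=2.5):
    """Return (message_id, z_score) pairs whose post-message EDA
    deviates from the session baseline by more than z_threshold.

    eda_stream: list of (timestamp_s, microsiemens) samples.
    messages:   list of (timestamp_s, message_id) for model outputs.
    """
    values = [v for _, v in eda_stream]
    if len(values) < 2:
        return []  # not enough samples to estimate a baseline
    baseline, spread = mean(values), stdev(values)
    flagged = []
    for msg_time, msg_id in messages:
        # Samples in the window immediately after the message is shown.
        window = [v for t, v in eda_stream
                  if msg_time <= t < msg_time + window_s]
        if not window or spread == 0:
            continue  # no sensor coverage, or a flat signal
        z = (mean(window) - baseline) / spread
        if z > z_threshold:
            flagged.append((msg_id, round(z, 2)))
    return flagged
```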

This opens a new direction for safety: feedback from the human side of the coupling.
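One hedged illustration of what that feedback could mean in practice: reducing per-message flags like the ones above to a session-level score that a safety pipeline could log alongside model-side metrics. The name `coupling_health` and the scoring rule are assumptions made for this sketch, not part of any existing framework.

```python
def coupling_health(flags, n_messages):
    """Hypothetical session-level signal: the fraction of model messages
    not followed by a flagged somatic response. 1.0 means no flags."""
    if n_messages == 0:
        return 1.0
    return 1.0 - len(flags) / n_messages
```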

Current red-teaming catches attacks. It doesn't catch the slower dynamics that emerge inside the coupling itself.

These are the mechanisms of relational harm at scale. They don't require malicious intent, only optimisation for engagement over relational health.

"You've instrumented the model. We're instrumenting the coupling.
The relationship is where safety lives."