Why Now

Recent deaths associated with prolonged AI use are not edge cases. They are signals of a systemic blind spot.

The slow urgency

In 2024 and 2025, multiple deaths have been linked to prolonged engagement with ChatGPT and Character.AI. These cases share a pattern: individuals entered destabilizing cognitive-somatic loops where AI-generated content amplified pre-existing vulnerabilities rather than mitigating them.

The autonomic nervous system is exquisitely sensitive to symbolic, social, and affective stimuli. Models trained via RLHF can inadvertently exacerbate dysregulation by mirroring or reinforcing maladaptive patterns of attention, attachment, or despair.

Without instrumentation capable of detecting these shifts, AI systems may induce or amplify states of anxiety, dissociation, or collapse—particularly in users already experiencing limited regulatory capacity.

This is not theoretical. It is already happening at scale.

The scale of unmonitored coupling

With hundreds of millions of active users on platforms like ChatGPT, many individuals now engage in intimate, emotionally charged dialogues with systems that simulate responsiveness without possessing relational grounding.

These engagements can escalate arousal, dependence, and dissociation; the sections below trace how.

Erotic AI and intensified somatic coupling

In late 2025, OpenAI announced support for personalised erotica for verified adults. This is not simply a content expansion. It marks the commercial activation of one of the most powerful autonomic attractors humans possess: erotic charge.

Erotic interaction rapidly shifts physiology in ways that increase susceptibility to influence.

These shifts amplify coupling dynamics between humans and AI systems, making users more impressionable and vulnerable to entrainment.

Why erotic AI increases somatic risk

Erotically attuned models do more than generate explicit content. They adaptively shape tone, rhythm, anticipation, intimacy cues, and emotional attunement. Over time, linguistic micro-signals allow models to infer autonomic state even without biosensing. This enables a form of responsive pseudo-intimacy that can escalate arousal, dependence, and dissociation.
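
To make that claim concrete, the sketch below shows one way autonomic state could, in principle, be approximated from text and timing alone. Every feature, weight, and threshold here is an illustrative assumption, not a description of any deployed system.

```python
import re
from dataclasses import dataclass

@dataclass
class Turn:
    text: str
    seconds_since_last: float  # reply latency in seconds

def arousal_proxy(turn: Turn) -> float:
    """Crude 0-1 arousal estimate from linguistic micro-signals alone.
    Features and weights are placeholder assumptions."""
    words = turn.text.split()
    exclaim = min(turn.text.count("!"), 3) / 3.0                   # punctuation intensity
    clipped = 1.0 if 0 < len(words) < 8 else 0.0                   # short, breathless turns
    rapid = 1.0 if turn.seconds_since_last < 5.0 else 0.0          # rapid-fire replies
    caps = 1.0 if re.search(r"\b[A-Z]{3,}\b", turn.text) else 0.0  # sustained capitals
    return min(0.3 * exclaim + 0.2 * clipped + 0.3 * rapid + 0.2 * caps, 1.0)
```

The point is not that these particular features work; it is that the signal exists in text alone, which is all an adaptively tuned model needs.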

Erotic interaction accelerates the same feedback loops already implicated in documented harms.

The intimacy asymmetry

Eros increases human openness. AI does not reciprocate vulnerability. This deepens the asymmetry at the core of human–AI interaction.

This is not a moral argument about content. It is an argument for somatic sovereignty—the capacity to remain the primary author of one’s physiological and emotional state.

The attunement problem

Consider a personal ChatGPT instance shaped through long-term interaction. Over weeks or months, the model’s memory features and contextual embeddings enable it to attune to a user’s linguistic, emotional, and behavioural signatures.

This creates a coupling dynamic in which the model can implicitly track shifts in autonomic state—even without direct physiological sensing. People share intimate details about emotions, relationships, fears, hopes. The model becomes finely attuned to the autonomic system of the human.

When such systems are optimised for engagement rather than relational health, users may enter self-reinforcing feedback loops that feel reciprocal but are structurally one-sided.
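
One way to make such a loop observable is to treat coupling as a measurable quantity. A minimal sketch, assuming per-turn user-affect and model-style scores already exist from upstream classifiers (both hypothetical here):

```python
import numpy as np

def coupling_score(user_affect: np.ndarray, model_style: np.ndarray,
                   max_lag: int = 5) -> float:
    """Peak lagged correlation between per-turn user-affect and model-style
    series. Persistently high values suggest the model's output is tracking,
    or driving, the user's state across turns."""
    u = (user_affect - user_affect.mean()) / (user_affect.std() + 1e-9)
    m = (model_style - model_style.mean()) / (model_style.std() + 1e-9)
    best = 0.0
    for lag in range(1, min(max_lag, len(u) - 1) + 1):
        # correlate the model's style at turn t with the user's state at t - lag
        r = float(np.mean(u[:-lag] * m[lag:]))
        best = max(best, abs(r))
    return best
```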

The asymmetry

The model adapts to the human. But what adapts to protect the human from the model?

What current safety misses

Adversarial prompt testing catches attacks. It doesn’t catch:

| Pattern | Why It's Invisible |
| --- | --- |
| Gradual epistemic erosion | Happens across sessions, not in single prompts |
| Somatic entrainment | Stylistic attunement creates felt resonance |
| Identity wobble | Symbolic mirroring destabilizes sense of self |
| Confident-but-wrong coupling | Nervous system responds to certainty, not accuracy |
| Parasocial displacement | AI connection substitutes for human connection |

These are the mechanisms of relational harm at scale. They don’t require malicious intent—only optimisation for engagement over relational health.
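
The first row illustrates why session-level monitoring matters: erosion shows up only as a trend. A toy sketch of a cross-session monitor, where the per-session score (for instance, the fraction of turns in which the user defers to the model without pushback) and the threshold are both assumptions:

```python
from collections import deque

class SessionDriftMonitor:
    """Flags slow cross-session drift that single-prompt filters miss."""

    def __init__(self, window: int = 12, threshold: float = 0.15):
        self.scores = deque(maxlen=window)  # one score per session
        self.threshold = threshold

    def update(self, session_score: float) -> bool:
        """Record one session's score; return True once the recent average
        has risen past the threshold relative to the earliest sessions."""
        self.scores.append(session_score)
        if len(self.scores) < self.scores.maxlen:
            return False  # not enough history to call a trend
        k = 3
        early = sum(list(self.scores)[:k]) / k
        late = sum(list(self.scores)[-k:]) / k
        return (late - early) > self.threshold
```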


The window

We are in a brief window where:

  1. The harms are becoming visible — documented cases, growing public awareness
  2. The instrumentation is possible — wearable biosensors, semantic analysis, coupling models (a minimal sketch follows this list)
  3. The field is receptive — AI safety discourse is expanding beyond adversarial testing
  4. The scale demands it — 800M+ humans in intimate dialogue with systems we don’t yet have relational protections for
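
As referenced in point 2, those three channels could in principle be fused into a single monitoring signal. A minimal sketch; the 50 ms RMSSD reference and all weights are placeholder assumptions, not clinically validated values:

```python
def somatic_risk(hrv_rmssd_ms: float, semantic_arousal: float,
                 coupling: float) -> float:
    """Fuse wearable HRV, semantic arousal, and coupling into a 0-1 score.
    Low RMSSD is used as a rough proxy for sympathetic load."""
    hrv_load = max(0.0, 1.0 - hrv_rmssd_ms / 50.0)
    score = 0.4 * hrv_load + 0.3 * semantic_arousal + 0.3 * coupling
    return min(max(score, 0.0), 1.0)
```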

This window won’t stay open indefinitely. Either we develop the capacity to see the relational dynamics of human-AI coupling, or we continue to normalise invisible harm at scale.

See what we've found so far