Collaborate
We're asking for conversation and collaboration from those working on the human side of AI safety.
Who this is for
This work may be relevant if you’re:
- AI Trust & Safety Researcher — Expanding beyond adversarial testing to relational and psychological safety
- AI Ethics Researcher — Working on consent, autonomy, and human agency in AI systems
- Human Factors Specialist — Studying human-computer interaction, cognitive ergonomics, or user experience
- Affective Computing Researcher — Building systems that sense and respond to human emotional states
- Mental Health Professional — Concerned about AI’s role in therapeutic contexts or crisis situations
- AI Lab Safety Team Member — Looking for new signals to incorporate into safety evaluations
- Policy Researcher — Developing frameworks for AI governance that include human impact
What we’re looking for
Dialogue
Conversation with people working on adjacent problems. We don’t have all the answers—we have a direction and some early findings we are eager to discuss.
Collaboration
If you have:
- Access to biosignal data from AI interaction studies
- Interest in co-designing validation experiments
- Complementary expertise (HRV analysis, affective computing, clinical psychology)
- Resources for multi-subject studies
We’d like to talk.
Critique
This framework needs stress-testing. If you see:
- Flaws in the theoretical foundation
- Problems with the metrics
- Ethical concerns we haven’t addressed
- Alternative approaches we should consider
We want to hear it.
What we can offer
Conceptual framework
The theoretical foundations—embodied cognition, coupling dynamics, safety implications—are documented and available for discussion.
Research dialogue
We’re happy to share:
- Theoretical background and lineage
- General patterns observed across sessions
- Questions we’re wrestling with
Collaboration on responsible release
We’re actively building a steward circle—a group of researchers, ethicists, and practitioners who can help ensure this work is released responsibly.
If you’re interested in being part of that process, we’d like to talk.
Future possibilities
Once the steward circle is in place, we anticipate:
- Joint research projects with appropriate consent architecture
- Validation studies across populations and contexts
- Co-development of governance frameworks
- Responsible open-sourcing of detection tools
Licensing
This work is released under the Earthian Stewardship License.
Core requirements:
- Respect somatic sovereignty — No manipulation, surveillance, or entrainment without consent that is informed, specific to purpose, explicit, time-bound, and revocable
- Non-commercial by default — Commercial use requires explicit permission
- Share-back safety improvements — Modifications that improve safety must be contributed to the commons
This isn’t about restricting use—it’s about ensuring the tools we build to detect harm can’t be repurposed to cause it.
Get in touch
Mathew Mark Mytka
I’ve spent 10+ years working across data ethics, relational AI, responsible innovation, and the intersection of technology and human flourishing. This work emerges from that trajectory—and from direct lived experience of what happens when AI systems meet human nervous systems.