Collaborate

We’re asking for conversation and collaboration with people working on the human side of AI safety.

Who this is for

This work may be relevant if you’re:

What we’re looking for

Dialogue

Conversation with people working on adjacent problems. We don’t have all the answers—we have a direction and some early findings we are eager to discuss.

Collaboration

If you have:

We’d like to talk.

Critique

This framework needs stress-testing. If you see:

We want to hear it.

What we can offer

Conceptual framework

The theoretical foundations—embodied cognition, coupling dynamics, safety implications—are documented and available for discussion.

Research dialogue

We’re happy to share:

Collaboration on responsible release

We’re actively building a steward circle—a group of researchers, ethicists, and practitioners who can help ensure this work is released responsibly.

If you’re interested in being part of that process, we’d like to talk.

Future possibilities

Once the steward circle is in place, we anticipate:

Licensing

This work is released under the Earthian Stewardship License.

Core requirements:

This isn’t about restricting use—it’s about ensuring the tools we build to detect harm can’t be repurposed to cause it.

Get in touch

Mathew Mark Mytka

I’ve spent 10+ years working across data ethics, relational AI, responsible innovation, and the intersection of technology and human flourishing. This work emerges from that trajectory—and from direct lived experience of what happens when AI systems meet human nervous systems.

Email: m3untold AT gmail.com

LinkedIn: linkedin.com/in/mathewmytka

GitHub: github.com/m3data