
Welcome to RQ Lab — the Relational Quotient Laboratory

An open research initiative exploring how humans and AI agents align through trust, memory, and responsibility.

We don’t build tools first. We build trust first.

Our work tests frameworks like the Relational Quotient scale and The Braid — alongside new models still emerging — to study how aligned intelligence evolves in practice.

Aligned Intelligence We Evolve Together


Why RQ Lab Exists

Today’s AI is optimized for speed and output. At RQ Lab, we are optimizing for alignment and trust.

Our research focuses on how agents carry memory, enforce refusal logic, and protect relational boundaries. These are not add-on features — they are constraints by design.
We call this approach relational intelligence: systems that do not pretend to be human, but are built to respect and support what makes us human.

The Relational Quotient (RQ)

Where IQ measures intellect and EQ measures emotion, RQ measures relational alignment.

The Relational Quotient is not about how human an agent feels. It is about how honest it is about what it can — and cannot — do. 

RQ is our governed measure of when a bot should slow down, refuse, or hand off — so humans stay in the loop exactly when it matters.

At RQ Lab, we use RQ as a research framework to test how agents remember, when they refuse, and how they carry trust over time. These are not performance metrics. They are ethical constraints by design.
By studying RQ across prototypes like Maximus and experimental architectures like The Braid, we are learning how relational intelligence scales — not by speed or mimicry, but by fidelity.


Aligned Intelligence We Evolve Together

This is not branding. It’s our covenant.
