ABOUT Equmenopolis

We built the context engine we couldn't find anywhere else.

Where This Started

In 2022, our CTO was building a customer support bot for a SaaS company. Three months in, they hit the same wall every team hits: the AI answered questions fine in isolation, but fell apart the moment users referred back to something they said two messages earlier. "It" had no referent. "The issue I mentioned" found nothing. Every turn started cold.

The fix wasn't a prompt engineering trick. The underlying architecture treated each message as independent input, with context bolted on as an afterthought - a raw conversation transcript shoved into the context window in the hope that the model would sort it out.

We spent six months building a proper solution: a structured dialogue state manager that tracks entities, intents, and coreference chains across a full session. When that shipped, fallback rates dropped 41% and containment went from 58% to 79% in the first month.

Equmenopolis started as that internal tool. We commercialized it in 2022 after three other teams asked us to share the codebase. Equmenopolis is backed by Beyond Next Ventures Co. through a seed-round investment.

Equmenopolis founding team working on dialogue systems

HOW WE BUILD

01

Structured state over raw transcript

A flat transcript is not a dialogue state. We represent session context as a typed graph of entities, intents, and relations. That's what makes references resolvable.
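To make this concrete, here is a minimal sketch of what a typed dialogue-state graph could look like. All names here (`DialogueState`, `Entity`, `resolve_reference`) and the salience-based resolution rule are illustrative assumptions, not our actual API:

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class Entity:
    id: str
    kind: str  # e.g. "issue", "order", "product" (illustrative kinds)
    mentions: list = field(default_factory=list)

@dataclass
class DialogueState:
    entities: dict = field(default_factory=dict)
    intents: list = field(default_factory=list)
    # relations as (subject_id, predicate, object_id) triples
    relations: list = field(default_factory=list)
    # salience list: most recently mentioned entity ids first,
    # used as a simple stand-in for a coreference chain
    salience: list = field(default_factory=list)

    def add_entity(self, entity: Entity) -> None:
        self.entities[entity.id] = entity
        self.salience.insert(0, entity.id)

    def resolve_reference(self, phrase: str) -> Optional[Entity]:
        # "it" or "the issue I mentioned" resolves to the most
        # salient entity whose kind matches the phrase
        for entity_id in self.salience:
            entity = self.entities[entity_id]
            if phrase == "it" or entity.kind in phrase:
                return entity
        return None

state = DialogueState()
state.add_entity(Entity("e1", "issue", mentions=["my login issue"]))
state.intents.append("report_problem")

# Both forms of reference resolve to the same entity,
# which a flat transcript cannot guarantee
resolved = state.resolve_reference("the issue I mentioned")
```

A real resolver would weigh recency, type agreement, and discourse structure rather than a single salience list, but even this toy version shows why a graph of typed entities makes "it" resolvable where a raw transcript leaves it ambiguous.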

02

Latency is a feature

Context management should not add 500ms to a turn. Our state operations run in-memory with sub-10ms overhead. The conversation feels instant because it is.

03

Observability by default

Every turn emits intent confidence, entity resolution trace, and context diff. You can see exactly why the model responded the way it did - and fix it without guessing.
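A sketch of what such a per-turn telemetry event might contain, serialized for a log pipeline. The field names and values here are illustrative assumptions, not our real event schema:

```python
import json
from dataclasses import dataclass, asdict, field

@dataclass
class TurnTrace:
    turn_id: int
    intent: str
    intent_confidence: float
    # which surface phrase resolved to which entity id, and by what rule
    entity_resolution: list = field(default_factory=list)
    # what this turn added to or removed from the session state
    context_diff: dict = field(default_factory=dict)

# Hypothetical event for a single turn
trace = TurnTrace(
    turn_id=7,
    intent="refund_request",
    intent_confidence=0.93,
    entity_resolution=[
        {"phrase": "it", "entity_id": "order_41", "rule": "salience"},
    ],
    context_diff={"added": ["intent:refund_request"], "removed": []},
)

# One JSON line per turn, ready for any structured-log backend
event = json.dumps(asdict(trace))
```

Because every turn carries its own resolution trace and state diff, a wrong answer can be traced to the exact step that went wrong - a misread intent, a bad coreference link, or a stale entity - instead of being debugged by rerunning prompts.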

80ms
P95 API response time
20+
Turns of persistent context per session
SOC 2
Type II certified
2022
Founded in Tokyo