A small group of engineers and researchers who spent years building dialogue systems before starting Equmenopolis.
CEO & Co-Founder
Previously led NLP product at a conversational commerce platform where he first built the coreference resolution engine that became Equmenopolis's core. Eight years in dialogue systems research before moving to product.
CTO & Co-Founder
Distributed systems engineer who spent six years building real-time inference infrastructure. Holds three patents in low-latency NLP pipelines. Designed Equmenopolis's context state engine from scratch, with an under-10ms overhead target from day one.
Head of Research
PhD in computational linguistics from Stanford. Published research on entity tracking in multi-document discourse. Joined Equmenopolis to work on the same problem in a production environment instead of a lab setting.
Lead Backend Engineer
Builds and maintains the context state API. Previously at a Series B fintech platform handling high-throughput event streaming. Focused on keeping P95 latency under 80ms as session complexity scales.
ML Engineer
Trains and evaluates intent classification models. Specializes in low-resource fine-tuning and model distillation. Wrote the internal benchmark suite used to test context retention across dialogue domains.
We don't hire to grow headcount. Every engineer owns a vertical from model to API. No hand-offs, no tickets disappearing into queues.
Every product change is validated against turn-level metrics. If the fallback rate doesn't move, the feature didn't work, and we don't ship it, no matter how good it looks in a demo.
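In practice, a gate like this can be a single check comparing fallback rate before and after a change. The sketch below is purely illustrative: the function names, data shape, and improvement threshold are assumptions for this example, not Equmenopolis's actual tooling.

```python
# Hypothetical ship gate: a change ships only if it measurably lowers
# the fallback rate on turn-level evaluation data.

def fallback_rate(turns: list[dict]) -> float:
    """Fraction of turns where the system fell back to a generic response."""
    if not turns:
        return 0.0
    return sum(1 for t in turns if t["fallback"]) / len(turns)

def should_ship(baseline_turns: list[dict],
                candidate_turns: list[dict],
                min_improvement: float = 0.01) -> bool:
    """Ship only if the candidate lowers fallback rate by a meaningful margin."""
    base = fallback_rate(baseline_turns)
    cand = fallback_rate(candidate_turns)
    return (base - cand) >= min_improvement

# Example: fallback drops from 10% to 7% of turns, so the change ships.
baseline = [{"fallback": True}] * 10 + [{"fallback": False}] * 90
candidate = [{"fallback": True}] * 7 + [{"fallback": False}] * 93
print(should_ship(baseline, candidate))
```

The threshold keeps demo-only improvements out: a change that looks better in conversation but leaves the measured fallback rate flat fails the gate.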
We publish the methodology behind our context architecture. If a competitor builds something better from our writeup, that's fine; it means the field moves forward.