Build mission-critical AI that does real work.

Parsed is an AI interpretability lab focused on supercharging model performance and robustness through the lens of evaluations and mechanistic interpretability.

Self-improving AI systems built for your specific needs.

Foundation models, tailored.

We build bespoke AI systems that understand your domain from the ground up. Our approach ensures models align with your specific requirements and expert knowledge, creating a foundation that evolves with your needs.

Continuous model evolution.

The models we deploy today are just the starting point. They learn from every interaction, automatically improving without the overhead of traditional development cycles. This creates compounding intelligence gains that accelerate over time—your AI doesn't just serve your business, it grows with it.

Transparent & reliable AI.

Full model interpretability combined with enterprise-grade infrastructure ensures you understand how decisions are made while maintaining the reliability and performance your users expect. Complete visibility into AI reasoning enables confident deployment at scale in the most high-stakes environments.

Research.

Technical, academic interpretability roots.

Mech interp researcher (MATS, Stanford, Johns Hopkins). Previously ML engineer (NASA, Macuject, quant trading). CS PhD candidate (Oxford).

Charles O'Neill

CTO

Medical doctor. Engineering PhD candidate (Oxford). Rhodes Scholar. Ex-elite pole vaulter for Australia.

Mudith Jayasekara

CEO

Rhodes Scholar. PhD candidate in computational neuroscience (Oxford) studying reasoning in natural intelligence.

Max Kirkby

CSO

We're backed by the best.

Led by LocalGlobe and backed by notable angels including co-founder & CSO @ HuggingFace, ex-director @ DeepMind, director @ Meta AI Research, head of startups @ OpenAI, and ex-chair of the NHS, among others.

We're building for mission-critical use cases.

We believe that Parsed is the most scalable way to actually improve lives. It applies horizontally across mission-critical use cases, is at the frontier of AI research, and has immediate impact for our customers. Deep expertise in interpretability is essential to this mission and is in the DNA of our founding team. We're growing a lean, all-star team.

Build mission-critical AI

Want to outperform frontier closed-source models for your task? Want complete interpretability for every output? Want zero-effort, ongoing model improvement? Get in touch.
