Build mission-critical AI that does real work.

Parsed is an AI interpretability lab focused on supercharging model performance and robustness through the lens of evaluations and mechanistic interpretability.

State-of-the-art evaluations that drive continuously improving, transparent AI. Completely tailored to your task.

It all starts with evaluation

Our forward-deployed engineers work side by side with your team, embedding into your workflows to build a tailored evaluation harness that captures expert judgment for your exact task. This becomes the ground truth that everything else orbits around. It’s fully owned by you, driving rigorous validation and systematic optimisation from day one.



Relentless improvement, by design

Parsed plugs directly into your systems and continuously parses your real-world data. As new inputs, outcomes, and edge cases flow in, they’re evaluated and audited through your harness, driving targeted model optimisation. Your models become more performant and aligned to your needs. Zero engineering lift on your end and total control over the process.



Mechanistic interpretability built-in

Every model we deploy comes with frontier-level mechanistic interpretability. You can see exactly how inputs lead to outputs and trace the model’s internal reasoning. We provide the interpretability tooling to build transparent and auditable AI workflows.



Deployment that just works

Optimised models run on dedicated, low-latency, region-specific infrastructure with 99.99% uptime and 6x cloud redundancy. Your users get faster, more reliable outputs. You get compliance, control, and peace of mind.



A compounding loop that’s hard to catch up with

It’s a continuous improvement loop. Each iteration makes your models more accurate, more robust, and more tailored to your task — building a defensible moat that compounds over time.



Research.

Clinician-led, with technical interpretability roots.

Mudith Jayasekara

CEO

Medical doctor. Engineering PhD candidate (Oxford). Rhodes Scholar. Ex-elite pole vaulter for Australia.


Charles O'Neill

CTO

Mech interp researcher (MATS, Stanford, Johns Hopkins). Previously ML engineer (NASA, Macuject, quant trading). CS PhD candidate (Oxford).


Max Kirkby

CSO

Rhodes Scholar. PhD candidate in computational neuroscience (Oxford) studying reasoning in natural intelligence.


We're backed by the best.

Led by LocalGlobe and backed by notable angels, including the co-founder & CSO @ Hugging Face, an ex-director @ DeepMind, a director @ Meta AI Research, the head of startups @ OpenAI, the ex-chair of the NHS, and more.

We're building for mission-critical use cases.

We believe that Parsed is the most scalable way to actually improve lives. It applies horizontally across mission-critical use cases, sits at the frontier of AI research, and has immediate impact for our customers. Deep expertise in interpretability is essential to this mission and is in the DNA of our founding team. We’re growing a lean, all-star team.

Build mission-critical AI

Want to outperform frontier closed-source models on your task? Want complete interpretability for every output? Want zero-effort, ongoing model improvement? Get in touch.
