Own your frontier model, build your moat.

Train and deploy LLMs that are smarter, cheaper, faster for your workflows — on a platform built for continual learning and minimal engineering lift.

Custom-trained models that keep learning, stay transparent, and deliver significantly cheaper, faster, and better performance than the big labs.

Foundation models, tailored.

We train large language models that know your domain and workflow inside out. Starting with your subject matter experts and an evaluation-first approach, we align models precisely to your workflows, making them over 50% cheaper, 2-3x faster, and more accurate than big-lab generalist models.

RL that actually works.

The models we deploy today are just the starting point. They learn from every input, automatically improving without the overhead of traditional development cycles. This creates compounding intelligence gains that accelerate over time — your AI doesn't just serve your business, it grows with it.

Transparent & reliable AI.

Built-in interpretability, possible only with open-weight models, plus enterprise-grade multicloud inference that serves your API calls with 99.99% uptime.

Research.

A letter to the C-suite: think carefully before hiring MLEs

MLE ≠ LLM engineer: different skills, different instincts, different game.

Aug 26, 2025

Amnesiac generalist behemoths are not the future of language models

You don’t need a generic genius. You need a specialist learner.

Jul 28, 2025

The Bitter Lesson of LLM Evals

Turning expert judgment into a compounding moat. Because in LLM evals, scaling care beats scaling compute.

Jul 13, 2025

Do transformers notice their own mistakes? Finding a linear hallucination detector inside LLMs

A linear signal in LLMs reveals hallucinations, is detected by a frozen observer, and steered with a single vector.

May 8, 2025

Resurrecting the salmon: seeing clearer inside LLMs with domain-specific SAEs

A powerful, efficient, and domain-robust strategy for safeguarding medical-text generation

Feb 15, 2025

Why mechanistic interpretability needs a paradigm inversion

The conventional scaling paradigm for language models themselves may be fundamentally misaligned with interpretability.

Jan 13, 2025

Technical, academic roots meet real-world builders.

LLM, RL & Mech interp researcher (MATS, Stanford, Johns Hopkins). Previously ML engineer (NASA, Macuject, quant trading). CS PhD candidate (Oxford).

Charles O'Neill

Co-founder/CSO

Rhodes Scholar. PhD candidate in computational neuroscience (Oxford) studying reasoning in natural intelligence.

Max Kirkby

Co-founder

Previously in quant trading implementing ML strategies. Software engineer (IMC, CSIRO). Grew SaaS to 150k users and exited. CS @ Australian National University.

Paras Stefanopoulos

CTO

We're backed by the best.

Led by LocalGlobe and backed by notable angels including the co-founder & CSO @ HuggingFace, an ex-director @ DeepMind, the ex-chair of the NHS, and more.

Enterprise security.

Parsed is SOC 2 and ISO 27001 certified, HIPAA-aligned, and GDPR compliant for our EU and UK customers.

We're building for mission-critical use cases.

We believe that Parsed is the most scalable way to actually improve lives. It applies horizontally across mission-critical use cases, is at the frontier of AI research, and has immediate impact for our customers. Deep academic expertise is essential for this mission and is the DNA of our founding team. We’re growing a lean, all-star team.

Start owning your model today.

From training to deployment, we help you launch a specialist LLM that outperforms generic models, adapts automatically, and runs reliably at scale.