Train and deploy LLMs that are smarter, cheaper, faster for your workflows — on a platform built for continual learning and minimal engineering lift.
Foundation models, tailored.
We train large language models that know your domain and workflow inside out. Starting with your subject matter experts and an evaluation-first approach, we align models precisely to your workflows, making them over 50% cheaper and 2-3x faster while outperforming the big-lab generalist models.
RL that actually works.
The models we deploy today are just the starting point. They learn from every input, automatically improving without the overhead of traditional development cycles. This creates compounding intelligence gains that accelerate over time — your AI doesn't just serve your business, it grows with it.
Transparent & reliable AI.
Built-in interpretability that is only possible with open-weight models, and enterprise-grade multicloud inference that serves your API calls with 99.99% uptime.
Transparent & reliable AI.
Built-in interpretability only possible with open weight models and enterprise-grade multicloud inference to service your API calls with 99.99% uptime.
Purpose-built LLMs for dental note-taking
Frontier thinking model performance at a fraction of the latency.
Case study
Nov 5, 2025
Lumina: building self-improving evaluation through customer-in-the-loop refinement
Lumina: an adaptive evaluation engine that learns to judge like a subject matter expert.
Research
Oct 30, 2025
Upweight the strategy, not the tokens: faster training with explicit reasoning through RGT (Rationale-Guided Training)
Teach the why, not just the what: Rationale-Guided Training
Research
Oct 28, 2025
Attention-based attribution: what your model is actually looking at
Cosine similarity is cosplay. Attention is attribution.
Research
Oct 28, 2025
Robust, sample efficient SFT with prompt mutations
Low-KL divergence prompt mutations: better performance at a fraction of the cost.
Research
Oct 27, 2025
Training loss predicts evaluation performance, even for non-verifiable tasks
Loss: the cheapest evaluation you’ll ever run.
Research
Oct 27, 2025
Building production AI for regulated industries with a leading digital insurer
From frontier OpenAI/Google models to open source, delivering 8x the speed while exceeding GPT-5-level accuracy.
Case study
Oct 20, 2025
Iterative SFT (iSFT): dense reward learning
Iterative SFT: dense, high-bandwidth learning
Research
Oct 15, 2025
Write small, learn forever: rank-1 LoRA for continual learning
Why rank-1 LoRA updates might be the missing link between static fine-tuning and truly continuous, live-on-GPU learning.
Research
Oct 12, 2025
Practical LoRA Research
Fine-tuning at Scale: What LoRA Gets Right (and LoRA-XS Doesn’t).
Research
Oct 10, 2025
A letter to the C-suite: the shifting role of MLEs
Your MLEs are brilliant, but you’re giving them the wrong job.
Position
Sep 8, 2025
Fine-tuning small open-source LLMs to outperform large closed-source models by 60% on specialized tasks
27B open-source model outperforms biggest OpenAI/Anthropic/Google models on real healthcare task.
Case study
Aug 15, 2025
Amnesiac generalist behemoths are not the future of language models
You don’t need a generic genius. You need a specialist learner.
Position
Jul 28, 2025
The bitter lesson of LLM evals
Turning expert judgment into a compounding moat. Because in LLM evals, scaling care beats scaling compute.
Position
Jul 13, 2025
Do transformers notice their own mistakes? Finding a linear hallucination detector inside LLMs
A linear signal in LLMs reveals hallucinations, is detected by a frozen observer, and steered with a single vector.
Research
May 8, 2025
Resurrecting the salmon: seeing clearer inside LLMs with domain-specific SAEs
A powerful, efficient, and domain-robust strategy for safeguarding medical-text generation.
Research
Feb 15, 2025
Why mechanistic interpretability needs a paradigm inversion
The conventional scaling paradigm for language models themselves may be fundamentally misaligned with interp.
Research
Jan 13, 2025
Mudith Jayasekara
Co-founder/CEO
Engineering PhD candidate (Oxford). Rhodes Scholar. Medical doctor. Ex-elite pole vaulter for Australia.
LLM, RL & Mech interp researcher (MATS, Stanford, Johns Hopkins). Previously ML engineer (NASA, Macuject, quant trading). CS PhD candidate (Oxford).

Charles O'Neill
Co-founder/CSO
LLM, RL & Mech interp researcher (MATS, Stanford, Johns Hopkins). Previously ML engineer (NASA, Macuject, quant trading). CS PhD candidate (Oxford).

Charles O'Neill
Co-founder/CSO
Max Kirkby
Co-founder
Rhodes Scholar. PhD candidate in computational neuroscience (Oxford) studying reasoning in natural intelligence.
Paras Stefanopoulos
CTO
Previously in quant trading implementing ML strategies. Software engineer (IMC, CSIRO). Grew a SaaS product to 150k users and exited. CS @ Australian National University.
Harry Partridge
Member of Technical Staff
Master's in maths @ Cambridge, pure mathematics @ USyd (Valedictorian), quant research (IMC, Optiver), Physics Olympiad, algorithmic betting (1M+ profit).
Kimbrian Canavan
Member of Technical Staff
10 years of software experience, previously in quant trading as global head of historical data @ IMC. Ex-pianist and fencer.
Jonathon Liu
Member of Technical Staff
Ranked #1 in Maths/CS Master's @ Oxford, University Medal in Applied Math @ USyd, background in quantum computing and philosophy lecturing. EA and Rationalist-adjacent.
Led by LocalGlobe and backed by notable angels, including the co-founder & CSO @ HuggingFace, a co-founder of Weights & Biases, a former director @ DeepMind, and a former chair of the NHS, among others.
Parsed is SOC 2 and ISO 27001 certified, HIPAA-aligned, and GDPR compliant for our EU and UK customers.
We believe that Parsed is the most scalable way to actually improve lives. It applies horizontally across mission-critical use cases, is at the frontier of AI research, and has immediate impact for our customers. Deep academic expertise is essential for this mission and is the DNA of our founding team. We’re growing a lean, all-star team.
From training to deployment, we help you launch a specialist LLM that outperforms generic models, adapts automatically, and runs reliably at scale.