
APRIL 10, 2026

I stopped trusting attribution. Here's what I do instead.

A clean, consistent dashboard pointed our budget the wrong way for the better part of a year. The lesson cost real money.

The cleanest attribution dashboard I ever built ran for nine months and lied to me for most of them.

It wasn't broken. The tracking was clean, the conversions were deduped, the channel definitions were consistent across teams. It produced beautiful CAC numbers per channel that the finance team could pull into a board deck without flinching.

It was still wrong, and the way it was wrong cost real money.

What it told us

The dashboard said paid social was driving acquisition at a CAC well below our blended target. Every week the same channel led the table. The recommendation that fell out of the data was unambiguous: shift budget into paid social.

We did, in stages, for nine months.

What was actually happening

When we finally ran a geo-holdout — turning the channel off in a controlled set of markets for six weeks — the truth surfaced. The conversions paid social was "driving" were happening anyway. Organic and direct were absorbing them in the holdout markets at almost the same rate.
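If you haven't run one, the mechanics are simpler than they sound. Here's a minimal sketch of the comparison a geo-holdout produces; the market names and conversion counts are invented, and a real read pairs markets on pre-period trends rather than comparing raw averages. The numbers are staged so the lift comes out small, which is roughly the shape our real read had.

```python
# Minimal sketch of a geo-holdout read. Markets where the channel was
# paused (holdout) are compared to markets where it stayed on (control).
# All market names and conversion counts below are invented.

holdout = {"denver": 480, "portland": 510, "austin": 465}   # channel off
control = {"seattle": 502, "phoenix": 495, "dallas": 488}   # channel on

avg_off = sum(holdout.values()) / len(holdout)
avg_on = sum(control.values()) / len(control)

# Incrementality: conversions the channel actually caused, per market.
lift = avg_on - avg_off

print(f"avg conversions with channel on:  {avg_on:.0f}")
print(f"avg conversions with channel off: {avg_off:.0f}")
print(f"incremental lift per market: {lift:.0f} ({lift / avg_off:.1%})")
```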

The attribution model was crediting the last touchpoint. Paid social was the last touchpoint because we ran a lot of retargeting there, much of it aimed at users who would have converted regardless.

Attribution doesn't measure causation. It measures the order users touched things in. Those are different problems, and one of them is almost always more expensive than the other.
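To see how that plays out mechanically, here's a toy version of last-touch credit assignment. The journeys and channel names are made up; the point is only that the final touch takes all the credit whether or not it caused anything.

```python
# Toy last-touch attribution: each converting journey's full credit goes
# to the final touchpoint. The journeys below are invented.

journeys = [
    ["organic_search", "email", "paid_social_retargeting"],
    ["direct", "paid_social_retargeting"],
    ["organic_search", "direct"],
    ["email", "paid_social_retargeting"],
]

credit = {}
for touches in journeys:
    last = touches[-1]                      # last touch takes all credit
    credit[last] = credit.get(last, 0) + 1

print(credit)
# {'paid_social_retargeting': 3, 'direct': 1}
# Retargeting "drives" 75% of conversions here even if every one of
# these users would have converted without it.
```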

What I do now

Three things changed in how I make channel decisions:

  • Attribution dashboards are a diagnostic tool, not a budget allocation tool
  • Every meaningful channel decision gets validated with a holdout, an MMM read, or both (a sketch of the MMM idea follows this list)
  • The team is trained to say "the dashboard suggests, the experiment decides"
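On the MMM side of that second point, our actual model doesn't fit in a snippet, but the core idea does: regress outcomes on spend per channel and look at marginal contribution. The sketch below runs on simulated data where search drives conversions and paid social barely moves them; real MMMs layer adstock, saturation curves, and seasonality on top of this.

```python
# Simplest possible MMM read: regress weekly conversions on weekly spend
# per channel. All data below is simulated.
import numpy as np

rng = np.random.default_rng(0)
weeks = 52
paid_social = rng.uniform(40, 60, weeks)  # spend in $k per week
search = rng.uniform(20, 40, weeks)       # spend in $k per week

# Planted truth for the simulation: search drives conversions,
# paid social barely moves them.
conversions = 500 + 0.4 * paid_social + 8.0 * search + rng.normal(0, 5, weeks)

# Design matrix: one column per channel plus an intercept.
X = np.column_stack([paid_social, search, np.ones(weeks)])
coef, *_ = np.linalg.lstsq(X, conversions, rcond=None)

print(f"conversions per $k, paid social: {coef[0]:.2f}")  # near the planted 0.4
print(f"conversions per $k, search:      {coef[1]:.2f}")  # near the planted 8.0
```

It's crude, but even a read this crude asks the marginal question the last-touch table never does.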

That last one is cultural, not technical. The default question in any channel review meeting is "what experiment would settle this?", not "what does the dashboard say?"

The cost of the lesson

If I had to put a number on the misallocated spend across those nine months, it would be in the seven figures. Most of it wasn't wasted outright; it just went to an over-credited channel at the expense of the channels we underinvested in.

The frustrating part is that the failure mode is invisible without a deliberate test. The dashboard looks right. The numbers tie. The PMs and the finance team agree on what they see. The only way out is to break the loop with experimentation.

Run the holdout. Even when the dashboard is clean. Especially when the dashboard is clean.

Got a growth problem worth a real conversation?

I respond within two business days. No discovery-call gauntlet.

Email Andre →