You don’t make friends with Causal Inference

Being the resident causal inference botherer in any workplace is a lot like being a vocal vegan - there is a risk of alienating the people around you if you’re too dogmatic. This post explores that tension.

I've been the causal inference botherer at three different companies now, and I've learned that being right about methodology doesn't automatically make you helpful. Actually, it can make you insufferable.

There's this moment that happens in every workplace I've been in. Someone presents an analysis showing that Feature X increased engagement by 15%. They're excited. The stakeholders are nodding. Then someone raises their hand and says something like "well, we can't really claim causation here because of selection bias" or "did you consider that this might just be picking up a time trend?" - and everyone in the room is visibly irritated with them.

I used to think this was just people being defensive about their work. And sure, sometimes that's part of it. But over time I've realized the dynamic is more complicated than that. When you constantly push for causal rigor, you're not just questioning someone's analysis. You're implicitly saying that the work they did, which often took considerable effort, isn't answering the question everyone thinks it's answering. That's a hard message to deliver, and an even harder one to receive.

The thing is, I genuinely believe causal inference matters. If we're making decisions based on correlations that we mistake for causal relationships, we can end up doing real harm. We might invest millions in a feature that doesn't actually help users. We might kill a product that was working fine but happened to launch during a seasonal dip. The stakes are real.

But here's where it gets tricky. In most organizations, perfect causal identification is impossible. We don't have randomized controlled trials for every question. We can't always find a valid instrumental variable or a clean natural experiment. Sometimes the data just doesn't exist to do things properly. And when you're the person who keeps pointing this out, you start to sound like you're just saying "no" to everything.

I've watched colleagues propose reasonable-sounding analyses, then heard myself explain why each one won't work. "What about a difference-in-differences approach?" Sure, but we don't have parallel trends. "Can we use propensity score matching?" We could, but we're probably missing key confounders. "What if we just control for everything we can measure?" That can actually make the bias worse if some of those variables are colliders or mediators.
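That last point, that adding controls can make things worse, tends to sound like hand-waving, so here's a minimal simulation sketch of collider bias. The setup and variable names are hypothetical, plain NumPy, my own illustration rather than anything from a real analysis: two variables that are unrelated by construction acquire a strong "effect" the moment you condition on a collider.

```python
import numpy as np

# Hypothetical setup: the treatment has NO true effect on the outcome,
# but both feed into a third variable (a collider).
rng = np.random.default_rng(0)
n = 100_000
treatment = rng.normal(size=n)
outcome = rng.normal(size=n)                          # true effect of treatment: zero
collider = treatment + outcome + rng.normal(size=n)   # caused by both

# Simple regression of outcome on treatment: slope is ~0, as it should be.
slope_naive = np.polyfit(treatment, outcome, 1)[0]

# "Control for everything we can measure": add the collider as a covariate.
X = np.column_stack([np.ones(n), treatment, collider])
beta = np.linalg.lstsq(X, outcome, rcond=None)[0]
slope_adjusted = beta[1]

print(f"no controls:           {slope_naive:+.3f}")     # roughly +0.00
print(f"collider 'controlled': {slope_adjusted:+.3f}")  # roughly -0.50, pure artifact
```

The adjusted coefficient lands around -0.5 even though the true effect is exactly zero; the "control" is what creates the bias. That's the kind of thing I find myself explaining, and it's also exactly where the conversation starts to feel like a wall of objections.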

After a while, people stop asking me for input. Not because they think I'm wrong, but because talking to me feels like running into a brick wall. Every path forward seems blocked by some methodological concern. The conversation goes from "how do we answer this question?" to "why is it impossible to answer this question?" That's demoralizing for everyone involved.

The comparison to veganism isn't perfect, but it captures something real. Both causal inference advocates and vegans are often technically correct in their critiques. Factory farming does cause immense suffering. That correlation you're excited about might completely disappear once you account for confounding. But if every dinner conversation becomes a lecture about animal welfare, or every meeting becomes a methodology seminar, people start avoiding you. Not because they disagree with your underlying point, but because the interaction has become exhausting.

I've made this mistake more times than I'd like to admit. Someone would show me a regression with a dozen control variables, and I'd immediately start listing all the ways the causal interpretation could be wrong. I was so focused on being technically correct that I missed the forest for the trees. The person wasn't claiming to have proven causation beyond all doubt. They were trying to make progress on a real business problem with imperfect data. My intervention wasn't helping them make better decisions. It was just making them feel bad about trying.

What I've slowly learned is that there's a difference between being a causal inference advocate and being a causal inference absolutist. The advocate tries to improve decision-making by pushing for better identification strategies where feasible and being honest about uncertainty where it's not. The absolutist treats every methodological imperfection as a fatal flaw and effectively argues for paralysis.

This doesn't mean abandoning standards. There are still times when I need to be firm about methodology. If someone wants to launch a major initiative based on an analysis that's clearly biased in a known direction, I have to speak up. If a team is about to make a causal claim in a public-facing report based on purely observational data with obvious confounders, that's a problem.

But I've gotten better at distinguishing between those high-stakes situations and the more common case where someone is just trying to understand their data a bit better. When an analyst shows me a descriptive analysis and calls an association "an effect," I don't immediately launch into a lecture about the fundamental problem of causal inference. I might just ask "do we think this is causal, or might there be confounding?" in a way that opens up conversation rather than shutting it down.

I've also gotten better at the constructive part. Instead of just pointing out why an approach won't work, I try to suggest what might work, even if it's imperfect. "We can't do a perfect RCT here, but what if we randomized at the city level instead of the user level? It won't be as clean, but it's better than nothing." Or "this observational analysis has confounding issues, but if we're really transparent about the limitations and treat it as suggestive rather than definitive, it might still be useful for prioritization."

The other thing I've learned is to pick my battles. Not every analysis needs to meet the causal inference gold standard. If someone is doing exploratory work to generate hypotheses, that's fine. If we're trying to roughly size an opportunity to decide whether it's worth investing in a more rigorous study, we can tolerate more uncertainty. The level of rigor should match the stakes of the decision.

There's also something to be said for building relationships before you start critiquing methodology. When people trust that you're on their side and trying to help them succeed, they're much more receptive to hearing that their approach has problems. If the first thing you do when you meet someone is tear apart their analysis, you've probably lost them. But if you've spent time understanding their work, offering help, and showing that you care about their success, they'll actually want your input on how to make their causal inferences stronger.

I still slip up sometimes. I still find myself being more critical than constructive. I still occasionally make people feel stupid for not having considered something that seems obvious to me. But I'm trying to get better at remembering that my job isn't to be the methodology police. It's to help the organization make better decisions.

The goal isn't for everyone to become a causal inference expert. It's for people to have a better intuitive sense of when correlation might not be causation, to be more thoughtful about what their analyses can and can't tell them, and to be appropriately uncertain about their conclusions. That's a much more achievable goal than perfect causal identification, and it doesn't require me to be a jerk about it.

I think the best version of being the causal inference person is being someone who makes it easier to do good work, not harder. That means celebrating when people think carefully about confounding, even if their solution isn't perfect. It means helping design studies that will actually be feasible to run, not just theoretically ideal ones. It means being honest about limitations without being paralyzed by them.

You can absolutely make friends while advocating for causal inference. It just requires recognizing that being right about methodology is necessary but not sufficient. You also have to be helpful, constructive, and willing to work within constraints. You have to meet people where they are and help them get to somewhere better, rather than just telling them they're in the wrong place.

I'm still learning this balance. Some days I get it right, and I leave a meeting feeling like I actually helped someone think more clearly about their problem. Other days I walk away knowing I was too pedantic or too dismissive, and I make a mental note to do better next time. But at least I'm not eating lunch alone.
