Thursday, November 21, 2013

Vaccination

I know just enough epidemiology to be obnoxious at dinner parties. Depending on the dinner party, that may be a touch above the median. But what I lack in knowledge about communicable infections I make up for with my familiarity with Bayes' Theorem. For those of you unable or unwilling to get through Yudkowsky's post: in the 18th century, the Reverend Thomas Bayes gave us a very handy formula for adjusting probability estimates when confronted with new evidence. The two-word advice? Maintain skepticism. It's more complicated than that, obviously. It accounts for the probability of measurement errors (both type I and type II) and forces you to consider base rates. The Wikipedia page is here, though I find Yudkowsky's walkthrough pretty close to the best you're likely to find on the Web. Here's the equation (from the wiki page):

P(A|B) = P(B|A) P(A) / [P(B|A) P(A) + P(B|¬A) P(¬A)]
What does Bayes' Theorem have to do with epidemiology? Well, there seems to be pretty compelling evidence that the standard childhood vaccinations (Hepatitis B, Rotavirus, Diphtheria, Pertussis [whooping cough], Tetanus, Haemophilus influenzae type b [meningitis], Pneumococcal [meningitis, pneumonia, et al.], Poliovirus) work pretty well in the sense that vaccinated kids tend to avoid contracting the illnesses associated with infection by the above pathogens*. There also seems to be evidence that some previously CDC-approved vaccines containing a mercury-based preservative could be linked to cases of autism. Well, there's a correlation anyway. Bacterial meningitis rates fell from 2.00 per 100,000 in the early 90s to 1.38 per 100,000 in 2007 (source: CDC). During that same time, reported cases of autism increased by a factor of more than ten (source: also CDC).

Again I ask, what does Bayes' Theorem have to do with any of this? Well, we have the components needed to estimate beliefs. For the question "does Menomune-A/C/Y/W-135, sold by Sanofi Pasteur, prevent the acquisition of Neisseria meningitidis, a bacterium that causes meningitis?", we can gather the lab evidence, trace the pathology, check rigorous double-blind studies, do all that due diligence borne by the FDA, and estimate the treatment effects. In terms of the equation, Pr(A|B) is the probability that someone comes down with meningitis conditional on having received a dose of the vaccine. Pr(A) is the base rate, the probability of contracting bacterial meningitis. Pr(B|A) is the probability of having had the vaccine in a patient diagnosed with meningitis. Pr(B|~A) is the probability of having had the vaccine in someone other than a patient diagnosed with meningitis. Pr(~A) is just the complement of Pr(A). Yes, my notation is a little different from the equation above. I like "Pr" for probability, and the tilde key is easier to reach than the alt+number pad combination you need for the negation symbol.
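If you want to see the gears turn, here's a quick Python sketch of that calculation. Every number in it is a placeholder I made up for illustration (the base rate, the vaccination shares among the sick and the well), not anything from the CDC or from Sanofi's filings; the point is just how the pieces of the formula fit together.

```python
# A minimal sketch of the Bayes' Theorem calculation described above.
# Every number below is an invented placeholder, not a real CDC or trial figure.

def posterior(pr_b_given_a, pr_a, pr_b_given_not_a):
    """Pr(A|B) via Bayes' rule with the expanded denominator."""
    pr_not_a = 1.0 - pr_a
    numerator = pr_b_given_a * pr_a
    denominator = numerator + pr_b_given_not_a * pr_not_a
    return numerator / denominator

# A = contracts bacterial meningitis, B = received the vaccine.
pr_a = 1.38e-5           # hypothetical base rate, roughly 1.38 per 100,000
pr_b_given_a = 0.30      # hypothetical: 30% of diagnosed patients had the vaccine
pr_b_given_not_a = 0.80  # hypothetical: 80% coverage among everyone else

print(posterior(pr_b_given_a, pr_a, pr_b_given_not_a))
# ~5.2e-06: under these made-up numbers, the vaccinated face well under
# half the 1.38e-05 base rate.
```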

For fun, here are the base rates by age in the US, 2000-2009 (CDC):

[Chart: bacterial meningitis incidence by age group, United States, 2000-2009 (source: CDC)]
The tricky bits for many folks to grasp heuristically are the error rates. That is, with what probability do we see the terms in the denominator? If the disease is rare, the diagnosis doesn't even have to be especially faulty to generate lots of errors: a small false-positive rate applied to a big pool of healthy people can swamp the handful of true positives. But that often (and folks who work with public opinion data know what I mean when I say "often") goes unreflected in the formation of posterior beliefs.

Put another way, the frequentist (naive) reasoner would look at a diagnosis of meningitis in someone who got the vaccine and say "that vaccine is useless, it didn't keep that patient from getting sick", and the Bayesian would say, "uh, let's go ahead and get a second opinion on that diagnosis." Empirically (Bar-Hillel is amazing at this stuff), most folks are frequentists rather than Bayesians.
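Here's a toy version of why the Bayesian wants that second opinion. The accuracy numbers are invented (nobody screens the whole population for bacterial meningitis this way), but they make the base-rate point: even a diagnosis that's wrong only one time in a hundred produces mostly false positives when the condition itself is this rare.

```python
# Toy base-rate calculation: probability the patient actually has meningitis,
# given a positive diagnosis. The accuracy figures are invented for illustration.

prevalence = 1.38e-5    # hypothetical base rate of bacterial meningitis
sensitivity = 0.99      # hypothetical Pr(positive diagnosis | truly infected)
false_positive = 0.01   # hypothetical Pr(positive diagnosis | not infected)

pr_true_positive = sensitivity * prevalence
pr_any_positive = pr_true_positive + false_positive * (1.0 - prevalence)

print(pr_true_positive / pr_any_positive)
# ~0.0014: with a condition this rare, the overwhelming majority of
# positive diagnoses are false positives, so go get that second opinion.
```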

The downside is that these naive reasoners use the wrong statistical inference techniques to draw conclusions with policy implications. Consider the autism link. What happens when we estimate Pr(A|B), the likelihood of having autism conditional on receiving the meningitis vaccine, when there's no causal relationship (I can't find an ungated copy of the joint CDC-NIH study on this, sorry)? A good Bayesian would adjust for shifting diagnosis base rates, or try to get additional information. What is the frequentist response? Two sequential events, one after the other, the former caused the latter, QED.

Yikes!
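If you want to see how cheap that QED is, here's a little simulation, entirely made up and nothing to do with the actual CDC series: two quantities that each just drift upward over time come out almost perfectly correlated even though they're generated independently of one another.

```python
# Two made-up series that each drift upward over time, generated independently.
# Neither causes the other, yet the correlation comes out close to 1.
import random

random.seed(2013)
n_years = 15  # say, early 90s through 2007

# Independent linear trends plus noise (arbitrary units).
vaccine_coverage = [50 + 2.0 * t + random.gauss(0, 2.0) for t in range(n_years)]
diagnosis_rate   = [5 + 1.5 * t + random.gauss(0, 1.5) for t in range(n_years)]

def correlation(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

print(correlation(vaccine_coverage, diagnosis_rate))
# Typically north of 0.9: "one trend followed the other" is not evidence
# of causation when both are just trending.
```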

All that's fine and dandy, Sam. But this blog is about euvoluntary exchange, not Bayesian inference. Get to the good stuff.

Yes, of course. Consider the implications of frequentist reasoning, particularly way out in the tails of the probability distribution. Rare events get blown way out of proportion, people overreact to uninformative evidence, and anecdotes become mass movements, perhaps partly fueled by many of the same sentiments we chronicle here at EE. For instance, did you know that Sanofi Pasteur is the biggest producer of human vaccines in the world and that they're just a branch of a humongous international conglomerate based in France of all places? BATNA disparity, my friends.

And the tragic downside of not vaccinating your kids? Invisible to most people. My grandparents grew up knowing what polio looked like up close and personal. I know it only from the history books. Out of sight, out of mind. It's a free rider problem heaped on the backs of suffering, dying children. Say it with me: uncompensated externalities. And if the rest of us are lucky, the polluted commons will be limited to that one select club.

Of course, we could all get lucky, vaccinated or not. That would be the best possible outcome. Think a bit more carefully, however, if you're willing to play that game. Analyze the collective action problem as if you were Tullock himself.

The annual influenza vaccine is another matter. That one's a scam. But that's a post for another day.

*By now the astute reader may have noticed that I've provided additional evidence for my claim of amateurishness when it comes to epidemiology. 

