Again I ask, what does Bayes' Theorem have to do with any of this? Well, we have the components needed to estimate beliefs. For the question "does Menomune‐A/C/Y/W‐135 sold by Sanofi Pasteur prevent the acquisition of Neisseria meningitidis, a bacterium that causes meningitis?", we can gather the lab evidence, trace the pathology, check rigorous double-blind studies, do all that due diligence required by the FDA and estimate the treatment effects. In terms of the equation, Pr(A|B) is the probability that someone comes down with meningitis conditional on having received a dose of the vaccine. Pr(A) is the base rate, the probability of contracting bacterial meningitis. Pr(B|A) is the probability of having had the vaccine in a patient diagnosed with meningitis. Pr(B|~A) is the probability of having had the vaccine in someone who has not been diagnosed with meningitis. Pr(~A) is just the complement of Pr(A). Put together: Pr(A|B) = Pr(B|A)Pr(A) / [Pr(B|A)Pr(A) + Pr(B|~A)Pr(~A)]. Yes, my notation is a little different from the image above. I like "Pr" for probability, and the tilde key is easier to reach than the alt+number pad combination you need for the negation symbol.
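If you want to see the machinery turn over, here is a minimal sketch of that posterior computation. All the input numbers are made up for illustration; they are not Menomune trial figures or CDC surveillance data.

```python
def posterior(p_b_given_a, p_a, p_b_given_not_a):
    """Pr(A|B) via Bayes' rule:
    Pr(B|A)Pr(A) / [Pr(B|A)Pr(A) + Pr(B|~A)Pr(~A)]."""
    p_not_a = 1.0 - p_a  # Pr(~A), the complement of the base rate
    numerator = p_b_given_a * p_a
    return numerator / (numerator + p_b_given_not_a * p_not_a)

# Hypothetical inputs: a 1-in-10,000 base rate of bacterial meningitis,
# 30% of meningitis patients vaccinated, 60% of everyone else vaccinated.
p_sick = posterior(p_b_given_a=0.30, p_a=0.0001, p_b_given_not_a=0.60)
print(f"Pr(meningitis | vaccinated) = {p_sick:.6f}")
```

Even with these fake numbers, the point survives: the posterior probability of illness given vaccination lands well below the base rate, because the vaccinated are underrepresented among the sick.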
For fun, here are the base rates by age in the US, 2000-2009 (CDC):
Put another way, the frequentist (naive) reasoner would look at a diagnosis of meningitis in someone who got the vaccine and say "that vaccine is useless, it didn't keep that patient from getting sick", and the Bayesian would say, "uh, let's go ahead and get a second opinion on that diagnosis." Empirically (Bar-Hillel is amazing at this stuff), most folks are frequentists rather than Bayesians.
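The second-opinion instinct is just base-rate arithmetic, the kind Bar-Hillel's subjects routinely botch. A quick sketch, again with made-up numbers (the diagnostic accuracy figures are illustrative, not from any real assay):

```python
# Base-rate neglect in one calculation: a rare disease, a pretty good test.
base_rate = 0.0001        # 1 in 10,000 actually has the disease (assumed)
sensitivity = 0.99        # Pr(positive | sick), assumed
false_positive = 0.01     # Pr(positive | healthy), assumed

# Bayes' rule for Pr(sick | positive)
numerator = sensitivity * base_rate
p_disease = numerator / (numerator + false_positive * (1.0 - base_rate))
print(f"Pr(sick | positive test) = {p_disease:.4f}")
```

With these numbers the posterior comes out under one percent: a positive result from a 99%-sensitive test still leaves the diagnosis more likely wrong than right, because the disease is so rare. Hence "let's go ahead and get a second opinion."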
The downside is that these people use the wrong statistical inference techniques to draw conclusions with policy implications. Consider the autism link. What happens when we estimate Pr(A|B), the probability of an autism diagnosis conditional on receiving the meningitis vaccine, when there's no causal relationship (I can't find an ungated copy of the joint CDC-NIH study on this, sorry)? A good Bayesian would adjust for shifting diagnosis base rates, or try to get additional information. What is the frequentist response? Two sequential events, one after the other, so the former caused the latter, QED.
All that's fine and dandy, Sam. But this blog is about euvoluntary exchange, not Bayesian inference. Get to the good stuff.
Yes, of course. Consider the implications of frequentist reasoning, particularly way out in the tails of the probability distribution. Rare events get blown way out of proportion, people overreact to uninformative evidence, and anecdotes snowball into mass movements, perhaps partly fueled by many of the same sentiments we chronicle here at EE. For instance, did you know that Sanofi Pasteur is the biggest producer of human vaccines in the world, and that they're just a branch of a humongous international conglomerate based in France of all places? BATNA disparity, my friends.
And the tragic downside of not vaccinating your kids? Invisible to most people. My grandparents grew up knowing what polio looked like up close and personal. I know it only from the history books. Out of sight, out of mind. It's a free rider problem heaped on the backs of suffering, dying children. Say it with me: uncompensated externalities. And if the rest of us are lucky, the polluted commons will be limited to that one select club.
Of course, we could all get lucky, vaccinated or not. That would be the best possible outcome. But if you're willing to play that game, think a bit more carefully. Analyze the collective action problem as if you were Tullock himself.
The annual influenza vaccine is another matter. That one's a scam. But that's a post for another day.
*By now the astute reader may have noticed that I've provided additional evidence for my claim of amateurishness when it comes to epidemiology.