November 9th, 2012

beartato phd

(no subject)

Today's xkcd is a pretty great summary of the first-order bit on the topic of statistics/probabilistic reasoning.

But, being as I am not a professional statistician/probabilistic reasoner, I'm still philosophically troubled by what the frequentist/bayesian distinction really means here. (Because all the pros totally have it all worked out and sleep soundly at night, right? ...right?)

I believe in the definition of conditional probability, I believe Bayes' Theorem, but I continue to believe it's essentially just the definition of conditional probability. (I mean, come on, the proof-distance from P(A|B) = P(A ^ B)/P(B) to P(A|B) = P(B|A) P(A) / P(B) is certainly trivial). So it's never made a lot of sense to me to be smug about "being a Bayesian", despite the fact that I'm definitely on the side of the person making the $50 bet that the sun will still rise tomorrow.
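Just to make the "proof-distance is trivial" point concrete, here's a little numeric sanity check in Python. The joint distribution is entirely made up for illustration; the point is just that Bayes' theorem falls out of the definition of conditional probability in one step of algebra:

```python
# Made-up joint distribution over two binary events A and B
# (all numbers are illustrative and sum to 1).
p_joint = {
    (True, True): 0.12,   # P(A and B)
    (True, False): 0.28,
    (False, True): 0.18,
    (False, False): 0.42,
}
p_a = sum(v for (a, b), v in p_joint.items() if a)  # marginal P(A)
p_b = sum(v for (a, b), v in p_joint.items() if b)  # marginal P(B)
p_a_and_b = p_joint[(True, True)]

# The definition of conditional probability:
p_a_given_b = p_a_and_b / p_b
p_b_given_a = p_a_and_b / p_a

# Bayes' theorem, obtained by substituting P(A ^ B) = P(B|A) P(A):
bayes_rhs = p_b_given_a * p_a / p_b

assert abs(p_a_given_b - bayes_rhs) < 1e-12  # same number both ways
```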

It seems to me naively that
(a) "A Bayesian is only as good as their priors", or to put it less sloganistically, the pragmatic value of employing Bayes' rule seems to absolutely hinge on the quality of your priors. A Bizarro-world Nate Silver with hugely whacked-out priors on the effect of a hurricane on poll bias would have made incorrect predictions, despite employing the same methodology of "look at lots of poll data, filter it through my priors" which we are led to believe is virtuous.
...and yet,
(b) most of our priors are not, or so it empirically seems, all that whacked-out and crazy in magnitude. The cumulative effect of their being adjusted by repeated experiments brings them in line with reality. Our priors would have to be exponentially crazy (in n) to survive n independent experiments.

Anyway I suspect that these thoughts sound to a knowledgeable person rather dumb and freshman-stats-course-student-ish in ways that I can't anticipate, but I am throwing them out there so you can correct me.