User:TheHumanAmbassador/Bayes' Theorem Explanation (From HtAU Part 6)

So, we have evidence, which does not prove a statement true, but makes it more likely. But how much more likely? How have the probabilities been affected by the evidence? Well, there's actually a formula that we can use to help us figure out just that! It's called Bayes' Theorem.

$$P(A|B)=\frac{P(B|A)P(A)}{P(B|A)P(A)+P(B|¬A)P(¬A)}$$

...Hm, we should probably rename our variables.

$$P(H|E)=\frac{P(E|H)P(H)}{P(E|H)P(H)+P(E|¬H)P(¬H)}$$

H stands for a particular hypothesis, and E stands for a particular kind of evidence. What this formula tells us is that the probability of hypothesis H after finding evidence E depends on the likelihood of evidence E appearing if H is true, the prior probability of H, and, of course, the likelihood of just seeing that evidence anyway. After all, it could have appeared by chance. Maybe that rock is there because we're in ruins, where rocks are commonly found, not because someone brought it underground and hit Toriel with it.

P(E) can be calculated by adding P(H)P(E|H) to P(¬H)P(E|¬H).
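The whole theorem, with P(E) expanded that way, fits in a few lines of Python. This is just a sketch to make the formula concrete; the function and parameter names are mine, not part of the original text:

```python
def posterior(prior, p_e_given_h, p_e_given_not_h):
    """P(H|E) via Bayes' Theorem, expanding P(E) with the rule above."""
    # P(E) = P(E|H)P(H) + P(E|¬H)P(¬H), where P(¬H) = 1 - P(H)
    p_e = p_e_given_h * prior + p_e_given_not_h * (1 - prior)
    return p_e_given_h * prior / p_e
```

Every calculation in the rest of this page is just this function with different numbers plugged in.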

Let's get ourselves familiar with this formula... with an example! We'll use Bayes' Theorem to calculate the probability that a coin is rigged to always show heads.

We'll assume a prior probability of .001. That is, 1 in 1,000. (The prior probability is the hardest part to figure out. Sometimes, we really only have a guess for it. If we're trying to see whether someone has a rare disease, for example, we use the rarity of the disease as the prior probability.) [Note: I may use the real ratio if I can find it, then adjust the number of heads flipped to account for it and calculate the new answer.]

So our hypothesis, H, is that the coin is rigged. We'll see what happens if we flip 12 heads in a row.

$$P(\text{rigged coin}|E)=P(H|E)=\frac{P(E|H)P(H)}{P(E)}$$

P(H) is .001, and the probability of seeing the evidence IF the coin is rigged is 1. So multiplying them gives us .001. Now to calculate the denominator:

$$P(E)=P(H)P(E|H)+P(¬H)P(E|¬H)$$

We already calculated the first part: as you can see, it's the same as the numerator. Keep that in mind, so you can use it as a shortcut. You can just copy the numerator into the first part of the denominator, and only manually calculate the second part.


 * P(E)=.001+P(¬H)P(E|¬H)
 * P(¬H)=1-P(H) (This is always true.)
 * P(E)=.001+.999P(E|¬H)

As you can see, the one remaining quantity that matters is the probability of the evidence showing up anyway if the hypothesis is false.

This is precisely how one can tell which evidence is meaningless and which evidence is not. If it's reasonably likely to see a piece of evidence anyway, there's no reason to assume it means something. But if it's abnormal, then it probably means something.

Twelve heads in a row has a probability of 1/2^12, or 1/4096, or .00024414063. We multiply that by .999 to get .00024389649, and add it to .001 to get .00124389649. That's the denominator. Now we divide the numerator (.001) by the denominator to get 0.80392541344. So there's about an 80.39% chance that the coin is rigged. That's the new probability.
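The arithmetic above can be written out directly. A minimal sketch (the variable names are mine, chosen for readability):

```python
prior = 0.001                 # 1 in 1,000 coins is rigged
p_heads_if_rigged = 1.0       # a rigged coin always shows heads
p_heads_if_fair = 0.5 ** 12   # 12 fair heads in a row: 1/4096

numerator = p_heads_if_rigged * prior                    # .001
denominator = numerator + (1 - prior) * p_heads_if_fair  # .00124389649
print(numerator / denominator)  # 0.80392541... i.e. about an 80.39% chance
```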

When there are multiple pieces of evidence, we calculate them one at a time, but we make sure to use the new probability from the previous calculation as the prior probability for the next. We continue doing this until all the evidence has been accounted for.

In the case of counter-evidence, we just treat it as regular evidence: the formula will lower the probability in response to it! We calculate the variables just as we would with supporting evidence. (That means we don't even need to know which side the evidence supports before putting it into the formula: the formula figures that out FOR us!)

If we ran each of the 12 individual flips through Bayes' Theorem this way, using the result of each iteration as the prior probability for the next, we'd get the same answer.
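That flip-by-flip iteration can be checked in a few lines. Again a sketch, with names of my own choosing:

```python
p = 0.001  # prior: 1 in 1,000 coins is rigged
for _ in range(12):
    # Update on a single head: P(E|H) = 1 for a rigged coin,
    # P(E|¬H) = 0.5 for a fair one. Yesterday's posterior is today's prior.
    p = (1.0 * p) / (1.0 * p + 0.5 * (1 - p))
print(p)  # ≈ 0.8039, matching the single 12-heads-at-once calculation
```

Updating one flip at a time or all twelve at once gives the same posterior, because each flip just doubles the odds in favor of the rigged-coin hypothesis.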

So Bayes' Theorem does allow many small pieces of evidence to combine and form one strong case.