Non-trivial extension of the binary case in Hypothesis testing

On the topic of probability, most amateurs (myself included) are mainly concerned with the posterior distribution given some prior information. For example, given an urn with 2 blue balls and 99 black balls, what is the probability of drawing a pink ball? (LOL). But a good deal of real-life problems transcend such simple distributions. That brings up the exciting subfield of hypothesis testing. Whether it’s testing the efficacy of a Covid-19 vaccine or studying the correlation between synthetic drugs and teenage pregnancy, hypothesis testing is one of the most powerful and useful tools of probability.

The setup is pretty simple, given the following:

X = prior information

D = Data, i.e. the new evidence at hand

H = Hypothesis to be tested

Given the elementary product rule (one of Cox’s rules), we can express the probabilities and their equivalence as

    \[p(DH|X) = p(HD|X)\]

    \[p(D|X) \cdot p (H|DX) = p(H|X) \cdot p(D|HX)\]

Finally, rearranging gives Bayes’ theorem:

    \[p (H|DX) = \dfrac{p(H|X) \cdot p(D|HX)}{p(D|X)}\]
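As a sanity check, here is a minimal numeric sketch of this update rule in Python; all the probability values in it are made-up assumptions, chosen only to illustrate the arithmetic.

    # A minimal sketch of the update rule above (illustrative numbers only)
    p_H = 0.30              # prior p(H|X), an assumption for this example
    p_D_given_H = 0.80      # likelihood p(D|HX)
    p_D_given_not_H = 0.20  # likelihood under the alternative, p(D|~H X)

    # marginal p(D|X), expanded over H and ~H by the sum rule
    p_D = p_D_given_H * p_H + p_D_given_not_H * (1 - p_H)

    # Bayes' theorem: p(H|DX) = p(H|X) * p(D|HX) / p(D|X)
    p_H_given_D = p_H * p_D_given_H / p_D
    print(p_H_given_D)  # ~ 0.632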

We can notice many wonderful things in Bayes’ theorem above. The term p(H|DX) is the posterior probability of the hypothesis after considering the new information D in light of the prior information X. The prior probability p(H|X) is updated by the dimensionless factor \dfrac{p(D|HX)}{p(D|X)}. To make calculations faster and to handle very tightly packed probabilities (e.g. 0.9999 and 0.9999999), we can express the equation in terms of base-10 logarithms, as shown below.

    \[ \log_{10} p (H|DX) = \log_{10} p (H|X) + \log_{10} \dfrac{p(D|HX)}{p(D|X)} \]
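Running the same made-up numbers through the logarithmic form recovers the same posterior, as a quick check:

    import math

    # The same update in base-10 logs; handy when probabilities crowd
    # together near 1 (e.g. 0.9999 vs 0.9999999). Same made-up numbers.
    p_H, p_D_given_H, p_D = 0.30, 0.80, 0.38

    log_posterior = math.log10(p_H) + math.log10(p_D_given_H / p_D)
    print(10 ** log_posterior)  # ~ 0.632, matching the direct calculation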

We can further represent the original equation in terms of “odds”, i.e. O(A) = \dfrac{p(A)}{p(\overline{A})}:

    \[ \dfrac{p (H|DX)}{p (\bar{H}|DX)} = \dfrac{p (H|X)}{p (\bar{H}|X)} \cdot \dfrac{p(D|HX)}{p(D|\overline{H}X)} \]

Bringing back the logarithms and writing \log_{10} \dfrac{p(A)}{p(\overline{A})} = \log_{10} O(A) = e(A), a quantity called the evidence, we can then write

    \[ e(H|DX) = e(H|X) +  \log_{10} \dfrac{p(D|HX)}{p(D|\overline{H}X)} \]
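A small sketch of this evidence update, again with illustrative numbers (the prior and likelihood ratio below are assumptions, not anything derived above):

    import math

    def evidence(p):
        """e(A) = log10 of the odds p(A) / p(not A)."""
        return math.log10(p / (1 - p))

    # Illustrative numbers again:
    p_H = 0.30        # prior p(H|X)
    lr = 0.80 / 0.20  # likelihood ratio p(D|HX) / p(D|~H X)

    e_post = evidence(p_H) + math.log10(lr)

    # convert the posterior evidence back into a probability
    odds = 10 ** e_post
    print(odds / (1 + odds))  # ~ 0.632 once more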

If the data D consists of multiple pieces D_1, D_2, \hdots that are conditionally independent given H (and given \overline{H}), the evidence from each piece simply adds up:

    \[ e(H|DX) = e(H|X) + \sum_i \log_{10} \dfrac{p(D_i|HX)}{p(D_i|\overline{H}X)} \]
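With several conditionally independent pieces of data, the evidence accumulates term by term; the likelihood ratios below are invented for the example:

    import math

    # Evidence adds up across pieces of data that are conditionally
    # independent given H and given ~H. Likelihood ratios are made up.
    e_prior = math.log10(0.30 / 0.70)  # e(H|X)
    ratios = [4.0, 2.5, 0.8]           # p(D_i|HX) / p(D_i|~H X) for each D_i

    e_post = e_prior + sum(math.log10(r) for r in ratios)

    odds = 10 ** e_post
    print(odds / (1 + odds))  # posterior p(H|DX) ~ 0.774 after all three updates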

These evidence equations look very elegant, and it’s easy to fall into the trap of extending them to the non-binary case, i.e. hypotheses H_i \in \{H_1, H_2, \hdots, H_n\} with n > 2.

To see that it’s not possible, consider a simple three-hypothesis problem with the following properties:

    \[ H_i \in \{H_1, H_2, H_3\} \]

all the hypotheses are mutually exclusive and exhaustive, such that \sum_i p(H_i|X) = 1.

We can write the posterior evidence for the first hypothesis as

    \[ e(H_1|DX) = e(H_1|X) + \sum_i \log_{10} \dfrac{p(D_i|H_1X)}{p(D_i|\overline{H_1}X)} \]

Rewriting the denominator of the second term in terms of the other two hypotheses requires the following tedious process (since it’s no longer a binary case). From the product rule,

    \[ p(D_i|\overline{H_1}X) = \dfrac{p(D_i \overline{H_1}|X)}{p(\overline{H_1}|X)}   \]

The negation of the first hypothesis implies either the second or the third, so we can write further

    \[ p(D_i|\overline{H_1}X) = \dfrac{p(D_i (H_2 + H_3)|X)}{p((H_2 + H_3)|X)}   \]

Since the hypotheses are mutually exclusive, the sum rule splits the numerator, and the product rule expands each joint probability:

    \[ p(D_i|\overline{H_1}X) = \dfrac{p(D_i|H_2X) \cdot p(H_2|X) + p(D_i|H_3X) \cdot p(H_3|X)}{p(H_2|X) + p(H_3|X)}   \]

Substituting this back into the original equation, we get

    \[ e(H_1|DX) = e(H_1|X) + \sum_i \log_{10} \dfrac{p(D_i|H_1X)}{ \dfrac{p(D_i|H_2X) \cdot p(H_2|X) + p(D_i|H_3X) \cdot p(H_3|X)}{p(H_2|X) + p(H_3|X)} } \]
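To make the mess concrete, here is a small numeric sketch of the three-hypothesis update; the priors and likelihoods are invented, and the point is only that the denominator now mixes in the priors of H_2 and H_3:

    import math

    # Three-hypothesis sketch with made-up priors and likelihoods, showing
    # that p(D|~H1 X) now depends on the priors of H2 and H3 as well.
    priors = {"H1": 0.5, "H2": 0.3, "H3": 0.2}   # p(H_i|X), summing to 1
    likes = {"H1": 0.7, "H2": 0.4, "H3": 0.1}    # p(D|H_i X)

    # denominator from the derivation above:
    # p(D|~H1 X) = [p(D|H2 X)p(H2|X) + p(D|H3 X)p(H3|X)] / [p(H2|X) + p(H3|X)]
    p_D_not_H1 = (likes["H2"] * priors["H2"] + likes["H3"] * priors["H3"]) \
                 / (priors["H2"] + priors["H3"])

    e_prior = math.log10(priors["H1"] / (1 - priors["H1"]))
    e_post = e_prior + math.log10(likes["H1"] / p_D_not_H1)

    odds = 10 ** e_post
    print(odds / (1 + odds))  # posterior p(H1|DX) ~ 0.714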

Now imagine hypothesis testing with n > 10: the calculations become far messier and more complicated, since the update for any one hypothesis drags in the priors and likelihoods of all the others. This shows very simply that a non-trivial extension of the binary case in hypothesis testing is not possible: the clean additive form is special to n = 2.
