Hypothesis Tests are a big part of most statistics curricula, even as they have started to fade somewhat from favor in the statistics world at large. One reason for their slow fade, I think, is that they are often pretty easy to *do* without *understanding*. There are many ways to help with this, and over the last couple of days I’ve been thinking a lot about hypotheses: how to structure them, how to explain them, and how to make the hypothesis testing process clearer and less rote.

One thing I am very proud of in our CPM Statistics text is the way we approach *statistical claims* – that is, claims about statistics that we want to evaluate for reasonableness. Unlike some texts, we introduce this idea with confidence intervals, and we don’t use the term “hypothesis” at all, at least at first. The problem below is the introduction to this idea.

And here is the “Math Notes” box at the end of the lesson, which sums up what students take from their confidence-interval exploration. I like these notes, but I do think they should be generalized just a bit to allow for statistical claims like “The two events are independent” or “The drawing was done randomly.” Still, what I like about this is the idea of simply evaluating claims and the recognition that, yes, we CAN sometimes accept claims. Note the italics at the end, which are a precursor to “Never Accept the Null.”

Another thing I like about our text is that we spend a lot of time thinking about where the Burden of Evidence should lie (or, conversely, to what we should give the Benefit of the Doubt) when we are setting up hypotheses for one-tailed hypothesis tests. For example, consider this problem:

In this case, the Claim We Want To Test is p < 0.01. But we also don’t want to give this claim the benefit of the doubt – we need strong evidence that it is true before moving forward! Therefore, even though it is the claim we want to test, it is NOT the Null Hypothesis. This contrasts with the problem below, where the Claim We Want To Test *should* be given the benefit of the doubt, and so *should* be the null hypothesis.

I like that our book considers this possibility – we *should* think about what deserves the benefit of the doubt when setting up hypothesis tests – but there are some confusions inherent in this. The biggest problem is that for this scenario, the “correct” hypotheses presented by our book are these:

$$\begin{align*}&H_0:p=0.02\\&H_a:p>0.02\end{align*}$$

Why isn’t EITHER of these the actual claim from the text, \(p < 0.02\)????

Statistics teachers know about this problem. The traditional answer, and the one we employ in our text, is that “The Null Hypothesis must be a *specific, assumable* statistical fact, one that will allow you to continue exploring mathematically.” If we did the (in many ways more logical) set of hypotheses \(\begin{align*}&H_0:p\leq 0.02\\&H_a:p>0.02\end{align*}\) then we would run into a problem as soon as we tried to assume a value for *p* to continue the test – which of the many values of *p* that are less than 0.02 do we mean?
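To make that concrete, here is a minimal sketch of the one-proportion z-test in Python (the sample size and observed \(\hat{p}\) are hypothetical, since the original problem’s data aren’t reproduced here). Notice that the standard error – and therefore everything downstream – can’t even be computed until we commit to one specific value of *p*:

```python
from math import sqrt
from statistics import NormalDist

# Hypothetical numbers: suppose a sample of n = 2000 gives p-hat = 0.028,
# and we test H0: p = 0.02 against Ha: p > 0.02.
n, p0, p_hat = 2000, 0.02, 0.028

# The standard error requires ONE specific value of p -- this is exactly
# why the null must be a "specific, assumable" statistical fact.
se = sqrt(p0 * (1 - p0) / n)
z = (p_hat - p0) / se
p_value = 1 - NormalDist().cdf(z)  # upper tail, since Ha: p > 0.02

print(f"z = {z:.2f}, p-value = {p_value:.4f}")  # prints: z = 2.56, p-value = 0.0053
```

Try replacing `p0 = 0.02` with the inequality \(p \leq 0.02\) and the problem is immediate: there is no single number to put into `se`.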

There are two ways I’ve seen to get around this confusion:

- Go ahead and use the \( \leq \) version of the null, and just say “we always assume the border value.” Some books do this, many online resources do, and the AP exam accepts it. Personally, I just don’t like that it breaks the “We are going to assume the null hypothesis is exactly true because that’s the structure of the test!” part of the argument.
- Say “The Null Hypothesis isn’t really the claim we are trying to test. It is simply the assumption. You should instead focus on the *alternative hypothesis* as the claim we need to gather clear evidence to demonstrate, and just let the null always be equals because that’s necessary for the test structure.”
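A quiet argument for the first option’s “assume the border value” convention is that the border is the worst case: of all the values of *p* allowed by a null like \(p \leq 0.02\), the border value gives the observed evidence its best chance of having occurred by luck. A small sketch (the sample size and observed count are hypothetical) using the exact binomial tail:

```python
from math import comb

# Hypothetical numbers: null family H0: p <= 0.02, Ha: p > 0.02,
# and a sample of n = 2000 with 56 successes observed (p-hat = 0.028).
n, k_obs = 2000, 56

def tail_prob(p: float) -> float:
    """P(X >= k_obs) for X ~ Binomial(n, p): how often a null value p
    would produce evidence at least as extreme as what we saw."""
    return 1 - sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(k_obs))

# The tail probability grows as p approaches the border, so assuming
# p = 0.02 is the most generous possible reading of the null claim.
for p in (0.005, 0.010, 0.015, 0.020):
    print(f"p = {p:.3f}: P(p-hat >= 0.028) = {tail_prob(p):.6f}")
```

If the evidence is strong enough to reject even the border value, it would reject every other value in the null family too – which is why “always assume the border” doesn’t cost anything.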

Our book basically chooses the second route, as does *The Practice of Statistics*, and it seems to be the assumed default on most AP Free Response problems (though the other method is accepted). But it is still causing problems for my students. In the shower today, though, I started to get an inkling of a way to help them. The idea was inspired by a tweet I saw about adding a line “Evidence for the \(H_a\)” underneath the hypotheses, as in:

$$\begin{align*}&H_0:p\geq 0.02\\&H_a:p<0.02\\&\text{Evidence for }H_a: \hat{p}=0.012\end{align*}$$

I love this! It makes it clear that what we are trying to gather evidence for is to “prove” the alternative hypothesis, which is a key component of the hypothesis-test idea. But it doesn’t, by itself, solve my students’ understanding problem. So I’m thinking about having students do something like this:

$$\begin{align*}&\text{Claim from problem: }p < 0.02\\ &\text{Benefit of doubt: }p<0.02\\ \\ &H_0:p=0.02\\&H_a:p>0.02\\&\text{Evidence for }H_a: \hat{p}=0.028\end{align*}$$

By giving them a place to state the claim that’s getting the benefit of the doubt, *even when it is not exactly the null hypothesis*, I think this method will satisfy some of their internal conflict. Then the null hypothesis really does become simply the assumable middle ground that “needs to be equal.” It is the most generous possible assumable value of the null *claim*. Thoughts?
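If it helps anyone play with the tweet’s numbers, here is a sketch of the test behind its evidence line; the sample size \(n = 1000\) is my own hypothetical, since no *n* appears above:

```python
from math import sqrt
from statistics import NormalDist

# The tweet's setup: H0: p >= 0.02 (assume the border value p0 = 0.02),
# Ha: p < 0.02, observed p-hat = 0.012. Sample size is hypothetical.
n, p0, p_hat = 1000, 0.02, 0.012

# Assume the border -- the most generous value in the null family --
# so the standard error has one concrete number in it.
z = (p_hat - p0) / sqrt(p0 * (1 - p0) / n)
p_value = NormalDist().cdf(z)  # lower tail, since Ha: p < 0.02

print(f"z = {z:.2f}, p-value = {p_value:.3f}")  # prints: z = -1.81, p-value = 0.035
```

With this hypothetical sample size, \(\hat{p}=0.012\) really would count as fairly convincing evidence for \(H_a\), which is exactly what the “Evidence for \(H_a\)” line is meant to foreshadow.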