Many statistics texts pose inferential statistical problems in a disjointed way. By using a simple five-step procedure as a template for statistical inference problems, the student can solve problems in an organized fashion. The problem and its solution will thus be a stand-by-itself organic whole and a single unit of thought and effort. The described procedure can be used for both parametric and nonparametric inferential tests. The example given is a chi-square goodness-of-fit test of a genetics experiment involving a dihybrid cross in corn that follows a 9:3:3:1 ratio. This experimental analysis is commonly done in introductory biology labs.
Inferential statistics is an indispensable tool for biological hypothesis testing. Early in their science education, students learn about the scientific method and how inductive rather than deductive reasoning is used to make the logical leap from particular experimental results to one or more general conclusions. However, before any conclusion can be reached, the experimental results must be tested for statistical significance. After all, there is a chance that any difference between two or more experimental treatments or tests is attributable to random events. Therefore, we use statistics "to compare the data with our ideas and theories, to see how good a match there is" (Hand, 2008: p. 10). The five-step procedure presented here was designed to aid in this process.
Science teachers must lead students through a strange new statistical landscape that combines logic, jargon, and mathematical calculations such as variance, standard deviation, sum of squares, and calculated test statistics. Concepts like Type I errors, one-tailed or two-tailed alternative hypotheses, and the p value must be defined and related to specific examples. But even in excellent statistics and biostatistics texts, data are given, a value for α (the level of significance) is given, and then, typically, a "What do you conclude?" question is asked. As an afterthought, usually as part B of the problem, students are asked to give the p value for their conclusion. This method of posing statistics problems has always struck me as disjointed.
I believe that the following simple procedure allows the given problem to be stated, viewed, and solved as a stand-by-itself organic whole. This procedure both formalizes and crystallizes student thinking. Another advantage of this five-step procedure is that it can be used for essentially all statistical inference tests, both parametric and nonparametric. I was taught this technique in a graduate-level course in statistics, and I have been using it ever since.
The Five General Steps in Hypothesis Testing
Step 1 Write down the null and alternative hypotheses in both symbols and words, using complete sentences.
Step 2 Calculate the test statistic to the appropriate number of significant figures.
Step 3 Determine the region of rejection.
(a) State the given α (the probability of a Type I error).
(b) Calculate the degrees of freedom.
(c) Give the region of rejection both in symbols and in a graph.
Step 4 Draw a conclusion based on the calculated test statistic.
(a) If the test statistic is in the region of rejection (RR), reject the null hypothesis and state the conclusion in one or more complete sentences.
(b) If the test statistic is not in RR, accept the null hypothesis and state the conclusion in one or more complete sentences.
Step 5 Bracket the p value.
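For classes that also include a programming component, the five steps can be sketched as a short Python function for the chi-square goodness-of-fit case developed below. The function, its parameter names, and the counts in the usage note are illustrative assumptions, not part of the procedure itself.

```python
def chi2_goodness_of_fit(observed, expected_ratios, critical_value):
    """Sketch of the five-step chi-square goodness-of-fit test.

    Step 1 (Ho: the data fit expected_ratios; Ha: Ho is false) is implicit
    in the choice of expected_ratios; Step 5 (bracketing the p value) is
    done against a printed chi-square table, so it is not computed here.
    """
    total = sum(observed)
    # Step 2: expected counts under Ho, then the calculated test statistic.
    expected = [total * r for r in expected_ratios]
    chi2 = sum((obs - exp) ** 2 / exp for obs, exp in zip(observed, expected))
    # Step 3: degrees of freedom; the region of rejection is (critical_value, infinity).
    df = len(observed) - 1
    # Step 4: reject Ho only if the statistic falls in the region of rejection.
    reject = chi2 > critical_value
    return chi2, df, reject
```

For hypothetical counts of [210, 60, 65, 26] tested against the 9:3:3:1 ratios and the α = 0.05 critical value 7.815, the call returns df = 3 and reject = False.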
A chi-square goodness-of-fit test is quite commonly used to check the appropriateness of a proposed model that uses categorical data. One popular experiment involves checking to see if a cross involving corn plants results in the Mendelian dihybrid phenotypic ratio of 9 purple smooth to 3 purple wrinkled to 3 yellow smooth to 1 yellow wrinkled corn grains. The following example and data are from such an experiment from one of my botany lab groups.
Step 1 Ho: The data fit the model of 9 purple smooth to 3 purple wrinkled to 3 yellow smooth to 1 yellow wrinkled corn grains.
Ha: Ho is false.
Step 2 χ2calc = 3.218 (from the table of observed and expected counts for the 361 grains; see the comments on Step 2 below).
Step 3
(a) α = 0.05
(b) df = 4 − 1 = 3
(c) RR = (7.815, ∞)
Step 4 χ2calc = 3.218 does not lie in RR; therefore, I accept Ho (the null hypothesis) and conclude that the data fit the model proposed in Ho above.
Step 5 0.30 ≤ p ≤ 0.40
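The tabled numbers behind Steps 3–5 can be checked numerically. For df = 3 the chi-square survival function has a closed form built from the error function; this check is an illustration, not part of the lab write-up:

```python
import math

def chi2_sf_df3(x):
    """P(X >= x) for a chi-square variable X with df = 3 (a closed form
    available for odd degrees of freedom, using the error function)."""
    s = math.sqrt(x / 2.0)
    cdf = math.erf(s) - (2.0 / math.sqrt(math.pi)) * s * math.exp(-x / 2.0)
    return 1.0 - cdf

# Step 3(c): the critical value 7.815 cuts off an upper tail of alpha = 0.05.
assert abs(chi2_sf_df3(7.815) - 0.05) < 0.001

# Step 4: 3.218 < 7.815, so the statistic is not in RR = (7.815, infinity).
chi2_calc = 3.218
assert not chi2_calc > 7.815

# Step 5: the exact p value falls inside the bracket 0.30 <= p <= 0.40.
assert 0.30 <= chi2_sf_df3(chi2_calc) <= 0.40
```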
Comments

Step 1 For this example, no symbols were used in Step 1, although one could use, for example, p1 = 9/16, p2 = 3/16, p3 = 3/16, and p4 = 1/16. In a test for the equality of two means, the null hypothesis might be as follows: Ho: μ1 = μ2; and Ha might be μ1 ≠ μ2, μ1 < μ2, or μ1 > μ2, where μ refers to the population mean. Regarding Ha, for this example, one could state that the data do not fit the proposed model or simply that Ho is false.
Step 2 The "expected" counts are calculated under the assumption that Ho is true. Thus, the expected count for purple smooth corn grains was calculated as 9/16 × 361 (the total of all corn grains). The chi-square statistic is simply the sum of the last column in the table given in Step 2, that is, Σ(Obs − Exp)2/Exp. For this example, it is 3.218. The chi-square statistic was calculated to the same number of significant figures as the values in the chi-square table. It is assumed that the instructor has informed students of the conditions for validity of this test, namely that (1) the data represent a random sample from a large population, (2) the data are whole (counting) numbers and not percentages or standardized scores, and (3) the expected count for each class is ≥ 5 (Samuels & Witmer, 2003: chapter 10; Mendenhall et al., 1990: pp. 665–666).
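The expected-count arithmetic and validity condition (3) can be sketched briefly. The 9:3:3:1 ratios and the 361-grain total are from the example above, though the lab group's full table of observed counts is not reproduced here:

```python
ratios = [9, 3, 3, 1]          # Mendelian dihybrid phenotypic ratios
total = 361                    # total corn grains counted in the example

# Expected counts under Ho: each ratio's share of the 361 grains.
expected = [r / 16 * total for r in ratios]
assert abs(expected[0] - 203.0625) < 1e-9  # purple smooth: 9/16 * 361

# Validity condition (3): every expected count is at least 5.
assert all(e >= 5 for e in expected)
```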
Step 3 The probability of a Type I error, α, must be given as part of the problem. A Type I error is made when a true null hypothesis (Ho) is rejected. The degrees of freedom (df) are calculated as k − 1, where k is the number of data classes. The chi-square statistic (χ2) has a domain of zero to infinity. The region of rejection (RR) is obtained from a statistical table of chi-square values.
Step 4 This is the important "Decision Rule" of many statistics books. By plotting the χ2calc value of 3.218 on the graph in Step 3, one can see that 3.218 does not lie in the region of rejection (RR) but, rather, lies in the region of acceptance; this means that the null hypothesis is accepted. Since an absolute truth is not known, in the sense that the conclusion could be wrong, most statisticians prefer stating that there is insufficient evidence to reject the null hypothesis. Failing to reject Ho, under the constraints of committing a Type I or Type II error, is a better decision than simply accepting it, even though the two choices appear to give a similar conclusion. At this point, depending on time and the level of the class, the instructor may wish to discuss Type II errors. A Type II error is made if a false null hypothesis is accepted (not rejected). The probability of a Type II error (β) can be calculated after the fact (Glover & Mitchell, 2006: section 5.3; Schork & Remington, 2000: pp. 174–181), looked up in tables for some tests (Portney & Watkins, 2009: p. 853), or controlled for by calculating the sample size needed for a given β value (Mendenhall et al., 1990: pp. 443–446). The instructor may also wish to explain why, in most cases, a Type I error is more insidious than a Type II error and that most problems thus give the value for α without ever mentioning β.
Step 5 Most statistics books offer excellent explanations of the concept of "p value." One of the best and simplest explanations I have found is: "The term p-value is used to describe the probability that we would observe a value of the test statistic as extreme or more extreme than that actually observed, if the null hypothesis were true" (Hand, 2008: p. 88). In some statistics books, 0.20 is the largest value for p found in the chi-square table. In that case, Step 5 for this example would be written as p ≥ 0.20.
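Bracketing the p value amounts to finding the two tabled tail probabilities whose critical values straddle χ2calc. A sketch using standard chi-square table entries for df = 3 (the table row and the helper function are illustrative):

```python
# Upper-tail probability -> critical value, df = 3 (standard table entries).
TABLE_DF3 = [(0.90, 0.584), (0.50, 2.366), (0.40, 2.946), (0.30, 3.665),
             (0.20, 4.642), (0.10, 6.251), (0.05, 7.815), (0.01, 11.345)]

def bracket_p(chi2_calc, table):
    """Return (lower, upper) bounds on the p value from tabled entries.

    The tail probability shrinks as the critical value grows, so tabled
    values at or below chi2_calc give upper bounds on p, and values at or
    above it give lower bounds; the tightest bound of each kind is returned.
    """
    upper = min((p for p, cv in table if cv <= chi2_calc), default=1.0)
    lower = max((p for p, cv in table if cv >= chi2_calc), default=0.0)
    return lower, upper
```

For χ2calc = 3.218, bracket_p(3.218, TABLE_DF3) returns (0.30, 0.40), the bracket written in Step 5 of the example.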
The five-step procedure for general hypothesis testing given here allows students to follow a handy template for statistical inference tests. This procedure formalizes the approach to problem solving and forces the math and logic involved in such tests to form an organic whole. The five steps stand as a unified entity. The problem is stated, a test statistic is calculated, a conclusion is reached based on a given value for α, and a bracketed p value is given as the last step (see Step 5 in the Comments section above). The problem and its solution thus stand as a single unit of thought and effort.