Even those who defend null hypothesis testing recognize many of the problems with it. But what should be done? Some suggestions now appear in the
*Publication Manual*. One is that each null hypothesis test should be accompanied by an
effect size measure such as Cohen's *d* or Pearson's *r*. By doing so, the researcher provides an estimate of how strong the relationship in the population is—not just whether
there is one or not. (Remember that the *p* value cannot substitute as a measure of relationship strength because it also depends on the sample size. Even a very weak result can be
statistically significant if the sample is large enough.)
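This dependence on sample size can be made concrete with a short sketch. For two equal-sized groups, the *t* statistic is related to Cohen's *d* by *t* = *d* × √(*n*/2), where *n* is the per-group sample size. The code below (illustrative numbers, not from the text) holds a very weak effect size fixed and shows the *t* statistic crossing the conventional significance threshold as the sample grows:

```python
import math

# For two equal-sized groups, t = d * sqrt(n / 2), where n is the
# per-group sample size and d is Cohen's d.
def t_from_d(d, n_per_group):
    """t statistic implied by effect size d with n participants per group."""
    return d * math.sqrt(n_per_group / 2)

d = 0.10  # a very weak effect by Cohen's guidelines
for n in (20, 200, 2000):
    t = t_from_d(d, n)
    # 1.96 approximates the two-tailed .05 critical value for large samples
    print(f"n per group = {n:5d}  t = {t:.2f}  significant: {t > 1.96}")
```

The effect size never changes, yet with 2,000 participants per group the same weak effect is statistically significant—which is exactly why *p* cannot serve as a measure of relationship strength.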

Another suggestion is to use **confidence intervals** rather than null hypothesis tests. A confidence interval around a statistic is a range of values that is computed in such a way that some
percentage of the time (usually 95%) the population parameter will lie within that range. For example, a sample of 20 college students might have a mean calorie estimate for a chocolate chip
cookie of 200 with a 95% confidence interval of 160 to 240. In other words, there is a very good chance that the mean calorie estimate for the population of college students lies between 160
and 240. Advocates of confidence intervals argue that they are much easier to interpret than null hypothesis tests. Another advantage of confidence intervals is that they provide the
information necessary to do null hypothesis tests should anyone want to. In this example, the sample mean of 200 is significantly different at the .05 level from any hypothetical population
mean that lies outside the confidence interval. So the confidence interval of 160 to 240 tells us that the sample mean is statistically significantly different from a hypothetical population
mean of 250.
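A minimal sketch of this computation, assuming summary statistics chosen to reproduce the cookie example (a standard deviation near 85.5 is an assumption; it is the value that yields the 160-240 interval for *n* = 20):

```python
import math

def mean_ci(mean, sd, n, crit):
    """Confidence interval for a population mean: mean +/- crit * sd / sqrt(n).
    crit = 2.093 is the two-tailed .05 critical t for df = 19 (i.e., n = 20)."""
    margin = crit * sd / math.sqrt(n)
    return mean - margin, mean + margin

# Hypothetical values matching the cookie example in the text.
low, high = mean_ci(mean=200, sd=85.5, n=20, crit=2.093)
print(f"95% CI: {low:.0f} to {high:.0f}")  # prints "95% CI: 160 to 240"

# Duality with null hypothesis testing: any hypothetical population mean
# outside the interval (such as 250) differs significantly at the .05 level.
print(250 < low or 250 > high)  # prints "True"
```

The last line illustrates the duality described above: reading significance directly off the interval, with no separate test required.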

Finally, there are more radical solutions to the problems of null hypothesis testing that involve using very different approaches to inferential statistics. **Bayesian statistics**, for example, is an
approach in which the researcher specifies the probability that the null hypothesis and any important alternative hypotheses are true before conducting the study, conducts the study, and then
updates the probabilities based on the data. It is too early to say whether this approach will become common in psychological research. For now, null hypothesis testing—supported by effect size
measures and confidence intervals—remains the dominant approach.
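The updating step described above is just Bayes' rule. A toy sketch with illustrative numbers (the priors and likelihoods here are assumptions, not from the text):

```python
# Prior probabilities, specified before the study is conducted.
prior_h0, prior_h1 = 0.5, 0.5

# Assumed likelihoods: P(observed data | hypothesis) under each hypothesis.
lik_h0, lik_h1 = 0.02, 0.10

# Bayes' rule: posterior = prior * likelihood / total probability of the data.
evidence = prior_h0 * lik_h0 + prior_h1 * lik_h1
post_h0 = prior_h0 * lik_h0 / evidence
post_h1 = prior_h1 * lik_h1 / evidence

print(f"P(H0 | data) = {post_h0:.3f}")  # prints "P(H0 | data) = 0.167"
print(f"P(H1 | data) = {post_h1:.3f}")  # prints "P(H1 | data) = 0.833"
```

Because the data were five times more likely under the alternative hypothesis, the probability of the null hypothesis drops from .50 to about .17—a direct statement about the hypotheses themselves, which a *p* value cannot provide.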

## KEY TAKEAWAYS

- The decision to reject or retain the null hypothesis is not guaranteed to be correct. A Type I error occurs when one rejects the null hypothesis when it is true. A Type II error occurs when one fails to reject the null hypothesis when it is false.
- The statistical power of a research design is the probability of rejecting the null hypothesis given the expected relationship strength in the population and the sample size. Researchers should make sure that their studies have adequate statistical power before conducting them.
- Null hypothesis testing has been criticized on the grounds that researchers misunderstand it, that it is illogical, and that it is uninformative. Others argue that it serves an important purpose—especially when used with effect size measures, confidence intervals, and other techniques. It remains the dominant approach to inferential statistics in psychology.

## EXERCISES

- Discussion: A researcher compares the effectiveness of two forms of psychotherapy for social phobia using an independent-samples *t* test.
  - Explain what it would mean for the researcher to commit a Type I error.
  - Explain what it would mean for the researcher to commit a Type II error.
- Discussion: Imagine that you conduct a *t* test and the *p* value is .02. How could you explain what this *p* value means to someone who is not already familiar with null hypothesis testing? Be sure to avoid the common misinterpretations of the *p* value.