[Not significant--what now?].

Authors
  • Gerss, J
Type
Published Article
Journal
Zentralblatt für Gynäkologie
Publication Date
Dec 01, 2006
Volume
128
Issue
6
Pages
307–310
Identifiers
PMID: 17213967
Source
Medline
License
Unknown

Abstract

In a statistical significance test, a scientific question is expressed by formulating a null hypothesis and an opposing alternative. Construction of an empirical decision rule usually focuses on controlling the alpha error, i.e. the probability of erroneously rejecting the null hypothesis. In contrast to the alpha error, the beta error is not controlled and is generally of unknown size. Thus, in the case of a non-significant result, the validity of the null hypothesis may still be highly questionable. The researcher should try to avoid such an unwanted test outcome by choosing an appropriate study design. If it occurs nevertheless, it is advisable to evaluate the non-significant result further. This can be done by calculating confidence intervals for the tested effects. Furthermore, the p-value can be interpreted as a metric measure of evidence against the null hypothesis. By means of a posterior power analysis, the probability of a significant test result is estimated under the given circumstances. The applied test may thus turn out, under the assumption that the alternative is actually valid, to have had hardly any chance of rejecting the null hypothesis. In this case the non-significant result (which points towards the null hypothesis) is substantially relativized. A large power, on the other hand, points to a small probability of a beta error.
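The two follow-up evaluations the abstract recommends, a confidence interval for the tested effect and a posterior power analysis, can be sketched for the simple case of a one-sample z-test with known standard deviation. This is a minimal illustration, not the paper's own procedure; the assumed true effect plugged into the power calculation is a choice the analyst must justify.

```python
# Hedged sketch: confidence interval and posterior power for a
# two-sided one-sample z-test (known sd). Illustrative only; the
# paper itself does not prescribe this specific computation.
from math import sqrt, erf

Z_CRIT = 1.959963984540054  # two-sided 5% critical value of N(0, 1)

def norm_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def confidence_interval(mean, sd, n, z_crit=Z_CRIT):
    """95% CI for the population mean, given sample mean, sd, and size n."""
    se = sd / sqrt(n)
    return (mean - z_crit * se, mean + z_crit * se)

def posterior_power(effect, sd, n, z_crit=Z_CRIT):
    """Probability of a significant result if the true effect equals
    `effect` (an assumption the analyst supplies, e.g. the observed
    effect or a minimally relevant one)."""
    se = sd / sqrt(n)
    z = effect / se
    # P(reject H0) = P(|Z + z| > z_crit) under the assumed effect
    return norm_cdf(z - z_crit) + norm_cdf(-z - z_crit)
```

For example, with sd = 1 and n = 25, an assumed effect of 0.5 yields a power of roughly 0.71, whereas a zero effect reproduces the nominal alpha level of 0.05; a small power for a relevant effect size is exactly the situation in which a non-significant result says little in favour of the null hypothesis.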
