A possible fix for too-frequently-published Type I error findings.
Quoting from Martha K. Smith, formerly of the mathematics faculty at the University of Texas, who now writes the blog Musings on Using and Misusing Statistics:

Type I Error
Rejecting the null hypothesis when in fact the null hypothesis is true is called a Type I error.

Type II Error
Not rejecting the null hypothesis when in fact the alternate hypothesis is true is called a Type II error.
The following table summarizes Type I and Type II errors:
| Decision (based on sample) | Truth: Null Hypothesis True | Truth: Null Hypothesis False |
| --- | --- | --- |
| Reject Null Hypothesis | Type I Error | Correct Decision |
| Fail to Reject Null Hypothesis | Correct Decision | Type II Error |
==========================================================
Thanks, Dr. Smith!
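To make the table concrete, here's a minimal Python sketch (my own illustration, not from Dr. Smith's post; the sample size, effect size, and alpha = 0.05 threshold are assumptions chosen for the demo) that simulates a two-sample t-test under each truth condition and tallies how often each kind of error occurs:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
alpha, n, n_trials = 0.05, 30, 10_000

type1 = type2 = 0
for _ in range(n_trials):
    # Truth: null hypothesis true -- both groups come from the same distribution.
    a, b = rng.normal(0, 1, n), rng.normal(0, 1, n)
    if stats.ttest_ind(a, b).pvalue < alpha:
        type1 += 1  # rejected a true null: Type I error

    # Truth: null hypothesis false -- group b has a genuine small effect.
    a, b = rng.normal(0, 1, n), rng.normal(0.3, 1, n)
    if stats.ttest_ind(a, b).pvalue >= alpha:
        type2 += 1  # failed to reject a false null: Type II error

print(f"Type I error rate:  {type1 / n_trials:.3f}")  # hovers near alpha
print(f"Type II error rate: {type2 / n_trials:.3f}")  # depends on effect size and n
```

With these settings the Type I rate sits near the nominal 5%, while the Type II rate is large because the assumed effect is small relative to the sample size -- that is, the test is underpowered.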
How is this important in neuroscience?
Studies suggest that fewer than 10% of published scientific findings are replicated in later publications. Some of this reflects journals' reluctance to publish mere replication work, and some is likely due to the publication of interesting false positives.
Some have suggested that behavioral researchers may repeat an experiment, testing each run with an isolated statistical test, until a single trial series reaches statistical significance--a result likely to regress to a non-significant mean on any replication attempt. This practice minimizes Type II errors, but it tends to create Type I errors!
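The inflation is easy to demonstrate. The sketch below (again my own illustration, with assumed parameters, not anyone's actual protocol) reruns a null experiment up to ten times and stops at the first p < 0.05 -- the "repeat until significant" strategy just described:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
alpha, n, max_repeats, n_sims = 0.05, 30, 10, 10_000

false_positives = 0
for _ in range(n_sims):
    for _ in range(max_repeats):
        # The null is true in every repeat: there is no real effect to find.
        a, b = rng.normal(0, 1, n), rng.normal(0, 1, n)
        if stats.ttest_ind(a, b).pvalue < alpha:
            false_positives += 1  # stop and "publish" the significant repeat
            break

print(f"Chance of at least one 'significant' series: {false_positives / n_sims:.2f}")
# Approximately 1 - (1 - alpha)**max_repeats = 1 - 0.95**10, about 0.40.
```

Each individual test holds its Type I rate at the nominal 5%, but taking the best of ten tries pushes the chance of a spurious finding to roughly 40% -- exactly the kind of result that regresses to the mean when someone attempts a replication.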
Here's a suggestion for systematizing the flagging of spurious findings.