Type I and II Errors in Medical Research

A possible fix for the too-frequent publication of Type I error findings.

Quoting from Martha K. Smith, formerly of the mathematics faculty at the University of Texas, who now writes the blog Musings on Using and Misusing Statistics:

Type II Error
Not rejecting the null hypothesis when in fact the null hypothesis is false is called a Type II error. (The second example below provides a situation where the concept of Type II error is important.)

The following table summarizes Type I and Type II errors:
                                   Truth (for population studied)
Decision (based on sample)         Null Hypothesis True    Null Hypothesis False
Reject Null Hypothesis             Type I Error            Correct Decision
Fail to reject Null Hypothesis     Correct Decision        Type II Error

==========================================================
Thanks, Dr. Smith!
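
To make the table concrete, here is a minimal simulation sketch. It assumes Python with NumPy and SciPy, and the sample size and effect size are arbitrary choices for illustration, not values from any particular study. It estimates how often a two-sample t-test rejects the null at alpha = 0.05 when the null is true (the Type I error rate) and when a real effect exists (one minus the Type II error rate):

# Monte Carlo sketch of the table above: rejection rates of a two-sample
# t-test at alpha = 0.05 under the null and under a modest true effect.
# Sample size and effect size are arbitrary, for illustration only.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
alpha, n, trials = 0.05, 30, 10_000

def rejection_rate(true_effect):
    """Fraction of simulated experiments in which p < alpha."""
    rejections = 0
    for _ in range(trials):
        control = rng.normal(0.0, 1.0, n)
        treated = rng.normal(true_effect, 1.0, n)
        _, p = stats.ttest_ind(control, treated)
        if p < alpha:
            rejections += 1
    return rejections / trials

print("Null true  -> Type I error rate ~", rejection_rate(0.0))   # close to 0.05
print("Effect 0.5 -> power (1 - Type II) ~", rejection_rate(0.5)) # well below 1

With only 30 subjects per group, the second rate comes out well under 50%, which is exactly the Type II error problem: real effects that the study is too small to detect.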
How is this important in neuroscience?
Studies suggest that fewer than 10% of published scientific findings are replicated in later publications. Some of this reflects journals' reluctance to publish work that is merely replication, and some is likely due to the publication of interesting false positives.
Some have suggested that behavioral researchers may repeat an experiment, testing each run with an isolated statistical test, until a single trial series reaches statistical significance, a result likely to regress to a non-significant mean on any replication attempt. This practice minimizes Type II errors, but it tends to create Type I errors!
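
A hedged illustration of why that happens (again Python with NumPy and SciPy; the five-attempt cap and sample sizes are arbitrary assumptions, not anyone's documented protocol): even with no real effect, the chance that at least one of k independent experiments reaches p < 0.05 is 1 - (1 - alpha)^k.

# Sketch of "repeat until significant" under a true null: the nominal 5%
# false-positive rate inflates toward 1 - (1 - alpha)^k with k attempts.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
alpha, n, trials, max_attempts = 0.05, 30, 5_000, 5

false_positives = 0
for _ in range(trials):
    for attempt in range(max_attempts):
        # Both groups drawn from the same distribution: the null is true.
        control = rng.normal(0.0, 1.0, n)
        treated = rng.normal(0.0, 1.0, n)
        _, p = stats.ttest_ind(control, treated)
        if p < alpha:            # stop and "publish" the first significant run
            false_positives += 1
            break

print("Nominal alpha:", alpha)
print("Observed false-positive rate:", false_positives / trials)
print("Expected 1 - (1 - alpha)^k:", 1 - (1 - alpha) ** max_attempts)  # ~0.23

With five attempts the observed false-positive rate climbs to roughly 23%, more than four times the nominal alpha, which is how Type I errors end up in the literature.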
Here's a suggestion for systematizing the flagging of spurious findings.

