A reviewer's dream has come true. The statcheck package for R automagically checks whether statistical results are reported accurately, i.e. whether the test statistic, degrees of freedom, and p-value add up. It relies on pdftotext to first convert the PDFs to text files, then extracts the reported statistics and re-checks them for consistency. Unfortunately the conversion does not always work, so some PDFs cannot be parsed automatically. Another limitation is that statcheck works best on APA-style reporting of statistics and may miss other conventions.
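Under the hood, the consistency check is straightforward: recompute the p-value from the reported test statistic and degrees of freedom, then compare it with the reported p-value. A minimal base-R sketch of that logic for a t-test (an illustration, not the package's actual code; `check_t` is a made-up helper):

```r
# Recompute the two-sided p-value implied by a reported t-statistic
# and degrees of freedom, then compare it with the reported p-value.
check_t <- function(t, df, reported_p, alpha = 0.05) {
  computed_p <- 2 * pt(abs(t), df, lower.tail = FALSE)
  list(
    computed_p = computed_p,
    # A "gross" decision error: reported and recomputed p-values
    # fall on opposite sides of the significance threshold.
    decision_error = (reported_p < alpha) != (computed_p < alpha)
  )
}

# t(28) = 1.50 implies p of roughly .14, so a reported p = .03
# would be flagged as a gross decision error
check_t(t = 1.50, df = 28, reported_p = 0.03)
```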

If you want to read more about the package and the distribution of reporting errors across 30,000+ psychology papers, see Nuijten et al. (2015).

The following ten lines of R code go through some of the PDFs on my hard drive and count the number of gross decision errors, i.e. cases where the reported test statistic and degrees of freedom indicate a non-significant effect while the reported p-value is smaller than .05, or vice versa.
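The script looked roughly like this. `checkPDFdir` is statcheck's directory-scanning function; the folder paths are placeholders, and the `DecisionError` column name reflects the statcheck version I used, so treat this as a sketch rather than copy-paste code:

```r
library(statcheck)

# Convert and parse every pdf in each folder (pdftotext under the hood);
# pdfs that fail to convert are simply skipped
vis  <- checkPDFdir("~/papers/vision")
clin <- checkPDFdir("~/papers/clinical")

# Gross decision errors: reported and recomputed p-values disagree
# about significance at the .05 level
vis_err  <- sum(vis$DecisionError,  na.rm = TRUE)
clin_err <- sum(clin$DecisionError, na.rm = TRUE)

# Compare the error rates between the two fields
prop.test(c(vis_err, clin_err), c(nrow(vis), nrow(clin)))
```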

To make it more interesting, I compare the prevalence of these gross errors in papers from vision research (31 papers with 342 stats) with papers from clinical research (100 papers with 972 stats). And the winner is: clinical papers (1.7%) have fewer gross reporting errors than vision papers (3.8%), χ²(1) = 3.90, p < .05. Who would have guessed!




> prop.test(c(vis_err, clin_err), c(vis_all, clin_all))

	2-sample test for equality of proportions with continuity correction

data:  c(vis_err, clin_err) out of c(vis_all, clin_all)
X-squared = 3.9002, df = 1, p-value = 0.04828
alternative hypothesis: two.sided
95 percent confidence interval:
 -0.003332322  0.044376290
sample estimates:
    prop 1     prop 2 
0.03801170 0.01748971 
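If you want to reproduce the comparison without my PDFs, the raw counts can be recovered from the sample estimates above (13/342 ≈ 3.8% and 17/972 ≈ 1.7%):

```r
# Counts reconstructed from the reported proportions and sample sizes
vis_err  <- 13; vis_all  <- 342   # gross errors in vision papers
clin_err <- 17; clin_all <- 972   # gross errors in clinical papers

prop.test(c(vis_err, clin_err), c(vis_all, clin_all))
```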


Also do not miss the graph!





Published on June 04, 2015