I have so much stuff backlogged to blog about, especially that we are working on fully integrating with OSF and putting up preprints of the cool work we are doing! But this blog post is reserved for HOW EXCITED I AM to announce that MOTE is ready to import into R. Run this code in R:

install.packages("devtools") ##only needed if you do not have it yet
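The snippet above only installs devtools; the line that actually pulls MOTE from GitHub appears to have been dropped when the post was copied over. A typical devtools install would look like the following, but note that the doomlab/MOTE repository path is my assumption, so verify it against the GitHub link later in this post before running:

```r
library(devtools)  # provides install_github()

# Assumption: MOTE lives at doomlab/MOTE on GitHub -- check the GitHub
# link in this post to confirm the path before running.
install_github("doomlab/MOTE")

library(MOTE)  # load the package once installed
```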


Remember that curly quotes ("") sometimes do not copy correctly into R. Go nuts! Ask questions! Give feedback! One thing I did not talk about in the video is a limitation of V in chi-square. Due to the distribution of chi-square, V confidence intervals are only useful for smaller r × c combinations (like 2×2, 3×3). After you hit about four rows/columns, the distribution flattens out, and the calculated confidence interval is not around the V value. For example, a chi-square of 14 with sample size 100, with four rows and columns, gives you:

v.chi.sq(x2 = 14, n = 100, r = 4, c = 4, a = .05)
[1] 0.6480741
[1] 0.1732051
[1] 0.3241347
[1] 100
[1] 9
[1] 14
[1] 0.1223252
Warning message:
The size of the effect combined with the degrees of freedom is too small to determine a lower confidence limit for the 'alpha.lower' (or the (1/2)(1-'conf.level') symmetric) value specified (set to zero).

As you can see, this is a limitation of confidence intervals on chi-square. Also, I found more typos :|.
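For reference, a couple of pieces of that unlabeled output can be cross-checked in base R. The degrees of freedom and the p value reproduce directly; the V formula below is the standard textbook one, which may or may not be exactly what MOTE prints in this output:

```r
# Cross-check of the v.chi.sq() example: x2 = 14, n = 100, 4 x 4 table.
x2 <- 14; n <- 100; r <- 4; c <- 4

df <- (r - 1) * (c - 1)                   # 9, matching the [1] 9 line
p  <- pchisq(x2, df, lower.tail = FALSE)  # 0.1223252, matching the output

# Standard Cramer's V formula (not necessarily MOTE's exact computation)
v  <- sqrt(x2 / (n * (min(r, c) - 1)))
```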

Go check out the GitHub:


Go check out the video on how to install and the history of MOTE:

Hey all!

I wanted to write a post about the permutation test video I uploaded to YouTube. I have linked the video and put up the materials on the Advanced Statistics page.

I mainly wanted to cover the advantages of permutation tests:

  • You are not relying on some magical population. I hope I expressed this idea well in the video. The more I do research, the more I realize that populations are a thing of magic that just doesn’t exist – especially because, short of a lot of money, how are we supposed to randomly sample from that population anyway?
  • Those pesky assumptions! I am a big proponent of checking your assumptions – which is why all my videos have information about data screening in them. However, I am also guilty of being like “oh well shrug, there goes some power because what else am I supposed to do?”. Or even better … what do you do when all the reviewers only know ANOVA, and you do want to use something special? It’s a messy system we have going here.
  • They have a certain elegance to them … I test my data, and it turns out to be X statistic number. If I randomize that data, how many times do I get X or greater? How simple is that idea?

The hidden side of permutation tests is that they still rely on some form of probability, and potentially, the same faulty logic that we use now for null hypothesis significance testing. Additionally, I can see someone running the test to fish – if something is close, you could run permutation until it comes out your way.
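The idea in that last bullet point can be sketched in just a few lines of R. This is a toy two-group example with made-up numbers, not MOTE or aovp() output:

```r
# Toy permutation test for a two-group mean difference.
# All data values are hypothetical, purely for illustration.
set.seed(42)
a <- c(5.1, 4.8, 6.0, 5.5, 5.9)  # hypothetical group A scores
b <- c(4.2, 4.5, 3.9, 4.8, 4.1)  # hypothetical group B scores
observed <- mean(a) - mean(b)    # the "X statistic number"

pooled <- c(a, b)
perm_diffs <- replicate(5000, {
  shuffled <- sample(pooled)                   # randomize the group labels
  mean(shuffled[1:5]) - mean(shuffled[6:10])   # recompute the statistic
})

# How often does randomization give a difference at least as extreme
# as the one we actually observed? That proportion is the p value.
p_value <- mean(abs(perm_diffs) >= abs(observed))
```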

I do know that I said something a bit wrong at the end of the video around 30 minutes in … you can’t really calculate F for the permutation test, because there are lots of F values (that’s the point). I would suggest reporting the p values and potentially calculating F for the original test by dividing the MS for the variable by the MS for the residuals, while making it very clear the p value is for a permutation test. I also highly recommend adding eta or eta squared for effect size using the SS Variable and SS Residuals information provided in the table. If you compare the aovp() output to a regular ANOVA, you will find the SS and MS are approximately the same, but p changes based on the randomization results.



All blogs have to start somewhere, so I wanted to give a quick introduction. I am an Associate Professor of Quantitative Psychology at Missouri State University. I teach a lot of stuff, mostly related to statistics: baby stats (undergraduate basics), advanced stats (undergraduate/graduate mix of multivariate methods), graduate stats (graduate basics), and structural equation modeling. I run the Statistics and Research Design certificate program at MSU, along with working closely with our Experimental Psychology Track in the master’s program.

My research focuses on computational linguistics and applied statistics, which you can read a whole lot more about on my website. I would describe my language work as being interested in the types of psycholinguistic variables and the ways we use them, and how these variables relate to judgments and memory. Statistically speaking, I usually help others by exploring how they might analyze their data, but more recently I am interested in the way we do business in statistics (i.e., understanding the way our statistics work and function under different scenarios) and how to teach statistics.

Here on the blog, we will be posting all sorts of information, including links to new help videos, discussions about statistics in the real world, promotion of our research papers, and any random thoughts that might cross the brain. My goal for this information is to not only promote what we are doing as scientists, but also to teach anyone interested in how we did our work and to promote the Open Science Framework.

I also have purple hair, much to the amusement of my students and small children.