Since last year I have had a monthly column, "De Gereedschapskist" [The Toolbox], in the Dutch magazine de Psycholoog. As the title suggests, my aim is to bring psychological methods, and my reflections on what such methods can and cannot do, to a wide audience of practicing psychologists, ranging from clinical to organizational psychologists. It is my firm belief that psychologists working in practice can also benefit from knowing how to evaluate the claims of papers in terms of their methodology: was the correct method chosen given the research question, and to what extent do the results provide evidence for (or against) a hypothesis? In clinical psychology, for example, therapists may benefit from some additional methodological background in order to judge whether a new intervention truly outperforms existing ones.
Besides disseminating methodological knowledge, I have written pieces about slow science (and how much I love it!), about methodological terrorism, and about open science, combined with guidelines, or checklists, meant to increase the probability of finding real effects.
These columns are written in Dutch, but I'm planning to translate them into English in the coming months. I'll keep you posted! The topics so far:
- Simpson’s paradox
- Within- versus between-subjects effects
- Bureaucracy in the scientific enterprise (inspired by this insightful paper by Jelte Wicherts et al.)
- Slow science
- Methodological terrorism
- Explanation vs. prediction (inspired by this brilliant paper by Yarkoni and Westfall)
- Factor analysis
- Confirmatory versus exploratory hypothesis testing (based on my own paper about the multiple comparisons problem in the multiway ANOVA)
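As a small taste of the first topic on the list, Simpson's paradox can be demonstrated in a few lines of Python. The numbers below are a hypothetical toy example (modeled on the classic kidney-stone case, not taken from the column): treatment A beats treatment B within every subgroup, yet B looks better once the subgroups are pooled.

```python
# Toy illustration of Simpson's paradox (hypothetical counts).
# Per subgroup and treatment: (number of successes, number of patients).
data = {
    "small stones": {"A": (81, 87),   "B": (234, 270)},
    "large stones": {"A": (192, 263), "B": (55, 80)},
}

def rate(successes, total):
    return successes / total

# Within every subgroup, treatment A has the higher success rate...
for group, treatments in data.items():
    a = rate(*treatments["A"])
    b = rate(*treatments["B"])
    print(f"{group}: A={a:.0%}, B={b:.0%}")
    assert a > b

# ...yet when the subgroups are pooled, the direction reverses,
# because B was given mostly to the easier (small-stone) cases.
totals = {
    t: tuple(sum(x) for x in zip(*(data[g][t] for g in data)))
    for t in ("A", "B")
}
overall_a = rate(*totals["A"])  # 273/350
overall_b = rate(*totals["B"])  # 289/350
print(f"overall: A={overall_a:.0%}, B={overall_b:.0%}")
assert overall_b > overall_a
```

The reversal happens because the treatments were not assigned evenly across subgroups, which is exactly why conditioning on the right variables matters when interpreting aggregate results.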