> One Dutch study tested 38 participants' ability to solve a Stroop test – which uses conflicting stimuli such as the word "blue" in red letters to interfere with how quickly people respond to a prompt – whilst stepping backwards, forwards or sideways. It found that participants stepping backwards had the fastest reaction times, perhaps because their brains were already used to performing an incongruous task.
I suspect it would. There is a similar, well-documented effect called the congruency sequence effect [0], in which performance on an incongruent trial improves if the previous trial was also incongruent (and, interestingly, performance on a congruent trial degrades). The current understanding of this phenomenon is essentially that you take a bit more time and care on the current trial after recently experiencing a more demanding one, which would parallel the hypothesis here.
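For concreteness, here is a minimal sketch of how the congruency sequence effect is typically quantified: bin reaction times by the congruency of the current and the previous trial, then look at the interaction. The trial data below are invented purely to show the bookkeeping, not taken from any study.

```python
from statistics import mean

# Toy trial sequence: (is_congruent, reaction_time_ms).
# Numbers are made up for illustration only.
trials = [(True, 520), (False, 640), (False, 605), (True, 545),
          (False, 650), (True, 530), (True, 515), (False, 660)]

# Bucket every trial after the first by (previous, current) congruency.
buckets = {}
for (prev_congruent, _), (curr_congruent, rt) in zip(trials, trials[1:]):
    buckets.setdefault((prev_congruent, curr_congruent), []).append(rt)

for (prev, curr), rts in sorted(buckets.items()):
    label = (f"prev={'congruent' if prev else 'incongruent'}, "
             f"curr={'congruent' if curr else 'incongruent'}")
    print(f"{label}: mean RT {mean(rts):.0f} ms")

# The congruency sequence effect is the interaction: the congruency cost
# (incongruent minus congruent RT) shrinks when the previous trial was
# incongruent, consistent with extra care after a demanding trial.
```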
It’s possible. The red flags for me were (1) the small sample size and (2) the fact that they tried forward, sideways, and backwards walking, which makes it look more like a fishing expedition. If it had been pre-registered, that would mitigate the second issue, of course.
The study appears to be part of one of the authors' dissertations, which focuses on approach-avoidance behaviors and their relationship to cognitive control [0]. The inclusion of sidestepping is not "fishing"; it serves as a control condition that keeps participants at a similar level of motor engagement during the task while being neither approaching nor avoidant. The deeper hypothesis being tested comes from the embodied cognition literature [1], which posits that not only does our cognitive system influence the state of our bodies, but that the state of the body likewise influences our cognitive state. Thus, the study asks whether the body being in a state of approach or avoidance influences the level of cognitive control we bring to assessing other stimuli in our environment. Including sidestepping as a control would absolutely have been an a priori decision with legitimate scientific reasoning behind it.
For a dissertation and a psychology study, the small sample size is somewhat expected, and a power analysis would (I suspect) indicate that it is perfectly valid for the statistical test being used (36 subjects is pretty close to 80% power for a moderate effect size in simpler experimental designs like this, if I remember correctly). Studies like this are not intended to serve as a critical, seminal authority on the subject but to add evidence to the literature, which is then taken as a whole to understand the underlying phenomenon. Especially in a world where grants are reserved for only the flashiest and trendiest technologies and topics, and publish-or-perish rules the day, small sample sizes are a symptom of a lack of funding for these questions and of relentless demands on researchers' time and effort. I would bet these researchers were doing the best they could with the resources they had. I think Hacker News is rather uncharitable about psychology research in these respects.
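That power claim is easy to sanity-check with a back-of-the-envelope calculation. This is a sketch under my own assumptions (a one-sample/paired t-test and Cohen's d = 0.5 standing in for a "moderate" effect; the paper's actual design may differ), using statsmodels:

```python
from statsmodels.stats.power import TTestPower

analysis = TTestPower()  # power for a one-sample / paired t-test

# Power at n=36 with a moderate effect (d = 0.5), two-sided alpha = .05
power = analysis.power(effect_size=0.5, nobs=36, alpha=0.05)
print(f"power at n=36, d=0.5: {power:.2f}")  # ~0.83

# Sample size needed for exactly 80% power under the same assumptions
n = analysis.solve_power(effect_size=0.5, power=0.8, alpha=0.05)
print(f"n for 80% power: {n:.1f}")  # ~33
```

So under those assumptions the parenthetical holds up: the mid-thirties is roughly where 80% power lands for a moderate effect in a simple within-subject comparison.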
As a note, this study was published in 2009 and was thus likely conducted at least a year or two prior, at which point preregistration was not common in the cognitive field (and indeed it is still in its infancy). So it is not surprising that this study was not preregistered, nor, I would argue, should that be considered a red flag.
I wonder if this study would replicate.