Today I would like to introduce a new section to this blog: ‘Incurably sceptical’. In this section I rifle through the recent cognitive psychology literature and pick out a paper which looks interesting to me based on the abstract alone. I then proceed to examine the authors’ aims, methodology, analysis and interpretation. Hopefully along the way we will not only learn a little about the topic of the paper, but, in appraising it with a critical eye, perhaps also derive some lessons about the scientific method. Maybe we will even have some fun … Importantly, these are not ‘bad’ papers. Indeed, unless papers I find interesting are more likely to be bad, they should be representative of the standard of papers being published in the main cognitive psychology journals at the moment.
This time we will be looking at a paper titled ‘I control therefore I do: Judgments of agency influence action selection’1. The paper aimed to investigate whether a person’s feeling of agency over an effect made them more likely to engage in the behaviour which produced that effect – in other words, the paper sought to determine if a feeling of agency is in itself rewarding.
There are purportedly several facets to a person’s ‘sense of agency’. One such facet we will focus on is the mental belief that one is the intentional source of an outcome. For example, if you decide to put a dirty mug in the dishwasher, and you then do so, you might hold the belief that ‘I intentionally moved that mug’. In the words of Haggard & Tsakiris: “As we perform actions in our daily lives, we have a coherent experience of a seemingly simple fluent flow from our thoughts, to our body movements, to the effects produced in the world. I want to have something to eat, I go to the kitchen, I eat a piece of bread. We have a single experience of agency – of control over these events.”2
This experience of agency, not only over simple movements of a hand but also over more complex outcomes in the world, is a growing area of study in a wide range of disciplines. It has been noted as an important concept in moral responsibility in law2, hypothesized as a core component of one’s experience of consciousness3, and a lack of agency has been implicated as one potential factor leading to auditory hallucinations in schizophrenia4. Work has also shown that individuals and corporations considered ‘harmful’ are actually judged to possess less agency5. Given that agency is very closely linked to blame6, it may also have ramifications for the apportionment of blame in the wake of social disasters such as the banking crisis (see Blame The Banks). We can also be tricked into illusions of agency, such as in the notorious ‘mirror-hand illusion’7.
Figure 1. Some beliefs about agency are perhaps more illusory than others …
To test this hypothesis, the researchers placed their participants in front of a computer screen and gave them four buttons to press. They were instructed to press one of the four buttons every time a red dot appeared on the screen. They were also instructed to “take care that the sequence of responses they generate will be as random as possible”, i.e. ‘try to press all the buttons equally often’. That was the entirety of their instructions (well, actually there was the occasional blue triangle, but we’ll get to that later). Now, if you are suspicious at this point that there must be something ‘more’ going on in this experiment, I don’t blame you – I personally find it hard to believe that the participants were convinced that this was the entirety of the experiment. This is a problem particularly if your participants figure out the real aim of the experiment, and worse still if they figure out what your hypothesis is: they might intentionally try to prove it (common enough to be known as ‘demand characteristics’) or to disprove it (not common enough to have its own name – I suppose it would take a real jerk to want to do this). Either behaviour completely undermines the vital experimental assumption of participant naivety.
Of course, the experiment wasn’t just studying how good people are at constructing random sequences (we’ve known for a long time that we suck at it8, if you’re interested). Participants actually found themselves unknowingly in one of three conditions, in each of which the four buttons could, with varying probability, cause an ‘effect’: sometimes, when they pressed a button, the little red dot would turn into a little white dot before promptly disappearing. In the first, ‘High Probability’ condition, all four buttons had a 90% chance of triggering this white-dot effect. In the ‘No Effect’ condition, the effect could never happen – these poor chaps really were just pressing random buttons for no reason. Finally, in the ‘Key Specific’ condition, the four buttons varied in their likelihood of producing the white-dot effect (90%, 60%, 30% and 0%). The idea behind this method was to instil this ‘sense of agency’ in the participants to varying degrees – to make them feel, to varying extents, that they were ‘causing’ this white dot to appear. The researchers assumed (reasonably, I suppose) that if the participants found this sense of agency rewarding or pleasurable in some way, they would press the buttons that produced the effect more frequently. Perhaps causing a white dot to appear doesn’t sound particularly rewarding to you, but I suspect that was the point. Although they don’t state this explicitly, the researchers may have wanted to eliminate a potential confound (an alternative explanation of an effect): that participants were pressing the buttons with a higher probability of producing the effect not for the sense of agency it provided, but for the sheer enjoyment of the stimulus itself. If the buttons produced biscuits instead of white dots, for example, no one would hesitate to suggest that the desire for biscuits was driving the button pressing, rather than the enjoyment of some abstract ‘sense of agency’.
However, I am not sure this issue is entirely dealt with. While a white dot is about as unstimulating an effect as I can imagine, we have to consider just how dull the existence of these poor button-mashing people was during the course of this experiment – this is perhaps one of the most boring experiments I have ever come across. The presence of a white dot instead of a red dot may well have seemed like nirvana itself. If we do find the hypothesized effect, perhaps the participants really just want to bring forth the holy white dot merely to revel in the brief glory of its existence, and don’t care one whit whether they are the cause or not – perhaps the variety amid the monotony of pressing four buttons for no reason was reward enough.
Figure 2. How I imagine this experiment.
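To make the three conditions concrete, here is a toy simulation of the ‘Key Specific’ setup – my own sketch, not the authors’ code, and the trial count and the ‘agency-seeking’ key weights are invented purely for illustration:

```python
import random

# Effect probabilities for the four keys in the 'Key Specific' condition,
# as described in the paper: 90%, 60%, 30% and 0%.
EFFECT_PROBS = [0.9, 0.6, 0.3, 0.0]

def run_trials(n_trials, key_weights, rng):
    """Press keys in proportion to key_weights; count presses and
    white-dot effects per key."""
    presses = [0, 0, 0, 0]
    effects = [0, 0, 0, 0]
    for _ in range(n_trials):
        key = rng.choices(range(4), weights=key_weights)[0]
        presses[key] += 1
        if rng.random() < EFFECT_PROBS[key]:
            effects[key] += 1
    return presses, effects

rng = random.Random(1)
# A perfectly obedient participant presses all four keys equally often...
uniform_presses, uniform_effects = run_trials(10_000, [1, 1, 1, 1], rng)
# ...while a hypothetical 'agency-seeking' participant leans on the 90% key.
biased_presses, biased_effects = run_trials(10_000, [6, 2, 1, 1], rng)

print(uniform_presses, sum(uniform_effects))
print(biased_presses, sum(biased_effects))
```

Under uniform pressing the effect turns up on roughly 45% of trials (the average of the four probabilities); with the invented biased weights above it climbs to around 69% – a shift in press frequencies of exactly the kind the researchers were looking for.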
Anyway, on to the results. We are firstly told in the results section that “To increase statistical power and the accuracy of parameter estimation, the following statistical analyses were conducted on the combined data from a small preliminary experiment (N = 29) which included only the Key Specific condition”. Personally, this rings alarm bells. They are not the alarm bells of outright fraud, but the slightly quieter, more insidious bells of ‘Researcher Degrees of Freedom’. Researcher degrees of freedom are the branching set of choices that experimenters are able to make throughout the entire process of designing, undertaking and analysing an experiment (e.g. figure 3, below), choices which may alter the likelihood of getting a ‘significant’ result (ideally an indication that a result is not due to random fluctuations) at the end. These include which of a number of designs to use, when to stop collecting participants, which statistical analysis to use, which outcome measures to focus on, and so on. With each of these decisions, the researcher will often know that one choice is more likely to lead to a significant result, and it takes a high level of commitment to scientific integrity to make a fully objective decision. In this paper, the researchers were at some point faced with a decision as to whether or not to include these extra 29 participants from the preliminary study, and it is a very common decision experimenters face – I’ve faced it myself. Now, if the experimenters had absolutely no idea what the results of either the preliminary sample or the main sample were, then there would be no issue in combining them – the authors would be entirely correct that it would simply boost statistical ‘power’ (more participants = better, in general). The problem comes when the experimenter knows whether the results from that preliminary study are in the direction of their hypothesis or not.
We now know enough about human psychology and the subtle unconscious biases that influence our choices to know that, on average, if one choice (e.g. including the data) makes a desired result more likely and the other makes it less likely, then, all else being equal, a person is more likely to choose the former. I can tell you from personal experience that a researcher in this position will miraculously discover a large number of perfectly sciencey-sounding reasons favouring inclusion of the data (such as ‘bigger sample sizes are better’, for example) – and if you think that scientists do not ‘want’ a particular result, that they are entirely objective, paper-producing automatons who don’t care about furthering their careers or producing highly cited papers to win grant funding, well, then you probably haven’t met many.
Figure 3. Researcher Degrees of Freedom: The worrying reality behind many significant findings in psychology?
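The worry is easy to demonstrate. In the toy simulation below (mine, not anything from the paper) there is no true effect at all, yet a ‘flexible’ researcher who adds a 29-person preliminary sample only when the main sample alone fails to reach significance gets two bites at the cherry, and a false-positive rate above the nominal 5%:

```python
import math
import random

def one_sample_p(xs):
    """Two-sided p-value for 'mean differs from 0', using a normal
    approximation (fine for this illustration; a real analysis would
    use a t-test)."""
    n = len(xs)
    mean = sum(xs) / n
    var = sum((x - mean) ** 2 for x in xs) / (n - 1)
    z = mean / math.sqrt(var / n)
    return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

rng = random.Random(42)
N_SIMS, ALPHA = 5_000, 0.05
honest_hits = flexible_hits = 0
for _ in range(N_SIMS):
    main = [rng.gauss(0, 1) for _ in range(30)]    # null: no true effect
    prelim = [rng.gauss(0, 1) for _ in range(29)]  # the 'preliminary' sample
    if one_sample_p(main) < ALPHA:
        honest_hits += 1
        flexible_hits += 1   # already significant: report the main sample
    elif one_sample_p(main + prelim) < ALPHA:
        flexible_hits += 1   # not significant? quietly fold in the extra data

print(honest_hits / N_SIMS)    # close to the nominal 5%
print(flexible_hits / N_SIMS)  # noticeably higher
```

None of this says the authors did anything of the sort – only that the conditional version of the decision inflates false positives, which is why the decision rule should be fixed before anyone looks at the data.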
Anyway, let us assume that no biases influenced the decision of whether to include that data. The researchers found that within the ‘Key Specific’ condition, where each key had a different chance of producing the white-dot effect, the buttons which produced it with greater frequency were pressed more often (actually, this effect was only significant for the 90% button, but hey, let’s move on). The researchers also found that participants in this ‘Key Specific’ condition were less ‘random’ in their key presses than in either of the other two conditions, where all the buttons had the same probability. This suggests that overall, the participants in this condition were drawn away from their requested task of pressing the buttons randomly by the desire to produce the white-dot effect. The researchers also found that reaction times in the ‘High Probability’ condition were shorter than in the ‘No Effect’ condition. The conclusion again: people must be pressing faster because they are enjoying the sense of agency so much. The researchers also ruled out general engagement with the task as a ‘confound’ (an alternative explanation of the effect). Remember that blue-triangle titbit I teased you with earlier? Well, the researchers planted several trials throughout the experiment where a blue triangle showed up instead of a red dot, and participants were told at the start that when this happened they should press the space bar instead of one of the four normal buttons. Performance on these trials was supposed to be a measure of ‘engagement’ with the task. Apparently, by this measure, participants in the ‘High Probability’ condition were no more engaged than those in the ‘No Effect’ condition, but they still reacted faster.
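Incidentally, quantifying how ‘random’ a sequence of key presses is needn’t be complicated. One simple index (not necessarily the measure the authors used) is the first-order Shannon entropy of the button choices, which is maximal when all four buttons are used equally often:

```python
import math
from collections import Counter

def press_entropy(sequence):
    """First-order Shannon entropy of a key-press sequence, in bits.
    Four equiprobable keys give the maximum of 2 bits; any bias towards
    particular keys lowers the value."""
    counts = Counter(sequence)
    n = len(sequence)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

balanced = list("123412341234")  # all four keys used equally often
skewed = list("111121113111")    # mostly the 'rewarding' key

print(press_entropy(balanced))  # 2.0
print(press_entropy(skewed))    # well below 2.0
```

This only captures how evenly the buttons are used, not sequential patterns (like avoiding repeats), but it is enough to show the kind of drop in randomness the ‘Key Specific’ condition produced.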
So what can we draw from this experiment? Overall it seems that when people are able to cause an effect, even when explicitly asked to do something which runs counter to this (press the buttons randomly), they can’t quite seem to help themselves but press the buttons that cause that effect, and they are even quite good at becoming attuned to which buttons cause the effect more frequently. So are we all power-hungry maniacs who can’t even follow simple instructions when there is the temptation to cause some effect in the world (even one as mild as producing a white dot)? Well, I think we should be careful before jumping to this conclusion. Firstly, there was little real incentive for people to pursue the randomness goal given to them. Perhaps if the researchers had paid them according to how ‘random’ they were, this power-hungry idea might be more compelling. Further, as I said before, there is good reason to think the participants might have second-guessed the researchers’ intentions with the white dots and tried to achieve more white dots, suspecting this was the researchers’ true aim: the deception used in the task may not have been dastardly enough. Further, the effect was not very large – as mentioned, in the Key Specific condition, only the 90% button was actually pressed significantly more often; the 60% and 30% ones were no different to the 0% one (although this makes some sense: if you want the holy white dot and you’ve figured out the frequencies, I guess you would just press the 90% button all the time, and ignore the 60% and 30% buttons just as much as the 0% button). We also have the slight worry about the researcher degree of freedom involved in choosing to add the preliminary study’s data into the present study – did the present data show a significant effect all on its own? If not, did the researchers know this before they added in the extra 29 people?
Finally, even if all of these issues weren’t present, we would still have the interpretation issue that the participants may have been pressing the button for the effect itself, for pure love of the white dot, rather than for the love of being the cause of the white dot. This is perhaps made less likely by the already discussed fact that the production of a white dot in itself shouldn’t be that rewarding, but it really can’t be ruled out on the present data. It is admittedly quite hard to think of an experiment that could distinguish the two – the love of the white dot itself, and the love of causing the white dot – but if you can think of a good way, you might have a paper on your hands.
1. Karsh, N., & Eitam, B. (2015). I control therefore I do: Judgments of agency influence action selection. Cognition, 138, 122–131. doi:10.1016/j.cognition.2015.02.002
2. Haggard, P., & Tsakiris, M. (2009). The experience of agency: Feelings, judgments, and responsibility. Current Directions in Psychological Science, 18(4), 242–246.
3. Schwabe, L., & Blanke, O. (2007). Cognitive neuroscience of ownership and agency. Consciousness and Cognition, 16(3), 661–666. doi:10.1016/j.concog.2007.07.007
4. Pareés, I., Brown, H., Nuruki, A., Adams, R. A., Davare, M., Bhatia, K. P., … Edwards, M. J. (2014). Loss of sensory attenuation in patients with functional (psychogenic) movement disorders. Brain: A Journal of Neurology, 137(Pt 11), 2916–2921. doi:10.1093/brain/awu237
5. Khamitov, M., Rotman, J. D., & Piazza, J. (2016). Perceiving the agency of harmful agents: A test of dehumanization versus moral typecasting accounts. Cognition, 146, 33–47. doi:10.1016/j.cognition.2015.09.009
6. Shaver, K. (2012). The attribution of blame: Causality, responsibility, and blameworthiness. Springer Science & Business Media.
7. Tajima, D., Mizuno, T., Kume, Y., & Yoshida, T. (2015). The mirror illusion: does proprioceptive drift go hand in hand with sense of agency? Frontiers in Psychology, 6, 200. doi:10.3389/fpsyg.2015.00200
8. Wagenaar, W. A. (1972). Generation of random sequences by human subjects: A critical survey of literature. Psychological Bulletin, 77(1), 65.