Incurably Sceptical: Can Super Recognisers Detect Happy or Fearful Emotions by Sniffing Underarm Sweat?

Welcome to the second in my ‘Incurably Sceptical’ series (see the first post here). In this section I pick a paper from the cognitive psychology literature that appears interesting based on the abstract alone. We then pick apart the authors’ aims, methodology, analysis and interpretation, having perhaps just a little fun at their expense but hopefully also learning a few useful things about scientific method along the way. This time we will be looking at a paper entitled ‘Testing for Individual Differences in the Identification of Chemosignals for Fear and Happy: Phenotypic Super-Detectors, Detectors and Non-Detectors.’ [Link] Broadly, the aim of this paper was to examine the extent to which people can detect a person’s mood (fearful or happy) by smelling their under-arm sweat (stay tuned for more on the extraction protocol).

Background
Super Detector (/ recogniser) research is a popular trend at the moment, both academically and in the media. The idea is that there are some individuals in the population who, for whatever reason, have a ‘super’ ability for, e.g., detecting flavours in wine or coffee, recognising faces, or detecting minute changes in pitch or tone (in general, having an extremely heightened ability to detect similarities between two stimuli or patterns based on one or other of the senses). There have been many articles lately about super detectors being used by the police and private companies for all sorts of wonderful things (see: Are you a Super Recogniser? and ‘The Super-Recognisers of Scotland Yard’). Even the higher-quality reporting on the subject has raised my scepto-sense before – see for example this paper, where a group of people with an average 93% accuracy for facial recognition (vs 80% in the general population) apparently deserve the title ‘Super Recognisers’. ‘Slightly Better Recogniser’ might be more appropriate. This is inevitable, of course. When being ‘on trend’ increases your likelihood of getting published, the application of sensational category labels like ‘Super Detector’ to small group differences is to be expected.

So, I would certainly describe myself as being in a sceptical frame of mind when I first read the title of this paper. The abstract didn’t improve things with its mention of ‘implications for the further study of genetic differences’ despite there clearly being no actual genetic analyses in the study. Further, ‘dual processing’, another trendy term, was thrown in despite a lack of clear relevance. In short, this paper appeared to me, based on the abstract, to be tenuously ticking all the boxes publishers like to see, and when a paper’s doing that, I tend to worry that low-quality work is being masked underneath. However, the abstract also said that “mood odors had been collected from 14 male donors during a mood induction task” and that 41 females had been asked to identify the mood odor chemosignals … so obviously I read on.

 

Method
So, yes, on to this extraction method. Normally I would paraphrase a method, but I enjoyed the tone of the write-up of this one so much that below I reproduce (nearly) the whole section on extraction:

In this study the mood odors were collected from 14 healthy male undergraduate nonsmokers. For a 7-day period prior to the sample collection, the donors only used the provided odor-free deodorant and cleansing products. The donors were instructed to shower (using the soap provided) the morning of sample collection approximately 6 hours prior to sample collection. They were also given a list of prohibited “spicy” and other odorous foods and did not eat them during the 24 hours prior to the collection.

Axillary samples were collected during two video mood inductions, one day apart. The fear mood and happy mood induction videos were 12-minute standardized videos … The videos were shown twice to the subjects for a 24-minute induction. The videos have multiple facial displays for fear (or happy). There is no narrative theme. This reduces the likelihood that repetition of the 12 minute video would decrease the impact of the video.

Samples were collected onto cleaned Kerlix® brand sterile gauze. Prior to mood induction, donors were given 2 pairs of gauze strips (each strip 3cm x 8cm) in separate plastic enclosed bags labeled “right” and “left” arm. They placed one pair in each left/right axilla. At 12 minutes into the mood induction, the film was paused and donors removed one pair (1 left and 1 right) of axillary pads. Donors placed each pad into its labeled plastic zipper bags. All air was forced from the bag prior to sealing. The second pair of pads was collected in the same manner after 24 minutes. All samples were placed in a minus 80°C freezer within 2 hours of collection.

So, yes, an unusually invasive and controlling set of requirements for these healthy male undergraduates. They don’t report the incentive for them to take part in the study, or if they were paid more or less than the people who had to smell their sweat – tough call that one. In terms of the smelling protocol, “Participants (detectors) were tested individually in dedicated testing rooms approximately 8’x8′” (I’m not sure why they included the room size here, but perhaps if you know a lot about smelling, this is quite important for understanding the … diffusion dynamics or something). Then:

On each trial the experimenter placed five identical sample jars from one set of donors on a plastic tray on the table, shuffled them, and presented the tray of jars to the detector. S/he was instructed to sniff the jars as many times as necessary and in any order. The detector identified the odors by setting each jar on its label [e.g. fear, happy, control] on a place-mat.

I imagine this scene a bit like a gross version of the ball-and-cup trick (‘Keep your eyes on the fear-sweat jar’). The offer of unlimited sniffs was very generous, though. Anyway … despite its amusing elements, on reading the methodology I was struck by how well controlled it was, e.g. “To avoid position effects, half of the detectors had fear labels on the left side of the place mat and half of the detectors had them on the right side of the place mat.” There were lots of neat little controls like this built into the study to ensure the results weren’t biased, and overall I was impressed by the attention to detail.

Results
So, they extracted sweat during fear-inducing or happy-inducing videos, then got people to sniff fear, happy, and control (no video) sweat to see if they could correctly label them. Simple enough. Now, what did they find? “The first analysis (rtt) showed that the population was phenotypically heterogeneous, not homogeneous, in identification accuracy.” This sentence annoyed me, I must admit. The use of the word phenotypically implies an important distinction is being made between the participants’ genotype (their set of genes) and their phenotype (their observable characteristics). But there’s no genetic testing in this paper, so the distinction is pointless – the word can be deleted entirely without affecting the meaning of the sentence. And heterogeneous? All that means is that the individuals in the sample weren’t all equally good at smelling – the addition of these words seems to serve only to give the paper a ‘sciencey’-sounding feel. If you’re wondering why I’ve gone on about this for so long, well yes, it’s a pet peeve of mine. Flowery, jargon-rich scientific writing usually hides a lack of competence and knowledge rather than demonstrating it. It also serves to alienate lay people, and even scientists from other disciplines. It is exactly the kind of writing I expected from reading the abstract, with its unnecessary use of trendy terms. In truth it actually isn’t a bad paper underneath, despite my expectations, which makes me wish all the more that they had stuck to a more concise, less show-offy (that’s not jargon, just a made-up word, by the way) reporting style.

Percentage of Super Detectors, Detectors and Non-Detectors accurately identifying the jars on each of 15 trials.

I think what they were actually trying to convey with this sentence was that their participants’ smelling ability, rather than being a smooth spectrum from rubbish to good, was broken up into well-defined groups. Indeed, around 49% were deemed to be super-detectors, who had around a 75% accuracy rating by the final trial; 33% were just ‘detectors’ (around 40% accuracy on the final trial); and around 18% were ‘non-detectors’, with 0% accuracy. Now, as I briefly outlined earlier, this concept of super detectors rests on the idea that there is a proportion of the population who have an unusually heightened ability. Any definition you look up of ‘super’ is likely to include the words ‘particularly’, ‘especially’, ‘unusually’, etc. This makes it a peculiar term to apply to the largest group (half the sample!). This is the majority, not some niche elite … and here again we arrive at issues not with the underlying paper itself (the statistical analysis is, as far as I can tell, excellent) but with sensationalism and dressing-up in the write-up. These authors used the term ‘super-detectors’ despite the ludicrous fact that their ‘super’ group was half the sample. The only reason for this can be that it is an eye-catching term and increases their chances of getting published. Sigh. There are no 100% objective scientists. They are all just regular people who need to further their careers. 15-year-old me would be very depressed.
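(As an aside, the ‘distinct groups versus smooth spectrum’ question is one you can probe quite simply. The paper’s own rtt analysis isn’t reproduced here, but below is a minimal sketch – with made-up accuracy scores, not the study’s data – of one common approach: fit mixture models with one, two and three clusters to the per-participant accuracy scores and see which fits best by BIC.)

```python
# A minimal sketch (not the paper's actual rtt analysis) of how one might test
# whether identification accuracy falls into distinct groups rather than lying
# on a smooth spectrum: fit Gaussian mixtures with 1-3 components and compare
# BIC. All accuracy scores below are invented for illustration.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
# Hypothetical per-participant accuracy (proportion of trials correct)
accuracy = np.concatenate([
    rng.normal(0.75, 0.05, 20),   # a 'super-detector'-like cluster
    rng.normal(0.40, 0.05, 14),   # a 'detector'-like cluster
    rng.normal(0.02, 0.02, 7),    # a 'non-detector'-like cluster
]).clip(0, 1).reshape(-1, 1)

for k in (1, 2, 3):
    gm = GaussianMixture(n_components=k, random_state=0).fit(accuracy)
    print(f"{k} component(s): BIC = {gm.bic(accuracy):.1f}")
# A markedly lower BIC for three components would support 'well-defined groups'
# over a single continuous distribution.
```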


Is our feeling of ‘agency’ over an event inherently rewarding?

Today I would like to introduce a new section to this blog: ‘Incurably sceptical’. In this section I rifle through the recent cognitive psychology literature and pick out a paper which looks interesting to me based on the abstract alone. I then proceed to examine the authors’ aims, methodology, analysis and interpretation. Hopefully along the way we will not only learn a little about the topic of the paper, but, in appraising it with a critical eye, perhaps also derive some lessons about the scientific method. Maybe we will even have some fun … Importantly, these are not ‘bad’ papers. Indeed, unless papers I find interesting are more likely to be bad, they should be representative of the standard of papers being published in the main cognitive psychology journals at the moment.

 

Background

This time we will be looking at a paper titled ‘I control therefore I do: Judgments of agency influence action selection’ [1]. The paper aimed to investigate whether a person’s feeling of agency over an effect made them more likely to engage in the behaviour which produced that effect – in other words, the paper sought to determine whether a feeling of agency is in itself rewarding.

There are purportedly several facets to a person’s ‘sense of agency’. One such facet, which we will focus on, is the mental belief that one is the intentional source of an outcome. For example, if you decide to put a dirty mug in the dishwasher, and you then do so, you might hold the belief that ‘I intentionally moved that mug’. In the words of Haggard & Tsakiris: “As we perform actions in our daily lives, we have a coherent experience of a seemingly simple fluent flow from our thoughts, to our body movements, to the effects produced in the world. I want to have something to eat, I go to the kitchen, I eat a piece of bread. We have a single experience of agency – of control over these events.” [2]

This experience of agency, not only over simple movements of a hand but also over more complex outcomes in the world, is a growing area of study in a wide range of disciplines. It has been noted as an important concept in moral responsibility in law [2], hypothesized as a core component of one’s experience of consciousness [3], and a lack of agency has been implicated as one potential factor leading to auditory hallucinations in schizophrenia [4]. Work has also shown that individuals and corporations considered ‘harmful’ are actually judged to possess less agency [5]. Given that agency is very closely linked to blame [6], it may therefore also have ramifications for the apportionment of blame in the wake of social disasters such as the banking crisis (see Blame The Banks). We can also be tricked into illusions of agency, such as in the notorious ‘mirror-hand illusion’ [7].

 


Figure 1. Some beliefs about agency are perhaps more illusory than others …

The Paper

The present study sought to determine if this sense of agency over events, effects or outcomes, is itself rewarding – do we seek and enjoy this sense of agency or control as an end in itself?

To test this hypothesis, the researchers placed their participants in front of a computer screen and gave them four buttons to press. They were instructed to press one of the four buttons every time a red dot appeared on the screen. They were also instructed to “take care that the sequence of responses they generate will be as random as possible”, i.e. ‘try to press all the buttons equally often’. That was the entirety of their instructions (well, actually, there was the occasional blue triangle, but we’ll get into that later). Now, if you are suspicious at this point that there must be something ‘more’ going on in this experiment, I don’t blame you, and I personally find it hard to believe that the participants would have been convinced that this was the entirety of the experiment. This is a particular problem if your participants figure out the real aim of the experiment, and even worse if they figure out what your hypothesis is – they might intentionally try to prove it (common enough to be known as ‘demand characteristics’) or disprove it (not common enough to have its own name – I suppose it would take a real jerk to want to do this) – either of which completely undermines the vital experimental assumption of participant naivety.

Of course, the experiment wasn’t just studying how good people are at constructing random sequences (we’ve known for a long time that we suck at it [8], if you’re interested). Participants actually found themselves unknowingly in one of three conditions. In each of the three conditions, with varying probability, each of the four buttons was set to cause an ‘effect’: sometimes, when they pressed the buttons, the little red dot would turn into a little white dot before promptly disappearing. In the first, ‘High Probability’ condition, all four buttons had a 90% chance of triggering this white-dot effect. In the ‘No Effect’ condition, the effect could never happen – these poor chaps really were just pressing random buttons for no reason. Finally, in the ‘Key Specific’ condition, the four buttons varied in their likelihood of producing the white-dot effect (90%, 60%, 30%, 0%).

The idea behind this method was to variably instil this ‘sense of agency’ in the participants – for them to feel, to varying extents, like they were ‘causing’ this white dot to appear. The researchers assumed (reasonably, I suppose) that if the participants found this sense of agency rewarding or pleasurable in some way, they would press the buttons that produced the effect more frequently. Perhaps causing a white dot to appear doesn’t sound particularly rewarding to you, but I guess that was the point – they don’t state this explicitly, but the researchers may have wanted to eliminate the potential confound (alternative explanation of an effect) that the participants were pressing the buttons with a higher probability of producing the effect not for the sense of agency it provided, but for the sheer enjoyment of the stimulus itself. If the buttons produced biscuits instead of white dots, for example, no one would hesitate to complain that the desire for biscuits was driving the button pressing, rather than the enjoyment of some abstract ‘sense of agency’.

However, I am not sure this issue is entirely dealt with. While a white dot is about as unstimulating an effect as I can imagine, I think we have to consider just how dull the existence of these poor button-mashing people was during the course of this experiment – this is perhaps one of the most boring experiments I have ever come across. The presence of a white dot instead of a red dot may well have seemed like nirvana itself. If we do find the hypothesized effect, perhaps the participants really just want to bring forth the holy white dot merely to revel in the brief glory of its existence and don’t care one whit whether they are the cause or not – perhaps the variety from the monotony of pressing four buttons for no reason was reward enough.
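To make the design concrete, here is a minimal sketch of the task structure as I’ve just described it (my own illustration, not the authors’ code): each key press either does or does not produce the white-dot effect, with the probability set by the condition and the key.

```python
# A minimal sketch of the task structure as described in the text: on each
# trial the participant presses one of four keys, and that key's assigned
# probability determines whether the red dot turns white. The probabilities
# follow the three conditions described above; everything else is illustrative.
import random

CONDITIONS = {
    "high_probability": [0.9, 0.9, 0.9, 0.9],
    "no_effect":        [0.0, 0.0, 0.0, 0.0],
    "key_specific":     [0.9, 0.6, 0.3, 0.0],
}

def run_trial(condition: str, key: int, rng: random.Random) -> bool:
    """Return True if the white-dot effect occurs for this key press."""
    return rng.random() < CONDITIONS[condition][key]

rng = random.Random(42)
# e.g. a participant in the Key Specific condition hammering key 0 (the 90% key)
effects = sum(run_trial("key_specific", 0, rng) for _ in range(100))
print(f"White-dot effect on {effects}/100 presses of the 90% key")
```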


Figure 2. How I imagine this experiment.

Anyway, on to the results. We are firstly told in the results section that “To increase statistical power and the accuracy of parameter estimation, the following statistical analyses were conducted on the combined data from a small preliminary experiment (N = 29) which included only the Key Specific condition”. Personally, this rings alarm bells. They are not the alarm bells of outright fraud, but the slightly quieter, more insidious bells of ‘Researcher Degrees of Freedom’. Researcher degrees of freedom are the branching set of choices that experimenters are able to make throughout the entire process of designing, undertaking and analysing an experiment (e.g. Figure 3, below), which may alter the likelihood of getting a ‘significant’ result at the end (ideally, ‘significant’ indicates that a result is not due to random fluctuations). These include things like which of a number of designs to use, when to stop collecting participants, which statistical analysis to use, which outcome measures to focus on, and so on. With each of these decisions, the researcher will often know that one choice is more likely to lead to a significant result, and it takes a high level of commitment to scientific integrity to make a fully objective decision.

In this paper, the researchers were at some point faced with a decision as to whether or not to include these extra 29 participants from the preliminary study, and it is a very common decision experimenters face – I’ve faced it myself. Now, if the experimenters had absolutely no idea what the results were of either that preliminary sample or the main sample, then there would be absolutely no issue in combining them – the authors would be entirely correct that it would simply boost statistical ‘power’ (more participants = better, in general). The problem comes when the experimenter knows whether the results from that preliminary study are in the direction of their hypothesis or not. We now know enough about human psychology and the subtle unconscious biases that influence our choices to know that, on average, if one choice (e.g. including the data) makes a desired result more likely and the other makes it less likely, then, all else being equal, the person is more likely to choose the former. I can tell you from personal experience that a researcher in this position will miraculously discover that a large number of perfectly sciencey-sounding reasons favouring including the data spring to mind (‘bigger sample sizes are better’, for example) – and if you think that scientists do not ‘want’ a particular result, that they are entirely objective, paper-producing automatons who don’t care about furthering their career or producing highly-cited papers to get grant funding, well then you probably haven’t met many.


Figure 3. Researcher Degrees of Freedom: The worrying reality behind many significant findings in psychology?
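To see why this particular degree of freedom matters, here is a small simulation (my own sketch, not anything from the paper): a ‘flexible’ researcher who decides whether to pool a preliminary sample only after peeking at the results will exceed the nominal 5% false-positive rate even when there is no real effect at all.

```python
# A minimal simulation (my own sketch, not the authors' analysis) of how one
# researcher degree of freedom - deciding whether to pool a preliminary sample
# only after seeing the results - inflates the false-positive rate. The null
# hypothesis is true throughout (no real effect), alpha = 0.05.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
N_MAIN, N_PRELIM, SIMS = 60, 29, 10_000

false_pos_fixed = false_pos_flexible = 0
for _ in range(SIMS):
    main = rng.normal(0, 1, N_MAIN)      # main sample, true effect = 0
    prelim = rng.normal(0, 1, N_PRELIM)  # preliminary sample, true effect = 0

    p_main = stats.ttest_1samp(main, 0).pvalue
    p_pooled = stats.ttest_1samp(np.concatenate([main, prelim]), 0).pvalue

    false_pos_fixed += p_main < 0.05
    # 'Flexible' researcher: report whichever analysis happens to be significant
    false_pos_flexible += (p_main < 0.05) or (p_pooled < 0.05)

print(f"Fixed analysis:    {false_pos_fixed / SIMS:.3f} false-positive rate")
print(f"Flexible analysis: {false_pos_flexible / SIMS:.3f} false-positive rate")
# The flexible strategy exceeds the nominal 0.05 rate even though nothing real
# is going on - the cost of making the pooling decision after peeking.
```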

Anyway, let us assume that no biases influenced the decision of whether to include that data. The researchers found that within the ‘Key Specific’ condition, where each key had a different chance of producing the white-dot effect, the buttons which produced it with greater frequency were pressed more often (actually, this effect was only significant for the 90% button, but hey, let’s move on). The researchers also found that participants in this ‘Key Specific’ condition were less ‘random’ in their key presses than in either of the other two conditions, where all the buttons had the same probability. This suggests that, overall, the participants in this condition were drawn away from their requested task of pressing the buttons randomly by the desire to produce the white-dot effect. The researchers also found that reaction times in the ‘High Probability’ condition were shorter than in the ‘No Effect’ condition. The conclusion again: people must be pressing faster because they are enjoying the sense of agency so much. The researchers also ruled out general engagement with the task as a ‘confound’ (alternative explanation of the effect). Remember that blue-triangle titbit I teased you with earlier? Well, the researchers planted several trials throughout the experiment where a blue triangle showed up instead of a red dot, and participants were told at the start that when this happened they should press the space bar instead of one of the four normal buttons. This was supposed to be a measure of ‘engagement’ with the task. Apparently, by this measure, participants in the ‘High Probability’ condition were no more engaged than those in the ‘No Effect’ condition, but they still reacted faster.
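The paper’s actual randomness index isn’t given here, but to give a flavour of how such a thing might be quantified, here is a toy example (my own, with invented sequences): compute the Shannon entropy of the key-choice frequencies, which is maximal when all four keys are pressed equally often.

```python
# A sketch of one simple way to quantify how 'random' a key-press sequence is
# (my illustration only, not the paper's measure): the Shannon entropy of the
# key-choice frequencies, maximal when all four keys are used equally often.
from collections import Counter
from math import log2

def key_entropy(presses: list[int]) -> float:
    """Shannon entropy (bits) of the distribution of key choices."""
    counts = Counter(presses)
    n = len(presses)
    return -sum((c / n) * log2(c / n) for c in counts.values())

balanced = [0, 1, 2, 3] * 25          # all keys pressed equally often
biased   = [0] * 70 + [1, 2, 3] * 10  # the '90% key' pressed far more often

print(f"Balanced sequence: {key_entropy(balanced):.2f} bits (max is 2.00)")
print(f"Biased sequence:   {key_entropy(biased):.2f} bits")
# Lower entropy in the Key Specific condition would be consistent with
# participants abandoning the 'press randomly' instruction to chase the effect.
```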

 

Conclusion

So what can we draw from this experiment? Overall, it seems that when people are able to cause an effect, even when explicitly asked to do something which runs counter to this (press the buttons randomly), they can’t quite seem to help themselves but press the buttons that cause that effect, and they are even quite good at becoming attuned to which buttons cause the effect more frequently. So are we all power-hungry maniacs who can’t even follow simple instructions when there is the temptation to cause some effect in the world (even one as mild as producing a white dot)? Well, I think we should be careful before jumping to this conclusion.

Firstly, there was little real incentive for people to pursue the randomness goal given to them. Perhaps if the researchers had paid them according to how ‘random’ they were, then this power-hungry idea might be more compelling. Further, as I said before, there is good reason to think the participants might have second-guessed the researchers’ intentions with the white dots, and might have tried to achieve more white dots, suspecting this was the researchers’ true aim: the deception used in the task may not have been dastardly enough. Further, the effect was not very large – as mentioned, in the Key Specific condition, only the 90% button was actually pressed significantly more often; the 60% and 30% ones were no different to the 0% one (although this makes some sense: if you want the holy white dot, and you’ve figured out the frequencies, I guess you would just press the 90% one all the time, and ignore the 60% and 30% buttons just as much as the 0% button). We also have the slight worry about the researcher degree of freedom involved in choosing to add the preliminary study’s data to the present study – did the present data show a significant effect all on its own? If not, did the researchers know this before they added in the extra 29 people?

Finally, even if all of these issues weren’t present, we still have the interpretation issue that the participants may have been pressing the button for the effect itself, for pure love of the white dot, rather than for the love of being the cause of the white dot. This is perhaps made less likely by the already-discussed fact that the production of a white dot in itself shouldn’t be that rewarding, but it really can’t be ruled out on the present data. It is admittedly quite hard to think of an experiment where you could experimentally distinguish these two: the love of the white dot itself, and the love of causing the white dot – if you can think of a good way, you might have a paper on your hands.

 

References

1. Karsh, N., & Eitam, B. (2015). I control therefore I do: Judgments of agency influence action selection. Cognition, 138, 122–131. doi:10.1016/j.cognition.2015.02.002

2. Haggard, P., & Tsakiris, M. (2009). The experience of agency. Current Directions in Psychological Science, 18(4), 242–246.

3. Schwabe, L., & Blanke, O. (2007). Cognitive neuroscience of ownership and agency. Consciousness and Cognition, 16(3), 661–666. doi:10.1016/j.concog.2007.07.007

4. Pareés, I., Brown, H., Nuruki, A., Adams, R. A., Davare, M., Bhatia, K. P., … Edwards, M. J. (2014). Loss of sensory attenuation in patients with functional (psychogenic) movement disorders. Brain: A Journal of Neurology, 137(Pt 11), 2916–2921. doi:10.1093/brain/awu237

5. Khamitov, M., Rotman, J. D., & Piazza, J. (2016). Perceiving the agency of harmful agents: A test of dehumanization versus moral typecasting accounts. Cognition, 146, 33–47. doi:10.1016/j.cognition.2015.09.009

6. Shaver, K. (2012). The attribution of blame: Causality, responsibility, and blameworthiness. Springer Science & Business Media.

7. Tajima, D., Mizuno, T., Kume, Y., & Yoshida, T. (2015). The mirror illusion: does proprioceptive drift go hand in hand with sense of agency? Frontiers in Psychology, 6(February), 200. doi:10.3389/fpsyg.2015.00200

8. Wagenaar, W. A. (1972). Generation of random sequences by human subjects: A critical survey of literature. Psychological Bulletin, 77(1), 65.



‘That Facebook Study’

Let’s do a test of selective memory. Do you remember ‘that Facebook study’ from last year? It was really creepy and ethically dubious, right? Do you remember what the study was actually about? No?

 

Gustave Le Bon

Well, let me tell you. It was about ‘emotional contagion’. This is the theory that the spread of emotions in a social network (on- or offline) is essentially replicative, like the spread of a virus. This ‘epidemiological’ approach can be traced back to Gustave Le Bon’s 1896 work ‘Psychologie des Foules’, or ‘The Psychology of Crowds’. Le Bon’s work was motivated by the fears of the French elite, who were becoming increasingly worried about emotional contagion in rioting masses and its potential effects on social order. Le Bon believed that the spread of emotions in crowds could be seen like the spread of germs, and that this effect deprived individuals in the crowd of their capacity to act individually and rationally.

 

The eighteenth-century ruling French elite feared emotional contagion in rioting mobs.

In the modern contagion model (see Hatfield et al., 1994), the spread of emotion is thought to occur not directly but through two steps:

1. The observer mimics the behaviour of the individual experiencing the emotion (not necessarily in its entirety, but e.g. by tensing one’s stomach in response to fear, screwing up one’s face in response to disgust etc.)

2. The mimicked behaviour causes the observer to experience the same emotion.

If your brain is immediately coming up with counterexamples to this model, don’t worry, you aren’t alone (see the recent paper by Dezecache et al. [2015] for a discussion of its limitations). The model holds fairly well for things like disgust, and for fear or anger at an out-group (like the ruling French elite), but what about interpersonal emotions like envy? Envy doesn’t usually trigger envy in an onlooker, and certainly not in the person being envied. So, it seems fair to say:

 

Ben Goldacre’s 2014 book ‘I think you’ll find it’s a bit more complicated than that’

Moving on to the Facebook study itself, what were they actually trying to do? The study was titled ‘Experimental Evidence of Massive-Scale Emotional Contagion through Social Networks’. So they were trying to test this emotional contagion theory on positive emotions in the biggest data set ever, with all the power of technology at their fingertips. They classified posts as ’emotionally positive’ if they contained at least one positive word and no negative words (I don’t blame your brain if it is again coming up with issues with this method, but it is at least in accordance with previous work). They then reduced the number of these positive posts in some people’s news feeds and not others, selected at random (you remembered that bit, didn’t you?).
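For the curious, here is roughly what that classification rule amounts to in code (a minimal sketch of the rule as described above – the tiny word lists are hypothetical stand-ins for the established emotion-word dictionaries the study relied on):

```python
# A minimal sketch of the classification rule as described in the post: a post
# counts as 'emotionally positive' if it contains at least one positive word
# and no negative words. These word lists are hypothetical placeholders.
POSITIVE_WORDS = {"happy", "great", "love", "wonderful", "excited"}
NEGATIVE_WORDS = {"sad", "awful", "hate", "terrible", "angry"}

def is_emotionally_positive(post: str) -> bool:
    words = {w.strip(".,!?").lower() for w in post.split()}
    return bool(words & POSITIVE_WORDS) and not (words & NEGATIVE_WORDS)

print(is_emotionally_positive("So excited for the weekend!"))   # True
print(is_emotionally_positive("Excited but also a bit sad."))   # False
print(is_emotionally_positive("Off to the shops."))             # False
```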

 

So what did they find? By reducing the frequency of these ‘positive’ posts in a person’s news feed, they were able to decrease the number of positive posts that person then produced themselves. This was good enough to prove the emotional contagion theory for these authors. To quote the paper: “The results show emotional contagion.” (Kramer, Guillory & Hancock, 2014).

 

But hold on there, Core Data Science Team, Facebook, Inc., are you entirely sure you have thoroughly examined your reasoning process here? In psychology we like to think about ‘confounds’ when interpreting our findings. These are things which could explain a finding other than what you are claiming is the explanation. So, is there anything that could explain this change other than ‘emotional contagion’? Well, I can think of a few. What if the original ‘positive’ post provides information about some event which affects the person who sees the post, e.g. “Oh my God I am so happy – [Insert Popular Band] is coming to town!”? That would make someone in the same town who likes said popular band, but didn’t know they were coming, happier and more likely to post positive things, possibly about the same band. Or what if the post directly mentions other individuals, e.g. “I can’t wait to see Jim, Bob and Frank this weekend, I hope you are ready for me, it’s going to be great fun!!!”? It would probably make Jim, Bob or indeed Frank quite happy to know their friend was looking forward to coming to see them.

 

In both of these cases positive emotions are spreading through the social network, but this has nothing to do with behavioural mimicry and isn’t behaving like the spread of germs. The emotion spreads through the revelation of a mutually happy event in the first example, and through social bonding in the second. So while the Facebook study was able to show that positive emotions spread, it really wasn’t able to say anything about why – and unfortunately emotional contagion is a ‘why’ theory. This is a good example of one of the problems with big-data experiments: no data set in the world can make up for a flawed experimental method (not even if it’s REALLY BIG), and methodology tends to get sloppier as samples get really big. Here is a link to a nice article by Tim Harford, of BBC Radio 4’s ‘More or Less’, on this topic.

 

Now for our second test of selective memory. If you were one of the people who did actually remember what the study was about, do you remember what the difference was between the group who got the ‘less happy news feed’ and the control group? What would your guess be? 10% fewer positive posts? 20%? You can see the answer in the graph below, reproduced from their paper. Looks pretty impressive, huh? Now look at the scale. Yep. 0.1%. When positive posts were reduced in a person’s news feed, they produced, on average, 0.1% fewer positive words in their own posts. In the psychology field we call this … a very, very, very small difference.
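If you are wondering how a difference that small can still be reported as a ‘significant’ finding, here is a quick simulation (every number below is invented for illustration, not taken from the study): with samples of Facebook-ish size, even a tiny shift in positive-word rate produces an impressively small p-value while the standardized effect size remains negligible.

```python
# A sketch (all numbers hypothetical, not the study's) of why a tiny difference
# can still come out 'statistically significant' when the sample is enormous.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
N = 300_000  # hypothetical users per group

# Hypothetical per-user percentage of positive words in their posts:
# the treated group averages just 0.04 percentage points lower.
control = rng.normal(loc=5.30, scale=2.0, size=N)
treated = rng.normal(loc=5.26, scale=2.0, size=N)

t, p = stats.ttest_ind(control, treated)
d = (control.mean() - treated.mean()) / np.sqrt((control.var() + treated.var()) / 2)
print(f"p = {p:.3g}, Cohen's d = {d:.4f}")
# A headline-grabbing p-value can coexist with a negligible effect size.
```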

Graph reproduced from Kramer, Guillory & Hancock (2014)

So, the next time someone asks you what you think about ‘that Facebook study’, you can reply: “Yeah, that was so dodgy! Emotional contagion is an overly simplistic model, their method was confounded, and anyway they only found a 0.1% change in positive word frequency.”

 

References

Dezecache, G., Jacob, P., & Grèzes, J. (2015). Emotional contagion: its scope and limits. Trends in Cognitive Sciences. doi:10.1016/j.tics.2015.03.011

Goldacre, B. (2014). I think you’ll find it’s a bit more complicated than that. Harper Collins.

Hatfield, E. et al. (1994). Emotional Contagion. Cambridge University Press.

Kramer, A. D. I., Guillory, J. E., & Hancock, J. T. (2014). Experimental evidence of massive-scale emotional contagion through social networks. Proceedings of the National Academy of Sciences of the United States of America, 111(29). doi:10.1073/pnas.1412469111

Le Bon, G. (1896). Psychologie des Foules, Macmillan.
