When is Too Much Success a Bad Thing?

Social Psychology, and the related field of Consumer Behaviour, relies heavily on laboratory experiments. This has real benefits: lab experiments give researchers the control to investigate causation and the flexibility to test interesting ideas.

Given the benefits of lab experiments, Consumer Behaviour scholars doing experimental work have come to dominate marketing; I'd guess about two-thirds of marketing academics study Consumer Behaviour. Unfortunately, lab experiments come with downsides. Because the data is created for a specific purpose, some scholars have succumbed to the admittedly massive temptation to fraudulently create favourable data. Some journals have started to ask for data to be shared, which should, at a minimum, allow us to catch the most incompetent of the fraudsters.

There is a broader problem. Even scholars who would never simply make up data can fall into traps. Researchers must drop unintelligible responses, and it is easy to see how someone might, even subconsciously, also drop a few merely unhelpful ones. Furthermore, journals don't publish inconclusive results, so when experiments don't work everyone ignores them. This means published effects give a false impression of the likelihood of a positive result: we see the five published studies that show an effect, but we know nothing about the ten studies that failed to find it.
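To see how stark this distortion is, here is a toy simulation, a minimal sketch with entirely made-up numbers: assume each experiment has a one-in-three chance of reaching significance, and only significant results ever get published.

    # A toy simulation of the file-drawer effect. All numbers are hypothetical:
    # suppose a real but modest effect gives each experiment a one-in-three
    # chance of "working", and journals only publish the successes.
    import random

    random.seed(1)
    TRUE_POWER = 1 / 3        # chance any single experiment reaches significance
    N_EXPERIMENTS = 15_000    # experiments run across the whole field

    results = [random.random() < TRUE_POWER for _ in range(N_EXPERIMENTS)]
    published = [r for r in results if r]   # only successes make it to print

    print(f"experiments run: {len(results)}")
    print(f"true success rate: {sum(results) / len(results):.0%}")   # ~33%
    print(f"papers published: {len(published)}")
    print(f"success rate visible in journals: "
          f"{sum(published) / len(published):.0%}")                  # 100%

The field ran thousands of experiments that mostly failed, yet the journals record a perfect hit rate; a reader of the literature has no way to recover the true one-in-three odds.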

Francis (2014) examined articles in Psychological Science that reported four or more studies and found worrying results. The method he used is controversial, but he found that 36 out of 44 such articles were excessively successful, i.e., too good to be true: “…82% of articles in Psychological Science appear to be biased…” (Francis, 2014, page 1185). He cannot say why this occurred (some explanations don't involve fraud or incompetence), but it isn't great news. “When empirical studies succeed at a rate much higher than is appropriate for the estimated effects and sample sizes, readers should suspect that unsuccessful findings have been suppressed, the experiments or analyses were improper, or the theory does not properly account for the data” (Francis, 2014, page 1180).
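The logic behind the test can be sketched in a few lines. Roughly: estimate each study's power from its reported effect size and sample size, then multiply those powers to get the probability that every study succeeds; if that joint probability is very low (Francis uses a 0.1 criterion), the unbroken run of successes is suspicious. The effect sizes and sample sizes below are invented for illustration, not taken from any real paper, and I assume simple two-sided, two-sample t-tests at the usual .05 level.

    # A minimal sketch of the idea behind Francis's test for excess success.
    # The (effect size d, per-group n) pairs are hypothetical, not from any paper.
    from math import prod
    from statsmodels.stats.power import TTestIndPower

    studies = [(0.45, 30), (0.40, 35), (0.50, 28), (0.42, 32)]

    ttest_power = TTestIndPower()
    powers = [ttest_power.power(effect_size=d, nobs1=n, alpha=0.05)
              for d, n in studies]

    # If every study reports a significant result, the probability of that
    # happening is (roughly) the product of the individual powers.
    p_all_succeed = prod(powers)

    print([round(p, 2) for p in powers])   # modest power, roughly .3 to .5 each
    print(round(p_all_succeed, 3))         # well below Francis's 0.1 criterion

Four modestly powered studies that all work is itself an improbable event; that, in essence, is what “too good to be true” means here.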

A solution that appeals to me is for marketers to ensure that we draw more on theories and ideas completely independent of other laboratory experiments. Academic literature reviews shouldn't just be lists of other very similar experimental papers; they should tell us how the lab experiments fit with non-experimental work.

Read: Gregory Francis (2014) The frequency of excess success for articles in Psychological Science, Psychonomic Bulletin & Review, 21, pages 1180-1187