
What Can The Marketing-Finance Interface Tell Us About Witchcraft Trials?

Edeling, Srinivasan, and Hanssens have a useful new review paper on the Marketing-Finance Interface.

The Value Of Review Papers

Review papers, such as this, play a vital role in framing the field. They give Ph.D. students a way to get up to speed on prior contributions. Review papers also give more seasoned academics a way to catch up on things that we've missed. Alex Edeling and his colleagues' paper is especially useful for its extensive view of what is happening at the Marketing-Finance Interface.

But, Here, Implying That What Academics Do Matters Is An Anti-Science Approach

Of course, it wouldn’t be a Marketing Thought post if I didn’t have a concern, and I certainly do. The authors seek to understand the correct use of metrics by asking what academics do. This can be a decent approach in the right circumstances. If we don’t have strong prior theory, we might be interested to see what the possibilities are. Here, though, asking academics runs into a major problem.

Assume we are interested in understanding mathematics (rather than the question of whether people understand mathematics). Surveying people about whether 2+2=4 can have only two outcomes: 1) the respondents know that 2+2=4, and their opinions are not useful because they add nothing to our understanding of mathematics; or 2) the respondents don’t know that 2+2=4, and their opinions are not useful because they are wrong.

Asking what academics think has no potential to advance the field when seeking to understand the value of a metric (rather than track its usage). Indeed, it can only hold back the field. Edeling and his colleagues ask marketing and finance scholars what they think of (Accounting-based Approaches to) Tobin’s q. Doing this risks reinforcing bad habits in the Marketing-Finance Interface. It can give a social proof justification to bad practice. At this point, a better academic than me would start talking about Kuhn and being caught in an outdated paradigm. Instead, I’ll just ask you to cast your mind back to the 17th century.

Don’t Ask Witchfinders About The Validity Of Their Witchfinding Tools

Imagine that, four hundred years ago, we planned to survey one educated elite of the time: witchfinders. We will ask them what they think of drowning witches as an indicator of the witch’s guilt or innocence. A few enlightened, progressive witchfinders (I accept that is a bit of an oxymoron) will be against drowning. Still, most will likely be for it. What have we learned about the efficacy of drowning as a method of detecting witches? Nothing. Why? Because we have strong prior theory that says drowning people is a terrible approach. So what was the point of the survey?

Does Magic Work? We Surveyed A Load Of Stage Magicians And They Said Yes

We Need To Argue From Theory

To understand the value of any metric we need to look at the theory behind it. The authors, when discussing Tobin’s q, say: “First, the criticism of AATQ [Accounting-based Approaches to Tobin’s q] is not new” (Edeling, Srinivasan, and Hanssens, 2020). I agree that the problems have long been known in marketing and finance, and by extension at the marketing-finance interface.
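For readers unfamiliar with what AATQ actually computes, here is a minimal sketch of one widely used accounting-based approximation of Tobin’s q (the ratio of the market value of the firm to the book value of its assets). The function name and the example numbers are my own illustration, not from the paper, and this sketch takes no position on the metric’s validity, which is exactly what is in dispute.

```python
def approximate_tobins_q(market_value_equity: float,
                         book_value_debt: float,
                         book_value_assets: float) -> float:
    """One common accounting-based approximation of Tobin's q:
    (market value of equity + book value of debt) / book value of total assets.
    Book values stand in for the replacement cost of assets, which is the
    substitution the criticisms of AATQ focus on."""
    return (market_value_equity + book_value_debt) / book_value_assets

# Illustrative figures (in millions): a firm the market values well above
# its accounting assets gets q > 1, often read as evidence of intangible value.
q = approximate_tobins_q(market_value_equity=150.0,
                         book_value_debt=50.0,
                         book_value_assets=100.0)
print(q)  # 2.0
```

The point of showing the arithmetic is that the ratio’s meaning hinges entirely on whether book values are a sensible proxy for replacement costs; that is a theoretical question, not one that a survey of users can settle.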

Why, then, is Tobin’s q still “the second most often used firm-value variable” (Edeling, Srinivasan, and Hanssens, 2020)? The obvious question becomes: wouldn’t it be better to simply tell marketing academics that Tobin’s q is a terrible performance metric rather than ask their opinions? It seems pretty clear that lots of marketing academics haven’t thought sufficiently about the metric’s efficacy. They, after all, continued to use it despite the prior criticisms.

Teach The Debate?

The authors “offer solutions to recent technical debates on …Tobin’s q” (Edeling, Srinivasan, and Hanssens, 2020). In essence, the paper implies that the jury is out on the use of Tobin’s q as a dependent variable in marketing, but that they have a solution. My paper with Moeen Butt was acknowledged as raising issues, but no one has responded to those issues. As such, I simply don’t see what the technical debate is. (To be clear, many marketing professors in the survey did not want to use Tobin’s q as a dependent variable. Still, this information is beside the point, as we shouldn’t establish the validity of metrics by opinion poll.)

Maybe I am wrong. I don’t think so on this occasion, but I’ve been wrong plenty of times in the past. If so, someone really must tell me how I’m wrong before it can be classed as a debate. We can’t have a proper debate when there is no serious counterargument to my condemnation of the use of Tobin’s q as a performance metric. You can’t teach a controversy as “one person thinks this, another thinks that” and pretend we are a serious scientific discipline. As far as I can see, there is no debate being had. Until there is a proper debate, the only acceptable advice is: Tobin’s q should never be used as a performance metric in marketing.

Social Proof

Social proof can be a useful heuristic, but it doesn’t make something correct. We have to debate the merits of any topic. When we know something doesn’t work, what is the point of surveying people to see whether they think it works? Plenty of people think the earth is flat; that doesn’t make it so.

Read: Alexander Edeling, Shuba Srinivasan, and Dominique M. Hanssens (2020), “The Marketing–Finance Interface: A New Integrative Review of Metrics, Methods, and Findings and an Agenda for Future Research,” International Journal of Research in Marketing, available online 19 September 2020.
