Marketing Thought
Clarifying management theory for students, academics and practitioners.

The Tyranny of Random Facts

BCG have a new piece on measuring marketing results: Making Sense of the Marketing Measurement Mess. There is a lot to like in it.

The authors describe a number of questions to ask about marketing metrics.

  • “Do the metrics and tools capture the short- and longer-term value of marketing?
  • Do they produce answers and insights that can be acted on?
  • Are they readily understood by and credible to the CEO, the CFO, and the broader organization?” (De Bellefonds et al., 2017)

These are important ideas. Many organization-sanctioned metrics do a poor job of capturing the longer-term value of marketing. (To be fair, this is because it really isn’t easy.) Furthermore, metrics that can’t be acted upon aren’t terribly useful. If you can’t do anything about it, knowing a piece of information is largely a waste of time.

Perhaps the key to the problems of many marketing metrics is the final point. The metrics many marketers appeal to simply don’t excite the people allocating budgets, sometimes for understandable reasons. Why should we care about the number of likes that we have? I’m pretty sure the average CFO doesn’t.

The article lays out some good advice. Most notably, pick some metrics and stick with them. If you don’t use the same metrics each period it is impossible to monitor progress. Even if a metric is decent in itself, an isolated number is unlikely to help you develop a successful strategy.

One phrase that was especially appealing is the “Tyranny of Random Facts”. This is “a common corporate phenomenon in which different marketing managers each cite a fact or data point revealed by some unique tool or model as evidence of the great job they are doing. It’s not that the facts are wrong; they may be totally valid. However, there is no clear way to compare one fact with another or even to know whether they are appropriate reference points for the issue at hand.” (De Bellefonds et al., 2017).

If you can’t fit the random facts into a coherent picture you will be doomed to wander aimlessly, changing strategic priorities with whichever random fact seems especially compelling that day. This might be because it is the flavour of the month, or because the marketer touting it had an especially good night’s sleep and makes a particularly convincing argument. That isn’t good, so beware the tyranny of random facts.

Read: Nicolas De Bellefonds, Dominic Field, David Ratajczak, Neal Rich, and Jody Visser (2017) Making Sense of the Marketing Measurement Mess, https://www.bcgperspectives.com/content/articles/go-to-market-strategy-making-sense-of-marketing-measurement-mess/

Star Wars, The US Constitution, and the Dangers of Not Rewriting When Necessary

Cass Sunstein’s book, The World According to Star Wars, is a must for Star Wars fans who are also interested in behavioural economics/law/public policy, which is probably a surprisingly big intersection. I must confess to not seeing the appeal of Star Wars. (I’m not too cool for Star Wars; I’m a Star Trek person. I find Star Wars badly written and stereotypical, and it cheats by using magic to drive the key plot points.) Sunstein clearly did the project because he loves it. To be honest, given his stint in government and excellent academic work, if anyone deserves a self-indulgent project it is him. This one holds insights buried throughout the text. Sometimes they are well hidden but interesting if you spot them.

Perhaps the most interesting relates to a key issue in US law/politics. I’m in no way qualified to speak about it, but as an academic that never stops me. To get there Sunstein describes how the Star Wars trilogies came about: the unfurling of the story over time. He makes it clear that the ending was not known when the story started. The most obvious example is the blossoming romance between Luke and Leia in the original movie. Then suddenly it turns out that they are brother and sister and Leia says she “knew it all along”. This seems highly unbelievable given her actions.

What is the point of all this? Sunstein suggests that even if we have a basic idea of where things are going (as George Lucas did when writing Star Wars) it doesn’t mean all the details are fleshed out. You shouldn’t trust everything that is said afterwards about planning. Even in the unlikely event that Lucas is trying to be truthful when talking about the creation of Star Wars, rather than crafting an origin story, he must be confused, given that he contradicts himself.

What does this mean for big battles in the US? Sunstein discusses the intent of the US founding fathers. Some argue for a more literal interpretation of the US constitution. Sunstein, I think, convincingly argues why that doesn’t make sense. If George Lucas didn’t think through the Star Wars plot, yet it evolved to (some) people’s satisfaction, there is little reason to think the founding fathers could have come up with a coherent plan for the US constitution. Stories, and constitutions, are better when they have a bit of flexibility to respond to the times.

Here ends, probably, my only post on the US constitution. Let’s hope its development over time has left it better written than Star Wars.

Read: Cass Sunstein (2016) The World According to Star Wars, HarperCollins

Behavioural Insights, Public Policy, and the OECD

The Organisation for Economic Co-operation and Development have just launched a fascinating new initiative: “Behavioural Insights and Public Policy: Lessons from Around the World.” The accompanying book has extensive case studies of how behavioural insights have been deployed to advance public policy. As the OECD says, “The use of behavioural economics by governments and regulators is a growing trend globally, most notably in the United Kingdom and United States but more recently in Australia, Canada, Colombia, Denmark, Germany, Israel, Netherlands, New Zealand, Norway, Singapore, South Africa, Turkey and the European Union.” (OECD website).

It is interesting that the OECD see this as such an important way to advance economic and social welfare. From a fringe endeavour, behavioural economics is now quite central to many governments’ approaches. (As I have said previously, I see this as excellent news.) I also think it is interesting that they describe the approach as both behavioural insights and behavioural economics. Where (behavioural) economics ends and good old-fashioned common sense (designing policy and presentation that work with human nature) begins is a valid question. In many ways this work on behavioural insights/economics is just repackaging ideas in a more palatable form. I’m all for this; indeed, it is good old-fashioned common sense to make positive change as palatable as possible. Making change easy/simple/appealing is a key element of making change happen.

I will write more about the OECD book after digesting it. For now I’ll simply suggest looking at what is being done. There are accompanying videos tailored to different world regions that explain the ideas behind, and benefits of, using behavioural insights in public policy. (Most of this applies in the commercial and not-for-profit worlds too.) I’m excited by what the OECD are up to, and look forward to hearing more.

Read: OECD (2017) Behavioural Insights and Public Policy: Lessons from Around the World, OECD Publishing. See more at: http://www.oecd.org/gov/regulatory-policy/behavioural-insights.htm

Smoothing Data

Excel can be a very useful tool. While it can’t easily handle the most advanced statistical tasks, it can be used for most everyday business analytics. Ron Person shows how to produce balanced scorecards in a book that is packed with neat tricks and practical suggestions. He shows how to do box-and-whisker plots, custom labels, and even Gantt (project management) charts. Person explains how to put all the tricks together to create interesting dashboards in Excel, including some useful ideas on how to improve the presentation of the data.

A problem he discusses is erratic data. Sometimes data is quite messy, e.g., each week’s sales jump around a lot, but you really want to make the trends easy to see. One approach is to smooth the data. Person explains how to do this, even going so far as to give the simple code to use in Excel, but he also highlights the strengths and weaknesses of the approach. He says: “Moving averages are used to smooth erratic data so that you can see the underlying pattern. They work by taking an average of data over a time period. That average smooths erratic spikes or dips but also “flattens” the data, causing rapid changes to be lost or delayed” (Person, 2013, page 318).
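
Person gives the mechanics in Excel; purely as an illustration of the same idea, here is a minimal Python sketch. The window length and the made-up weekly numbers are my own assumptions, not from the book:

```python
import pandas as pd

# Illustrative weekly sales with erratic spikes (made-up numbers).
sales = pd.Series([120, 95, 180, 60, 140, 75, 200, 90, 160, 85])

# A 4-week simple moving average: each point is the mean of the
# current week and the three weeks before it.
smoothed = sales.rolling(window=4).mean()

print(pd.DataFrame({"sales": sales, "smoothed_4wk": smoothed}))
```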

You rarely get anything for free. Smoothing data will allow you to see the trend much more easily, and, generally, the longer the averaging period, the clearer the trend. (For example, all else equal, smoothing data over a month is likely to show a clearer trend than smoothing over a week.) The challenge is that long periods tend to obscure changes. Perhaps a dramatic change has occurred recently; this will be lost in data smoothed over a long time period. One trick Person shows is to weight the average more heavily towards more recent periods, which somewhat compensates for the problem.
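
A rough sketch of that recency-weighting idea: an exponentially weighted average is one common way to do it, though the span setting and numbers below are my own assumptions, not Person’s formula:

```python
import pandas as pd

sales = pd.Series([120, 95, 180, 60, 140, 75, 200, 90, 160, 85])

# Exponentially weighted moving average: recent weeks count more,
# so a genuine recent shift shows up faster than in a plain average.
ewma = sales.ewm(span=4, adjust=False).mean()
print(ewma)
```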

Smoothing data can be useful, and if you want to do it in Excel, or simply want some other useful tricks, Person’s book can certainly help.

Read: Ron Person (2013) Balanced Scorecards & Operational Dashboards with Microsoft Excel, John Wiley and Sons, Inc.

Field Guide to Lies

Daniel Levitin has a very enjoyable and informative popular science book in his Field Guide to Lies. He surveys how we know what we know, and how we communicate it to others. To be fair, not all of it is about lies; for instance, he discusses how data is collected. A lot of the problems he highlights are errors in which the survey taker is likely tricking themselves as much as trying to trick anyone else. Of course, there will be those doing surveys who know how to sample correctly and choose not to in order to deceive. Probably many more simply don’t know what they are doing and end up with a biased sample in their survey by pure accident. For instance, a store might survey its loyal customers in an attempt to find out what the average consumer wants. The firm might get confused because the loyal customers aren’t the same as the average consumer, but I wouldn’t describe this as a lie.

Levitin channels Darrell Huff (of How to Lie With Statistics fame) when he explains how charts and visual representations can mislead. He shows a graph using what he calls “The Dreaded Double Y-Axis” (Levitin, 2016, page 37). The double y-axis may often be a deliberate attempt to deceive. The chart he shows makes non-smokers appear to have a higher chance of death than smokers by a certain age through using different scales for the smokers and the non-smokers.
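
To see how easily two scales can distort a comparison, here is a minimal matplotlib sketch. The mortality numbers are invented purely for illustration; this is not the chart from the book:

```python
import matplotlib.pyplot as plt

ages = [40, 50, 60, 70, 80]
smoker_deaths = [5, 12, 25, 45, 70]      # per 100 people (made-up)
nonsmoker_deaths = [2, 5, 11, 22, 40]    # per 100 people (made-up)

fig, ax1 = plt.subplots()
ax1.plot(ages, smoker_deaths, color="tab:red", label="smokers")
ax1.set_ylabel("smoker deaths")
ax1.set_ylim(0, 200)  # stretched scale flattens the smoker line

# A second y-axis with its own scale makes the lower non-smoker
# series appear to climb faster than the smoker series.
ax2 = ax1.twinx()
ax2.plot(ages, nonsmoker_deaths, color="tab:blue", label="non-smokers")
ax2.set_ylabel("non-smoker deaths")
ax2.set_ylim(0, 40)

plt.show()
```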

An interesting, and potentially deceitful, way to show sales is to plot cumulative sales by period rather than per-period sales (Levitin, 2016, page 47). This can help obscure in-period declines in sales. Cumulative sales (by their nature) increase whenever a sale is made. Business observers are often most interested in whether sales are growing or not, and that is often quite hard to see on a cumulative graph. A per-period decline is still there in the cumulative sales chart; it is just much less obvious than a dip when sales are shown per period. As Levitin says: “…our brains aren’t very good at detecting rates of change such as these (what’s known as the first derivative in calculus, a fancy name for the slope of the line)” (Levitin, 2016, page 47).
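
A tiny worked example of the effect, with my own made-up numbers: per-period sales fall every period, yet the cumulative series rises every period.

```python
from itertools import accumulate

# Per-period sales decline every quarter (illustrative numbers).
per_period = [100, 90, 80, 70, 60]

# The cumulative series still rises each quarter, hiding the decline.
cumulative = list(accumulate(per_period))
print(cumulative)  # [100, 190, 270, 340, 400]
```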

Books like Levitin’s have the potential to be very helpful in improving the quality of thought and communication. I’m happy to recommend it.

Read: Daniel J. Levitin (2016) A Field Guide to Lies: Critical Thinking in the Information Age, Allen Lane

Improving Public Policy

David Halpern is an interesting character. Originally an advisor to Tony Blair’s Labour government, he went on to establish the U.K.’s Behavioural Insights Team for David Cameron’s Conservative government. His CV makes sense to me given what he specializes in. His aim is to make government policy better. The politicians decide what should be done and Halpern tries to ensure it is done efficiently and effectively.

The reason Halpern doesn’t get caught up constantly in ideological battles is that most people can agree on many government aims regardless of where they sit on the political spectrum. These areas of common cause include collecting taxes that are due, making programs to get the unemployed into work more effective, and improving student learning. I’d agree with Halpern when he says: “If we’re going to introduce a tax break to encourage businesses to invest in R&D, I think that very few people would consider it wrong to ensure that the communications about this tax break should be as clear and simple as possible” (Halpern, 2015, page 308).

The good (and bad) news is that government often hasn’t used much testing in the past. This means that the gains from improving communications and the like are pretty easy to find. The key requirement is humility from experts: they need to admit up front that they don’t know everything, which motivates the need for a test.

Even when the need to test is recognized, there are of course challenges to improving policy this way. Most notably, in the real world (as opposed to university laboratories) the researcher doesn’t have full control, given they can’t make sure everything is just right in the field. “… field trials have to incorporate pragmatic compromises, and the researchers have to use statistical controls to iron out the imperfections” (Halpern, 2015, page 203). Field tests often aren’t perfect, but this does not mean they aren’t useful.

To my mind tests are a great (and bipartisan) way to improve public policy. To see the benefits, just consider where we are without testing. When a change is adopted, how do we decide what exact change to make? As Halpern says, “…without systematic testing, this is often little more than an educated guess” (Halpern, 2015, page 328). If we want to move policy beyond educated guesses we need to run more trials and at least do what we can all agree on as effectively as possible.

Read: David Halpern (2015), Inside the Nudge Unit: How Small Changes Can Make a Big Difference, Random House

Conducting Business Tests

In business, decisions are often taken “without having any real evidence to back them up” (Davenport, 2009, page 69). This is a source of great frustration to me (and many academics). To be fair, sometimes there isn’t really any choice. Davenport explains that this is often the case for big strategy decisions. One simply can’t test some massive changes to the way the firm operates. Tactical changes are often much more amenable to testing. As he says, “[g]enerally speaking the triumphs of testing occur in strategy execution, not strategy formulation” (Davenport, 2009, page 71).

Given its strength in tactical decisions, it is no coincidence that testing is big in the field of credit card offers. One can give one set of customers a new offer and another set the traditional offer, and, provided the customer groups were about the same going in, ascribe any differences to the offers made. Davenport highlights how we set up a control; in the example above the control group is a set of customers similar to the customers receiving the new offer. Random assignment between the test and control conditions, which is relatively easy with lists of customers/online marketing, should ensure that the two groups are pretty similar before they receive the offers.
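
As a sketch of those mechanics (a toy example of my own, not Davenport’s): randomly split a customer list, give each half a different offer, and compare response rates. The customer IDs and response behaviour below are simulated purely for illustration.

```python
import random

random.seed(42)  # reproducible assignment for the illustration

customers = [f"cust_{i}" for i in range(10_000)]
random.shuffle(customers)

# Random assignment: first half gets the new offer (test),
# second half keeps the traditional offer (control).
test, control = customers[:5_000], customers[5_000:]
test_set = set(test)

# After the campaign you would record real responses; here response
# flags are simulated purely to show the comparison step.
responded = {c: random.random() < (0.06 if c in test_set else 0.05)
             for c in customers}

test_rate = sum(responded[c] for c in test) / len(test)
control_rate = sum(responded[c] for c in control) / len(control)
print(f"test: {test_rate:.3f}  control: {control_rate:.3f}")
```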

The critical thing in adopting a scientific approach is that the tests have to be real tests. Davenport notes that “tests” are often run that aren’t really rigorous. This is especially worrying if they are described in scientific terminology. One might think the terminology used (laboratories, etc.) is suggestive of proper testing when none of the benefits of a proper test are received.

Davenport offers six easy steps (plus one) for running a test and in his article explains what happens at each stage.

  1. create or refine hypothesis
  2. design test
  3. execute test
  4. analyze test
  5. plan rollout
  6. rollout

The plus-one step is important: create a Learning Library. This will capture the insights for posterity. There is little point in gaining knowledge from tests if everyone forgets about it as soon as the test finishes running.

Thomas Davenport has quite a knack for explaining ideas clearly. Anyone planning to try testing at their firm could do a lot worse than look at his advice.

Read: Thomas H. Davenport (2009) Smart Business Experiments, Harvard Business Review, February.

The Persistence of Academic Customs

Recently I was reading Max Weber’s thoughts on “Science as a Vocation”, given in a lecture on November 7th, 1917. By science Weber means knowledge creation in the broader sense, so pretty much all academics should be included as scientists.

There are lots of ideas in the lecture, but the most interesting observation to me was the persistence of behaviours over time in academia. Clearly, from looking at this book alone we can’t conclude whether behaviours persisted because they were the best ideas, or through some sort of status quo bias/historical dependence. By the latter I mean the persistence of a behaviour because “that is what we do” rather than because it is the best approach. Despite this major caveat, the persistence is interesting.

For example, Weber discusses the fact that he won’t hire his own doctoral students. This is an interesting problem in academia. If you hire one of your own students, the fact you aren’t hiring another suggests a problem with that student. It becomes Akerlof’s famous lemons problem: the fact you are “selling” your student means no one should want to “buy” the student. That said, if you never hire your students some are potentially undervalued; you know how good they are, but the market might not appreciate this, especially given how hard it is to measure academic contribution.

Weber notes how blinkered one has to be to achieve success in “science”. I worry a bit about this; I fear people build towers of ideas whose obvious structural flaws they would see if they removed their blinkers. That said, total specialism remains the prevailing wisdom and certainly has some logic. One has to be willing to delve into knowledge more deeply than the average person would think sensible.

Finally, Weber seems most modern when he talks about the need to appeal to students. I’m not sure that being said to be a “bad teacher… amounts in most cases to an academic death warrant” (Weber, 2004, page 6) as he suggests, but many professors like to complain about being dependent on student whims. Weber sounds like a classic professor when he says: “After extensive experience and sober reflection on the subject, I have developed a profound distrust of lecture courses that attract large numbers…” (Weber, 2004, page 6). Next time my student numbers aren’t high enough I’ll content myself that Weber would have approved.

Weber’s lecture seems very modern despite being delivered 100 years ago. Even though theories and methods have changed, for good or bad, academic behaviour doesn’t seem to change too much.

Read: Max Weber (2004, written in 1917), The Vocation Lectures: “Science as a Vocation”, Translation by Rodney Livingstone, Hackett Publishing Ltd.

Why Don’t Businesses Experiment More?

One puzzle for academics, myself included, is why businesses don’t experiment more. Experiments have great potential to improve business outcomes. Yet businesses often don’t seem to do much experimenting.

“Companies pay amazing amounts of money to get answers from consultants with overdeveloped confidence in their own intuition. Managers rely on focus groups—a dozen people riffing on something they know little about—to set strategies. And yet, companies won’t experiment to find evidence of the right way forward.” (Ariely, 2010, page 34)

There are likely several reasons for this. Consultants give answers, and answers are nice. Even if the consultant isn’t correct, they give confidence and probably support what a senior executive already thinks is a good idea.

One of the more interesting objections is that business experiments often mean you aren’t treating all customers the same. Is this fair? It seems to me that if testing improves customer outcomes in the long term, the risk is worth taking. You don’t know at the outset that any customer is getting worse outcomes; after all, you only test if you don’t know, so I don’t think you are being unfair to any consumers. By the nature of random assignment, which is key to the best tests, you aren’t deliberately discriminating against any group of consumers.

Experimentation is used effectively in medicine to improve our knowledge. Business rarely has such important outcomes, which makes the downside risks of testing much smaller. We don’t need to be as careful, so let’s do more testing.

Read: Dan Ariely (2010) Why Businesses Don’t Experiment, Harvard Business Review, April.

Teaching CLV Badly

Ex-Ivey PhD student, and now University of Calgary professor, Charan Bagga and I have just published an article on the teaching of CLV (customer lifetime value). We surveyed the state of case-based teaching materials related to CLV and found them a pretty shoddy bunch.

One problem is that many of the cases calculated CLV in a way that had no obvious managerial application. If you want to decide how much to spend on acquiring customers, you really must discount any cash you’ll receive in the future to properly compare it to any cash that needs to be invested now. Similarly, you can’t use CLV to decide how much to spend on acquiring a customer if you subtract the acquisition cost before reporting CLV.

We examined 33 cases and related materials and “show considerable confusion in teaching materials; they contain incorrect formula, erroneous claims, and contradict other materials from the same school” (Bendle and Bagga, 2016, page 1).

Perhaps most importantly, we have some pretty clear recommendations. We “recommend educators always (a) use contribution margin, (b) discount cash flows, and (c) never subtract acquisition costs before reporting CLV” (Bendle and Bagga, 2016, page 1).
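
As a sketch of what following those recommendations looks like in practice, here is a standard constant-retention CLV calculation in Python. The margin, retention, and discount numbers are my own illustrative assumptions, not from the paper, and timing conventions vary; this version assumes the margin arrives at the end of each year:

```python
def clv(contribution_margin, retention_rate, discount_rate, years=20):
    """Discounted CLV using contribution margin per year.

    Acquisition cost is deliberately NOT subtracted, so the result
    can be compared directly against a proposed acquisition spend.
    """
    return sum(
        contribution_margin * retention_rate ** t / (1 + discount_rate) ** t
        for t in range(1, years + 1)
    )

value = clv(contribution_margin=100, retention_rate=0.8, discount_rate=0.1)
acquisition_cost = 250  # hypothetical spend per acquired customer
print(f"CLV = {value:.2f}; acquire if CLV exceeds {acquisition_cost}")
```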

While the teaching of CLV is currently pretty awful, the good news is that it should be relatively easy to improve by following some pretty basic advice.

Read: Neil Bendle and Charan Bagga (2016) The Confusion About CLV in Case-Based Teaching Materials, Marketing Education Review.