Marketing Thought
Clarifying management theory for students, academics and practitioners.

Field Guide to Lies

Daniel Levitin has written a very enjoyable and informative popular science book in his Field Guide to Lies. He surveys how we know what we know, and how we communicate it to others. To be fair, not all of it is about lies; for instance, he discusses how data is collected. A lot of the problems he highlights are errors in which the survey taker is likely tricking themselves as much as trying to trick anyone else. Of course, there will be those doing surveys who know how to sample correctly and choose not to in order to deceive. Probably many more, however, simply don’t know what they are doing and end up with a biased sample by pure accident. For instance, a store might survey its loyal customers in an attempt to find out what the average consumer wants. The firm might get confused because the loyal customers aren’t the same as the average consumer, but I wouldn’t describe this as a lie.

Levitin channels Darrell Huff (of How to Lie With Statistics fame) when he explains how charts and visual representations can mislead. He shows a graph using what he calls “The Dreaded Double Y-Axis” (Levitin, 2016, page 37). The double Y-axis may often be a deliberate attempt to deceive. The chart he shows demonstrates how non-smokers can appear to have a higher chance of death than smokers by a certain age, simply because different scales are used for the smokers and the non-smokers.
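To make the trick concrete, here is a minimal sketch of my own (invented numbers, not Levitin’s chart or data) showing how plotting two series on differently scaled y-axes can make the lower series look like the higher one:

```python
# A minimal sketch (invented numbers, not Levitin's data) of how a double
# y-axis can mislead: the non-smoker series is plotted on a compressed
# right-hand scale, so its line sits visually above the smoker line even
# though the underlying mortality numbers are lower.
import matplotlib.pyplot as plt

ages = [40, 50, 60, 70, 80]
smoker_deaths = [5, 12, 25, 45, 70]       # hypothetical deaths per 1,000
non_smoker_deaths = [2, 5, 10, 20, 35]    # hypothetical deaths per 1,000

fig, ax_left = plt.subplots()
ax_right = ax_left.twinx()                # the "dreaded double y-axis"

ax_left.plot(ages, smoker_deaths, color="red", label="Smokers (left axis)")
ax_right.plot(ages, non_smoker_deaths, color="blue", label="Non-smokers (right axis)")

ax_left.set_ylim(0, 200)    # stretched scale pushes the smoker line down
ax_right.set_ylim(0, 40)    # compressed scale pushes the non-smoker line up

ax_left.set_xlabel("Age")
ax_left.set_ylabel("Smoker deaths per 1,000")
ax_right.set_ylabel("Non-smoker deaths per 1,000")
plt.title("Same data, two scales: non-smokers appear worse off")
plt.show()
```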

An interesting, and potentially deceitful, way to show sales is to plot cumulative sales by period rather than sales per period (Levitin, 2016, page 47). This can help obscure in-period declines in sales. Cumulative sales (by their nature) continue to increase whenever a sale is made. Business observers are often most interested in whether sales are growing or not, and that is often quite hard to see on a cumulative graph. A per-period decline is still there in the cumulative sales chart; it is just much less obvious than the dips that appear when sales are shown per period. As Levitin says: “…our brains aren’t very good at detecting rates of change such as these (what’s known as the first derivative in calculus, a fancy name for the slope of the line)” (Levitin, 2016, page 47).
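As a quick illustration of why the decline gets hidden (my own made-up numbers, not Levitin’s example):

```python
# A minimal sketch with made-up numbers: quarterly sales fall sharply in Q3,
# which is obvious in the per-period series but easy to miss in the
# cumulative series, which keeps rising as long as any sales are made.
from itertools import accumulate

per_period = [100, 110, 60, 65]            # the Q3 drop is plain to see
cumulative = list(accumulate(per_period))  # [100, 210, 270, 335], still rising

for quarter, (sales, total) in enumerate(zip(per_period, cumulative), start=1):
    print(f"Q{quarter}: sales={sales:>4}, cumulative={total:>4}")
```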

Books like Levitin’s have the potential to be very helpful in improving the quality of thought and communication. I’m happy to recommend it.

Read: Daniel J. Levitin (2016) A Field Guide to Lies: Critical Thinking in the Information Age, Allen Lane.

Improving Public Policy

David Halpern is an interesting character. Originally an advisor to Tony Blair’s Labour government, he went on to establish the U.K.’s Behavioural Insights Team for David Cameron’s Conservative government. His CV makes sense to me given what he specializes in. His aim is to make government policy better. The politicians decide what should be done and Halpern tries to ensure it is done efficiently and effectively.

The reason Halpern doesn’t get constantly caught up in ideological problems is that most people can agree on many government aims regardless of where they sit on the political spectrum. These areas of common cause include such things as collecting taxes that are due, making programs that help the unemployed find jobs more effective, and improving student learning. I’d agree with Halpern when he says: “If we’re going to introduce a tax break to encourage businesses to invest in R&D, I think that very few people would consider it wrong to ensure that the communications about this tax break should be as clear and simple as possible” (Halpern, 2015, page 308).

The good (and bad) news is that government often hasn’t used much testing in the past. This means that the gains from improving communications and the like are pretty easy to find. The key requirement is humility from experts. Experts need to admit up front that they don’t know everything, which is what motivates the need for a test.

Even when the need to test is recognized, there are of course some challenges to improving policy this way. Most notably, in the real world (as opposed to university laboratories) the researcher doesn’t have full control; they can’t make sure everything is just right in the field. “… field trials have to incorporate pragmatic compromises, and the researchers have to use statistical controls to iron out the imperfections” (Halpern, 2015, page 203). Field tests often aren’t perfect, but this does not mean that they aren’t useful.
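To give a rough sense of what “statistical controls” can mean in practice, here is a minimal sketch of my own (synthetic data and a simple regression adjustment, not anything taken from Halpern’s book):

```python
# A minimal sketch with synthetic data: estimate a treatment effect while
# statistically controlling for an imperfection, here an imbalance in prior
# income between the treated and untreated groups.
import numpy as np

rng = np.random.default_rng(0)
n = 1000
treated = rng.integers(0, 2, n)                      # 1 = received the new letter
prior_income = rng.normal(50, 10, n) + 5 * treated   # imperfection: groups differ
# The true effect of treatment on tax paid is 2.0 in this made-up world.
tax_paid = 100 + 2.0 * treated + 0.5 * prior_income + rng.normal(0, 5, n)

# A naive difference in means is biased upward by the income imbalance.
naive = tax_paid[treated == 1].mean() - tax_paid[treated == 0].mean()

# A regression "irons out" the imperfection by controlling for prior income.
X = np.column_stack([np.ones(n), treated, prior_income])
coefs, *_ = np.linalg.lstsq(X, tax_paid, rcond=None)

print(f"Naive estimate:    {naive:.2f}")
print(f"Adjusted estimate: {coefs[1]:.2f}  (true effect is 2.0)")
```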

To my mind, tests are a great (and bipartisan) way to improve public policy. To see the benefits, just consider where we are without testing. When a change is adopted, how do we decide exactly what change to make? Halpern says: “…without systematic testing, this is often little more than an educated guess” (Halpern, 2015, page 328). If we want to move policy beyond educated guesses we need to run more trials and at least do what we can all agree on as effectively as possible.

Read: David Halpern (2015), Inside the Nudge Unit: How Small Changes Can Make a Big Difference, Random House

Conducting Business Tests

In business, decisions are often taken “without having any real evidence to back them up” (Davenport, 2009, page 69). This is a source of great frustration to me (and many academics). To be fair, sometimes there isn’t really any choice. Davenport explains that this is often the case for big strategy decisions. One simply can’t test some massive changes to the way the firm operates. Tactical changes are often much more amenable to testing. As he says, “[g]enerally speaking the triumphs of testing occur in strategy execution, not strategy formulation” (Davenport, 2009, page 71).

Given its strengths in tactical decisions, it is no coincidence that testing is big in the field of credit card offerings. One can give one set of customers a new offer and another set the traditional offer and, provided the customer groups were about the same going in, ascribe any differences to the offers made. Davenport highlights how we set up a control; in the example above the control group is the set of customers similar to those receiving the new offer. Random assignment between the test and control conditions, which is relatively easy with lists of customers or online marketing, should ensure that the two groups are pretty similar before they receive the offers.
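Here is a minimal sketch of that kind of test (hypothetical customer list and made-up response rates, not Davenport’s actual example):

```python
# A minimal sketch: randomly assign customers to test and control, send each
# group its offer, then compare response rates. The response probabilities
# below are invented stand-ins for observed behaviour.
import random

random.seed(42)
customers = [f"cust_{i}" for i in range(10_000)]
random.shuffle(customers)                    # random assignment
test_group = customers[:5_000]               # gets the new offer
control_group = customers[5_000:]            # gets the traditional offer

def responded(customer_id, got_new_offer):
    """Stand-in for observing real behaviour: made-up response probabilities."""
    return random.random() < (0.06 if got_new_offer else 0.05)

test_rate = sum(responded(c, True) for c in test_group) / len(test_group)
control_rate = sum(responded(c, False) for c in control_group) / len(control_group)

# Because assignment was random, the difference can be ascribed to the offer.
print(f"New offer response rate:         {test_rate:.2%}")
print(f"Traditional offer response rate: {control_rate:.2%}")
print(f"Estimated lift:                  {test_rate - control_rate:.2%}")
```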

The critical thing about adopting a scientific approach is that the tests have to be real tests. Davenport notes that “tests” are often run that aren’t really rigorous. This is especially worrying when they are described in scientific terminology. One might think the terminology used, laboratories and so on, is suggestive of proper testing when none of the benefits of a proper test are received.

Davenport offers six (plus one) easy steps to running a test, and in his article he explains what happens at each stage.

  1. create or refine hypothesis
  2. design test
  3. execute test
  4. analyze test
  5. plan rollout
  6. rollout

The plus-one step is important: create a Learning Library. This will capture the insights for posterity. There is little point in gaining knowledge from tests if everyone forgets about it as soon as the test has finished running.
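As a hedged sketch of how those steps might hang together in practice (my own illustration with hypothetical names and placeholder logic, not code from Davenport’s article):

```python
# A minimal sketch of the six (+1) steps as a simple workflow. The analysis
# and rollout logic are placeholders, and the "Learning Library" is just a
# list, standing in for whatever a firm actually uses to record insights.
from dataclasses import dataclass, field

@dataclass
class Experiment:
    hypothesis: str                      # 1. create or refine hypothesis
    design: dict                         # 2. design test
    results: dict = field(default_factory=dict)

learning_library = []                    # +1: capture insights for posterity

def run_experiment(exp):
    exp.results = {"lift": 0.01}         # 3./4. execute and analyze (placeholder)
    if exp.results["lift"] > 0:          # 5./6. plan the rollout, then roll out
        print(f"Rolling out: {exp.hypothesis}")
    learning_library.append({            # +1: record what was learned
        "hypothesis": exp.hypothesis,
        "design": exp.design,
        "results": exp.results,
    })

run_experiment(Experiment(
    hypothesis="A simpler letter increases response rates",
    design={"test_size": 5000, "control_size": 5000},
))
```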

Thomas Davenport has quite a knack for explaining ideas clearly. Anyone planning to try testing at their firm could do a lot worse than to look at his advice.

Read: Thomas H. Davenport (2009) Smart Business Experiments, Harvard Business Review, February.

The Persistence of Academic Customs

Recently I was reading Max Weber’s thoughts on “Science as a Vocation”, given in a lecture on November 7th, 1917. By science Weber means knowledge creation in the broader sense, so pretty much all academics should be included as scientists.

There are lots of ideas in the lecture, but the most interesting observation to me was the persistence of behaviours over time in academia. Clearly, from looking at this book alone we can’t conclude whether behaviours persisted because they were the best ideas, or through some sort of status quo bias/historical dependence. By the latter I mean the persistence of a behaviour because “that is what we do” rather than because it is the best approach. Despite this major caveat the persistence is interesting.

For example, Weber discusses the fact that he won’t hire his own doctoral students. This is an interesting problem in academia. If you hire one of your own students, the fact that you aren’t hiring another suggests a problem with that student. It becomes Akerlof’s famous lemons problem: the fact that you are “selling” your student means no one should want to “buy” the student. That said, if you never hire your students, some are potentially undervalued; you know how good they are but the market might not appreciate this, especially given how hard it is to measure academic contribution.

Weber notes how blinkered one has to be to achieve success in “science”. I worry a bit about this; I fear people build towers of ideas with obvious structural flaws that they would see if they removed their blinkers. That said, total specialism remains the prevailing wisdom and certainly has some logic. One has to be willing to delve into knowledge more deeply than the average person would think sensible.

Finally, Weber seems most modern when he talks about the need to appeal to students. I’m not sure that being said to be a “bad teacher… amounts in most cases to an academic death warrant” (Weber, 2004, page 6), as he suggests, but many professors like to complain about being dependent on student whims. Weber sounds like a classic professor when he says: “After extensive experience and sober reflection on the subject, I have developed a profound distrust of lecture courses that attract large numbers…” (Weber, 2004, page 6). Next time my student numbers aren’t high enough I’ll content myself that Weber would have approved.

Weber’s lecture seems very modern despite being delivered 100 years ago. Even though theories and methods have changed, for good or bad, academic behaviour doesn’t seem to change too much.

Read: Max Weber (2004, written in 1917), The Vocation Lectures: “Science as a Vocation”, Translation by Rodney Livingstone, Hackett Publishing Ltd.

Why Don’t Businesses Experiment More?

One puzzle for academics, myself included, is why businesses don’t experiment more. Experiments have great potential to improve business outcomes. Yet businesses often don’t seem to do much experimenting.

“Companies pay amazing amounts of money to get answers from consultants with overdeveloped confidence in their own intuition. Managers rely on focus groups—a dozen people riffing on something they know little about—to set strategies. And yet, companies won’t experiment to find evidence of the right way forward.” (Ariely, 2010, page 34)

There are likely several reasons for this. Consultants give answers, and answers are nice. Even if the consultant isn’t correct, they give confidence and probably support what a senior executive thinks is a good idea.

One of the more interesting objections is that business experiments often mean you aren’t treating all customers the same. Is this fair? It seems to me that if testing improves customer outcomes in the long term, the risk is worth taking. You don’t know from the outset that any customer is getting worse outcomes; after all, you only test when you don’t know the answer, so I don’t think you are being unfair to any consumers. And by the nature of random assignment, which is key to the best tests, you aren’t deliberately discriminating against any group of consumers.

Experimentation is used effectively in medicine to improve our knowledge. Business rarely has such important outcomes, which makes the downside risks of testing much smaller. We don’t need to be as careful, so let’s do more testing.

Read: Dan Ariely (2010) Why Don’t Businesses Experiment, Harvard Business Review, April.

Teaching CLV Badly

Ex-Ivey PhD student and now University of Calgary professor Charan Bagga and I have just published an article on the teaching of CLV (Customer Lifetime Value). We surveyed the state of case-based teaching materials related to CLV and found them a pretty shoddy bunch.

One problem is that many of the cases calculated CLV in a way that had no obvious managerial application. If you want to decide how much to spend on acquiring customers, you really must discount any cash you’ll receive in the future to properly compare it to the cash that needs to be invested now. Similarly, you can’t use CLV to decide how much to spend on acquiring a customer if you subtract the acquisition cost before reporting CLV.

We examined 33 cases and related materials and “show considerable confusion in teaching materials; they contain incorrect formula, erroneous claims, and contradict other materials from the same school” (Bendle and Bagga, 2016, page 1).

Perhaps most importantly we have some pretty clear recommendations. We “recommend educators always (a) use contribution margin, (b) discount cash flows, and (c) never subtract acquisition costs before reporting CLV.” (Bendle and Bagga, 2016, page 1).
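To make those recommendations concrete, here is a minimal sketch of a generic CLV calculation (my own illustration with invented numbers and one common retention convention, not the specific formulas from the paper):

```python
# A minimal sketch of a CLV calculation that follows the three recommendations:
# it uses contribution margin, discounts future cash flows, and does not
# subtract acquisition cost before reporting CLV. All numbers are invented.
def clv(margin_per_year, retention, discount, horizon_years=20):
    """Expected present value of the contribution margin from one customer."""
    value = 0.0
    for t in range(1, horizon_years + 1):
        survival = retention ** (t - 1)           # chance the customer is still active
        value += margin_per_year * survival / (1 + discount) ** t
    return value

customer_value = clv(margin_per_year=100, retention=0.8, discount=0.10)
acquisition_cost = 150.0

print(f"CLV (before acquisition cost): {customer_value:.2f}")
# Compare CLV to the acquisition cost as a separate step, rather than
# baking the acquisition cost into the reported CLV figure.
print(f"Worth acquiring? {customer_value > acquisition_cost}")
```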

While the teaching of CLV is currently pretty awful, the good news is that it should be relatively easy to improve by following some basic advice.

Read: Neil Bendle and Charan Bagga (2016), The Confusion About CLV in Case-Based Teaching Materials, Marketing Education Review.

Causation and the Post Hoc Fallacy

There are generally two types of fallacy. The first type is nice and clean: formal fallacies. These are clearly wrong by the rules of logic. The classic is the well-known fallacy that “if p then q” does not mean “if q then p”. That all cats are mammals does not imply that all mammals are cats.

The second type of fallacy is much less clean. Informal fallacies include ideas such as the appeal to authority. The reason this isn’t as clean is that sometimes it makes sense to accept authority. Obeying rules, unless you have a great reason not to, helps society function and may be better for you; generally you should listen to doctors when they give medical advice. This means it isn’t always wrong to heed appeals to authority. That said, deferring to anyone who simply claims authority is a recipe for disaster, especially when they aren’t an authority on what they are talking about.

Another similarly messy classic logic mistake is the Post Hoc Fallacy:

“1. A occurs before B, 2. Therefore A is the cause of B.” (LaBossiere, 2013, page 97).

The problem arises because it does make sense to consider temporal order when thinking about causation. Something that comes after something else might indeed be caused by it. Unfortunately, many things that come after other things simply aren’t caused by them at all. The thing to bear in mind is that while coming first is generally a prerequisite for causing something else, it isn’t enough. While informal logical fallacies are messy and sometimes tricky to spot, we should all bear in mind that the post hoc fallacy is a real problem. The solution is to ask for more evidence of causation than simply accepting that because something came before something else it caused it.

Read: Michael LaBossiere (2013) 76 Fallacies


Motivation For The New Year

Let’s start the new year thinking about how motivation works. What encourages us to action? Dan Ariely has a new book on this topic. (And to motivate you to read it, I’ll say it is a very short book.)

Ariely is keen to emphasize that we often rely on overly simplistic views of human motivation. Satisfaction at work isn’t simply a function of how much we get paid. The need to feel that we are contributing to something, and that what we do is noticed, helps us put effort in.

Ariely discusses the motivation of university professors and many knowledge workers. One challenge is that the link between effort and achievement isn’t always apparent. Sometimes much effort goes into a project that goes nowhere, and sometimes you do very little for great success. This creates challenges with reward structures too. When rewarding academic success you risk rewarding luck as much as ability or effort. My experience of academia suggests that a more nuanced understanding of human nature might help improve the workplace.

In some ways it is surprising that for-profit businesses sometimes make poor use of non-monetary incentives. Motivating people with non-monetary incentives is a value-for-money way of running a business. It is good for the workers too: helping them see the purpose in their work will make them happier and more efficient. With greater efficiency should come more money to share around. To be clear, motivating people this way relies upon giving them a level of pay that is decent and that they think is fair. No amount of meaningful work will make up for the fact that a worker can’t afford to feed their children.

Furthermore, it is worth bearing in mind that people respond well to the feeling that they have a long-term commitment to, and from, their jobs. Ariely suggests that we can “invest in employee’s education, provide them with health benefits that clearly communicate a sense of long term commitment” (Ariely, 2016, page 75).

I think Ariely is right that, with a more behaviourally informed understanding of human motivation, we can hope to create better business outcomes and happier workplaces. A good new year’s resolution would be to think about how we can make workplaces more motivating places.

Read: Dan Ariely (2016) Payoff: The Hidden Logic That Shapes Our Motivations, TED Books, New York, NY

Showing A Problem Does Not Equal Demonstrating A Worsening Problem

Cathy O’Neil has a great book on big data, but one with a fundamental flaw. The flawed claim is made in the book’s subtitle and permeates the book. The subtitle is: “How Big Data Increases Inequality and Threatens Democracy”. I could find no significant evidence in the book of big data increasing inequality. This is not to say that big data couldn’t, or doesn’t, increase inequality, but O’Neil makes no serious attempt to show that it happens, never mind how it happens.

This is a massive problem. It means O’Neil cannot make meaningful recommendations. Should we destroy all models? I have no idea because she doesn’t show if the machines are making life better, worse, or having no net impact.

To my mind, O’Neil’s is an example of the sloppy thinking that permeates politics. The right wants to make countries great, implying that there has been a decline, though from when exactly is never specified. Some on the left, who might be expected to believe in progress, instead accept this characterization of decline. Little evidence is ever offered, only a vague feeling that things were better in the past. (Do these people not watch Mad Men?)

To be clear, O’Neil’s book contains numerous examples of significant problems in big data models. But O’Neil’s claim about increasing inequality is unsupported, because pointing to evidence of a problem now is not the same as pointing to evidence of an increasing problem.

Strangely, the problem with her logic is evident just from listening to what she says. She clearly knows that bias didn’t get created along with the first PC. She describes how housing loans were awarded in a racist fashion before big data models existed. She mentions that people exhibit bias when they make decisions without using big data models. She even says that “…racists don’t spend a lot of time hunting down reliable data to train their twisted models” (O’Neil, 2016, page 23). Unfair bias has been with us probably as long as people have existed.

The obvious unanswered question is: have math models made things worse? Policing and the judicial systems have had, and still have, problems with being unfair, but are they worse than before? To answer this, O’Neil must specify a baseline, how biased decisions were before the adoption of the models she complains about, and compare this to the results after the models were adopted. To labour the point: if loan and policing decisions were racist before the adoption of math models, then documenting evidence of racism after the adoption of the models isn’t enough to show the decisions are more racist now. We need to know whether there is more or less racism now.

O’Neil has some great points, yet the error she makes is pretty basic: it is intellectually sloppy to claim things are getting worse without providing any supporting evidence.

As 2016 comes to an end, I think it is important that people who believe in progress don’t accept that society is inevitably plunging towards doom. O’Neil has an important point: math models can codify bias. But the models could also help make the world a better place. Crucially, we need to test when, and how, progress can be made and not just assume that the world is falling apart. Such knee-jerk negativity only helps those who don’t believe in progress.

Read: Cathy O’Neil (2016) Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy, Crown, New York.

Is Math Racist?

Cathy O’Neil’s book – Weapons of Math Destruction – is an entertaining and informative read. She has done a great job of highlighting the challenges with math models. (I have one massive problem with the book but I’ll detail that next time).

A summary of the book might be that math models can codify inequity. The classic example is the use of postcodes in prediction models. On the face of it, throwing zip codes into a predictive model seems reasonable. It is likely to improve the prediction and isn’t explicitly biased against anyone. The problem that O’Neil correctly highlights is that such inputs can cause unfair outcomes even if no one deliberately designed the model to do this. Zip code is often highly correlated with race, so by using zip code you end up using race as an input by proxy.
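To show how the proxy effect works, here is a minimal sketch of my own with synthetic data (nothing here comes from O’Neil’s book):

```python
# A minimal sketch with synthetic data: even if the protected attribute is
# never given to the model, a feature highly correlated with it (here a
# stand-in "zip code risk score") lets the model sort people by group anyway.
import numpy as np

rng = np.random.default_rng(1)
n = 10_000
protected = rng.integers(0, 2, n)                  # protected group membership
# The zip-based score correlates strongly with group membership.
zip_score = 0.9 * protected + 0.1 * rng.normal(size=n)

# A "model" that only ever sees zip_score still splits people largely by group.
flagged_high_risk = zip_score > zip_score.mean()

corr = np.corrcoef(protected, zip_score)[0, 1]
print(f"Correlation between zip score and protected attribute: {corr:.2f}")
print(f"Flagged as high risk, group 1: {flagged_high_risk[protected == 1].mean():.0%}")
print(f"Flagged as high risk, group 0: {flagged_high_risk[protected == 0].mean():.0%}")
```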

If you use models that predict a job applicant’s work success you will give jobs to those who appear, in model terms, to be similar to those who have been successful in the past. Clearly if all your senior executives are white men you shouldn’t expect this to change using a model that selects for “similar” people.

Compounding the problem, many of the people using the models don’t understand them, so they can’t really critique them or compensate for their flaws.

To O’Neil, Weapons of Math Destruction cause major problems; they are the dark side of the big data revolution. She suggests we worry when math models are “…opaque, unquestioned, and unaccountable, and they operate at a scale to sort, target, or “optimize” millions of people. By confusing their findings with on-the-ground reality, most of them create pernicious WMD feedback loops.” (O’Neil, 2016, page 12). The last point is especially interesting. Models can help craft reality. Police see an area as likely to have high crime, so they patrol it more, see more crimes, and the area becomes seen as even more likely to have high crime.
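The feedback loop is easy to mimic with a toy simulation (entirely my own invention, not O’Neil’s model): two areas with identical true crime rates, where the one with the higher initial prediction gets more patrols and so keeps looking worse.

```python
# A minimal toy simulation of a predictive-policing feedback loop: the area
# with the higher starting prediction gets more patrols, so more crime is
# observed there, and the prediction confirms itself rather than correcting.
import random

random.seed(7)
true_rate = {"Area A": 0.10, "Area B": 0.10}     # genuinely identical crime rates
predicted = {"Area A": 0.12, "Area B": 0.08}     # the model starts slightly biased

for week in range(20):
    total = sum(predicted.values())
    for area in predicted:
        patrols = int(1_000 * predicted[area] / total)        # patrols follow predictions
        observed = sum(random.random() < true_rate[area]
                       for _ in range(patrols))               # crimes seen depend on patrols
        # The prediction is nudged toward the patrol-dependent observation,
        # so the initial bias gets confirmed rather than corrected.
        predicted[area] = 0.7 * predicted[area] + 0.3 * (observed / 1_000)

print(predicted)   # Area A keeps the higher predicted rate despite identical true rates
```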

There are great points in O’Neil’s book and even those who love big data should read it and think through the problems raised. We shouldn’t accept a model, however sophisticated, without giving very serious thought to its inputs and consequences.

Read: Cathy O’Neil (2016) Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy, Crown, New York.