Marketing Thought
Clarifying management theory for students, academics and practitioners.

Causation and the Post Hoc Fallacy

There are generally two types of fallacy. The first is nice and clean: formal fallacies. These are clearly wrong by the rules of logic. The classic is the well-known point that "if p then q" does not mean "if q then p". That all cats are mammals does not imply that all mammals are cats.
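As a quick sketch in standard notation (the schematic form is mine, not LaBossiere's; this pattern is often called the fallacy of the converse):

```latex
% A conditional does not entail its converse.
(p \to q) \not\Rightarrow (q \to p)
% Instantiated with the example above:
% \forall x\,(\mathrm{Cat}(x) \to \mathrm{Mammal}(x))
% does not entail \forall x\,(\mathrm{Mammal}(x) \to \mathrm{Cat}(x)).
```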

The second type of fallacy is much less clean. Informal fallacies include ideas such as the appeal to authority. The reason this isn’t as clean is that sometimes it makes sense to accept authority. Obeying rules, unless you have a great reason not to, helps society function and may be better for you; generally you should listen to doctors when they give medical advice. This means it isn’t always wrong to heed appeals to authority. That said, deferring to anyone who merely claims authority is a recipe for disaster, especially when the speaker isn’t an authority on what they are talking about.

A similarly messy classic is the Post Hoc Fallacy:

“1. A occurs before B, 2. Therefore A is the cause of B.” (LaBossiere, 2013, page 97).

The problem arises because it does make sense to use temporal order when thinking about causation: something that comes after something else might indeed be caused by it. Unfortunately, many things that come after other things simply aren’t caused by them at all. The thing to bear in mind is that while coming first is generally a prerequisite for causing something else, it isn’t enough. Informal logical fallacies are messy and sometimes tricky to spot, but the post hoc fallacy is a real problem. The solution is to ask for more evidence of causation than the mere fact that one thing came before another.

Read: Michael LaBossiere (2013) 76 Fallacies


Motivation For The New Year

Let’s start the new year thinking about how motivation works. What encourages us to act? Dan Ariely has a new book on this topic. (And to motivate you to read it, I’ll say it is a very short book.)

Ariely is keen to emphasize that we often rely on overly simplistic views of human motivation. Satisfaction at work isn’t simply a function of how much we get paid. The need to feel that we are contributing to something, and that what we do is noticed, helps us put effort in.

Ariely discusses the motivation of university professors and many knowledge workers. One challenge is that the link between effort and achievement isn’t always apparent. Sometimes much effort goes into a project that goes nowhere, and sometimes you do very little for great success. This creates challenges with reward structures too. When rewarding academic success you risk rewarding luck as much as ability or effort. My experience of academia suggests that a more nuanced understanding of human nature might help improve the workplace.

In some ways it is surprising that for-profit businesses sometimes make poor use of non-monetary incentives. Motivating people with non-monetary incentives is a value-for-money way of running a business. It is good for the workers too: helping them see the purpose in their work makes them happier and more efficient. With greater efficiency should come more money to share around. To be clear, motivating people this way still relies upon giving them a level of pay that is decent and that they think is fair. No amount of meaningful work will make up for the fact that a worker can’t afford to feed their children.

Furthermore, it is worth bearing in mind that people respond well to the feeling that they have a long-term commitment to, and from, their jobs. Ariely suggests that we can “invest in employees’ education, provide them with health benefits that clearly communicate a sense of long term commitment” (Ariely, 2016, page 75).

I think Ariely is right that with a more behaviourally informed understanding of human motivation we can hope to create better business outcomes and happier workplaces. A good New Year’s resolution would be to think of how we can make workplaces more motivating places.

Read: Dan Ariely (2016) Payoff: The Hidden Logic That Shapes Our Motivations, TED Books, New York, NY

Showing A Problem Does Not Equal Demonstrating A Worsening Problem

Cathy O’Neil has a great book on big data, but one with a fundamental flaw. The flawed claim is made in the book’s subtitle and permeates the book. The subtitle is: “How Big Data Increases Inequality and Threatens Democracy”. I could find no significant evidence in the book of big data increasing inequality. This is not to say that big data couldn’t, or doesn’t, increase inequality, but O’Neil makes no serious attempt to show that it happens, never mind how it happens.

This is a massive problem. It means O’Neil cannot make meaningful recommendations. Should we destroy all models? I have no idea, because she doesn’t show whether the machines are making life better, worse, or having no net impact.

To my mind, O’Neil’s book is an example of the sloppy thinking that permeates politics. The right wants to make countries great, implying that there has been a decline, but from when exactly is never specified. Some on the left, who might be expected to believe in progress, instead accept this characterization of decline. Little evidence is ever offered, only a vague feeling that things were better in the past. (Do these people not watch Mad Men?)

To be clear, O’Neil’s book contains numerous examples of significant problems in big data models. But her claim about increasing inequality is unsupported, because pointing to evidence of a problem now is not the same as pointing to evidence of an increasing problem.

Strangely, the problem with her logic is evident just from listening to what she says. She clearly knows that bias didn’t get created along with the first PC. She describes how housing loans were awarded in a racist fashion before big data models existed. She mentions that people exhibit bias when they make decisions without using big data models. She even says that “..racists don’t spend a lot of time hunting down reliable data to train their twisted models” (O’Neil, 2016, page 23). Unfair bias has been with us probably as long as people have existed.

The obvious unanswered question is: have math models made things worse? Policing and the judicial system have had, and still have, problems with being unfair, but are they worse now? To show this, O’Neil must specify a baseline (how biased decisions were before the adoption of the models she complains about) and compare it to the results after the models. To labour the point: if loan and policing decisions were racist before the adoption of math models, then documenting evidence of racism after their adoption isn’t enough to show that decisions are more racist now. We need to know whether there is more or less racism now.

O’Neil has some great points, yet the error she makes is pretty basic: it is intellectually sloppy to claim things are getting worse without providing any supporting evidence.

As 2016 draws to an end, I think it is important that people who believe in progress don’t accept that society is inevitably plunging towards doom. O’Neil has an important point, that math models can codify bias, but the models could also help make the world a better place. Crucially, we need to test when, and how, progress can be made and not just assume that the world is falling apart. Such knee-jerk negativity only helps those who don’t believe in progress.

Read: Cathy O’Neil (2016) Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy, Crown, New York.

Is Math Racist?

Cathy O’Neil’s book – Weapons of Math Destruction – is an entertaining and informative read. She has done a great job of highlighting the challenges with math models. (I have one massive problem with the book but I’ll detail that next time).

A summary of the book might be that math models can codify inequity. The classic example is the use of zip codes (postcodes) in prediction models. On the face of it, throwing zip codes into a predictive model seems reasonable. It is likely to improve the prediction and isn’t explicitly biased against anyone. The problem that O’Neil correctly highlights is that such inputs can cause unfair outcomes even if no one deliberately aims for the model to do this. Zip code is often highly correlated with race, and so by using zip code you end up using race as an input by proxy.
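A minimal sketch of the proxy effect, using synthetic data with names and numbers of my own (this is not an example from the book): the protected attribute is never given to the model, yet a correlated zip-derived feature reproduces the disparity.

```python
# A proxy-variable sketch with synthetic data; all names and numbers are
# illustrative. The protected attribute is never an input to the model.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

group = rng.integers(0, 2, n)                 # protected attribute (hidden from model)
zip_score = np.where(group == 1,              # zip-derived feature correlates with group
                     rng.normal(0.0, 1.0, n),
                     rng.normal(1.5, 1.0, n))

# Historical approvals were directly biased against group 1.
logit = zip_score - 1.0 * group
approved = rng.random(n) < 1.0 / (1.0 + np.exp(-logit))

# The model is trained on the zip feature alone; group never appears.
model = LogisticRegression().fit(zip_score.reshape(-1, 1), approved)
pred = model.predict(zip_score.reshape(-1, 1))

# Yet predicted approval rates still differ sharply by group.
print("approval rate, group 0:", round(pred[group == 0].mean(), 3))
print("approval rate, group 1:", round(pred[group == 1].mean(), 3))
```

The point of the sketch is simply that dropping the sensitive column is not enough: any feature correlated with it lets the bias back in.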

If you use models that predict a job applicant’s work success you will give jobs to those who appear, in model terms, to be similar to those who have been successful in the past. Clearly if all your senior executives are white men you shouldn’t expect this to change using a model that selects for “similar” people.

Compounding the problem, many people using the models don’t understand them, so they can’t really critique them or compensate for their weaknesses.

To O’Neil, Weapons of Math Destruction cause major problems; they are the dark side of the big data revolution. She suggests we worry when math models are: “..opaque, unquestioned, and unaccountable, and they operate at a scale to sort, target, or “optimize” millions of people. By confusing their findings with on-the-ground reality, most of them create pernicious WMD feedback loops.” (O’Neil, 2016, page 12). The last point is especially interesting. Models can help craft reality. Police see an area as likely to have high crime, so they patrol it more, see more crimes there, and the area becomes seen as even more likely to have high crime.
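A toy simulation of that loop, with numbers I made up purely for illustration: patrols follow observed crime, but crime is only observed where patrols go, so a modest real difference between two areas snowballs into a lopsided patrol allocation.

```python
# Toy simulation of the feedback loop: patrols follow observed crime,
# but crime is only observed where patrols go. All numbers are made up.
import numpy as np

true_crime = np.array([0.55, 0.45])  # modest real difference between two areas
patrol = np.array([0.5, 0.5])        # start with even patrols

for _ in range(15):
    observed = patrol * true_crime       # crime is seen only where police look
    patrol = observed / observed.sum()   # next round's patrols follow observations

print("patrol share after 15 rounds:", patrol.round(3))  # roughly [0.95, 0.05]
```

After fifteen rounds the area with only slightly more true crime receives almost all the patrols, which is the feedback loop in miniature.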

There are great points in O’Neil’s book and even those who love big data should read it and think through the problems raised. We shouldn’t accept a model, however sophisticated, without giving very serious thought to its inputs and consequences.

Read: Cathy O’Neil (2016) Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy, Crown, New York.

How long will we be unhappy?

A fascinating line of research tackles the problem of affective forecasting: the study of our predictions of how we will feel when events happen. We typically aren’t very good at affective forecasting. “.. expectations are often important and often wrong. They are important because people’s actions are based, in large measure, on their implicit and explicit predictions of the emotional consequences of future events” (Gilbert, Pinel, Wilson, Blumberg, and Wheatley, 1998, page 617). This inability to predict how we will feel is both good and bad. The bad news is that it leads us to pursue things that don’t make us as happy as we think they will; we chase what isn’t really worth getting. The good news is that negative events do not always cause the awful feelings that we fear. (A welcome message given 2016’s events.)

A classic paper considers how long we think good and bad feelings will persist after positive and negative events. The authors argue that a key part of our inability to predict the duration of feelings is Immune Neglect. They suggest we are especially bad at predicting how long the impact of negative events will last because we neglect that we have a psychological immune system. We are often better able to cope with setbacks than we imagine we will be.

Gilbert and his colleagues conducted a series of experiments — being academics they saw a useful source of participants amongst their own ranks. “Assistant professors estimated how generally happy they would be at various points in time after learning that they had or had not achieved tenure. Former assistant professors who had and had not achieved tenure reported how generally happy they were. We expected that assistant professors would overestimate the duration of their negative affect after being denied tenure but that they would be relatively accurate in estimating the duration of their positive affect after achieving tenure” (Gilbert and colleagues, 1998, page 622).

The results for those who got tenure are interesting. “In short, forecasters’ estimates of their long-term reactions to a positive tenure decision were accurate…” (Gilbert and colleagues, 1998, page 624). I have just got tenure, so, while it is only anecdotal evidence, I can now see whether my estimates are as accurate as this research predicts.

Read: Daniel T. Gilbert, Elizabeth C. Pinel, Timothy D. Wilson, Stephen J. Blumberg, and Thalia P. Wheatley (1998) Immune Neglect: A Source of Durability Bias in Affective Forecasting, Journal of Personality and Social Psychology, Vol. 75, No. 3, 617-638.

Hovis and The Valuation of Brands

Today we turn to a history lesson on brand valuation. Rank Hovis McDougall, a big U.K. food manufacturer, decided in the 1980s to record the value of its brands, including internally generated brands, on its balance sheet. This “… created a storm of controversy” (Murphy et al., 1989, page 9). (It eventually led to accounting rules designed to prevent similar actions.)

Faced with this controversy, Interbrand, led by John Murphy, responded. A short article in the British Food Journal neatly lays out the problems with not capitalizing spending on brand assets. Many of these comments remain relevant today, for example that recording brand assets on the balance sheet reduces the goodwill recognized upon acquisitions. (Goodwill is the portion of the purchase price of a company not attributable to identifiable assets.) Furthermore, Murphy makes the (hard to argue with) point that: “We see no reason why acquired brands should be treated differently from “home grown” brands, since both can be equally valuable assets of the company” (Murphy et al., 1989, page 10).
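To see the goodwill point in numbers, here is a toy calculation (the figures are mine, purely for illustration):

```python
# Toy numbers (mine, not Murphy's) showing how recognizing a brand as an
# asset shrinks the residual goodwill recorded on an acquisition.
purchase_price = 1000.0
identifiable_net_assets = 600.0
brand_value_recognized = 250.0

goodwill_without_brand = purchase_price - identifiable_net_assets
goodwill_with_brand = purchase_price - (identifiable_net_assets + brand_value_recognized)

print("goodwill, brand not recognized:", goodwill_without_brand)  # 400.0
print("goodwill, brand recognized:", goodwill_with_brand)         # 150.0
```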

Unfortunately, the solution is a little harder to arrive at. Murphy outlines problems with a number of alternative brand valuation methodologies: spending on brand building can be a weak proxy for brand value and reward frivolous spending, increased margins may not capture all the value from a brand, brands are not traded (causing problems finding a market value), some survey methodologies (e.g., measuring awareness) can’t easily be translated into dollar values, and predicting future cash flows and growth is fraught with difficulty. All these problems seem largely fair.

The solution proposed is interesting, and it has helped create a massive business for Interbrand and other consultants. The problem is that it also seems a little arbitrary. Brand strength determines the multiple of profits assigned to brand value using an S-curve methodology. Sadly, the seven aspects of brand strength, while all likely important, seem not fully justified or even fully explained. They are stated to be: Leadership, Stability, Market, Internationality, Trend, Support, and Protection.
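To make the mechanics concrete, here is a sketch of the S-curve idea. The logistic shape, the 20x cap, the midpoint, and all the numbers are my assumptions; the article does not publish Interbrand’s actual curve.

```python
# Illustrative S-curve: a 0-100 brand-strength score maps to a profit
# multiple via a logistic function. Shape, cap, midpoint, and slope are
# all assumptions, not Interbrand's actual figures.
import math

def brand_multiple(strength, max_multiple=20.0, midpoint=50.0, slope=0.1):
    """Map a 0-100 brand-strength score to a profit multiple."""
    return max_multiple / (1.0 + math.exp(-slope * (strength - midpoint)))

brand_profit = 10.0  # sustainable brand-related profit, say in millions
for strength in (20, 50, 80):
    value = brand_profit * brand_multiple(strength)
    print(f"strength {strength}: multiple {brand_multiple(strength):4.1f}x, value {value:5.0f}")
```

The arbitrariness complaint is easy to see here: small changes to the curve’s slope or cap move the final valuation a lot.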

Like many papers, Murphy’s is better at outlining problems than solutions, but it is very useful in helping us better understand the history of brand valuation.

Read: J. Murphy et al. (Interbrand Ltd., Windsor) (1989) The Valuation of Brands, British Food Journal, 91, 2, pages 9-11

Capitalizing Spending On Intangibles

My final comment on Lev and Gu’s The End of Accounting discusses their idea of how to improve financial reporting. This is a bit more controversial to my mind but worth considering.

They argue that accounting uses too many estimates. As such, although the authors want to create more records of intangibles they argue against adding estimates onto the balance sheet for brand values and other intangibles. (This is because the fair values of brands and other intangibles are extremely hard to come up with.) Instead they suggest that objective values should be used.

“We don’t suggest to value intangibles by their current purchase or sale prices (fair values). Rather, in line with the treatment of these assets in the national income accounts, we propose to capitalize the investment in these intangibles, using their objective original costs.” (Lev and Gu, 2016, page 215)

The argument is that capitalizing spending gives an objective value: to capitalize, we use the records we have of what was spent on brand building. The advantage is that managers and their auditors can’t just pick their favourite valuation system. The downside is that the amount spent may bear little relationship to the value created.
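As a minimal sketch of what capitalizing at original cost looks like (the straight-line method and the five-year life are my assumptions, not a prescription from Lev and Gu):

```python
# Sketch of capitalizing brand spend at original cost and amortizing it
# straight-line. The five-year useful life is an assumption for
# illustration only.
def capitalized_balance(spend_by_year, life=5):
    """Year-end unamortized balance of capitalized spend (straight-line)."""
    balances = []
    for year in range(len(spend_by_year)):
        balance = sum(
            spend * max(life - (year - vintage + 1), 0) / life
            for vintage, spend in enumerate(spend_by_year[: year + 1])
        )
        balances.append(round(balance, 1))
    return balances

# Spend 100 on brand building in each of three years:
print(capitalized_balance([100, 100, 100]))  # [80.0, 140.0, 180.0]
```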

Maybe capitalizing investments in intangibles is not going to solve all the problems we see, but it may be acceptable to accountants, and that may make it worth thinking about.

Read: Baruch Lev and Feng Gu (2016) The End of Accounting and the Path Forward for Investors and Managers, Wiley

Who Has An Interest In Changing Accounting?

Baruch Lev and Feng Gu, accounting professors, ask a simple question: “Why are managers and auditors so blasé about accounting for intangibles?” (Lev and Gu, 2016, page 90). The way intangibles are accounted for clearly violates the concept of matching. This concept says we should charge expenses to the profit and loss statement as what is “bought” is used up, but we currently expense brand costs when the money is spent, not as the brand is used up. This is inconsistent with the matching principle that is central to much accounting theory.
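A toy illustration of the violation, with numbers of my own: the same 100 of brand building hits profit very differently under immediate expensing versus matching.

```python
# Toy illustration (my numbers) of the matching violation: 100 of brand
# building charged all at once versus matched to a five-year useful life.
spend = 100.0
life = 5

expensed_now = [-spend] + [0.0] * (life - 1)   # current treatment: all in year 1
matched = [-spend / life] * life               # matching: spread over the life

print("profit impact, expensed immediately:", expensed_now)
print("profit impact, matched over life:   ", matched)
```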

The classic defence of this violation of matching is that there is no problem-free solution. To be fair, this argument is true: however we account for intangibles, we are going to encounter problems. That said, Lev and Gu have a point when they say that accountants seem unwilling to even try to come up with a better way.

Lev and Gu argue that this lack of interest in even considering change exists because change isn’t in managers’ or auditors’ interests. Managers don’t want intangibles on the balance sheet because, once recorded, intangibles can be a stark reminder of mistakes when managers make them. If an asset is on the balance sheet, a manager needs to explain what happened if it is impaired (i.e., its value is written down). Nowadays spending on intangibles can be justified verbally by the manager as an investment when it is made, but no one sees a record of the investment. Because there is no record, if the value of the brand is frittered away no one needs to explain it. Similarly, auditors don’t want to add anything to the accounts that is hard to verify, as such values are likely to be tempting targets for lawsuits.

Lev and Gu seem to have a good argument to me. They argue the people who would benefit most from change are investors and they are being shortchanged in order to preserve a status quo that benefits managers and auditors.

Read: Baruch Lev and Feng Gu (2016) The End of Accounting and the Path Forward for Investors and Managers, Wiley

Financial Information and Stock Prices

For the next few weeks I’ll discuss Lev and Gu’s new book on the problems of financial accounting, The End of Accounting. This book sets out the case that financial accounting reports are becoming increasingly divorced from usefulness to investors. The authors argue that too much of value is omitted; financial accounts simply do not capture what makes a firm valuable. Counter-intuitively, Lev and Gu also suggest that, despite the accounts not capturing important things, there are too many estimates hidden in the accounting values. They are fans of disclosures and commentary rather than estimates.

One particularly useful task they take on is documenting the discrepancy between the market value of firms and the information captured in company financial accounts. The accounting information they examine is a) the firm value in the accounts (known as book value) and b) earnings.

They use regression to see how useful book value is at predicting the market value of U.S. public companies. It used to be decent in the 1950s, but its usefulness at predicting market value has declined precipitously since then. A similar story is told when one looks at the usefulness of reported earnings in predicting market value. As they say, “Come to think of it, this is not totally surprising. By the structure of accounting procedures, what affects the income statement also affects the balance sheet, and vice versa.” (Lev and Gu, 2016, page 35). Put simply, if you don’t record assets effectively on the balance sheet, you will charge the wrong amounts to the profit and loss statement, and earnings will also be off.
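The shape of the test they run is simple to sketch. Everything below is placeholder: random data of my own standing in for real firm-year observations, just to show the form of the regression (their finding is that R-squared on real data has fallen steeply since the 1950s).

```python
# Shape of the Lev and Gu test: regress market value on book value and
# earnings and read off R-squared. The data below is random placeholder.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(1)
n = 500
book_value = rng.lognormal(3.0, 1.0, n)                # stand-in balance sheet values
earnings = 0.1 * book_value + rng.normal(0.0, 5.0, n)  # stand-in reported earnings
market_value = 1.2 * book_value + 8.0 * earnings + rng.normal(0.0, 50.0, n)

X = np.column_stack([book_value, earnings])
r2 = LinearRegression().fit(X, market_value).score(X, market_value)
print(f"share of market value 'explained' (R-squared): {r2:.2f}")
```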

Lev and Gu do a great job of motivating the problem in financial accounting.

Read: Baruch Lev and Feng Gu (2016) The End of Accounting and the Path Forward for Investors and Managers, Wiley

The Flat Maximum And Data Science

Steven Finlay has a useful book on data science (Predictive Analytics, Data Mining and Big Data). He offers lots of helpful practical advice in an easy-to-access form. Beyond a general recommendation to read the book, I will highlight one point he makes: the existence of the flat maximum effect. According to Finlay, “The flat maximum effect states that for most problems there is not a single best model that is substantially better than all others.” (Finlay, 2014, page 105).

This means that although the benefits to be gained from using analytics may often be significant, they can diminish relatively quickly once you already have a model and are simply searching for a better one. While some models may be a little better than others, the flat maximum effect means that you often don’t need to be too obsessive about finding the perfect model. If one model isn’t necessarily the very best, but it is close and it works for your business, then you might choose the second-best model. One reason to choose a slightly sub-optimal model is that it may seem credible to general (i.e., non-data-scientist) managers and so is more likely to be implemented. This makes it infinitely better than a supposedly superior model that isn’t likely to be used. (You can always run the superior model and compare the results to make sure you aren’t sacrificing too much.)
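A quick sketch of the effect in practice (the dataset and the model pair are my choices, not Finlay’s): a simple logistic regression and a heavier gradient boosting model typically land within a whisker of each other on the same classification task.

```python
# Two very different models on the same task usually land close together,
# which is the flat maximum in action. Dataset and models are illustrative.
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)

for name, model in [("logistic regression", LogisticRegression(max_iter=1000)),
                    ("gradient boosting", GradientBoostingClassifier())]:
    auc = cross_val_score(model, X, y, cv=5, scoring="roc_auc").mean()
    print(f"{name}: mean AUC = {auc:.3f}")
```

When the gap is that small, implementability can reasonably decide the contest.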

The flat maximum effect helps explain why the perfect is often the enemy of the good. Why not accept good, when it may be quite near perfect and, unlike perfect, actually achievable?

Read: Steven Finlay (2014) Predictive Analytics, Data Mining and Big Data, Palgrave MacMillan