One of the strangest things in statistics is Simpson’s paradox. The paradox occurs when two sets of data each show the same result, but combining them into a single data set produces a different result.
Smith explains this using a click data example. In his data, a two-click format is more profitable than a one-click format when you look at the aggregate for the entire group. One might conclude that two-click is better as it performs best in aggregate. “This conclusion might be an expensive mistake” (Smith, 2014, page 112).
The problem is that when you dig into the data there are two groups: U.S. and international customers. Strangely, the one-click format is actually better for both groups of customers. What is going on?
The explanation is that, in the example, relatively more U.S. customers use the two-click format than the one-click format, and U.S. customers are much more profitable. The relatively high number of the more profitable type of customer in the two-click format makes that format look more profitable, but it isn’t the format doing the work; the format simply happens to have more of the profitable customers. If you compare like with like, you notice that one-click is simply better.
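The reversal is easy to see with numbers. The figures below are hypothetical (they are not Smith’s actual data), but they reproduce the pattern: one-click wins within each customer group, yet two-click wins in aggregate because it happens to attract far more of the profitable U.S. customers.

```python
# Hypothetical figures illustrating Simpson's paradox in the click example.
# format: {group: (number_of_customers, total_profit)}
data = {
    "one-click": {"US": (100, 1000), "International": (900, 1800)},
    "two-click": {"US": (900, 8100), "International": (100, 100)},
}

def profit_per_customer(fmt, group=None):
    """Average profit per customer, for one group or pooled across groups."""
    groups = [group] if group else list(data[fmt])
    n = sum(data[fmt][g][0] for g in groups)
    profit = sum(data[fmt][g][1] for g in groups)
    return profit / n

# Within each group, one-click is more profitable per customer...
for g in ["US", "International"]:
    assert profit_per_customer("one-click", g) > profit_per_customer("two-click", g)

# ...yet the pooled comparison reverses, because two-click has
# nine times as many of the high-profit US customers.
assert profit_per_customer("two-click") > profit_per_customer("one-click")
```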
As Smith says: “The Key to being alert to a possible Simpson’s Paradox is to think about whether a confounding factor has been ignored” (Smith, 2014, page 112).
Data can be strange but often very interesting.
Read: Gary Smith, 2014, Standard Deviations: Flawed Assumptions, Tortured Data and Other Ways to Lie With Statistics, The Overlook Press.
Gary Smith’s book on statistics, Standard Deviations, is a really useful and entertaining read. He points out a number of major problems with the way statistics are used. Some problems arise from deliberate tricks played by the researchers/managers describing the data. Others arise through carelessness: the researcher/manager using the data doesn’t realize they are abusing it. Over the next few weeks I’ll examine three problems that Smith highlights.
One of the problems Smith describes comes from the way that data is visualized. He makes a host of scathing and funny comments about data presentation. I liked his description of “The Secret Axis” (Smith, 2014, page 73), which is something I often see in graphs. (Technically something I don’t see, given it is a missing axis.)
Smith gives high-profile examples of abuse of data visualization. In 1982 Ronald Reagan presented his budget plan with no numbers on the Y-axis; the viewer couldn’t know the scale of what was being presented. A 9% difference in tax plans was represented as a 90% difference on the (unspecified) Y-axis. The quote from David Gergen, Reagan’s spokesman, is fantastic: “We tried it with numbers and found they were very hard to read on television so we took them off” (Smith, 2014, page 74). Let us be (very, very) generous and assume that Gergen made a mistake that just happened to make his boss look better.
The lesson is that we all need to be careful about the way we present data. We don’t want to leave anyone with a false impression because of our secret axis.
Conversely when confronted with a secret axis don’t accept it. A graph without a clear axis is merely a pretty picture and shouldn’t be treated seriously.
Read: Gary Smith, 2014, Standard Deviations: Flawed Assumptions, Tortured Data and Other Ways to Lie With Statistics, The Overlook Press.
Mark D. White has written an ominously titled book “The Manipulation of Choice: Ethics and Libertarian Paternalism”. He really doesn’t like the sort of Nudging proposed by Thaler and Sunstein. I think that he needs to chill out. He gets excited about minor philosophical issues while ignoring big social issues.
He says that no one can know anyone else’s “interests”. (This term is as slippery/poorly defined as it sounds.) Given this, governments, employers, and other libertarian paternalists shouldn’t try to help us make better decisions. This is a highly dogmatic position: he violently objects to even minor tweaks to the way choices are presented.
Telling a new employee “you will be enrolled in the 401k unless you say otherwise” is apparently a gross violation of freedom. He argues this because we don’t know whether the employee’s “interests” are to enroll. He is right that employers can’t know employees’ “interests” with certainty, but to be fair neither can the employee. Furthermore, young employees may have special problems understanding their longer-term interests. We have a massively hard problem. The simple practical solution is to set a default that seems helpful while letting the employee change the choice if he or she wishes.
One can, and should, argue about whether using a default will be the most effective solution. That said, I think we should at least experiment to improve choices, given that changing the default is a tiny tweak designed to alleviate pensioner poverty, which is a massive social ill.
Although White says his approach can appeal across the political spectrum, it seems to have a traditional libertarian underpinning. His “So What Should We Do Instead” (White 2013, page 137) basically says do nothing because government should be as small as possible. He then suggests that we should worry about “Holding People Accountable For Their Choices” (White 2013, page 145). I agree accountability is often good and may sometimes encourage learning. Unfortunately, life is only lived once. If you didn’t save for retirement you don’t get a do-over after you have learned that pensioner poverty is something to avoid. If smoking kills you, you can’t resolve to make a different choice next time.
Overall White’s approach only works if you are willing to ditch all social welfare programs. If collectively we aren’t willing to abandon the sick and old — even if they have made bad decisions that contributed to their problems — then we need a better response than telling people that their suffering is preferable to a minor tweak to a default.
One should always worry that a nudge may be too intrusive given the value of what it could achieve. Some nudges will undoubtedly be ineffective or even counter-productive. Still I want to be nudged to help me make better choices and don’t feel that choosing to put healthy foods at eye-level is a fundamental attack on my freedom.
Read: Mark D. White (2013) The Manipulation of Choice: Ethics and Libertarian Paternalism, Palgrave Macmillan
Peter Fader and Bruce Hardie are experts in understanding the value of customer relationships. They have offered advice on problems with CLV calculations, especially those taught in MBA programs.
They outline five issues. Many of these are things we might already know but can forget to clarify. For instance, they note that typically CLV is a projection, so it isn’t a true value but an expected value. The way to cover this point is to write E(CLV), i.e. the expected value of CLV, rather than CLV. I suspect their idea won’t stick, but they are technically correct. In a similar vein, Fader and Hardie note that we should include the initial period’s margin for the calculation to cover a full lifetime.
They also discuss the problem of using a small number of periods, e.g. three years, to calculate a life. This omits the value of the relationship beyond the initial periods, so it isn’t a lifetime value. The simplest solution is to use a larger number of periods, which approximates a lifetime more closely than a smaller number does.
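The truncation point is easy to demonstrate with the standard textbook constant-retention formula (the very simplification Fader and Hardie go on to critique). This is a sketch, not their recommended method, with margin m per period, retention rate r, discount rate d, and the initial period’s margin included; all figures are hypothetical.

```python
# Textbook constant-retention E(CLV): sum of m * r^t / (1+d)^t for t = 0, 1, 2, ...
# With an infinite horizon this geometric series has the closed form m(1+d)/(1+d-r).

def expected_clv(m, r, d, periods=None):
    """E(CLV) under constant retention; periods=None means an infinite horizon."""
    if periods is None:
        return m * (1 + d) / (1 + d - r)           # closed-form infinite sum
    return sum(m * r**t / (1 + d)**t for t in range(periods))

m, r, d = 100.0, 0.8, 0.1
three_year = expected_clv(m, r, d, periods=3)      # truncated "lifetime" (~226)
lifetime   = expected_clv(m, r, d)                 # infinite-horizon value (~367)
# The three-year figure misses a large share of the relationship's value.
assert three_year < lifetime
```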
The authors don’t like assuming a constant retention rate, as this is typically not the case in the real world. Furthermore, they correctly note that many businesses do not observe the time the customer is lost. A retailer never really knows you have ceased to be a customer; the retailer just doesn’t see you for a while and guesses whether you are ever coming back. As such, they don’t like the idea of trying to shoehorn a messy customer relationship into a nice neat formula.
“The bottom line is that there is not “one formula” that can be used to compute customer lifetime value” (Fader and Hardie, 2014, page 4). It is a good point. I’m nervous to fully endorse it, not because I disagree – it is true that the world is complex and no one formula does everything. The worry I have is that we already have loads of CLV variants; I’m hoping to get rid of some of these as just plain wrong. I don’t want to encourage people to come up with their own versions, which advice that no formula is perfect might do. That said, Fader and Hardie make a lot of great points and their advice is always worth noting.
Read: Peter Fader and Bruce Hardie (2014) What’s Wrong With This CLV Formula?, <http://brucehardie.com/notes/033/>
Thomas Davenport is one of the best-known voices in the field of business analytics. He has a book with Jinho Kim which discusses how individual business people can best manage their work in a world where analytics are a key part of many business strategies. The aim of the book is to enable managers and others engaging with “quants” to make the best use of the quants’ talents: to help create quantitative analysis that is rigorous but also connected to business needs.
They map out their approach to quantitative analysis in three stages. (These encompass a total of six steps; I’ll concentrate on the stages to avoid confusion between steps and stages.) The first stage is Framing the Problem, which includes problem recognition – why are we trying to do something? — and a review of previous findings – what do we already know?
The next stage is Solving the Problem – the actual analysis. They are keen to emphasize that you shouldn’t jump straight into trying to solve the problem. If you don’t know why you are doing the analysis, or what you already know, any analysis is likely to be highly ineffectual. The actual analysis includes choosing the model, collecting the data, and analyzing the data using the chosen model. One of the strengths of the book is the numerous examples it gives to help the reader understand what is going on at each stage.
I was particularly pleased about the emphasis the authors placed on the final stage. They say, and I agree, that Communicating and Acting on the Results is just as important as the analysis. It isn’t enough to do some analysis; the analysis has to change an action to be worthwhile from a business perspective. Results presentation and generating action are key to success. Analysis without proper communication of the findings is a waste of time — it is great to see such major figures in business analytics emphasizing this point.
Read: Thomas H. Davenport and Jinho Kim (2013) Keeping Up with the Quants: Your Guide to Understanding and Using Analytics, Harvard Business Review Press
I recently published a short piece for WARC Best Practice on “How to set marketing metrics effectively”. The basic idea behind the piece is an explanation of how to decide what marketing metrics to use. The work introduces a new acronym, the WAITA model. Hopefully this is easy to remember: the WAITA model, just like a server in a restaurant, helps you decide what you need.
The WAITA model covers five things to think about:
1) Who the metric is designed to help. Specifically, who is the person making a decision that the metric will be reported to? For example, a junior marketer will probably need the metric at a much more granular level than a CMO.
2) The assumptions behind the metric. To use a metric effectively you must know what it is telling you; without knowing the assumptions you really can’t know what it means.
3) The ingredients, which include the sources of data. Many a problem is uncovered by looking at where the data is coming from. The ingredients also include the formula. “One should always expect to see the formula for a given metric being discussed. If the formula is not clearly documented ask for it. If the presenter can’t quickly access the formula or shows signs of not understanding the formula any recommendations cannot be well supported.” (Bendle, 2016).
4) The theory behind the metric. A number without any theory about what it means tells you very little. We generally are looking for causal links — this metric is going up because of something else, e.g. good performance. Is that theory reasonable?
5) Finally, what actions can I take once I know the metric? A metric that doesn’t influence an action is a bit of a waste of everyone’s time.
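The ingredients point — always see the formula — can be made concrete. As an illustration only (this metric and the figures are my own hypothetical example, not from the WARC piece), here is a common marketing metric with its formula and its key assumption stated explicitly:

```python
# ROMI (return on marketing investment), used purely as an illustration
# of making a metric's formula and assumptions explicit.

def romi(incremental_margin, marketing_spend):
    """ROMI = (incremental margin attributable to marketing - spend) / spend."""
    return (incremental_margin - marketing_spend) / marketing_spend

# Assumption baked into the metric: the $180k margin really is incremental,
# i.e. caused by the campaign rather than by baseline sales.
assert romi(180_000, 100_000) == 0.8   # an 80% return on the campaign spend
```

Seeing the formula immediately prompts the right questions: how was the incremental margin estimated, and who decided what counts as spend?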
Hopefully the WAITA model will help people choose better metrics.
Read: Neil Bendle (2016) How to set marketing metrics effectively, WARC Best Practice, https://www.warc.com/Content/News/Setting_effective_marketing_metrics.content?ID=bb024f5d-b4ff-495d-a2a4-fd6dbe0a2c54&q=
It is hard to spend any time at a business school without hearing the phrase best practice. We teach students best practice, junior professors seek hints from senior folk on best practice, even schools regularly go through bouts of benchmarking to see if they are adopting best practice. Best practice essentially is something that gets the most out of what the school has, e.g., financial resources, time, students, staff.
One problem is that, if we are trying to do different things, my best resource allocation is likely to be different from yours. Thus, benchmarking may merely report on strategic decisions, for example to invest in staff rather than the building. In such cases benchmarking whether the building is best in class seems a bit pointless. The building is worse than some others because a strategic decision was made to allow it to be.
Another problem with a lot of benchmarking is that the idea of best practice is a pretty nebulous one even after correcting for the school’s strategic decisions. The target, best practice, is constantly moving as competitors take action. Indeed, whenever you take action yourself you change the makeup of the market, so the prior best practice you were aiming to emulate may no longer be best practice in the new market you have just helped create.
David Stewart and Robert Winsor (2016) make this point in considering the changing nature of marketing. They discuss the Red Queen hypothesis, which is based upon the Red Queen in Lewis Carroll’s Through the Looking-Glass, who says that in order to stay still (compared to everything else) you must constantly keep moving. According to Stewart and Winsor: “… the Red Queen hypothesis condemns any given marketing activity or strategy to a limited life span over which it can be effectively leveraged.” (Stewart and Winsor, 2016, page 255). They tie this thinking back to best practice, saying “…it is largely unrealistic to seek standardised solutions or “best practice” in marketing strategy. In marketing, definitions of competitive success must be bounded within highly unique contexts, and are typically based upon transitory, evolving and relative strategies.” (Stewart and Winsor, 2016, page 255).
The basic message is that you should keep trying to improve but will never have the comfort of achieving best practice.
Read: David W. Stewart and Robert D. Winsor, 2016, Marketing Organization and Accountability, In Accountable Marketing: Linking Marketing Actions to Financial Performance, Edited by David W. Stewart and Craig T. Gugel, Routledge, MASB
Roger Sinclair shows the value of experience when he surveys the history of reporting on brands. He describes a time when firms would add brand values to their balance sheets. The challenge was that there was no recognized method for doing this, so firms just used whatever method they fancied. That lack of agreement on method meant it seemed like firms were just trying to mislead investors. This led to the issuance of “a “cease and desist” order” (Sinclair, 2016, page 168) in the UK, and other countries followed suit.
Sinclair clearly has sympathy with the ultimate goal of those who were told to cease and desist. He, correctly, notes that market value has become increasingly detached from the values in financial accounting statements. In essence, it is hard to see why anyone would pay too much attention to the financial statements given a lot of the more important details about the firm’s assets aren’t entered onto them.
He also points out the obvious problem at the moment that although brands mostly aren’t classed as assets, sometimes they are — and this has nothing to do with their value. A brand is added to a balance sheet when the brand is acquired. Thus, an acquired brand is recorded but a brand of exactly the same value that was internally generated is not. My experience suggests accountants see the problem with this weird double standard — many just think allowing internally generated brand values to be recorded will be a worse problem.
Even when brand values are recorded they tend only to be adjusted down; if values increase they stay the same on the balance sheet. Sinclair suggests that there is a perfectly good accounting procedure for increasing the value of brands when they go up: accretion (“The term accretion is the opposite of impairment”, Sinclair, 2016, page 174). Again, accountants don’t tend to like increasing values, but not recording the value of strengthening brands does mean that the recorded brand values can become increasingly meaningless over time.
The sides are quite a way apart on balance sheet recognition but it is a great debate to have.
Read: Roger Sinclair, 2016, Reporting on Brands, In Accountable Marketing: Linking Marketing Actions to Financial Performance, Edited by David W. Stewart and Craig T. Gugel, Routledge, MASB
Marc Fischer explains common methods of brand valuation. One of the problems he highlights is that there are so many methods; different companies have their own proprietary valuation systems. He groups these into three main approaches: a cost-based approach, a market-based approach, and an income/DCF-based approach.
A cost-based approach determines the value of a brand according to its inputs: the more a brand costs to create, the more valuable it is. “While cost-based measures are attractive due to the objective and easy collection of data, they are heavily criticized for their theoretical weakness.” (Fischer, 2016, page 187). The problem is that there is little reason to think that what it cost to create a brand is a good proxy for its value.
A market-based approach suggests that what people will pay is the value of the brand. “Unfortunately, there does not exist a liquid market of brand transactions” (Fischer, 2016, page 187). In essence the market-based approach is theoretically better but very hard to deploy as we don’t really have many market values to use for comparison.
An income/DCF-based approach values a brand based upon a projected stream of cash flows that arise because of the brand. Again, this is great in theory but challenging in practice. Even if you can accurately project the future cash flows of a firm, how do you decide what percentage of the cash flows arise only because of the brand? “Major concerns exist about subjectivity and uncertainty”. (Fischer, 2016, page 187).
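The mechanics of the income/DCF-based approach can be sketched in a few lines. To be clear, this is my own minimal illustration of the general technique, not Fischer’s valuation system: the projected cash flows, the brand’s share of them, and the discount rate below are all hypothetical, and in practice each is a major source of the subjectivity and uncertainty he flags.

```python
# Minimal income/DCF sketch: project firm cash flows, attribute a share
# to the brand, and discount that stream back to a present value.

def brand_value_dcf(cashflows, brand_share, discount_rate):
    """Present value of the brand-attributable portion of projected cash flows."""
    return sum(
        cf * brand_share / (1 + discount_rate) ** (t + 1)
        for t, cf in enumerate(cashflows)
    )

projected = [120.0, 130.0, 140.0, 145.0, 150.0]   # firm cash flows ($m), years 1-5
value = brand_value_dcf(projected, brand_share=0.25, discount_rate=0.10)
```

Every input is a judgment call: the brand_share figure in particular is the crux of the method, since there is no objective way to say what fraction of a cash flow exists “only because of the brand”.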
After outlining the problem Fischer offers his own valuation system. No system is perfect but with so many people working on valuations let us hope that they continue to improve.
Read: Marc Fischer, 2016, Brand Valuation in Accordance with GAAP and Legal Requirements, In Accountable Marketing: Linking Marketing Actions to Financial Performance, Edited by David W. Stewart and Craig T. Gugel, Routledge, MASB
Jim Meier (an executive at MillerCoors) is an expert on getting marketing and finance to work together. He has written a fascinating chapter on the problems of doing this. The portrayal of marketers through the eyes of finance people is amusing, if sadly true. Marketing is seen as being “…fraught with subjectivity, murkiness, and fluffiness” (Meier, 2016, page 152). Meier worries that what marketers measure is often quite divorced from financial outcomes. Even if the measures used are useful, “the trail goes cold before it reaches a true financial destination” (Meier, 2016, page 154). In return, finance people are seen as having a “lack of understanding of what truly matters” (Meier, 2016, page 153).
After outlining the problems, Meier explains what actions were taken at MillerCoors. One idea was seeding finance people throughout the organization, giving these distributed finance staff, “mini-CFOs”, a chance to better understand what will help other disciplines perform their roles. MillerCoors is even examining the possibility of valuing brands periodically to better highlight the effect of decisions on these hard-to-measure intangible assets. These assets are crucial to the success of a firm like MillerCoors but can be missed if one concentrates only on numbers that get reported in company financial accounts.
To be clear, it isn’t just finance people who need to change. For marketers to understand and influence finance decisions it “does necessitate that the organization take steps to “financialize the marketers” but not to an extreme in which they are converted into de facto accountants.” (Meier, 2016, page 163). The point is a good one: to influence finance decisions it is not enough to plead for finance people to understand marketing; marketers need to try to understand finance. It is a tough challenge for the discipline but one that I think/hope will be very worthwhile.
Read: James Meier, 2016, Creating a Partnership Between Marketing and Finance, In Accountable Marketing: Linking Marketing Actions to Financial Performance, Edited by David W. Stewart and Craig T. Gugel, Routledge, MASB