Marketing Thought
Clarifying management theory for students, academics and practitioners.

Capitalizing Spending On Intangibles

My final comment on Lev and Gu’s The End of Accounting discusses their idea for improving financial reporting. This is a bit more controversial to my mind but worth considering.

They argue that accounting already uses too many estimates. As such, although the authors want more records of intangibles, they argue against adding estimates of brand values and other intangibles to the balance sheet. (This is because the fair values of brands and other intangibles are extremely hard to come by.) Instead they suggest that objective values should be used.

“We don’t suggest to value intangibles by their current purchase or sale prices (fair values). Rather, in line with the treatment of these assets in the national income accounts, we propose to capitalize the investment in these intangibles, using their objective original costs.” (Lev and Gu, 2016, page 215)

The argument is that capitalizing spending gives an objective value — to capitalize we use the records we have of what was spent on brand building. The advantage is that managers and their auditors can’t just pick their favourite valuation system. The downside is that the amount spent may bear little relationship to the value created.
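To make the mechanics concrete, here is a minimal sketch of what capitalizing spending at original cost looks like, assuming a straight-line write-off. The five-year useful life and the spending figures are invented for illustration, not taken from the book.

```python
# Hypothetical illustration: capitalize brand-building spend at its original
# cost and amortize it straight-line over an assumed useful life, instead of
# expensing it all in the year it is spent.

def capitalize_and_amortize(spend_by_year, useful_life=5):
    """Return the yearly amortization charges from capitalized spending."""
    charges = {}
    for year, spend in spend_by_year.items():
        annual = spend / useful_life  # straight-line charge per year
        for t in range(year, year + useful_life):
            charges[t] = charges.get(t, 0.0) + annual
    return charges

# Original cost of brand building, by year (invented figures).
brand_spend = {2020: 100.0, 2021: 120.0}
print(capitalize_and_amortize(brand_spend))
```

The point of the sketch is that every number comes from a record of what was actually spent; no one has to estimate what the brand is "worth".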

Maybe capitalizing investments in intangibles is not going to solve all the problems we see, but it may be acceptable to accountants, which makes it worth thinking about.

Read: Baruch Lev and Feng Gu (2016) The End of Accounting and the Path Forward for Investors and Managers, Wiley

Who Has An Interest In Changing Accounting

Baruch Lev and Feng Gu, accounting professors, ask a simple question. “Why are managers and auditors so blasé about accounting for intangibles?” (Lev and Gu, 2016, page 90). The way intangibles are accounted for clearly violates the concept of matching. This concept says we should charge expenses to the profit and loss statement when what is “bought” is used up, but we currently expense brand costs when they are spent, not when the brand is used up. This is inconsistent with matching, a concept that is central to much accounting theory.

The classic defence of this violation of matching is that there is no problem-free solution. To be fair, this argument is true; however we account for intangibles, we will encounter problems. That said, Lev and Gu have a point when they say that accountants seem unwilling to even try to come up with a better way.

Lev and Gu argue that this lack of interest in even considering change exists because change isn’t in managers’ or auditors’ interests. Managers don’t want intangibles on the balance sheet because, if they are recorded, intangibles can be a stark reminder of mistakes when managers make them. If you have an asset on the balance sheet, a manager needs to explain what happened if it is impaired (i.e., its value is written down). Nowadays spending on intangibles can be justified verbally by the manager as an investment when it is made, but no one sees a record of the investment. Because there is no record, if the value of the brand is frittered away no one needs to explain it. Similarly, auditors don’t want to add anything to the accounts that is hard to point to, as such values are likely to be tempting targets for lawsuits.

Lev and Gu seem to have a good argument to me. They argue the people who would benefit most from change are investors and they are being shortchanged in order to preserve a status quo that benefits managers and auditors.

Read: Baruch Lev and Feng Gu (2016) The End of Accounting and the Path Forward for Investors and Managers, Wiley

Financial Information and Stock Prices

For the next few weeks I’ll discuss Lev and Gu’s new book on the problems of financial accounting, The End of Accounting. This book sets out the case that financial accounting reports are getting increasingly divorced from usefulness to investors. The authors argue that too much of value is omitted — financial accounts simply do not capture what makes a firm valuable. Counter-intuitively Lev and Gu also suggest that, despite the accounts not capturing important things, there are also too many estimates hidden in the accounting values. They are fans of disclosures and commentary rather than estimates.

One particularly useful task they take on is documenting the discrepancy between the market value of firms and the information captured in company financial accounts. The accounting information they examine is a) the firm value in the accounts (known as book value) and b) earnings.

They use regression to see how useful book value is at predicting the market value of U.S. public companies. It used to be decent in the 1950s, but its usefulness at predicting market value has declined precipitously since then. A similar story emerges when one looks at the usefulness of reported earnings in predicting market value. As they say, “Come to think of it, this is not totally surprising. By the structure of accounting procedures, what affects the income statement also affects the balance sheet, and vice versa.” (Lev and Gu, 2016, page 35). Put simply, if you don’t record assets effectively on the balance sheet, you will charge the wrong amount to the profit and loss and the earnings will also be off.
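To illustrate the kind of exercise this is, here is a toy sketch of regressing market value on book value and reading off R² as the measure of usefulness. The numbers are invented and do not come from Lev and Gu’s data.

```python
# Toy sketch: R^2 from a simple linear regression of market value on book
# value. High R^2 means book value tracks market value well; low R^2 means
# it tells investors little. All figures below are invented.

def r_squared(x, y):
    """R^2 for a simple (one-predictor) linear regression of y on x."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return (sxy * sxy) / (sxx * syy)

# Book values closely tracking market values: high R^2 (the 1950s story).
print(r_squared([10, 20, 30, 40], [12, 21, 33, 39]))
# Market values dominated by unrecorded intangibles: low R^2 (the story now).
print(r_squared([10, 20, 30, 40], [50, 15, 90, 30]))
```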

Lev and Gu do a great job of motivating the problem in financial accounting.

Read: Baruch Lev and Feng Gu (2016) The End of Accounting and the Path Forward for Investors and Managers, Wiley

The Flat Maximum And Data Science

Steven Finlay has a useful book on data science, Predictive Analytics, Data Mining and Big Data. It contains lots of helpful practical advice in an easy-to-access form. Beyond a general recommendation to read the book, I will highlight a point he makes — namely the existence of the flat maximum effect. According to Finlay, “The flat maximum effect states that for most problems there is not a single best model that is substantially better than all others.” (Finlay, 2014, page 105).

This means that although the benefits of using analytics may often be significant, they can diminish relatively quickly once you already have a model and are simply searching for a better one. While some models may be a little better than others, the flat maximum effect means you often don’t need to be too obsessive about finding the perfect model. If one model isn’t necessarily the very best, but it is close and it works for your business, then you might choose the second-best model. One reason to choose a slightly sub-optimal model is that it may seem credible to general, i.e. non-data-scientist, managers and so is more likely to be implemented. This makes it infinitely better than a supposedly superior model that isn’t likely to be used. (You can always run the superior model and compare the results to make sure you aren’t sacrificing too much.)
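A toy sketch of the effect, with invented data and invented models: a "fancy" weighted score and a simple one-variable cutoff classify the same customers almost equally well, which is exactly the flat maximum at work.

```python
# Flat maximum illustration: a two-variable weighted score versus a
# one-variable rule anyone can explain. All numbers are invented.

data = [  # (income, age, defaulted)
    (20, 25, 1), (25, 30, 1), (30, 22, 1), (35, 40, 0), (40, 35, 0),
    (45, 50, 0), (22, 45, 1), (50, 28, 0), (28, 33, 1), (60, 55, 0),
    (32, 60, 0),
]

def fancy_model(income, age):
    """Predict default from a weighted score of two variables."""
    return 1 if 0.7 * income + 0.3 * age < 35 else 0

def simple_rule(income, age):
    """Predict default from a single income cutoff."""
    return 1 if income < 33 else 0

def accuracy(model):
    return sum(model(i, a) == d for i, a, d in data) / len(data)

print(accuracy(fancy_model), accuracy(simple_rule))
```

On this toy data the fancy model edges out the simple rule by a single case; if the simple rule is the one managers trust and actually use, that tiny gap is a price worth paying.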

The flat maximum effect helps explain why the perfect is often the enemy of the good. Why not accept good as it might be really quite near perfect and be actually achievable?

Read: Steven Finlay (2014) Predictive Analytics, Data Mining and Big Data, Palgrave MacMillan

Behavioural Economics And Policy In Canada

One of the most interesting things about behavioural economics is that it is quite practical. Insights can be applied, often directly, to issues in the public sphere. Furthermore, many of the ideas generated in behavioural economics are simple tweaks — tweaks can be very cheap, which often makes the ideas popular with politicians. After all, if you can improve the effectiveness of public policy without taxing people more, why wouldn’t you?

In discussing the work by Canadian governmental agencies, French and Oreopoulos suggest that interventions can be divided into “low-touch” and “high-touch” nudges. As they say, “…low-touch nudges are often cheaper to implement and focus on making decisions more salient and simpler for the individual” (French and Oreopoulos, 2016). There seems little reason not to do low-touch nudges, such as redesigning forms to make them easier to fill in. They are often close to costless and so any benefits gained are a bonus.

High-touch nudges are more elaborate; the authors give the example of motivational interviewing. This is an approach to help jobseekers transition into work. Such interventions are a little beyond what we might normally think of as nudges, but governments often already provide such support. The key difference in using behavioural insights to inform policy is that what works, given the way people actually behave, is considered and the results tested. These high-touch nudges are likely to be expensive so people may have different views about whether these represent good things for public agencies to do. That said, when interventions are being conducted hopefully public agencies will try to help people as effectively as they can for the money invested.

I think the use of behavioural insights to inform the actions of public agencies is one of the most exciting trends of recent years. It is interesting to see how French and Oreopoulos detail what Canadian agencies are up to.

Read: Robert French and Philip Oreopoulos (2016) Applying Behavioural Economics to Public Policy in Canada, NBER Working Paper 22671

Data Without Theory

My final note on Gary Smith’s impressive Standard Deviations book concerns an important point that statistically inclined people often seem to miss. He is keen to note that data isn’t enough on its own. “Data without theory… is treacherous” (Smith, 2014, page 233).

Smith describes a case where a cholera outbreak was statistically associated with people not having left their villages a few days before. If this association was thought useful, one might monitor movement between villages and conclude that we can predict cholera from it. That is a lot of work when a simple piece of “theory” — by which I mean thought about causes — helps us work out what is happening without massive amounts of number crunching. What is happening is pretty simple. Floods come and people stop leaving their villages; then cholera arrives, borne by the flood water. We can predict cholera from easily observed floods; we don’t need to capture movement data.

Thinking — developing a causal theory — allows us to use the data much more effectively. Theory without data is a problem which can afflict academics — we can get divorced from reality.

Data without theory is also a problem — you can end up believing and doing some pretty silly things. As Smith says, it can fuel bubbles — we don’t know why the price is so high, but it keeps going up so we assume it will continue to do so. Always aim to come up with an at least plausible theory of what is causing whatever you observe in the data before putting too much faith in the result.

Read: Gary Smith, 2014, Standard Deviations: Flawed Assumptions, Tortured Data and Other Ways to Lie With Statistics, The Overlook Press.

Simpson’s Paradox: Data can be very confusing

One of the strangest things in statistics is Simpson’s paradox. The paradox happens when two sets of data each show the same result, but when you combine them into a single data set the combined table gives you a different result.

Smith explains this using a click-data example. In the data he shows, the two-click format is more profitable than the one-click format when you look at the aggregate. One might conclude that two-click is better as it performs best in aggregate. “This conclusion might be an expensive mistake” (Smith, 2014, page 112).

The problem is that when you dig into the data there are two groups, U.S. and international customers. Strangely, the one-click format is actually better for both groups of customers. What is going on?

The explanation is that in the example relatively more US customers use the two-click format than the one-click format, and US customers are much more profitable. The relatively high number of the more profitable type of customer in the two-click group makes that format look more profitable, but it is not the format that is better; it is that the format happens to have more of the profitable customers. If you compare like with like, you notice that one-click is simply better.
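Here is a numeric sketch in the spirit of Smith’s example, with invented figures: one-click beats two-click in both segments, yet two-click looks better in aggregate because it happens to serve far more of the profitable US customers.

```python
# Simpson's paradox in miniature. For each format and segment we record
# (number of customers, average profit per customer). Invented figures.

data = {
    "one_click": {"US": (10, 10.0), "Intl": (90, 2.0)},
    "two_click": {"US": (90, 9.0), "Intl": (10, 1.0)},
}

def aggregate_avg(fmt):
    """Average profit per customer across both segments for a format."""
    total = sum(n * p for n, p in data[fmt].values())
    customers = sum(n for n, _ in data[fmt].values())
    return total / customers

for fmt in data:
    print(fmt, aggregate_avg(fmt))  # two_click wins in aggregate...

for group in ("US", "Intl"):        # ...but one_click wins in each segment
    print(group, data["one_click"][group][1], data["two_click"][group][1])
```

The aggregate comparison is confounded by the customer mix; comparing within each segment removes the confound and reverses the conclusion.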

As Smith says: “The Key to being alert to a possible Simpson’s Paradox is to think about whether a confounding factor has been ignored” (Smith, 2014, page 112).

Data can be strange but often very interesting.

Read: Gary Smith, 2014, Standard Deviations: Flawed Assumptions, Tortured Data and Other Ways to Lie With Statistics, The Overlook Press.

The Secret Axis

Gary Smith’s advice on statistics, Standard Deviations, is a really useful and entertaining book. He points out a number of major problems with the way stats are used. Some problems arise from deliberate tricks played by researchers/managers describing data. Other problems arise through carelessness; the researcher/manager using the data doesn’t realize they are abusing the data. Over the next few weeks I’ll examine three problems that Smith highlights.

One of the problems Smith describes comes from the way that data is visualized. He makes a host of scathing and funny comments about data presentation. I liked his description of “The Secret Axis” (Smith, 2014, page 73) which is something I often see in graphs. (Technically something I don’t see given it is a missing axis).

Smith gives high-profile examples of the abuse of data visualization. In 1982 Ronald Reagan presented his budget plan with no numbers on the Y-axis; the viewer couldn’t know the scale of what was being presented. A 9% difference in tax plans was represented as a 90% difference on the (unspecified) Y-axis. The quote from David Gergen, Reagan’s spokesman, is fantastic: “We tried it with numbers and found they were very hard to read on television so we took them off” (Smith, 2014, page 74). Let us be (very, very) generous and assume that Gergen made a mistake that just happened to make his boss look better.

The lesson is that we all need to be careful about the way we present data. We don’t want to leave anyone with a false impression because of our secret axis.

Conversely when confronted with a secret axis don’t accept it. A graph without a clear axis is merely a pretty picture and shouldn’t be treated seriously.

Read: Gary Smith, 2014, Standard Deviations: Flawed Assumptions, Tortured Data and Other Ways to Lie With Statistics, The Overlook Press.

Nudging: Calm down it really isn’t the end of freedom

Mark D. White has written an ominously titled book, The Manipulation of Choice: Ethics and Libertarian Paternalism. He really doesn’t like the sort of nudging proposed by Thaler and Sunstein. I think he needs to chill out: he gets excited about minor philosophical issues while ignoring big social issues.

He says that no one can know anyone else’s “interests”. (This term is as slippery/poorly defined as it sounds.) Given this, governments, employers, and other libertarian paternalists shouldn’t try to help us make better decisions. This is a highly dogmatic position: he violently objects to even minor tweaks to the way choices are presented.

Telling a new employee “you will be enrolled in the 401k unless you say otherwise” is apparently a gross violation of freedom. He argues this because we don’t know whether the employee’s “interests” are to enroll. He is right that employers can’t know employees’ “interests” with certainty, but, to be fair, neither can the employee. Furthermore, young employees may have special problems understanding their longer-term interests. We have a massively hard problem. The simple practical solution is to set a default that seems helpful while letting the employee change the choice if he or she wishes.

One can, and should, argue about whether using a default will be the most effective solution. That said, I think we should at least experiment to improve choices given changing the default is a tiny tweak designed to alleviate pensioner poverty which is a massive social ill.

Although White says his approach can appeal across the political spectrum, it seems to have a traditional libertarian underpinning. His “So What Should We Do Instead” (White, 2013, page 137) basically says do nothing, because government should be as small as possible. He then suggests that we should worry about “Holding People Accountable For Their Choices” (White, 2013, page 145). I agree accountability is often good and may sometimes encourage learning. Unfortunately, life is only lived once. If you didn’t save for retirement, you don’t get a do-over after you have learned that pensioner poverty is something to avoid. If smoking kills you, you can’t resolve to make a different choice next time.

Overall White’s approach only works if you are willing to ditch all social welfare programs. If collectively we aren’t willing to abandon the sick and old — even if they have made bad decisions that contributed to their problems — then we need a better response than telling people that their suffering is preferable to a minor tweak to a default.

One should always worry that a nudge may be too intrusive given the value of what it could achieve. Some nudges will undoubtedly be ineffective or even counter-productive. Still I want to be nudged to help me make better choices and don’t feel that choosing to put healthy foods at eye-level is a fundamental attack on my freedom.

Read: Mark D. White (2013) The Manipulation of Choice: Ethics and Libertarian Paternalism, Palgrave Macmillan

What’s Wrong With This CLV Formula?

Peter Fader and Bruce Hardie are experts in understanding the value of customer relationships. They have offered advice on problems with CLV calculations, especially those taught in MBA programs.

They outline five issues. Many of these are things we might already know but can forget to clarify. For instance, they note that typically CLV is a projection, so it isn’t a true value but an expected value. The way to cover this point is to write E(CLV), i.e. the expected value of CLV, rather than CLV. I suspect their idea won’t stick, but they are technically correct. In a similar vein, Fader and Hardie note that we should include the initial period’s margin to make it a full lifetime.

They also discuss the problem of a small number of periods, e.g. three years, being used to calculate a life. This omits the value of the relationship beyond the initial periods, so it isn’t a lifetime value. The simplest solution is to use a larger number of periods, which approximates a lifetime more closely.
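As a sketch, here is the textbook constant-retention CLV formula (a formula Fader and Hardie themselves criticize, as the next paragraph notes), used only to show how truncating at a few periods understates a lifetime. The margin, retention, and discount figures are invented.

```python
# Textbook constant-retention CLV: E(CLV) = sum over t of
# margin * retention^t / (1 + discount)^t, counting the initial period
# (t = 0) as Fader and Hardie suggest. Illustration only.

def expected_clv(margin, retention, discount, periods):
    """E(CLV) over a finite horizon of `periods` periods."""
    return sum(margin * retention**t / (1 + discount)**t
               for t in range(periods))

m, r, d = 100.0, 0.8, 0.1  # invented margin, retention, discount rate
print(expected_clv(m, r, d, 3))    # a three-year cut-off
print(expected_clv(m, r, d, 30))   # much closer to a true lifetime
print(m / (1 - r / (1 + d)))       # infinite-horizon limit for comparison
```

With these figures the three-period number misses a large slice of the value that a longer horizon captures, which is exactly the truncation problem.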

The authors don’t like assuming a constant retention rate, as this is typically not the case in the real world. Furthermore, they correctly note that many businesses do not observe the time the customer is lost. A retailer never really knows you have ceased to be a customer; the retailer just doesn’t see you for a while and guesses whether you are ever coming back. As such, they don’t like the idea of trying to shoehorn a messy customer relationship into a nice neat formula.

“The bottom line is that there is not “one formula” that can be used to compute customer lifetime value” (Fader and Hardie, 2014, page 4). It is a good point. I’m nervous to fully endorse it, not because I disagree – it is true that the world is complex and no one formula does everything. The worry I have is that we already have loads of CLV variants; I’m hoping to get rid of some of these as just plain wrong. I don’t want to encourage people to come up with their own versions, which advice that nothing is perfect might do. That said, Fader and Hardie make a lot of great points and their advice is always worth noting.

Read: Peter Fader and Bruce Hardie (2014) What’s Wrong With This CLV Formula?, <>