This week I have a second (and last) post on Agrawal, Gans, and Goldfarb's Prediction Machines. This interesting book discusses the difference between machine learning and traditional statistics. The idea is that machine learning is more functional: more concerned with a useful result than a precisely accurate one. The challenge is that machine learning predicts, not explains.
Useful As Opposed To Accurately Explaining
As the authors say:
“Machine learning science had different goals from statistics. Whereas statistics emphasized being correct on average, machine learning did not require that. Instead, the goal was operational effectiveness. Predictions could have biases as long as they were better.”
Agrawal, Gans, and Goldfarb, 2018, page 40
Note that it is quite possible for a biased algorithm to be more useful than a perfectly accurate one if the bias isn’t central to the system. Sometimes in life it is better to be biased than accurate. Worried about falling, we might overestimate how close we are to the edge of a cliff with a steep drop. This bias isn’t usually a bad thing: when the costs are asymmetric, it is better to err on the side of caution. If you don’t get as near to the cliff edge as you could, you lose little. Going right up to the cliff edge and getting it wrong is a much more serious problem.
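A toy simulation makes the cliff example concrete (all the numbers here are made up purely for illustration). Because the cost of overshooting the edge vastly exceeds the cost of stopping short, a rule that deliberately underestimates the safe distance ends up cheaper on average than an unbiased one:

```python
import random

random.seed(0)

FALL_COST = 1000.0   # hypothetical cost of stepping past the edge
STEP_COST = 1.0      # cost per metre of ground left unused

true_edge = 10.0     # actual metres to the cliff edge

def cost(stop_at):
    """Cost of walking to `stop_at` metres: huge if past the edge."""
    if stop_at > true_edge:
        return FALL_COST
    return (true_edge - stop_at) * STEP_COST

def average_cost(bias, trials=10_000):
    """Noisy estimate of the edge, pulled back by `bias` metres."""
    total = 0.0
    for _ in range(trials):
        estimate = true_edge + random.gauss(0, 1.0)  # unbiased but noisy
        total += cost(estimate - bias)
    return total / trials

unbiased = average_cost(bias=0.0)  # correct on average
cautious = average_cost(bias=2.0)  # systematically underestimates

print(f"average cost, unbiased estimator: {unbiased:.1f}")
print(f"average cost, cautious estimator: {cautious:.1f}")
```

The unbiased walker goes over the edge about half the time; the cautious one almost never does, so its average cost is far lower even though its estimate is worse.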
Hypothesis Testing: Machine Learning Predicts Not Explains
Furthermore, traditional statistics tests pre-determined hypotheses.
“Machine learning has less need to specify in advance what goes into the model.”
Agrawal, Gans, and Goldfarb, 2018, page 40
This allows the models to be much more complex, as a multitude of factors can enter the model. This is often great, but it can also mean that we don’t really understand what is going on: a lot of factors interact in ways that aren’t clear to the outside observer. As I said, machine learning predicts, not explains.
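A small sketch of my own (the data and models are invented for illustration) shows the trade-off. Suppose the outcome depends on an interaction between two factors. A pre-specified linear model, statistics-style, misses it entirely; a flexible nearest-neighbour predictor, machine-learning-style, captures it, but never tells us *why* it works:

```python
import random

random.seed(1)

# Outcome depends on an interaction the analyst didn't anticipate.
train = [(random.uniform(-1, 1), random.uniform(-1, 1)) for _ in range(500)]
train_y = [x1 * x2 for x1, x2 in train]

def linear_predict(x1, x2):
    # Pre-specified model y = a*x1 + b*x2. By symmetry the best-fit
    # coefficients here are a = b = 0, so it always predicts ~0.
    return 0.0

def knn_predict(x1, x2, k=5):
    # Average the outcomes of the k nearest training points:
    # flexible and accurate, but offers no explanation.
    nearest = sorted(range(len(train)),
                     key=lambda i: (train[i][0] - x1) ** 2
                                 + (train[i][1] - x2) ** 2)
    return sum(train_y[i] for i in nearest[:k]) / k

test = [(random.uniform(-1, 1), random.uniform(-1, 1)) for _ in range(200)]
lin_err = sum((linear_predict(a, b) - a * b) ** 2 for a, b in test) / 200
knn_err = sum((knn_predict(a, b) - a * b) ** 2 for a, b in test) / 200

print(f"pre-specified linear model, MSE: {lin_err:.4f}")
print(f"flexible k-NN predictor, MSE:   {knn_err:.4f}")
```

The flexible model predicts far better, yet there is no coefficient to point at and say "this is the effect of factor one."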
Machines Are Not Necessarily Smart
For all its value, machine learning, and artificial intelligence (AI) more widely, can still be not that smart. The authors give the example of a chess machine that spotted a pattern: grandmasters would sacrifice a queen just before winning. The machine then started sacrificing its queen far too often. It mistakenly concluded that the act of sacrificing the queen was useful in itself, i.e. that it caused the win. This misses the fact that a grandmaster sacrifices a queen only when the sacrifice is going to cause the win.
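The confusion is correlation versus causation, and a toy simulation (with made-up probabilities) shows how the machine goes wrong. A hidden factor, a winning position, drives both the grandmaster's sacrifice and the eventual win, so a naive learner that just counts outcomes sees sacrifices followed by wins almost every time:

```python
import random

random.seed(2)

# Hidden cause: a winning position drives both the sacrifice and the win.
games = []
for _ in range(10_000):
    winning_position = random.random() < 0.5
    # Grandmasters sacrifice the queen mostly when already winning.
    sacrifice = random.random() < (0.6 if winning_position else 0.02)
    win = winning_position
    games.append((sacrifice, win))

sac_total = sum(1 for s, w in games if s)
sac_wins = sum(1 for s, w in games if s and w)
print(f"observed win rate after a sacrifice: {sac_wins / sac_total:.2f}")

# But *forcing* a sacrifice in a random position doesn't help: the game
# is still only won when the position was already winning.
forced_wins = sum(1 for _ in range(10_000) if random.random() < 0.5)
print(f"win rate when forcing a sacrifice:   {forced_wins / 10_000:.2f}")
```

The observed win rate after a sacrifice is near certainty, so the pattern-matcher starts sacrificing everywhere; intervening on the sacrifice itself leaves the win rate at the base rate.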
Will AI End The World?
The end of the book tackles the big question we all want answered: will AI end the world? Probably wisely, they punt on the big question but suggest that traditional economic models may help us understand the AI we are creating. The machines have goals and will pursue them relentlessly. In this, machines actually resemble the weird Homo Economicus models used in traditional economic theory. Perhaps Homo Economicus does now exist: “H.E.” is just an AI.
Read: Ajay Agrawal, Joshua Gans, and Avi Goldfarb (2018) Prediction Machines: The Simple Economics of Artificial Intelligence, Harvard Business Review Press
For more on the book see here