Machine learning and the end of the world

This week I have a second (and last) post on Agrawal, Gans, and Goldfarb's Prediction Machines. This interesting book discusses the difference between machine learning and traditional statistics. The idea is that machine learning is more functional, more concerned with a useful result than a precisely accurate one. As the authors say, "Machine learning science had different goals from statistics. Whereas statistics emphasized being correct on average, machine learning did not require that. Instead, the goal was operational effectiveness. Predictions could have biases as long as they were better." (Agrawal, Gans, and Goldfarb, 2018, page 40). Note it is quite possible for a biased algorithm to be more useful than an unbiased one if the bias doesn't distort the decisions that actually matter. Sometimes in life it is better to be biased than accurate: we might overestimate how close we are to the edge of a cliff with a steep drop. This bias isn't a bad thing, as in such a case it is generally better to err on the side of caution.
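A minimal sketch of that point, using the cliff example with made-up numbers (the specific costs and bias are my assumptions, not the book's): when one kind of error is far more expensive than the other, a deliberately biased estimate can have a much lower expected cost than an unbiased one.

```python
import numpy as np

# Estimating the distance to a cliff edge. Walking past the true edge is
# catastrophic; stopping a little short merely wastes a few steps.
rng = np.random.default_rng(0)
true_distance = 10.0                            # metres to the edge
noise = rng.normal(0.0, 2.0, 100_000)           # measurement error

unbiased = true_distance + noise                # correct on average
cautious = true_distance + noise - 3.0          # biased towards "closer"

def mean_cost(estimates):
    # Asymmetric loss: thinking the edge is farther away than it is
    # (positive error) risks stepping off, so it costs 1000 per metre;
    # thinking it is nearer (negative error) costs 1 per metre of
    # wasted caution.
    error = estimates - true_distance
    return np.where(error > 0, 1000.0 * error, -error).mean()

print("unbiased estimator, mean cost:", mean_cost(unbiased))
print("cautious estimator, mean cost:", mean_cost(cautious))
# The biased, cautious estimator is far cheaper on average, even though
# the unbiased one is "correct on average".
```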

Furthermore, where traditional statistics tests pre-determined hypotheses, "Machine learning has less need to specify in advance what goes into the model." (Agrawal, Gans, and Goldfarb, 2018, page 40). This allows the models to be much more complex, as a multitude of factors can enter the model. This is often great, but it can mean that we don't really understand what is going on: a lot of factors interact in a way that isn't clear to the outside observer.
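To make that contrast concrete, here is a small sketch on synthetic data (my illustration, not the book's; the features, the interaction term, and the scikit-learn models are all assumptions): a pre-specified linear hypothesis versus a flexible model that is simply handed every factor.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 20))
# The true signal includes an interaction the analyst did not anticipate.
y = 2.0 * X[:, 0] + 3.0 * X[:, 3] * X[:, 7] + rng.normal(size=2000)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Pre-specified hypothesis: only the first factor matters, linearly.
linear = LinearRegression().fit(X_train[:, [0]], y_train)

# No hypothesis specified in advance: all twenty factors go in.
forest = RandomForestRegressor(n_estimators=200, random_state=0)
forest.fit(X_train, y_train)

print("linear R^2:", linear.score(X_test[:, [0]], y_test))
print("forest R^2:", forest.score(X_test, y_test))
# The forest predicts better because it found the interaction, but its
# 200 trees offer no simple story about why; the mechanism stays opaque.
```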

For all its value, artificial intelligence (AI) can still be not that smart. The authors give an example of a chess machine that saw a pattern: grandmasters would sacrifice a queen just before winning. The chess machine then started to sacrifice its queen far too often. It mistook the act of sacrificing the queen as useful in itself, i.e. as causing the win, when in fact a grandmaster sacrifices a queen only when the sacrifice will actually bring about the win. The machine confused a symptom of winning with a cause of it.
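A toy sketch of that trap (mine, not the book's; the probabilities are invented) shows how a model that only sees the sacrifice and the outcome learns the spurious rule:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000
winning = rng.random(n) < 0.5                  # position already winning?
# Grandmasters sacrifice the queen almost only from winning positions.
sacrificed = np.where(winning,
                      rng.random(n) < 0.4,     # common when winning
                      rng.random(n) < 0.01)    # very rare otherwise
won = winning                                  # the position decides the
                                               # game, not the sacrifice

# A model that only sees the sacrifice "learns" that it predicts winning.
model = LogisticRegression().fit(sacrificed.reshape(-1, 1), won)
print("P(win | sacrificed queen):", model.predict_proba([[1]])[0, 1])
print("P(win | kept queen):     ", model.predict_proba([[0]])[0, 1])
# Acting on this correlation by sacrificing the queen at every chance
# would be disastrous: the sacrifice never caused the wins.
```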

The end of the book tackles the big question we all want answered: will AI end the world? Probably wisely, the authors punt on it, but they suggest traditional economic models may help us understand the AI we are creating. The machines have goals and will pursue them relentlessly. In this they actually resemble the strange Homo Economicus of traditional economic theory. Perhaps Homo Economicus does now exist: "H.E." is just an AI.

Read: Ajay Agrawal, Joshua Gans, and Avi Goldfarb (2018), Prediction Machines: The Simple Economics of Artificial Intelligence, Harvard Business Review Press.