Johan Fourie's blog

I'd rather be a comma than a fullstop


The compelling case for technological optimism



In September last year, I visited the Computer History Museum in Mountain View, California. The museum is dedicated to preserving and presenting all aspects of the computer revolution, from its roots in the twentieth century to today's self-driving cars. What is remarkable, walking through the more than 90,000 objects on display, is the profound change in technology over the last three decades. The mobile computing display, I thought, summarised this change best, tracing the path from the first laptop computers of the 1980s (see image above) to a modern-day iPhone. But what also became clear from the exhibitions was that those ‘in the know’ at the start of the revolution were right about the transformational impact of computers, but almost certainly wrong about the way they would affect us.

We are now at the cusp of another revolution. Artificial intelligence, led by remarkable innovations in machine learning, is making rapid progress. It is already all around us. The image-recognition software of Facebook, the voice recognition of Apple’s Siri and, probably most ambitiously, the self-driving capability of Tesla’s electric cars all rely on machine learning. And computer scientists are finding more applications every day, from financial markets – Michael Jordaan recently announced a machine learning unit trust – to court judgements – a team of economists and computer scientists has shown that the quality of New York judges’ bail decisions can be significantly improved with machine learning. Ask any technology optimist, and they will tell you the next few years will see the release of applications that we currently cannot even imagine.

But there is a paradox. Just as machine learning technology is taking off, a new NBER Working Paper by three economists affiliated with MIT and Chicago – Erik Brynjolfsson, Chad Syverson and Daniel Rock – shows something peculiar: a slowdown in labour productivity growth over the last decade. Across both the developed and developing world, growth in labour productivity, meaning the amount of output per worker, is falling. Whereas one would expect rapid improvements in technology to boost total factor productivity, spur investment and raise workers’ ability to build more stuff faster, we observe slower growth, and in some countries even stagnation.


This has led some to be more pessimistic about the prospects of artificial intelligence, and about technological innovation more generally. Robert Gordon, in his ‘The Rise and Fall of American Growth’, argues that, despite an upward shift in productivity between 1995 and 2004, American productivity is on a long-run decline. Other notable economists, including Nicholas Bloom and William Nordhaus, are somewhat pessimistic about the ability of long-run productivity growth to return to earlier levels. Even the Congressional Budget Office in the US has reduced its 10-year labour productivity growth forecast from 1.8% to 1.5% per year. Compounded over ten years, that difference amounts to roughly $600 billion in lost output (in 2017 dollars).
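To see where a number of that order comes from, here is a back-of-envelope version of the compounding. It applies the two growth rates directly to a GDP base of roughly $19.5 trillion (US GDP in 2017, my own illustrative assumption); the CBO works off its potential-output series, so the figures will not match exactly.

```python
# Back-of-envelope: what does a 0.3 percentage-point drop in annual
# productivity growth cost after ten years of compounding?
# Illustrative assumption: the growth rates are applied directly to a GDP
# base of about $19.5 trillion (US GDP in 2017); the CBO's own calculation
# uses its potential-output series, so the numbers differ somewhat.

gdp_base = 19.5e12          # dollars, illustrative
high, low = 0.018, 0.015    # 1.8% vs 1.5% annual growth
years = 10

gap = gdp_base * ((1 + high) ** years - (1 + low) ** years)
print(f"Year-10 output gap: ${gap / 1e9:.0f} billion")  # roughly $680 billion
```

The result lands in the same ballpark as the $600 billion figure; the exact number depends on the base year and the output series used.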

How is it possible, to paraphrase Robert Solow in 1987, that we see machine learning applications everywhere but in the productivity statistics? The simplest explanation, of course, is that our optimism is misplaced. Has Siri or Facebook’s image-recognition software really made us that much more productive? Some technologies never live up to the hype. Peter Thiel famously quipped: ‘We wanted flying cars, instead we got 140 characters’.

Brynjolfsson and his co-authors, though, make a compelling case for technological optimism, offering three reasons why ‘even a modest number of currently existing technologies could combine to substantially raise productivity growth and societal welfare’. One explanation of the apparent paradox, the authors argue, is the mismeasurement of output and productivity. The slowdown in productivity growth over the last decade may simply be an illusion, as many new technologies – think of Google Maps’ accuracy in estimating our arrival time – involve no monetary cost. Even though these ‘free’ technologies significantly improve our living standards, they are not picked up by traditional estimates of GDP and productivity. A second explanation is that the benefits of the AI revolution are concentrated, with little improvement in productivity for the median worker. Google (now Alphabet), Apple and Facebook have seen their market share grow rapidly in comparison with other large firms. And where AI has been adopted outside ICT, it has often been in zero-sum industries such as finance or advertising. A third, and perhaps most likely, explanation is that it takes considerable time to harness new technologies fully. This is especially true, the authors argue, ‘for those major new technologies that ultimately have an important effect on aggregate statistics and welfare’, also known as general purpose technologies (GPTs).

There are two reasons why it takes long for GPTs to show up in the statistics. First, it takes time to build up the stock of the new technology needed to have an impact on the aggregate statistics. While mobile phones are everywhere, the applications that benefit from machine learning are still only a small part of our daily lives. Second, it takes time to identify the complementary technologies and make the necessary investments. ‘While the fundamental importance of the core invention and its potential for society might be clearly recognizable at the outset, the myriad necessary co-inventions, obstacles and adjustments needed along the way await discovery over time, and the required path may be lengthy and arduous. Never mistake a clear view for a short distance.’

As Brynjolfsson and friends argue, even if we do not yet see AI technology in the productivity statistics, it is too early to be pessimistic. The high valuations of AI companies suggest that investors believe there is real value in them, and the effects on living standards may well be even larger than the benefits those investors hope to capture.

Machine learning technology, in particular, will shape our lives in many ways. But much like those who looked towards the future in the early 1990s and wondered how computers might affect their lives, we have little idea of the applications and complementary innovations that will produce the Googles and Facebooks of the next decade. Let the Machine (Learning) Age begin!

An edited version of this article originally appeared in the 30 November 2017 edition of finweek.


The future of work: don’t fear the robots, embrace them



One of the consequences of being an economist teaching at a university is that parents inevitably assume you have a lot of insight into the future of the job market. What is the ‘safest’ programme, parents typically ask, that will guarantee Ryan or Samantha a well-paying job at the end of three years? Translated: how do I maximise the return on my investment?

As with any investment, there are risks. Not all university students graduate: a recent study on higher-education throughput rates by Stellenbosch University’s Research on Socio-Economic Policy (ReSEP) unit shows that less than 40% of South African students attain their degree within four years of starting (remember, most degrees are three-year programmes). Only 58% of students complete their degree within six years. (The numbers are particularly low at UNISA, a distance-learning university, where only 28% of students complete their degree within six years.) There is a good chance Ryan never completes his degree in the first place, leaving behind only debt, psychological scars and forgone labour-market income. The researchers also find that, while matric marks are strongly correlated with access to university, they matter less for university success. Samantha may have been a bright spark at school, but that is no guarantee that she will be successful at university.

But what worries most parents about their investment is not so much the internal factors that lead to success (like getting Ryan to attend class, one of the most important determinants), but the external threats that may affect his chances of finding a job. The biggest culprit nowadays: robots.

The threat of robots is everywhere, it seems. Autonomous vehicles will soon replace some of the most ubiquitous jobs of the twentieth century – taxi and truck drivers. Blue-collar jobs are first in the firing line, from farm labourers replaced by GPS-coordinated harvesters to postal workers replaced by, well, e-mail. But white-collar work – often the domain of university graduates – will soon follow: lawyers, accountants and middle management, to name a few that have been singled out. Basically, any job with repetitive tasks runs the risk of robotification.

Parents are eager to know which job types are most likely to succumb to the robot overlords. If lawyers are of no use in the future, why study law? This is, of course, a reasonable concern. Several of the standard activities undertaken by lawyers are repetitive and easily automatable. And artificial intelligence challenges even non-repetitive work: it allows software to search through large volumes of legal texts in a fraction of the time a paralegal would need during the ‘discovery’ phase of a case. Not so fast, says James Bessen, an economist at the Boston University School of Law. He shows that, over the period in which this software spread through the US, the number of paralegals has increased by 1.1% per year. Because the cost of undertaking these ‘discovery’ services has fallen dramatically as a result of the new technology, demand for such services has increased even more, requiring more paralegals, not fewer.
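The column does not describe how such ‘discovery’ software works, but the core idea is simply ranking documents by their relevance to a query. A minimal sketch of that idea, using TF-IDF similarity with made-up documents and a made-up query (my own illustration, not any specific legal product), might look like this:

```python
# Minimal illustration of ranking documents by relevance to a query,
# the basic idea behind 'discovery' search tools (not any specific product).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

documents = [                      # hypothetical case documents
    "Email discussing the supplier contract and delivery penalties.",
    "Minutes of the board meeting on quarterly marketing spend.",
    "Letter terminating the supplier contract for late delivery.",
]
query = "termination of supplier contract"

vectorizer = TfidfVectorizer(stop_words="english")
doc_vectors = vectorizer.fit_transform(documents)
query_vector = vectorizer.transform([query])

# Rank documents from most to least similar to the query
scores = cosine_similarity(query_vector, doc_vectors).ravel()
for idx in scores.argsort()[::-1]:
    print(f"{scores[idx]:.2f}  {documents[idx]}")
```

The point of Bessen’s argument is precisely that tools like this make each search so much cheaper that far more searches get done.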

It is not only that robots can substitute for existing repetitive work; it is that they can do it so much better. Although robots and their algorithms are not entirely objective – because algorithms adjust to human behaviour, they can often reinforce our prejudices – their biases tend to be more transparent and corrigible. A new NBER study shows just how robots could transform one of the oldest human professions – the judge – and in so doing realise huge societal benefits. The five authors, three computer scientists and two economists, want to know the following: can US judges’ decisions be improved by using a machine learning algorithm?

Every year, more than 10 million Americans are arrested. Soon after arrest, a judge must decide where defendants will await trial – at home or in jail. By law, judges should base this decision on the probability of the defendant fleeing or committing another crime. Whether the defendant is guilty or not should not enter into it.

To investigate whether judges make fair decisions, the authors train a machine learning algorithm on a dataset of 758,027 defendants arrested in New York City. They have detailed information about these defendants: whether they were released, whether they committed new crimes, and so on. The algorithm processes the same information a judge would have at their disposal and produces a prediction of the crime risk associated with each defendant.
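The column does not spell out the model, but the general recipe is standard supervised learning: predict a binary outcome (say, failure to appear or re-arrest before trial) from the case characteristics available at arraignment. Here is a minimal sketch with synthetic data – the feature names, the outcome equation and the classifier choice are my own assumptions, not the NBER study’s actual model:

```python
# Minimal sketch of pretrial risk prediction: train a classifier on the
# information available to the judge at arraignment and predict the
# probability of a bad outcome (e.g. failure to appear) for new defendants.
# Features and data are synthetic, not the NBER study's actual model.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 5_000
X = np.column_stack([
    rng.integers(0, 10, n),   # number of prior arrests
    rng.integers(0, 2, n),    # current charge is a felony (0/1)
    rng.integers(18, 70, n),  # age at arrest
])
# Synthetic outcome: 1 = failed to appear / re-arrested before trial
y = (rng.random(n) < 0.1 + 0.03 * X[:, 0] / 10 + 0.05 * X[:, 1]).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = GradientBoostingClassifier().fit(X_train, y_train)

risk = model.predict_proba(X_test)[:, 1]  # predicted risk per defendant
print("Mean predicted risk for held-out defendants:", risk.mean().round(3))
```

Ranking defendants by this kind of predicted risk is what makes the policy simulations in the next paragraph possible: roughly, release the lowest-risk defendants first and compare the outcomes with the judges’ actual decisions.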

Comparing their results to those of the judges, they find that the algorithm can yield large welfare gains: a ‘policy simulation shows crime can be reduced by up to 24.8% with no change in jailing rates, or jail populations can be reduced by 42.0% with no increase in crime rates’. All categories of crime, including violent crime, decline. The percentage of African-Americans and Hispanics in jail also falls significantly.

Will robots replace judges? Probably not – but the quality of judges’ decisions can be improved significantly by using robots. This will be true in most other skilled professions too, from law to management to academic economists like me.

Matriculants on the cusp of their careers (and their anxious investor-parents) have no reason to fear the coming of the robots. If Ryan and Samantha, regardless of their field of study, see robots as complements – by learning their language and how to collaborate with them – the benefits, for themselves and for society at large, will be greater than the costs.

*An edited version of this first appeared in Finweek magazine of 4 May.

Written by Johan Fourie

May 26, 2017 at 09:33