Johan Fourie's blog

I'd rather be a comma than a fullstop


The compelling case for technological optimism


[Image: PortableComputers]

In September last year, I visited the Computer History Museum in Mountain View, California. The museum is dedicated to preserving and presenting all aspects of the computer revolution, from its roots in the twentieth century to self-driving cars today. What is remarkable, walking through the more than 90 000 objects on display, is the profound change in technology over the last three decades. The mobile computing display, I thought, summarised this change best, tracing the line from the first laptop computers of the 1980s (see image above) to a modern-day iPhone. But what also became clear from the exhibitions was that those ‘in the know’ at the start of the revolution were right about the transformational impact of computers, but almost certainly wrong about the way they would affect us.

We are now at the cusp of another revolution. Artificial intelligence, led by remarkable innovations in machine learning technology, is making rapid progress. It is already all around us. The image-recognition software of Facebook, the voice recognition of Apple’s Siri and, probably most ambitiously, the self-driving ability of Tesla’s electric cars all rely on machine learning. And computer scientists are finding more applications every day, from financial markets – Michael Jordaan recently announced a machine learning unit trust – to court judgements – a team of economists and computer scientists has shown that the quality of New York verdicts can be significantly improved with machine learning technology. Ask any technology optimist, and they will tell you the next few years will see the release of new applications that we currently cannot even imagine.

But there is a paradox. Just as machine learning technology is taking off, a new NBER Working Paper by three economists, Erik Brynjolfsson, Chad Syverson and Daniel Rock, affiliated with MIT and Chicago, shows something peculiar: a slowdown in labour productivity growth over the last decade. Across both the developed and developing world, growth in labour productivity, the amount of output per worker, is falling. Whereas one would expect rapid improvements in technology to boost total factor productivity, encouraging investment and raising the ability of workers to build more stuff faster, we observe slower growth, and in some countries even stagnation.
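To make that expectation concrete, here is a minimal growth-accounting sketch in Python. The numbers are entirely hypothetical (the 0.35 capital share and the growth rates are my illustrative assumptions, not figures from the paper): the point is simply that output per worker should grow roughly with TFP growth plus the capital share times capital deepening, so a genuine technology boost ought to show up in the labour productivity statistics.

# Minimal growth-accounting sketch (hypothetical numbers, not estimates
# from Brynjolfsson, Rock and Syverson): labour productivity growth is
# approximately TFP growth plus the capital share times capital deepening.

def labour_productivity_growth(tfp_growth, capital_share, capital_deepening):
    """Approximate growth in output per worker (all rates as decimals)."""
    return tfp_growth + capital_share * capital_deepening

# If new technology lifted TFP growth from 0.5% to 1.5% a year, with a
# capital share of 0.35 and capital per worker growing at 2% a year:
before = labour_productivity_growth(0.005, 0.35, 0.02)   # ~1.2% a year
after = labour_productivity_growth(0.015, 0.35, 0.02)    # ~2.2% a year
print(f"before: {before:.1%}, after: {after:.1%}")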

[Image: TrendGrowthRates]

This has led some to be more pessimistic about the prospects of artificial intelligence, and of technological innovation more generally. Robert Gordon, in ‘The Rise and Fall of American Growth’, argues that, despite an upward shift in productivity between 1995 and 2004, American productivity is on a long-run decline. Other notable economists, including Nicholas Bloom and William Nordhaus, are somewhat pessimistic about the ability of long-run productivity growth to return to earlier levels. Even the Congressional Budget Office in the US has reduced its 10-year labour productivity growth forecast from 1.8% to 1.5% per year. Over 10 years, that is equivalent to GDP being roughly $600 billion lower in 2027.
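To see how such a seemingly small downgrade compounds, here is a rough back-of-the-envelope calculation in Python. The $19 trillion baseline GDP is my own illustrative assumption, not the CBO’s figure, and the sketch simply applies the two growth rates to output for a decade.

# Back-of-the-envelope check of how a 0.3 percentage-point downgrade in
# labour productivity growth compounds over a decade. The $19 trillion
# baseline GDP is an assumption for illustration, not the CBO's figure.

baseline_gdp = 19e12          # rough US GDP, in dollars
old_growth, new_growth = 0.018, 0.015
years = 10

gdp_old = baseline_gdp * (1 + old_growth) ** years
gdp_new = baseline_gdp * (1 + new_growth) ** years
shortfall = gdp_old - gdp_new

print(f"GDP shortfall after {years} years: ${shortfall / 1e9:.0f} billion")
# Roughly $660 billion here, in the same ballpark as the ~$600 billion cited.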

How is it possible, to paraphrase Robert Solow in 1987, that we see machine learning applications everywhere but in the productivity statistics? The simplest explanation, of course, is that our optimism is misplaced. Has Siri or Facebook’s image-recognition software really made us that much more productive? Some technologies never live up to the hype. Peter Thiel famously quipped: ‘We wanted flying cars, instead we got 140 characters’.

Brynjolfsson and co-authors, though, make a compelling case for technological optimism, offering three reasons why ‘even a modest number of currently existing technologies could combine to substantially raise productivity growth and societal welfare’. One reason for the apparent paradox, the authors argue, is the mismeasurement of output and productivity. The slowdown in productivity over the last decade may simply be an illusion, as many new technologies – think of Google Maps’ accuracy in estimating our arrival time – involve no monetary cost. Even though these ‘free’ technologies significantly improve our living standards, they are not picked up by traditional estimates of GDP and productivity. A second reason is that the benefits of the AI revolution are concentrated, with little improvement in productivity for the median worker. Google (now Alphabet), Apple and Facebook have seen their market value increase rapidly in comparison to other large companies. Where AI has been adopted outside ICT, it has often been in zero-sum industries, like finance or advertising. A third, and perhaps most likely, reason is that it takes considerable time to harness new technologies sufficiently. This is especially true, the authors argue, ‘for those major new technologies that ultimately have an important effect on aggregate statistics and welfare’, also known as general purpose technologies (GPTs).

There are two reasons why it takes so long for GPTs to show up in the statistics. First, it takes time to build up the stock of the new technology necessary to have an impact on the aggregate statistics. While mobile phones are everywhere, the applications that benefit from machine learning are still only a small part of our daily lives. Second, it takes time to identify the complementary technologies and make these investments. ‘While the fundamental importance of the core invention and its potential for society might be clearly recognizable at the outset, the myriad necessary co-inventions, obstacles and adjustments needed along the way await discovery over time, and the required path may be lengthy and arduous. Never mistake a clear view for a short distance.’

As Brynjolfsson and friends argue, even if we do not yet see AI technology in the productivity statistics, it is too early to be pessimistic. The high valuations of AI companies suggest that investors believe there is real value in them, and the effects on living standards may well be even larger than the benefits that investors hope to capture.

Machine learning technology, in particular, will shape our lives in many ways. But much like those looking towards the future in the early 1990s and wondering how computers might affect their lives, we have little idea of the applications and complementary innovations that will determine the Googles and Facebooks of the next decade. Let the Machine (Learning) Age begin!

An edited version of this article originally appeared in the 30 November 2017 edition of finweek.


Sapiens and Naledi


[Image: Naledi]

I finally read Sapiens: A Brief History of Humankind by Yuval Noah Harari. It is a provocative book, one that challenges many of our long-held beliefs. Religion, for example, is one topic that will upset many – Harari treats it as one of the ‘myths’ or ‘fictions’ humans share, like money or empire. But it is the discussion of how we domesticated plants and animals, and its implications for today – ‘We did not domesticate wheat. It domesticated us.’ – that is most revealing, if sometimes leaning towards the sensationalist – ‘modern industrial agriculture might well be the greatest crime in history’.

The book sets out to explain the three most important revolutions in human history: the Cognitive Revolution (around 70 000 BCE), the Neolithic Revolution (around 10 000 BCE) and the Scientific Revolution (around 1500 CE). It is much better at the first two than at the third. In fact, as with one of my favourite books, Jared Diamond’s Guns, Germs and Steel, Harari unsuccessfully attempts to make the post-1500 and particularly the post-1800 period fit into the simple framework of the two earlier epochs. (For example: he attributes the Industrial Revolution to only two things – imperialism and science. If this were true, then China should have had an Industrial Revolution in the 15th century, when it was the most scientifically advanced power and was exploring the world with its giant fleets. But it didn’t.) Read the book for the first half, not for the last.

The topic of humans and their evolution is a fascinating and fast-changing one. The October 2015 edition of National Geographic tells the tale of Lee Berger’s discovery of Homo naledi in a cave near Johannesburg. Homo naledi fits somewhere between the apelike australopithecines, like Lucy, a skeleton discovered in Ethiopia in 1974, and Homo habilis, the ‘first’ known human ancestor of us, Homo sapiens, which was classified in Kenya in the 1970s. This evolution occurred maybe two to three million years ago. Homo habilis (or one of the myriad other forms of proto-humans that existed but are still undiscovered) evolved into Homo erectus, and then into Homo sapiens. These sapiens left Africa in two waves. Almost all human DNA derives from the second ‘Out of Africa’ migration around 75 000 BCE. (The Neanderthals derive from the first out-migration, and new evidence suggests they may have left a tiny DNA footprint in some modern Europeans.) The Cognitive Revolution, when we begin to see evidence of art and burials and other cultural traits, began around 70 000 BCE. Homo sapiens – modern humans in all respects similar to us today – reached South Asia around 50 000 years ago, Australia around 46 000 years ago, Europe around 43 000 years ago, North America around 15 000 years ago, the Pacific islands around 1300 BCE and New Zealand only around 1280 CE, about the same time as University College in Oxford was founded.

What Berger and his large team of archaeologists uncovered in the cave was a remarkable find, a possible link between our apelike ancestors and modern humans. It has also shifted attention back to South Africa and our rich archaeological history. This is one area of science where we clearly have a comparative advantage, and more can be done to promote this field of research.

But why care about the evolution of humans, you may ask. In Sapiens we find the answer: according to Harari, we are about to be replaced by a superior human. After the Scientific Revolution came the Information Revolution of the twentieth century. And now we are at the cusp of the Biotechnological Revolution. Soon we will be able to engineer humans to become amortal (not immortal, because we would still be able to die in car crashes and terrorist attacks). And these humans might be smarter, quicker, better than us. And once artificial intelligence reaches the singularity, who knows what it will do to humans, to us.

I am less pessimistic that we will soon be replaced by Homo Mechanica. Biotechnology, instead of replicating human brains, might allow us to fully exploit our extraordinary creativity. Much like love, it might help us to become better versions of ourselves, a new and improved Homo sapiens.