SITTING IN AN office in San Francisco, Igor Barani calls up some medical scans on his screen. He is the chief executive of Enlitic, one of a host of startups applying deep learning to medicine, starting with the analysis of images such as X-rays and CT scans. It is an obvious use of the technology. Deep learning is renowned for its superhuman prowess at certain forms of image recognition; there are large sets of labelled training data to crunch; and there is tremendous potential to make health care more accurate and efficient.
Dr Barani (who used to be an oncologist) points to some CT scans of a patient's lungs, taken from three different angles. Red blobs flicker on the screen as Enlitic's deep-learning system examines and compares them to see if they are blood vessels, harmless imaging artefacts or malignant lung nodules. The system ends up highlighting a particular feature for further investigation. In a test against three expert human radiologists working together, Enlitic's system was 50% better at classifying malignant tumours and had a false-negative rate (where a cancer is missed) of zero, compared with 7% for the humans. Another of Enlitic's systems, which examines X-rays to detect wrist fractures, also handily outperformed human experts. The firm's technology is currently being tested in 40 clinics across Australia.
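Enlitic has not published the details of its system, but the general approach is well known: a convolutional neural network learns, from labelled scans, to score each candidate feature. The sketch below (in PyTorch, with dummy data and invented class labels) shows the shape of such a classifier; a production system would be far larger and trained on thousands of annotated CT scans.

```python
# Illustrative only: Enlitic has not published its architecture. This is a
# minimal sketch of the general approach -- a small convolutional network
# that scores an image patch from a CT scan as one of three classes.
import torch
import torch.nn as nn

class PatchClassifier(nn.Module):
    def __init__(self, n_classes=3):  # blood vessel, artefact, nodule (invented labels)
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.head = nn.Linear(32 * 16 * 16, n_classes)

    def forward(self, x):
        x = self.features(x)           # (batch, 32, 16, 16) for a 64x64 input
        return self.head(x.flatten(1))

model = PatchClassifier()
patch = torch.randn(1, 1, 64, 64)      # one greyscale 64x64 patch (dummy data)
probs = torch.softmax(model(patch), dim=1)
print(probs)  # roughly uniform before any training
```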
A computer that dispenses expert radiology advice is just one example of how jobs currently done by highly trained white-collar workers can be automated, thanks to the advance of deep learning and other forms of artificial intelligence. The idea that manual work can be carried out by machines is already familiar; now ever-smarter machines can perform tasks done by information workers, too. What determines vulnerability to automation, experts say, is not so much whether the work concerned is manual or white-collar but whether or not it is routine. Machines can already do many forms of routine manual labour, and are now able to perform some routine cognitive tasks too. As a result, says Andrew Ng, a highly trained and specialised radiologist may now be in greater danger of being replaced by a machine than his own executive assistant: "She does so many different things that I don't see a machine being able to automate everything she does any time soon."
So which jobs are most vulnerable? In a widely noted study published in 2013, Carl Benedikt Frey and Michael Osborne examined the probability of computerisation for 702 occupations and found that 47% of workers in America had jobs at high risk of potential automation. In particular, they warned that most workers in transport and logistics (such as taxi and delivery drivers) and office support (such as receptionists and security guards) are "likely to be substituted by computer capital", and that many workers in sales and services (such as cashiers, counter and rental clerks, telemarketers and accountants) also faced a high risk of computerisation. They concluded that "recent developments in machine learning will put a substantial share of employment, across a wide range of occupations, at risk in the near future". Subsequent studies put the equivalent figure at 35% of the workforce for Britain (where more people work in creative fields less susceptible to automation) and 49% for Japan.
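The mechanics behind the headline number are simple: each occupation gets an estimated probability of computerisation, and the employment shares of the occupations above a threshold (0.7 in the paper's definition of "high risk") are summed. A toy illustration, with invented probabilities and employment shares:

```python
# A toy version of the Frey-Osborne calculation. The occupations, probabilities
# and employment shares below are invented for illustration only.
occupations = {
    # name: (probability of computerisation, share of total employment)
    "telemarketer": (0.99, 0.002),
    "cashier":      (0.97, 0.025),
    "truck driver": (0.79, 0.020),
    "radiologist":  (0.25, 0.001),
    "therapist":    (0.01, 0.003),
}

HIGH_RISK = 0.7  # the paper's threshold for "high risk of computerisation"
at_risk = sum(share for p, share in occupations.values() if p > HIGH_RISK)
print(f"Share of employment at high risk: {at_risk:.1%}")  # 4.7% of this toy workforce
```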
Economists are already worrying about "job polarisation", where middle-skill jobs (such as those in manufacturing) are declining but both low-skill and high-skill jobs are expanding. In effect, the workforce bifurcates into two groups doing non-routine work: highly paid, skilled workers (such as architects and senior managers) on the one hand and low-paid, unskilled workers (such as cleaners and burger-flippers) on the other. The stagnation of median wages in many Western countries is cited as evidence that automation is already having an effect, though it is hard to disentangle the impact of offshoring, which has also moved many routine jobs (including manufacturing and call-centre work) to low-wage countries in the developing world. Figures published by the Federal Reserve Bank of St Louis show that in America, employment in non-routine cognitive and non-routine manual jobs has grown steadily since the 1980s, whereas employment in routine jobs has been broadly flat (see chart). As more jobs are automated, this trend seems likely to continue.
And this is only the start. "We are just seeing the tip of the iceberg. No office job is safe," says Sebastian Thrun, an AI professor at Stanford known for his work on self-driving cars. Automation is now "blind to the colour of your collar", declares Jerry Kaplan, another Stanford academic and author of "Humans Need Not Apply", a book that predicts upheaval in the labour market. Gloomiest of all is Martin Ford, a software entrepreneur and the bestselling author of "Rise of the Robots". He warns of the threat of a "jobless future", pointing out that most jobs can be broken down into a series of routine tasks, more and more of which can be done by machines.
In previous waves of automation, workers had the option of moving from routine jobs in one industry to routine jobs in another; but now the same big data techniques that allow companies to improve their marketing and customer-service operations also give them the raw material to train machine-learning systems to perform the jobs of more and more people. E-discovery software can search mountains of legal documents much more quickly than human clerks or paralegals can. Some forms of journalism, such as writing market reports and sports summaries, are also being automated.
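The core of e-discovery is document ranking: score every document in a collection against a query and surface the most relevant ones. A bare-bones sketch of the idea, using TF-IDF and cosine similarity (commercial products layer machine-learned relevance models, deduplication and review workflows on top of this):

```python
# A minimal document-search sketch of the kind of retrieval e-discovery
# tools perform at far larger scale. The documents are invented examples.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

documents = [  # stand-ins for a corpus of millions of legal documents
    "Email regarding the merger agreement and indemnification clauses.",
    "Lunch menu for the cafeteria, week of March 3rd.",
    "Draft contract: termination, indemnification and breach remedies.",
]

vectoriser = TfidfVectorizer(stop_words="english")
doc_vectors = vectoriser.fit_transform(documents)

query = vectoriser.transform(["indemnification in the merger contract"])
scores = cosine_similarity(query, doc_vectors).ravel()
for score, doc in sorted(zip(scores, documents), reverse=True):
    print(f"{score:.2f}  {doc}")  # most relevant documents first
```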
Predictions that automation will make humans redundant have been made before, however, going back to the Industrial Revolution, when textile workers, most famously the Luddites, protested that machines and steam engines would destroy their livelihoods. "Never until now did human invention devise such expedients for dispensing with the labour of the poor," said a pamphlet at the time. Subsequent outbreaks of concern occurred in the 1920s ("March of the machine makes idle hands," declared a New York Times headline in 1928), the 1930s (when John Maynard Keynes coined the term "technological unemployment") and the 1940s, when the New York Times referred to the revival of such worries as "the renewal of an old argument".
As computers began to appear in offices and robots on factory floors, President John F. Kennedy declared that the major domestic challenge of the 1960s was "to maintain full employment at a time when automation is replacing men". In 1964 a group of Nobel prizewinners, known as the Ad Hoc Committee on the Triple Revolution, sent President Lyndon Johnson a memo alerting him to the danger of a revolution triggered by "the combination of the computer and the automated self-regulating machine". This, they said, was leading to a new era of production "which requires progressively less human labour" and threatened to divide society into a skilled elite and an unskilled underclass. The advent of personal computers in the 1980s provoked further hand-wringing over potential job losses.
Yet in the past technology has always ended up creating more jobs than it destroys. That is because of the way automation works in practice, explains David Autor, an economist at the Massachusetts Institute of Technology. Automating a particular task, so that it can be done more quickly or cheaply, increases the demand for human workers to do the other tasks around it that have not been automated.
"There are many historical examples of this in weaving," says James Bessen, an economist at the Boston University School of Law. During the Industrial Revolution more and more tasks in the weaving process were automated, prompting workers to focus on the things machines could not do, such as operating a machine, and then tending multiple machines to keep them running smoothly. This caused output to grow explosively. In America during the 19th century the amount of coarse cloth a single weaver could produce in an hour increased by a factor of 50, and the amount of labour required per yard of cloth fell by 98%. This made cloth cheaper and increased demand for it, which in turn created more jobs for weavers: their numbers quadrupled between 1830 and 1900. In other words, technology gradually changed the nature of the weaver's job, and the skills required to do it, rather than replacing it altogether.
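Those two statistics are the same fact seen from different angles: if output per hour rises 50-fold, the labour embodied in each yard falls to one-fiftieth of what it was, a 98% reduction.

```python
# The weaving figures cross-checked: a 50-fold rise in output per hour
# means each yard of cloth needs 1/50 of the labour it used to.
output_multiplier = 50
labour_per_yard = 1 / output_multiplier            # 0.02 of the original
print(f"Fall in labour per yard: {1 - labour_per_yard:.0%}")  # 98%
```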
In a more recent example, automated teller machines (ATMs) might have been expected to spell doom for bank tellers by taking over some of their routine tasks, and indeed in America their average number fell from 20 per branch in 1988 to 13 in 2004, Mr Bessen notes. But that reduced the cost of running a bank branch, allowing banks to open more branches in response to customer demand. The number of urban bank branches rose by 43% over the same period, so the total number of employees increased. Rather than destroying jobs, ATMs changed bank employees' work mix, away from routine tasks and towards things like sales and customer service that machines could not do.
The same pattern can be seen in industry after industry after the introduction of computers, says Mr Bessen: rather than destroying jobs, automation redefines them, and in ways that reduce costs and boost demand. In a recent analysis of the American workforce between 1982 and 2012, he found that employment grew significantly faster in occupations (for example, graphic design) that made more use of computers, as automation sped up one aspect of a job, enabling workers to do the other parts better. The net effect was that more computer-intensive jobs within an industry displaced less computer-intensive ones. Computers thus reallocate rather than displace jobs, requiring workers to learn new skills. This is true of a wide range of occupations, Mr Bessen found, not just in computer-related fields such as software development but also in administrative work, health care and many other areas. Only manufacturing jobs expanded more slowly than the workforce did over the period of study, but that had more to do with business cycles and offshoring to China than with technology, he says.
So far, the same seems to be true of fields where AI is being deployed. For example, the introduction of software capable of analysing large volumes of legal documents might have been expected to reduce the number of legal clerks and paralegals, who act as human search engines during the discovery phase of a case; in fact automation has reduced the cost of discovery and increased demand for it. "Judges are more willing to allow discovery now, because it's cheaper and easier," says Mr Bessen. The number of legal clerks in America increased by 1.1% a year between 2000 and 2013. Similarly, the automation of shopping through e-commerce, along with more accurate recommendations, encourages people to buy more and has increased overall employment in retailing. In radiology, says Dr Barani, Enlitic's technology empowers practitioners, making average ones into experts. Rather than putting them out of work, the technology increases capacity, which may help in the developing world, where there is a shortage of specialists.
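That modest-sounding annual rate compounds to a sizeable rise over the whole period:

```python
# 1.1% annual growth, compounded over the 13 years from 2000 to 2013,
# amounts to roughly a 15% increase in the number of legal clerks.
annual_growth = 0.011
years = 2013 - 2000
total = (1 + annual_growth) ** years - 1
print(f"Cumulative growth: {total:.0%}")  # ~15%
```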
And while it is easy to see fields in which automation might do away with the need for human labour, it is less obvious where technology might create new jobs. "We can't predict what jobs will be created in the future, but it's always been like that," says Joel Mokyr, an economic historian at Northwestern University. Imagine trying to tell someone a century ago that her great-grandchildren would be video-game designers or cybersecurity specialists, he suggests. "These are jobs that nobody in the past would have predicted."
Similarly, just as people worry about the potential impact of self-driving vehicles today, a century ago there was much concern about the impact of the switch from horses to cars, notes Mr Autor. Horse-related jobs declined, but entirely new jobs were created in the motel and fast-food industries that arose to serve motorists and truck drivers. As those industries decline, new ones will emerge. Self-driving vehicles will give people more time to consume goods and services, increasing demand elsewhere in the economy; and autonomous vehicles might greatly expand demand for products (such as food) delivered locally.
There will also be some new jobs created in the field of AI itself. Self-driving vehicles may need remote operators to cope with emergencies, or ride-along concierges who knock on doors and manhandle packages. Corporate chatbot and customer-service AIs will need to be built and trained and have dialogue written for them (AI firms are said to be busy hiring poets); they will have to be constantly updated and maintained, just as websites are today. And no matter how advanced artificial intelligence becomes, some jobs are always likely to be better done by humans, notably those involving empathy or social interaction. Doctors, therapists, hairdressers and personal trainers fall into that category. An analysis of the British workforce by Deloitte, a consultancy, highlighted a profound shift over the past two decades towards "caring" jobs: the number of nursing assistants increased by 909%, teaching assistants by 580% and careworkers by 168%.
Focusing only on what is lost misses a central economic mechanism by which automation affects the demand for labour, notes Mr Autor: that it raises the value of the tasks that can be done only by humans. Ultimately, he says, those worried that automation will cause mass unemployment are succumbing to what economists call the "lump of labour" fallacy. "This notion that there's only a finite amount of work to do, and therefore that if you automate some of it there's less for people to do, is just totally wrong," he says. Those sounding warnings about technological unemployment "basically ignore the issue of the economic response to automation", says Mr Bessen.
But couldn't this time be different? As Mr Ford points out in "Rise of the Robots", the impact of automation this time around is broader-based: not every industry was affected two centuries ago, but every industry uses computers today. During previous waves of automation, he argues, workers could switch from one kind of routine work to another; but this time many workers will have to switch from routine, unskilled jobs to non-routine, skilled jobs to stay ahead of automation. That makes it more important than ever to help workers acquire new skills quickly. But so far, says Mr Autor, there is "zero evidence" that AI is having a new and significantly different impact on employment. And while everyone worries about AI, says Mr Mokyr, far more labour is being replaced by cheap workers overseas.
Another difference is that whereas the shift from agriculture to industry typically took decades, software can be deployed much more rapidly. Google can invent something like Smart Reply and have millions of people using it just a few months later. Even so, most firms tend to implement new technology more slowly, not least for non-technological reasons. Enlitic and other companies developing AI for use in medicine, for example, must grapple with complex regulations and a fragmented marketplace, particularly in America (which is why many startups are testing their technology elsewhere). It takes time for processes to change, standards to emerge and people to learn new skills. "The distinction between invention and implementation is critical, and too often ignored," observes Mr Bessen.
What of the worry that new, high-tech industries are less labour-intensive than earlier ones? Mr Frey cites a paper he co-wrote last year showing that only 0.5% of American workers are employed in industries that have emerged since 2000. "Technology might create fewer and fewer jobs, while exposing a growing share of them to automation," he says. An oft-cited example is that of Instagram, a photo-sharing app. When it was bought by Facebook in 2012 for $1 billion, it had tens of millions of users, but only 13 employees. Kodak, which once employed 145,000 people making photographic products, went into bankruptcy at around the same time. But such comparisons are misleading, says Marc Andreessen, a venture capitalist. It was smartphones, not Instagram, that undermined Kodak, and far more people are employed by the smartphone industry and its surrounding ecosystems than ever worked for Kodak or the traditional photography industry.
Is this time different?
So who is right: the pessimists (many of them techie types), who say this time is different and machines really will take all the jobs, or the optimists (mostly economists and historians), who insist that in the end technology always creates more jobs than it destroys? The truth probably lies somewhere in between. AI will not cause mass unemployment, but it will speed up the existing trend of computer-related automation, disrupting labour markets just as technological change has done before, and requiring workers to learn new skills more quickly than in the past. Mr Bessen predicts "a difficult transition" rather than "a sharp break with history". But despite the wide range of views expressed, pretty much everyone agrees on the prescription: that companies and governments will need to make it easier for workers to acquire new skills and switch jobs as needed. That would provide the best defence in the event that the pessimists are right and the impact of artificial intelligence proves to be more rapid and more dramatic than the optimists expect.