The AI revolution

We may not realise it, but artificial intelligence (AI) has been a part of our daily lives — working quietly in the background — for decades. Now more advanced AI is beginning to take a front-seat role. Just how much will we come to rely on this technology and what risks does it pose to us?  

Rebecca Tunstall, Investment Manager, Rathbones

For the past decade Google has invited the world’s technology geeks to California to showcase its latest products and research. This year, at the open-air Shoreline Amphitheatre in Mountain View, it unveiled its most advanced AI assistant yet, called Duplex.

It was an event with the potential to be just as consequential for society as the creation of the company’s eponymous search engine two decades ago. And yet it was strikingly mundane, too.

Duplex booked Google CEO Sundar Pichai an appointment at a Californian hair salon — umming and ahing and incorporating the speech imperfections that make a real human voice so distinctive.

Duplex is a significant leap from Google Assistant, which is the company’s alternative to Apple’s Siri and has been on Android phones for some time. Google Assistant is easily distinguishable as an AI — you even have to activate it by saying: “Okay, Google.” There is no possibility of confusion. But Duplex was deliberately hiding its true nature with its life-like speech imperfections and conversational manner.

The hairdresser at the other end of the line had no idea she was talking to a machine. The 5,000-strong crowd watching on whooped, cheered and applauded. But around the rest of the world the reaction was decidedly more mixed.

How will you now know if you are talking on the phone to a computer or a human? Are computers becoming so clever that they can replace humans and leave us redundant?

The thinking machine

Artificial intelligence is as old as Velcro. The term was coined by American computer scientist John McCarthy in 1955, when he proposed the “Dartmouth Summer Research Project on Artificial Intelligence”, a workshop that brought mathematicians and scientists together the following summer to brainstorm the development of “thinking machines”.

The funding proposal for the two-month project offered a definition of AI that is still helpful today: “The study is to proceed on the basis of the conjecture that every aspect of learning or any other feature of intelligence can in principle be so precisely described that a machine can be made to simulate it.”

A new field of study was born. Early research explored logic and problem-solving techniques to work out how humans learn and adapt, and how those processes could be written into computer programs.

Progress was initially slow. When a young Nigel Shadbolt began a PhD in artificial intelligence at the University of Edinburgh in 1978, the department was under threat of closure on the grounds that AI “had no future”.

Now, 40 years later, Sir Nigel Shadbolt is a professor of computer science at the University of Oxford and recognised as one of the world’s leading experts on the subject. The future of AI has never looked brighter — or bleaker, depending on your disposition to dystopian predictions.

Two things have driven this change: the relentless growth in computer processing power and the rise of cloud computing.

Reflecting on the past four decades in a recent episode of the Rathbones Look forward series, Sir Nigel said: “If improvements in the speed of air travel had kept pace with the improvement of computer processors, we would be able to fly from London to Sydney in a quarter of a second.”

Most recently, the development of cloud computing means that AI can now soak up unparalleled amounts of data and draw on processing power far beyond the constraints of any single machine.

With more global internet users every day, tech companies also have access to the biggest data pool in history, which they can use to inform and shape their activities. It is no surprise that it is the large tech companies that are leading the way in AI investment. According to research by McKinsey, companies such as Google and Baidu invested $20 billion to $30 billion in AI in 2016.

Data-powered algorithms now influence our whole lives. Netflix, for example, has estimated that improving its personalisation algorithms to recommend programmes that customers are likely to enjoy has avoided subscription cancellations that would have otherwise reduced the company’s revenue by $1 billion annually.

Algorithms — commonly described as “the maths that computers use to decide stuff” — shape what public services we receive and how much we pay for insurance, flights, rail tickets and hotel rooms. They even help determine how stock markets behave.

AI is now driving machines, too. Amazon acquired robotics company Kiva for $775 million in 2012. Kiva, now renamed Amazon Robotics, automated picking and packing in the company’s warehouses, substantially reducing average “click-to-ship” times — from around an hour for a human to 15 minutes for the AI system — and cutting operating costs by around 20%.

However, the concern for many people is that the capabilities of AI are extending further into the realms of human work.

It is not hard to imagine that a fully automated AI with the high-quality speech synthesis and voice-recognition technology demonstrated in Duplex could render call-centre workers redundant.

Driverless transport poses another threat. Research company IHS Markit predicts that by 2040 some 33 million driverless vehicles will be sold globally each year, representing around a quarter of all vehicle sales. With just under 300,000 licensed taxi drivers in the UK in 2017, human drivers will struggle to compete with potentially cheaper and safer machine alternatives.

Despite these concerns, we need not be overly pessimistic, argues Sir Nigel. Disruptive technology almost always creates more jobs than we predict. He jokes that his “mum wasn’t a search optimisation engineer… There are whole jobs now we’d never thought of”.

And in the case of AI there will almost certainly be a transition period in which full automation is tentatively guided by human hand. Logistics companies may retrain staff to oversee the start and end of each journey, much as a pilot handles only the take-off and landing of a plane. Similarly, a single worker might watch over 10 or 20 AIs in a call centre.

Human augmentation

Not all AI is being designed to replace humans like-for-like. There are many “augmentative” AI projects in development, designed to boost human capabilities rather than substitute for them.

One of the most promising areas in which AI could augment human capabilities is healthcare. Infervision, a Chinese company, builds AI that uses machine learning and visual recognition to help diagnose lung cancer from CT scans and X-rays. Infervision hopes that its AI can provide a safety net for overworked doctors.

Google is working on technology that will help diagnose diabetic retinopathy, the fastest-growing cause of blindness, which is a threat to 415 million diabetic patients around the world. If found early the disease can be treated, but there are insufficient trained medical experts available to detect it. In tests Google’s deep learning algorithms have performed at least as well as ophthalmologists and could in future help healthcare workers screen many more patients in areas of the world with limited resources.

Google researchers found their AI was not only able to diagnose eye disease but could also pick up patterns in the retina that help predict the risk of a heart attack or stroke with up to 70% accuracy, just below the 72% accuracy of the conventional blood tests currently in use.

Ethical issues

Maybe AI is not as big a threat to livelihoods as we fear, but it still poses enormous ethical challenges.

The deception inherent in Duplex’s human-like voice is one. Google has since clarified that, on public release, Duplex will give prior warning to the people it calls, but history suggests that technology rarely stays contained. Many are anxious about the potential damage an AI like Duplex could do in the hands of hackers, scammers or those seeking to spread political misinformation.

Another issue is how organisations acquire the data that fuels their AI programmes. Many of the organisations investing most heavily in AI are under fire from large parts of the public, politicians and privacy activists. Companies like Facebook and Amazon need data to feed their AIs. Changes to European data privacy law under the General Data Protection Regulation (GDPR), for instance, mean that organisations must now tell users how they intend to use their data and obtain permission before doing so.

Constraining access to data may also constrain technological development. In 2017 Nvidia, a computer hardware company, trialled an experimental vehicle AI that taught itself to drive by watching human drivers. AI that can teach itself without human input may bring enormous benefits in fields of science where it could conduct research and make discoveries independently, but only if it has access to data.

Unsurprisingly, tech companies are lobbying against greater regulation, emphasising the importance of laws keeping pace with the evolving AI landscape. The kinds of AI that we live with in future may be moulded by whoever wins the battle for control of our data.

The AI odyssey

It is impossible to ignore the extent to which our perception of AI is rooted in the science fiction of the mid-20th century. Writers with outsized imaginations and experts with extraordinary predictions made for a rich creative soup, and films such as 2001: A Space Odyssey were wildly successful, shaping generations of public expectations about AI.

Today much of the public’s perception of AI remains disproportionately informed by fiction. AI means far more than the anthropomorphised, humanoid forms we see in films. It reaches into all corners of our lives and has been worked on, incrementally, by computer scientists for decades.

The coming “AI revolution” will affect us all, and it may prove as disruptive to society as the Industrial Revolution. It arguably offers far more benefits than threats; but those threats are real, and we will have to watch carefully for them.

For more on the future of AI with Sir Nigel Shadbolt, and to hear many other discussions on the future of our changing world, visit rathboneslookforward.com


The value of your investments and the income from them may go down as well as up, and you could get back less than you invested.