Basic AI
‘David Shrier’s new book has come at just the right time. A thought-leader
in this space for decades, in Basic AI David provides business leaders with
the context and decision-making frameworks they need to help them
navigate the challenges, opportunities and risks of AI. This is an accessible
and actionable book that cuts through the hype to get to the heart of what
CEOs and their teams need to know to prepare for the disruption that is
already happening’
Margaret Franklin, President and CEO, CFA Institute
‘Want to get up to speed with AI? This is the book for you. Filled with clear
explanations, real-world examples and helpful chapter summaries, David
Shrier takes us on a whistle-stop tour to the very heart of the future now.
The author eloquently sets out the myriad of challenges but, ultimately,
rightly, leaves us with confidence in our human selves or, as he puts it,
“optimism tempered with caution”’
Baron Holmes of Richmond, House of Lords Science and Technology
Select Committee
‘An indispensable and excellent book that should be a must-read for anyone
who needs to get to grips with the AI revolution we are living in. David
Shrier is an expert guide to what has happened, what is happening and what
is likely to happen in future. His knowledge and experience make Shrier an
ideal person to make the case for the vast opportunities that AI will bring to
the world. A brilliant read that should become a contemporary classic – it
leaves the reader not only better informed but better able to comprehend our
future reality’
Rt. Hon. Chris Skidmore, MP, former UK Minister of State for Universities,
Science, Research and Innovation
Basic Metaverse
Augmenting Your Career
Basic Blockchain
As editor:
Global Fintech
Trusted Data, Revised and Expanded
New Solutions for Cyber Security
Frontiers of Financial Technology
Basic AI
A Human Guide to
Artificial Intelligence
David L. Shrier
A How To Book
Copyright
Published by Robinson
ISBN: 978-1-47214-897-1
The publisher is not responsible for websites (or their content) that
are not owned by the publisher.
Robinson
Little, Brown Book Group
Carmelite House
50 Victoria Embankment
London EC4Y 0DZ
www.littlebrown.co.uk
www.hachette.co.uk
Part I focuses on the fundamentals: what is AI, how has it impacted us thus
far, and what risks does it present in the workplace?
CHAPTER 1
AI HEAD-MEDDLING
How could we have arrived at a point where millions of people may have
had their votes swayed in national elections? It’s enough to make you
depressed. Let’s consult a psychiatrist.
‘I’ll be your therapist today.’
‘I feel sad.’
‘Tell me more about such feelings.’
‘My children never call me.’
‘What does that suggest to you?’
By now, you have probably figured out that this is a primitive AI. In the
mid-1960s, however, when MIT researcher Joseph Weizenbaum first put
the DOCTOR script into the ELIZA expert system, it was a revolutionary
moment in computer science. This was the first chatbot that successfully
impersonated a real person, passing what’s known as the ‘Turing Test’. If
you’re curious, you can play with ELIZA yourself at websites like this one:
https://web.njit.edu/~ronkowit/eliza.html.
Mathematician and pioneer of computing Alan Turing, who cracked the
Enigma code at Bletchley Park to help the Allied forces win the Second
World War and who was deemed the father of artificial intelligence,13 posed
a benchmark for the development of AI. He said that if a computer program
could be written that could interact in such a manner that someone couldn’t
tell if they were interacting with a machine or a person, it would pass the
first threshold of demonstrating that it’s a machine that thinks like a human
being. It would, definitionally, be classified as ‘artificial intelligence’. We’ll
investigate more about the different types of artificial intelligences in the
next chapter, but it’s worth considering the point that the dawn of AI began
with a chatbot.
In modern times, we see chatbots providing customer service on
websites, tirelessly, benignly, without ever getting annoyed at dumb
questions. We also see them invading dating apps, fronted by repurposed
pictures of models or simply stolen images of attractive people, luring us
into subscribing to adult websites or, more malignly, conning lonely hearts
out of hundreds of thousands in savings. The invasion is so prolific it has
prompted numerous articles with titles like ‘How to Tell If You’re Talking
to a Bot: The Complete Guide to Chatbots’ and ‘Spot the Bot: Keep Bots
from Taking Over on Dating Sites’, and even a private investigation service
offering a piece on ‘How to Spot Scams & Bots on Tinder and OkCupid’.
The bots are coming after our bedrooms. The Nigerian prince scam is now
interactive.
The origin story of all of these scams was a humble parody of Rogerian
therapy (in which the therapist is effectively passive, asking questions that
reflect what the client has already said rather than initiating new areas of
discussion). The first bot was intended to illustrate how inane (in the view
of the authors) this particular brand of psychiatry was. Users were
absolutely convinced that ELIZA had feelings and could think. Our
tendency as a species to anthropomorphise inanimate objects found a new
model of expression that synchronised with technology engineered to
emulate human beings. Perhaps it was only a matter of time before we
would go from pouring our hearts out to an unthinking robot (one that was
actually written as a parody of psychiatrists, dumbly parroting back what
we said in the form of a series of prompt questions) to simply mistaking
political chatbots for actual people. People who think just like we do, even
down to the darkest corners of our psyches. But rather than soothe our souls
by ‘listening’ to our troubles, these chatbots encouraged polarisation to such
a degree that society fractured. The foundations of Western democracy
grew a bit shakier with these Manchurian chatbots, albeit ones that weren’t
fully autonomous, but were tuned and contextualised with the aid of human
intervention.
Chatbot-enabled electoral polarisation didn’t happen in a vacuum.
Another, more subtle, form of artificial intelligence systems had already
been working to push people apart in the form of the Facebook feed. People
like people who think like them. If you promote this positive feedback loop,
you can create an information or behavioural cascade that suddenly gets
large numbers of people moving in a certain direction. Demagogues have
known this for centuries. With the advent of AI, with the ability to make
chatbots that attack, this knowledge can now be made predictable and
scalable, with terrible implications for society.
Witness the polarisation of the American electorate over the course of
about twenty years. Donald Trump did not spring fully formed from the
brow of Ronald Reagan. The Trumpists of today would have found Reagan
to be a lefty, hopelessly liberal in their eyes. The Republican Party of the
1980s proudly talked about the ‘Big Tent’ that could encompass many
views. They had to – at the time, that was how you won elections, by
appealing to the centre. What happened to set up the current dynamic,
which plays itself out as much in the streets of Birmingham as it does in the
hallways of Washington, DC?
Let’s look at some numbers that reveal what polarisation looks like in the
American electorate:
As you can see from the chart, the political leanings of Americans
gradually shifted from having a fairly cohesive central mass to separating
into increasingly polarised ‘liberal’ and ‘conservative’ camps. From 1994 to
2004, despite the fragmentation of media, rancorous political debate and a
war in Iraq, Democrats and Republicans were relatively convergent and
there was a political centre. The blame could not be laid on Fox News,
which was founded in 1996. By 2015, the centre could not hold and things
flew apart. Arguably things have become even more extreme in the United
States since 2015, to such a degree that Democratic mayors in large cities
were fighting armed invasion and abduction of citizens off the streets by
unmarked government vehicles at the instruction of a Republican federal
administration.14
What happened in 2004, you might wonder? Facebook was founded.
There are some who may criticise me for confusing correlation with
causation. I am not alone in my contention that, by legitimising fringe
extremist sources, AI ‘newsfeed’ algorithms and unconstrained chatbots
were directly responsible for electoral polarisation.15
Facebook makes money through selling advertising, which is driven by
how many people look at the website and interact with its content. Through
AI analysis of our behaviour, Facebook discovered that more extreme
headlines give us a little thrill. When we get that charge, we stay on the site
longer and we click more – particularly on extreme articles that agree with
our intrinsic biases. Generally speaking, Facebook doesn’t care about
Democrat or Republican, liberal or conservative.* Facebook cares about
making money. More page views and more time on site means more profit.
Chatbots were weaponised16 and gleefully deployed into the fertile
environment created by Facebook and other receptive social media, in both
the UK and US (and to a lesser degree in France and other places). They
supported far-right extremists and far-left extremists. Anything to break the
political discourse. Intelligence agencies and government committees have
pointed to Russia, Iran and North Korea as sources of a number of attacks,
including use of weaponised chatbots and other attack vectors,17 up to and
including a series of hacks surrounding the 2020 US Presidential election.18
One analyst group called these ‘Weapons of Mass Distraction’.19
Chatbots do strange things when they are left unsupervised. Technology
titan Microsoft suffered a bit of a public relations disaster in 2016 when it
launched AI chatbot Tay. Digesting masses of data from Twitter, Tay
rapidly taught itself to be both racist and sexist. Microsoft pulled it down
within sixteen hours of launch.20 If a simple chatbot can so quickly go off
course in an unsupervised learning environment, what might happen with a
more sophisticated artificial intelligence in a mere few years, one that is tied
to critical systems? What if our self-driving car decided that it would be
happier without some smelly human ordering it around?
In a way, we are fortunate that chatbots are so dumb. Yes, they were used
to attack democratic systems. They were noticeably machinelike, however,
and so researchers and security professionals have been able to identify
them. Twitter (now called ‘X’) was until recently a progressive company
that proactively sought to neutralise these AI-fuelled bots. Under current
ownership, those efforts have been curtailed or eliminated. We may soon
reach a point in artificial intelligence systems development where we see –
or rather don’t see – imperceptible AI, invisible AI, systems that
convincingly create video and audio simulations of people that are so
accurate they are impossible to tell apart from the real thing. Then we can
really start worrying about what happens when bots attack.
Perhaps I ought to be more worried about the simple threat of the AI
author and thought leader that can read more than I can, write better than I
can, and produce work at a greater speed than I could ever hope to. Is there
anything we can do to avert a world of digital twins that displace us from
our jobs? Or are we destined to become obsolete, a faint memory in the
minds of the beings who created the machines that will replace us as the
dominant life form on the planet? How far away is this grim vision of evil
bots from the mirror universe, coming to take over our lives, steal our
money and ruin our reputations?
Thanks to generative AI, we can now easily produce highly realistic
‘deep fakes’: video and audio that seem to come from a real person but are
actually manufactured by a machine. US President Joe
Biden can be made to chant an amusing rap or, more ominously, warn of
impending nuclear attack. The risks to society, to the foundations of
democracy, have never been greater.
In Chapter 9, we’ll discuss some of the emerging policy responses to AI
threats.
HALLUCINATIONS
An AI hallucination is an expression of artificial intelligence going ‘off the
rails’. Specifically, it means that the AI is doing things that it was not
trained to do — so much so that it enters into a kind of delusional state, and
begins making things up.
For example, you might teach an artificial intelligence to speak colloquial
language, and it starts spewing nonsense. Meta describes it as when ‘a bot
confidently says something that is not true’.22
This reveals an important and fundamental characteristic of artificial
intelligence: its lack of reliability.
Thanks to popular culture, we are given to thinking that artificial
intelligence is either predictable or, in its own mind, rational. Take, for
example, the artificial intelligence in 2001: A Space Odyssey, which attacks
the crew because it was given bad programming by humans. There’s a rational
explanation for why the AI stopped doing what it was supposed to be doing,
and started killing people. It did so because people messed it up with
conflicting instructions. The AI was trained to pursue truth and honesty, and
then people told it to lie.
Artificial intelligences are actually a great deal more complicated and
less predictable than how they appear in media. Some of the most powerful
approaches to artificial intelligence deliberately and directly design systems
based on the architecture of the human brain. The AI has layers upon layers
of linked networks, similar to how the human brain is composed of linked
networks of neurons. This layered approach of neural networks can make
AI extremely powerful. This is how ChatGPT works – what’s referred to as
a large language model built in a deep learning system.
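For readers who like to see things concretely, here is a minimal sketch in Python of the layered idea described above: data flows through successive layers of weighted connections, loosely analogous to linked networks of neurons. The layer sizes and the activation function are my own illustrative choices, not how ChatGPT or any production system is actually configured.

```python
import numpy as np

# A minimal sketch of a layered ('deep') network: each layer is a matrix of
# connection weights, loosely analogous to linked networks of neurons.
# Sizes and the ReLU activation are illustrative only; real large language
# models use billions of weights and far more elaborate architectures.

rng = np.random.default_rng(0)

layer_sizes = [8, 16, 16, 4]            # input -> two hidden layers -> output
weights = [rng.normal(0, 0.1, (m, n))   # one weight matrix per pair of layers
           for m, n in zip(layer_sizes[:-1], layer_sizes[1:])]

def forward(x):
    """Pass an input vector through every layer in turn."""
    for w in weights:
        x = np.maximum(0, x @ w)        # ReLU: a simple non-linearity
    return x

print(forward(rng.normal(size=8)))      # a 4-element output vector
```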
It shouldn’t surprise anyone, then, that the AI misbehaving looks a bit
like the human brain misbehaving. And so we use a psychological term to
describe this erroneous behaviour: hallucinations.
Because ChatGPT and similar large language systems build sentences
based on relationships of prior words, the longer the piece you ask them to
write, the greater the chance the output spirals off in some really odd directions.
However, if you iterate with the AI, a practice sometimes called
‘prompt engineering’ (as we will discuss further in Chapter 7), you can
refine the outputs and get significantly better answers.
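For the curious, here is a minimal sketch of what that iterative loop can look like in code. The generate function below is a placeholder of my own for whichever model or API you actually use; it is not a real library call.

```python
# A minimal sketch of iterative prompt refinement ('prompt engineering').
# `generate` is a stand-in for a call to a large language model; the prompts
# and refinements are illustrative assumptions, not a recommended recipe.

def generate(prompt: str) -> str:
    """Placeholder for a call to whatever model or API you use."""
    return f"[model output for: {prompt!r}]"

prompt = "Summarise the causes of the 2008 financial crisis."
draft = generate(prompt)

# Instead of accepting the first answer, fold constraints and the previous
# draft back into the prompt and ask again.
refinements = [
    "Limit the summary to three bullet points for a non-specialist reader.",
    "Cite one concrete example for each cause, and flag anything uncertain.",
]
for extra in refinements:
    prompt = f"{prompt}\n{extra}\nPrevious draft:\n{draft}"
    draft = generate(prompt)

print(draft)
```

The point is simply that each round feeds constraints and the earlier output back into the next request, which is where much of the improvement tends to come from.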
We continue to research, and train our AI researchers, on dealing with
model management, model drift and systems design. For example, there
was a major paper a few years ago on the concept of ‘underspecification’23
– essentially, you train AI on one set of data and it gives you good results,
so you deploy it, but then it encounters the real world and you discover that
the training data wasn’t representative of real-world conditions, so the
model ‘breaks’, producing bad results.
AI hallucinations are another source of AI error, one that is still poorly
understood by expert AI researchers and remains an area of intense study –
one that needs more funding; AI is still very much an emergent technology,
eighty years after we built the first AI during the Second World War. With
AI hallucinations, the AI will not only make up fake ‘facts’, it will even
fabricate citations (fake scientific articles) in order to support its delusion.
This might include clever fakes: actual authors and real scientific journals,
but fictitious articles. Caveat emptor, indeed.
What this means, in practical terms, is that we should be very wary of
undue reliance on generative AI to the exclusion of human involvement.
Let’s take a minute to look at what the different kinds of artificial
intelligence are, and where the state of the art stands today. That’s the
subject of our next chapter.
EXPERT SYSTEMS
Rules-based expert systems and other kinds of rules-based computer
systems were some of the earliest AIs. ‘If A, then B’ is the central
programming concept. You get a smart human to imagine all of the possible
answers to a set of questions or circumstances, or perhaps you create clever
mimicry, as in the case of ELIZA and newer-generation rules-based chatbots.
The machine is following a big set of rules that deterministically drives its
actions: if a certain circumstance is presented, take one action; if another
circumstance is presented, take another, and so on.
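To make the ‘If A, then B’ idea concrete, here is a toy sketch in Python of an ELIZA-style rules table. It is not Weizenbaum’s original DOCTOR script – the patterns and responses are my own illustrative inventions – but it captures the mechanism: match a pattern, emit a canned response, understand nothing.

```python
import re

# A toy rules-based ('if A, then B') chatbot in the spirit of ELIZA.
# Each rule pairs a pattern with a canned response; there is no learning
# and no understanding, just pattern matching against the user's words.

RULES = [
    (r"\bI feel (.+)", "Tell me more about feeling {0}."),
    (r"\bmy (mother|father|children)\b",
     "What does your relationship with your {0} suggest to you?"),
    (r"\bI am (.+)", "How long have you been {0}?"),
]
DEFAULT = "Please go on."

def reply(message: str) -> str:
    for pattern, template in RULES:
        match = re.search(pattern, message, re.IGNORECASE)
        if match:
            return template.format(*match.groups())
    return DEFAULT

print(reply("I feel sad"))                 # -> Tell me more about feeling sad.
print(reply("My children never call me"))  # -> question about 'children'
```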
The reason most chatbots today seem dumb is because they are. They
qualify as artificial intelligence by the most basic instance of the definition.
An important myth to puncture is the notion that ‘artificial intelligence’ is
automatically smarter than humans. Chatbots are typically following a
variation of a table of rules, sometimes randomly accessed (such as in the
case of some Tinder scambots), sometimes tied directly to a discrete set of
answers like in an online help system. Often minimally programmed, they
are as limited as their list of answers to specific phrases or words. I would
argue that the reason ELIZA worked better than expected by its creator is
that it was put into a very specific context, the therapy conversation, and it
was mimicking a very specific type of therapy, Rogerian, which consists of
asking questions back to the patient following a near-formulaic model.
ELIZA has been the inspiration for many of today’s modern chatbots, at
least in spirit, but chatbots that use this model are only as good as the
particular programming (and programmers) involved. It is possible to
manufacture smarter chatbots, as we will learn in Chapter 8, but many of
today’s chatbots are primitive and rules-based.
More serious expert systems have considerably more investment placed
into them, nonetheless following the same principles of a programmed set
of rules that trigger specific actions or results based on specific inputs. The
catch is that programmers have to spend quite a bit of time in a structured
capture process, gleaning information from experts in order to program an
expert system. This, in turn, limits the applicability of these types of AI.
Research on the performance of expert systems shows that their
effectiveness is influenced by such factors as the narrowness of the
questions being answered, the manner in which the information architecture
is designed, and the people involved (both expert and programmer). The
type of knowledge itself limits the viability of the expert system.1
What would be a viable expert system, versus a novelty? One example
would be configuring computer hardware and software.2 This used to be a
task that needed extensive manual labour by a highly trained IT
professional. It is, however, a constrained system: there are only so many
types of computers, and only so many possible combinations of hardware
and software. This is a dynamic that lends itself to codifying in a taxonomy
of rules and specific scenarios; if faced with system X, then make software
perform function Y.
While posing a modest threat to the livelihood of the IT department, a
second-order effect of removing the direct intervention of a highly trained,
on-site IT professional was that it made it more feasible to move IT
functions offsite. Eventually, even offshore: AI-enabled labour disruption
twinned with the trend of globalisation. The downstream effects have been
profound in many ways. In 2017, I was one of the twenty-thousand
passengers caught up in the mess when the entire British Airways computer
system went down across the globe. Stranded at Heathrow, I was fighting to
get across the Atlantic so I could give a speech to the central bank of Chile.
Air Canada eventually rescued me. It turned out that outsourcing BA’s IT
function was not perhaps the best systems reliability strategy.3 I was given
to understand that a poorly trained outsourced systems engineer mishandled
some critical power systems, and since they had made redundant the
internal BA IT people who knew how to handle the problem, there was no
one to call. They essentially broke the on/off switch. Prophetically, British
Airways had another notable IT failure while I was attempting to finish this
book, cancelling my flight out of Mumbai in the middle of the night.
What if you run into complex systems? What if you aren’t sure what you
need to do with a different set of circumstances? Expert systems
definitionally rely on frameworks that can fit within the scope of the human
mind. What if you are faced with gigantic amounts of data that defy the
span of human cognition? The limitations of expert systems, coupled with
growing compute power, the plummeting cost of memory and the
accelerating pace of data created or captured by sensors and sensing
platforms large and small, led to the emergence of another form of artificial
intelligence: machine learning.
MACHINE LEARNING
Machine learning (ML) is a more sophisticated form of artificial
intelligence than expert systems. Machine learning programs learn from
data and tend to get more accurate as you feed them more data. When I say
‘more data’, I mean very large quantities of data. It is perhaps no
coincidence that machine learning systems really began to come into their
own within the past ten to fifteen years, as the quantities of data generated
from internet usage – from satellites, from wearable computers, from
mobile communication networks and other sources – started to achieve
truly massive volumes.
Let’s pretend it’s 2014, and you are building a new facial recognition
system using machine learning. With a limited data set, you can get to about
96 per cent accuracy. Great. It still means that roughly one time out of
twenty-five, it’s going to get it wrong. Thanks to the masses of data being pummelled
through facial recognition systems, by 2023 the accuracy rate is now closer
to 99.9 per cent.4
One of the issues is that the system needs to be able to deal with data
variety. The difference between 2014 and 2023 is that enough images have
been fed through ML algorithms that they are now much better at dealing
with so-called ‘edge conditions’ – those rare events that only show up
occasionally but are enough to mess up your entire model. For example, if
you have trained your model only on people without beards, or only white
people with beards, your system might struggle with someone who has a
different skin tone or facial hair. If your model never encountered close
family members that resembled one another, it might never learn how to tell
siblings apart.
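Here is a minimal sketch of that failure mode in code, using synthetic data rather than a real face-recognition pipeline. A simple model is trained on a narrow population, scores well on data that resembles its training set, and falls apart on an ‘edge’ population it never saw. The numbers and the scikit-learn model choice are illustrative assumptions, not anyone’s production system.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Sketch of the 'edge condition' problem: a model trained only on a narrow
# slice of data does well on similar data and poorly on cases it never saw.
# The synthetic features are illustrative, not a real face-recognition system.

rng = np.random.default_rng(0)

def make_data(n, shift):
    # Two classes separated along the sum of two features; `shift` moves the
    # whole population, simulating data the model never encountered.
    X = rng.normal(loc=shift, scale=1.0, size=(n, 2))
    y = (X[:, 0] + X[:, 1] > 2 * shift).astype(int)
    return X, y

X_train, y_train = make_data(2000, shift=0.0)   # narrow training population
X_easy, y_easy = make_data(500, shift=0.0)      # test data that looks the same
X_edge, y_edge = make_data(500, shift=3.0)      # 'edge' population never seen

model = LogisticRegression().fit(X_train, y_train)
print("in-distribution accuracy:", model.score(X_easy, y_easy))
print("edge-case accuracy:", model.score(X_edge, y_edge))
```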
This concept of edge conditions runs throughout computer modelling; it
is not limited to facial recognition. Steve Wozniak is reportedly worth more
than $100 million, but when his wife applied for a credit card, she was
purportedly offered a credit limit one-tenth of his, as the system had
no concept of how to handle the wife of someone that wealthy.5 In the 2008 financial
crisis, and when the Long Term Capital Management hedge fund collapsed
a decade earlier, computer models were confronted with so-called ‘Black
Swan’ events that cascaded through markets and triggered catastrophic
losses.
This same concept holds true for teaching machines to translate words
that humans express verbally into data, to help cars drive themselves, to
enable a SpaceX rocket to land itself in an upright position. Billions of data
points are fed into a system that identifies similarities among sections of
data, and the ML system begins to get better at predicting where the next
piece of data should go. If the system doesn’t have a big enough data set
that shows it very rare outlier events, it can struggle and sometimes fail. For
example, in 2016 an earlier version of the Tesla Autopilot system caused its
first driver fatality when the computer got confused by bright sunlight
reflecting off the white side of a tractor-trailer, and failed to avoid a traffic collision.6
We see this everywhere in our daily lives now. Whether it’s Netflix
recommending a TV show we would like, or Alexa being able to
understand our food order, ML systems have permeated popular consumer
digital applications.
Another breakthrough in the ML era was the aforementioned publication
of TensorFlow by Google in 2015. The magic that is derived from trillions
of searches that have been run through Google has been reduced to a
computer software package, which Google then made freely available.
TensorFlow democratised access to powerful, fast machine learning
capabilities.
GENERATIVE AI
Generative AI is a type of artificial intelligence that is able to generate new
content in response to inputs provided by a user, which could consist of text,
pictures, audio or other data.
It is a probabilistic system that looks at interconnections. For example, if
you’re modifying an image, such as a photo, it looks at a massive library of
prior photos, and then determines, for example, if it sees a white pixel (the
smallest element of the photo), what’s the likelihood that the
next pixel adjacent to that pixel is going to be white, grey, black or some
other colour. This very simple concept turns out to be incredibly powerful
when applied in practice. It can be used to construct entire new photos, as
well as essays, novels, computer code, deepfake voice files, and more.
Computer scientist Stephen Wolfram has provided an excellent analysis
of how generative AI systems like ChatGPT work, which is beyond the
scope of this book. If you’re interested in the detail, I highly recommend
you read his post at https://writings.stephenwolfram.com/2023/02/what-is-
chatgpt-doing-and-why-does-it-work/.
At a high level: he explains the concepts behind the functioning of
ChatGPT (built on a large language model called GPT-4, as of this writing,
although GPT-5 is already under development, and I’m sure GPT-6 can’t be
far behind). The user gives the AI a prompt, perhaps a sentence or a few
sentences, perhaps more. The AI then begins painstakingly elaborating on
this prompt. If it’s language-based – perhaps the user has asked the AI to
write a story – ChatGPT will look at a word, and then look in its massive
library to see how commonly another word appears next to that word. If
you give the AI the prompt ‘The boy runs …’, the AI might have a series of
choices of what comes next: ‘quickly’, ‘away’, ‘towards’, and so on. Word
by word, sentence by sentence, paragraph by paragraph, the AI system
laboriously constructs meaning. I say ‘laboriously’ because these large
language models are computationally (and monetarily) expensive.
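To see the ‘what usually comes next?’ intuition in miniature, here is a toy Python sketch that counts which words follow which in a tiny, made-up corpus and then samples continuations in proportion to those counts. Real large language models use neural networks over vast corpora rather than raw counts, but the question they answer at each step is the same.

```python
import random
from collections import Counter, defaultdict

# A toy next-word model: count which word follows which in a tiny corpus,
# then pick continuations in proportion to those counts. The corpus is an
# illustrative invention, not real training data.

corpus = (
    "the boy runs quickly . the boy runs away . the boy runs towards the park . "
    "the girl runs quickly . the dog runs away ."
).split()

follows = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    follows[current][nxt] += 1

def continue_text(start, n_words=5, seed=0):
    random.seed(seed)
    words = [start]
    for _ in range(n_words):
        options = follows[words[-1]]
        if not options:
            break
        choices, weights = zip(*options.items())
        words.append(random.choices(choices, weights=weights)[0])
    return " ".join(words)

print(follows["runs"])        # e.g. Counter({'quickly': 2, 'away': 2, 'towards': 1})
print(continue_text("runs"))
```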
ChatGPT can also write computer code, which has resulted in a dramatic
acceleration in the potential pace of software development. Other
generative AI systems like Midjourney have been producing fantastical
images that don’t exist in reality but are readily fabricated by AI. Today,
generative AI tends to be pretty good at summarising a large volume of
input provided to it, but not so good at creative tasks.
CONVERSATIONAL AI
Conversational AI is an interface layer and system of AI that draws heavily
on the realms of deep learning and sometimes generative AI and AGI. With
conversational AI, you don’t need to be a software developer in order to be
able to interact with AI. You can simply describe a problem or query to it,
and it will elaborate on your idea and give you several sentences, or a page,
or several pages, or an entire document.
Primitive versions of conversational AI showed up in our lives at scale as
Siri and Amazon Echo. You needed to speak with a certain vernacular
(‘Siri, play classical music’), but you could talk out loud to the computer
instead of having to press buttons or type.
The next generation (whether spoken out loud or typed) of
conversational AI systems shows up in the ChatGPT interface, where you
speak to the computer as if you were talking to a human being. Describe a
problem or a question, and ChatGPT (or Bard, Google’s generative AI
version, or one of dozens of others now springing up) will give you
answers. No less august an authority than Bill Gates has cited one such
conversational AI system, from a low-key startup named Inflection, as the
potential big winner in the future. While he hopes that Microsoft will win
the battle, Mr Gates has hedged his bets with a company co-founded by
Reid Hoffman (LinkedIn), Mustafa Suleyman (DeepMind) and others.
Inflection hopes to be a real-world implementation of a helpful AI assistant
similar to JARVIS in Marvel’s Iron Man movies.14
One clever bit is that the system is also used to provide website security
to – ha! – ward off invasions by bots. Only humans are smart enough, right
now, to usefully decipher images in a certain way; the average bot cannot,
and a website is thus able to protect itself from attack by armies of dumb
bots. So, you present a human user with a picture and break it down into
squares – let’s say a four-by-four grid – and then have them identify which
of the squares contains pictures of traffic lights or cars or fire hydrants.
It’s a trivial task for a human being, but one that is quite difficult, today,
for most bots and for many AI systems. These kinds of systems will also
look at how you interacted with the website before and as you were clicking
on the pictures, based on scrolling, time on site and mouse movement. It’s a
good way to screen out fake users from real ones. And along the way, in the
background, masses of people are training the Google AI to be smarter
about how it analyses images. As of 2019, over 4.5 million websites had
embedded reCAPTCHA, delivering more than one hundred person-years of
labour every single day labelling and classifying image data. With a few
basic assumptions, we can put an economic figure on this use of people to
make AI better. The cost of reCAPTCHA is perhaps $0.001 per
authentication (according to competitor hCaptcha). The value of an
individual image classification or annotation can be $0.03, so you could
estimate $21 billion or more in labour arbitrage that Google has extracted
by having people train AI for free.19
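To show where a figure of that order of magnitude might come from, here is a rough back-of-envelope calculation. The labels-per-hour, hours-per-person-year and years-of-operation numbers are my own illustrative assumptions, not figures from the sources cited above; change them and the total moves accordingly.

```python
# A rough back-of-envelope sketch of the labour-arbitrage estimate above.
# Values marked 'assumption' are illustrative, not from the book's sources.

person_years_per_day = 100        # from the reCAPTCHA figure cited above
hours_per_person_year = 2_000     # assumption: a full-time working year
labels_per_hour = 900             # assumption: roughly one label every 4 seconds
value_per_label = 0.03            # $ value of a human annotation (cited above)
cost_per_authentication = 0.001   # $ cost quoted by hCaptcha (cited above)
years_of_operation = 12           # assumption

labels_per_day = person_years_per_day * hours_per_person_year * labels_per_hour
arbitrage_per_label = value_per_label - cost_per_authentication
total = labels_per_day * 365 * years_of_operation * arbitrage_per_label

print(f"{labels_per_day:,.0f} labels/day, "
      f"~${total / 1e9:,.1f} billion over {years_of_operation} years")
```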
Amazon’s version is called Mechanical Turk or MTurk, and it’s rentable
as a service. Not limited to images, you can put in any kind of repetitive
task for bidding, such as deciphering meaning among user interviews or
transcribing audio recordings. Interestingly, Google’s YouTube subsidiary has
been using MTurk for years to help improve its AI with human
intelligence.20 Amazon accesses a global network of people, and low-cost
overseas labour can provide scale for a variety of tasks.
The labour function is interesting. On the one hand, systems like MTurk
pay people for tasks, typically less than $1 per human per task. This creates
a new income stream for people, one that can be pursued on the side
alongside other work. On the
other hand, a number of these systems that are being trained are then able to
displace human activity in areas like photo editing or video or audio
curation, or advertising, or retail pricing analysis.
BREEDING CENTAURS
Let us return to the chess discussion. After chess grandmaster Garry Kasparov was
beaten by IBM’s Deep Blue in 1997, he began experimenting with the
notion of bringing together humans and machines to do things that neither
could do alone. Take an application that you might encounter in your daily
life: the weather forecast. Computer weather models are good, up to a point.
But they don’t produce the best weather forecast. Human insight and human
intuition, combined with a good weather model, can improve the prediction
significantly – perhaps by 20 per cent, on a good day.21 You can notice the
difference if you take time to compare purely computer-generated forecasts
with those created with humans in the loop, and then map them to what
actually happens with the weather.
These ‘centaur’ creations – half human, half machine – hold the potential
to unlock heretofore untold heights of human achievement. Before we
ascend to such lofty heights, we are finding terrestrial applications of
centaurs. Porsche, for example, is using them to optimise manufacturing
processes, combining the fine-tuned ear of an operations engineer with
acoustic sampling and modelling to uncover vibration-related issues.22
Sanjeev Vohra is a former Accenture executive who led a number of its
AI efforts. He is intrigued by what could happen with AI in a computer
gaming context. What kind of experience might you have if the AI and you
were part of a team in competitive play against other AI–human pairings,
perhaps in a complex combat simulation or a strategy game? Could your AI
teammate help achieve a certain objective in close coordination with you?
How could these game-world constructs then be applied to similar ideas of
humans and machines together in simulations intended to help with work,
or political decision making, or artistic creations? While Sanjeev is at the
cutting edge, the people he, in turn, looks to for inspiration are those
building real-time AI systems where humans and AI interact frequently for
common or conflicting objectives, using very fast, very dynamic AI. These
AI models are learning how to interact with humans based on the
behaviours of those same humans.
Prediction markets are a type of Human+AI system that we will delve
into in Chapter 6. At a high level, however, you can think of them as a way
to use networked systems to gather collective human intelligence and put it
together in useful ways. The way they work is that a group of people is
gathered together (virtually, these days). Each person predicts some future
event, like the date of something occurring or the price of a stock.
The predictions are a decent party trick, but tend to have an error rate that
makes them unusable for anything serious. When we bring artificial
intelligence into the equation, however, we are starting to find ways to tune
these prediction markets and make them very accurate.
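As one small illustration of what ‘tuning’ might mean, here is a sketch that weights each forecaster by their past accuracy instead of taking a plain average. This is simply one way an algorithm can adjust a group forecast; it is not the specific method we will examine in Chapter 6, and the numbers are invented for the example.

```python
import numpy as np

# Weight each forecaster by past accuracy rather than taking a plain average.
# All figures are illustrative assumptions.

past_errors = np.array([4.0, 1.0, 9.0, 2.0])              # avg absolute error per person
new_predictions = np.array([105.0, 98.0, 140.0, 101.0])   # e.g. a price forecast

plain_average = new_predictions.mean()

weights = 1.0 / past_errors          # more accurate forecasters get more weight
weights /= weights.sum()
weighted_forecast = float(weights @ new_predictions)

print(f"plain average: {plain_average:.1f}, "
      f"accuracy-weighted: {weighted_forecast:.1f}")
```

Down-weighting the erratic forecaster pulls the combined estimate away from the outlier, which is the basic intuition behind letting an algorithm tune the crowd.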
Other types of Human+AI systems that we explore in Part III include the
AI ‘coach’ that can improve your day-to-day job performance, and the
ability to surface the latent collective intelligence of an entire company or
institution to engage in complex tasks. We see a little of this idea in the
Mechanical Turk, but emergent systems are inverting the benefit: instead of
exploiting masses of people ever-so-slightly in order to help an AI, we can
use AI so that human systems are stronger, more versatile and more nimble.
We may discover, as we experiment with Human+AI capabilities, that we
create new forms of society that we can only dimly grasp today. There
could be new ways of people interacting with each other, new ways of
cooperating and solving problems, new abilities to form mutual
understandings and new capacities to protect the weakest members of the
human tribe. Technologies such as irrigation, antibiotics and electricity
created higher standards of living for many people in disparate geographies.
Human society was able to thrive and grow thanks to these advances, and a
positivistic view on our ability to shape AI would lead us to believe there’s
a transcendence awaiting us, beguilingly just beyond our current grasp, if
we apply AI in the right way.
First, however, we must confront the very real possibility of large-scale
unemployment as AI systems create a ripple effect through the global
economy. With it we may see social unrest and upheaval increase, not
unlike the radical changes that followed the Industrial Revolution in the
1800s and early 1900s. It won’t happen overnight, any more than the
creation of sophisticated deep learning systems came weeks or months after
the first expert system. It does, however, loom large over the next decade.
What trends in labour have been emerging over the past fifty years that can
point us towards what we should expect next?
One interesting statistic that Tortoise has identified is that the UK,
Canada and Germany are among the top publishers of AI academic
research.
Indeed, both Canada and the UK have vigorously pursued the AI
opportunity. My work with the governments of each country, and in
academia in the UK and US, has included assisting with the
commercialisation question: how can we better activate the latent
intellectual capital that has so readily been visible in the academic context?
What methods will foster innovation ecosystems such that Canadian and
British dominance in publications around AI research more effectively
makes its way into the production of AI commercial applications?
You can see from the diagram that the UK, Canada and Germany have
significant academic productivity in AI research. What steps can we take to
better catalyse this effort to realise commercial potential? At the time of
writing, I am in the midst of helping to launch an artificial intelligence
institute to help multiple academic institutions to create new AI companies
out of university research, with an emphasis on ‘trusted’ or ‘ethical’ AI.
How can we embed human ethics and morals into AI systems (which of
course raises the question: whose ethics and morals)? There is a material
benefit to companies that embrace this approach, or at the very least a
material diminution of risk – billions of pounds in fines have been levied
against companies like HSBC and Meta for violations of regulations
covering anti-money laundering and data privacy. AI systems materially
increase the risk of more such violations, if they are not properly
safeguarded.
The challenges before us are daunting and invigorating in equal measure.
At its heart, trustworthy and responsible AI (the domain we are developing
in the Trusted AI Institute) seeks to ensure that the artificial intelligence
systems we build and deploy reflect the values of humans. When they make
decisions, we want them to make decisions that are legal, moral and
reflective of the ethical code that is relevant to the particular society that is
deploying that AI. These values can be quite specific – even within Europe,
for example, the UK has a different perspective on the trade-offs between
security and privacy than, say, Germany. The UK has decided to sacrifice
personal liberty in order to construct better security; a web of CCTV
cameras was able to identify the terrorists behind the 7/7 London bomb
attacks within seventy-two hours. Germany, in contrast, has decided to
protect privacy, even if it results in a somewhat degraded security posture.
The rapid replacement of person with machine has created some curious
challenges in knowledge-worker industries. For example, let’s look at the
labour structure of the investment bank. You could think of this
organisational model as a bit of a pyramid scheme. At the bottom are low-
paid (per hour) analysts. They scrap among themselves and work ninety-
plus-hour weeks, hoping to make their way into the associate pool (perhaps
after an MBA). The associates, still numerous but now making a tiny bit
more money, are in turn vying for promotion to vice president. The vice
presidents are much fewer in number, and help to lead teams who do the
day-to-day work of an ‘engagement’.
At the top of the pyramid are the sought-after managing director or
partner roles. Captains of the expense-account meal, responsible for an area
or book of client business, these individuals participate in the profitability
of the firm (for the sake of this simplified analysis, we’ll leave out
discussion of executive directors, senior managing directors, assistant vice
presidents, etc.).
Part of the system is simply a labour-profitability model: pay an analyst
or associate a modest amount of money, charge the client much more, and
the partners make money on the difference. The ‘compensation ratio’ – pay as a
share of revenue, usually around 50 per cent (plus or minus) – is a key metric
that investors use to evaluate publicly listed investment banks.3
Another important function of this pyramid model, where one might
spend fifteen years going from analyst to partner, is to train up on the
culture of the firm, first to embody it, and then to be able to train others in
it. For it’s not only the simple input–output of hours and spreadsheets that
enables Goldman Sachs or Barclays to command a premium rate compared
to lesser-known names. The method of working, the way of
communicating, the attitude of the banker, these all go into the culture
equation. How do you teach cultural behaviours to a machine?
Top investment banks readily embraced the use of AI for ‘grunt work’
typically conducted by analysts, which includes building spreadsheets from
piles of data from clients. This seemed a natural evolution of their prior
embrace of outsourcing, where this commodity labour was moved from
high-cost cities like New York or London to low-cost cities like Mumbai.
AI was simply the next evolution in the optimisation drive. Every analyst
replaced by an AI was another set of profits going directly to the partners.
Suddenly, having cut through the lower ranks of the labour pool,
investment banks noticed they had a problem. Who are you going to
promote to partner, if you’ve eliminated everyone below you? How will
succession be handled? Many of these types of organisations have a pension
or retirement plan supported in part by the ongoing activities of the firm –
how will that be managed if there’s no one left to be promoted? ‘Last one
replaced by AI, turn off the lights’ – the phrase assumes a certain macabre
predictivity in this instance. I spoke to one major investment bank recently,
a firm that consistently ranks top ten in the world. By my estimate, they
could replace 90 per cent of their research department with AI, saving them
perhaps as much as $100 million annually. For a variety of reasons, they
won’t. However, one of their competitors, or future competitors, will.
Accounting firms, management consultancies and a variety of these
‘elevator asset’ businesses – so-called because the assets of the company
leave in the elevators every day – are caught in this tension between the
need to increase profitability and the confusion over what the future of the
firm looks like when AIs have replaced the historical training ground of the
next generation of the firm’s leaders.
Indeed, Evercore’s fundamental analysis found that white-collar jobs
involving skills such as mathematical reasoning, number facility and oral
expression (core activities for investment bankers and management
consultants, and a major focus for entry-level professionals in these fields)
are highly susceptible to AI. By decomposing jobs into individual tasks,
comparing these to what activities different kinds of artificial intelligences
do well, and then reconstituting them against industries, we can explore
exposure to AI:
Source: O*NET, BLS, Census, Felten et al. (2021), Evercore ISI Research
MEDIA MISGUIDANCE
In the early 1980s, legendary futurist Nicholas Negroponte and MIT
President Jerome Wiesner foresaw that this new thing that digital
technology would enable, something labelled ‘media’, was going to
transform everything. They managed to convince a number of sizable media
companies to fund research into this and created the ‘MIT Media Lab’. Out
of this innovation hotbed a multitude of new technologies has emerged,
ranging from LEGO Mindstorms and Guitar Hero to the E Ink that powers
the Amazon Kindle.
The media saw it coming. And they dismissed it.
I was a digital media investor for NBC in the late 1990s. For a period of
time, we were pushing the frontiers of how an incumbent industry player responds to
technology-driven disruption, because NBC had a very
forward-looking senior executive named Tom Rogers who built an entire
team to pursue new opportunities. We were early investors and adopters of
the kinds of technologies that make the internet go today, from edge
network systems like Akamai to streaming services like Hulu, the creation
and development of which were supported by our investments. Meanwhile
our colleagues in the print media world who were trying to help their
conglomerates pivot into digital, in media ranging from magazines to
books, struggled. NBC and the other network companies have experienced
massive change due to AI and other digital technologies, to be certain, but I
will argue that they have held on to more of their economic value than the
conventional print media outlets have. If anything, some film and television
companies have gone too far, too fast – adopting AI technology so readily
that it has stimulated a backlash in the form of widespread strikes of actors
and writers. My former NBC colleague David Zaslav, CEO of Warner Bros.
Discovery, finds himself in the middle of this AI automation controversy.16
Take a look at advertising spend on print versus digital channels like
Google and you will understand why the newspaper industry collapsed:
Newspapers used to make money from those little classified
advertisements at the back. The team of advertising specialists who would
answer telephones or make outbound calls to car dealerships to convince
them to place ads were replaced with automated machine bidding systems.
Armies of typesetters and printers’ assistants and delivery personnel were
replaced with digital systems. Between Google search on the one hand and
craigslist on the other, their entire business was decimated and they reacted
too slowly to the changes. The chart above very clearly illustrates the
geometric growth of Google’s digital ad sales, and with it the precipitous
decline of newspaper print ad sales.
‘At least journalists are safe,’ argued some, pointing out that a writer
could live anywhere and still write and publish immediately. It was only the
advertising teams that were getting upended.
Right?
Perhaps not.
The business press, a speciality media segment, saw some of the earliest replacements
of human editorial staff with computers. The financial information that
public companies need to report every quarter around their revenue and
earnings is highly structured data and consistent across different companies.
So instead of having a financial reporter read these releases and then write
up an article, machines entered the picture to interpret what was happening
with the trajectory of a company’s financial performance.
More recently, editorial staff has been replaced within Microsoft with
artificial intelligence. You see, far more news is generated every day
than readers have the attention span to review. So an editor or
team of editors cull the information, talk to their reporters, and decide what
appears on the front page, or above the fold, and what might be buried
deeper or never be published.
That human editorial control is being replaced by AI algorithms. Now
machines are deciding what will appear where.
Microsoft is hardly alone in doing this. Facebook’s feed is governed by a
set of algorithms determining what’s going to be more visible for you;
someone else will get a completely different set of information. This was
not done for various nuanced purposes; they just wanted to make more
money.
In fact, quite a bit of research was conducted by Facebook and others on
the human brain and what gets you excited. If a company like Facebook
shows you something that you like, or that reinforces your existing beliefs,
you get a little hit of dopamine. And this makes you want to engage a little
more, and you get another hit, and so on. The system of positive
reinforcement makes Facebook addictive.
Prior to the coronavirus crisis, each day the average Facebook user
would scroll on their app an amount of content equal to the height of the
Statue of Liberty. Under COVID-19 lockdown, according to the Facebook
ad team, that figure doubled.17
But what happens if you simply let these algorithms evolve and allow the
artificial intelligence to decide what information to display and what is
acceptable to show to which people? What if you remove humans from the
loop?
It turns out that provocative false information is much more exciting than
boring old accurate news. More and more people start sharing this
provocative information over social networks, and you get what are known
as information cascades where false rumours are accelerated by AI. Thus is
born ‘fake news’. Around 2.4 billion people get their news and commentary
under the aegis of a Facebook algorithm.
Along with this, real news journalism is fading. Reality simply can’t
compete with the imagination of conspiracy theorists. And so we have
another sector of the economy that is going through widespread disruption,
and tens of thousands of people are made redundant. The larger effects on
societal opinions and norms of letting machines shape the longer-term
patterns of opinion and thought held by ever-increasing numbers of people
remain to be understood.
VULNERABLE JOBS
Researchers at Oxford’s Martin School have estimated that 47 per cent of
US jobs could be replaced by AI by the year 2030. Examining 702 job
categories, they found that AI risk is moving from low-skill to high-skill
professions, and from repetitive, mindless tasks to those requiring high-
order pattern recognition, from so-called mechanical tasks to more
elaborate forms of thinking. They feel the best defensive position, the jobs
at lowest risk, will be in fields that require ‘creative and social intelligence’,
something we will discuss further in Chapter 6.18
AI PROGRAMMERS
In a bit of a ‘man bites dog’ situation, the construction of increasingly
sophisticated AIs will be taken over by slightly less sophisticated AIs
within ten or so years’ time, displacing human AI programmers. Google, of
course, is at the cutting edge of this field, creating a kind of AI called
AutoML that can program itself without human intervention. It was created
in response to the challenge of finding enough machine learning
programmers, and has a goal of enabling someone with a primary-school
educational understanding to be able to create AIs.29 It is perhaps inevitable
that most AI programming will be done in the future by computers, not
people, at least in fine detail.
We are seeing the beginnings of this with tools like GitHub Copilot and
similar systems that automatically generate code. The conventional wisdom
I am hearing from those more technically inclined than me is that you can
replace a junior programmer, or ten junior programmers, with GitHub
Copilot – but you can’t (yet) replace one senior programmer. This may
change and we may see ‘seniors’ go away as well, but meanwhile, the same
pipeline issues we discussed in banking are emerging in software. Where
are the junior jobs, where someone learns their skills and capabilities to be
able to grow into senior jobs?
There are roles involved with the design, support, care and feeding of AI
that will continue to be valuable (and as Erik Brynjolfsson has said to me,
‘for every robot, there is a robot repairman’), but the jobs dislocation, the
disruption required to go from point A in the innovation curve to point B,
may result in restless armies of people who can’t make the switch and
become permanently unemployable.
INTELLIGENCE SOLUTIONS
I decided to talk to one of the warriors at the front line of the AI revolution.
An intense, expansive thinker, Sanjeev Vohra lays out a structural and
thematic view of what’s happening with AI and work, informed by his
vantage point of having recently served as the head of Accenture’s fifty-
thousand-person-strong AI business unit. He also had a seat on the
company’s managing committee, which gave him perspective across all of
Accenture’s businesses and clients. I find that he’s worth listening to.
Sanjeev feels we are ‘just entering into the era of intelligence’, a concept
that encompasses not only the rise of artificial intelligence, but also the
growth of understanding of human intelligence, and the birth of systems
that incorporate the best of each of AI and human intelligences. In his view,
the AI-enabled digital revolutions of the past ten years have been led by
native digital businesses like Airbnb or Netflix that lack the ‘baggage’ that
a Global 2000 company (one of the world’s 2,000 largest companies)
carries with it. The revolution is now starting to pivot to encompass rank-
and-file industrial companies, traditional consumer brands and all other
areas of corporate life.
Intelligence wasn’t a pressing topic for CEOs until a few years ago, but
now it is part of Sanjeev’s active conversations with multiple CEOs. He
guides clients on a journey that encourages them to consider the following
questions. When you think about how to apply AI in your business, what
does ‘good’ look like for your particular company? What kind of outcomes
are you hoping to achieve? What impact does it have on your workforce?
How do your customers interact differently with you when you have AI
intermediating or assisting? What can AI tell you about your competition,
and how can you respond differently based on that knowledge?
The touchstone he uses to imagine the impact AI can have at work is the
adoption of enterprise resource planning software, or ERP. As Sanjeev puts
it, SAP, which pioneered the category, ‘changed the world’. The change
that occurred in large, old-line companies was radical once the data were
transparent, their businesses and siloes were connected to each other, and
they had a single source of truth about what was happening.
Accenture takes a methodical, structural approach to the way it integrates
AI into organisations. It reviews a business, its functions and processes, and
identifies roles within it. It then considers how humans and AI can perform
these roles and systematically redesigns the business around this analysis.
Accenture is not alone in mapping this systematic evolution of the labour
force. McKinsey, Deloitte, BCG and Cognizant, to name just a few of the
major consultancies, are giving thought to this question of the role AI will
play in the future of work, as are multilateral bodies such as the World
Economic Forum and the OECD.
I find it notable that both Accenture and Mastercard, two very different
companies, refer to their advanced AI divisions as ‘Intelligence’, not
‘Artificial Intelligence’. It’s a distinctive erasure of the artificiality. This is
an intrinsic, inevitable change. Just as we no longer refer to ‘digital
computers’ and now call them ‘computers’, so too we may move away from
calling out ‘artificial’ and look at the technology through a spectrum of
intelligence, some of it with less human involvement and some of it with
more.
SUMMING UP
Artificial intelligence is a technology that has been developing for many
years, and has accelerated its evolution within the past decade. There are
many different varieties of artificial intelligences, from the somewhat
simplistic rules-based expert systems to the more evolved machine learning
and deep learning platforms. International interest has grown in AI
systems development, with a new arms race emerging
between China and the Western economies. The UK is prominent among
G7 countries for its outsized investment in AI relative to its population, but
questions remain as to how long this leadership position will last.
With these more sophisticated computer technologies comes the ability
to reasonably replace ever-larger segments of the human workforce and of
human capabilities. This process is not without its moral hazards, in
creating large numbers of unemployed humans (who in turn express
dissatisfaction at the ballot box); and sometimes not without physical
hazards, as we have seen when AIs operating motor vehicles fail. AI
automation may have a more severe impact on the manufacturing
economies of the developing world, but it doesn’t discriminate in its
inexorable march up the salary ladder. Even investment bankers, once
deemed lords of all they surveyed, are now facing dwindling ranks due to
AI automation.
All hope is not lost. There are new possibilities emerging in the AI future
that create opportunity for a nimble and forward-thinking knowledge
worker of tomorrow.
With AI replacing people in all walks of life, from the assembly-line
worker to the journalist, what can you do to avert lifetime unemployment?
How can you adapt and thrive in the AI future? In our next chapters in Part
II, we will explore the kinds of skills you need in order to not just survive,
but flourish in an AI-enabled workplace and society. We will start with the
fundamentals of cognitive flexibility, and then talk about the kinds of
careers that can better position you for the inevitable AI automation wave
that is cresting. We will subsequently dive into the territory of what kinds of
performance, and what kinds of organisations, can be created when
Human+AI hybrid systems are optimised.
4. Peer learning
One of the best ways to learn is through peer education. Medical education,
for example, runs on the principle of ‘Watch one, do one, teach one.’ It’s
very high touch, insofar as the ratio of instructor to student is one-to-one or
perhaps at worst one-to-four – this is not something that scales. It’s lengthy
as well: four years of medical school followed by several years of advanced
training. When we want to instil mission-critical learning, however, where
lives are literally at stake, we rely on this peer learning-based model.
You’ll notice I use words like ‘learning’ a great deal, and am disdainful
of words like ‘education’. My former business partner Beth Porter and I like
to say, ‘Education is what happens to you, learning is what you do.’ And
instructor-led training is one of the least effective forms of education, the
‘sage on a stage’ model where a learned individual spews forth facts at you
and you are supposed to write them down and figure out what they mean.
The best way I can coach your learning is to provide a little bit of
information, then you do something to apply it, ideally in a group, then I
provide a little more information, we discuss it as a group, then you go off
and work on your own trying to apply the information and elaborate on
what I’ve said. Finally we come back together as a group and we (not just
me, but all of us) critique the various projects to provide further insight.
I have run into impatient students who resist the group-project model.
‘Why should I have to be stuck with a bunch of people who aren’t as smart
as I am?’ I have been asked, in one form or another, over the years. I have
observed that some of the students who protest most vociferously about
being allocated to a team are among those who benefit the most from the
team activities.
One of the benefits of peer learning is that if you are forced to explain a
subject to someone else, you tend to understand it better yourself. Your
intuitive grasp of a particular topic, which might be half-organised in your
mind, acquires greater structure and solidity when you are required to
reduce its principles to an explainable form that someone else can absorb.
Here’s some interesting research out of MIT from a few years ago: the
scientists took 699 individuals, divided them up into teams of between two
and five, and set them to work on complex tasks. The participants were
evaluated on measures such as intelligence quotient (IQ). The most
successful problem solvers were not the teams with the highest average IQ,
nor were they the teams that had an individual with the highest IQ (the
‘surround a genius with minions’ strategy). Instead, the teams that
performed the best, that had the highest collective intelligence, were those
with an exceptionally high emotional intelligence quotient or EQ.7
Part of what’s going on is the diversity function: having many different
perspectives on a problem increases the probability of finding a solution to
it. In fact, Northwestern University research conducted on over 3.5 million
workers in 154 public companies validates that high levels of diversity,
coupled with a disciplined and scaled innovation function, produce the best
financial outcomes for corporations.8
What is also happening in these dynamic group interactions is that
instead of ruminating on the problem inside their own heads, the team
dynamic forces individuals to begin to trade concepts back and forth,
strengthening the best solutions and discarding the flawed ones.
Over the past five years, Professor Pentland, one of the authors of the
original collective intelligence study, has been working with Beth Porter
and me to create a repeatable way to measure these interactions (known as
‘instrumenting’ them); to begin to stage interventions on them; and over
time to optimise team performance, whether that’s in a learning setting or
other kinds of activities. We created a spinout company around this called
Riff Analytics that conducted US National Science Foundation-funded
studies in these areas.9 As you will learn in Chapter 9, we have discovered
you can obtain better learning outcomes or craft new knowledge if you
bring together the right mix of people in the right kind of structured
environment, and have AI help you work better together.
5. Creative exploration
Mastering peer learning can also open up more possibilities of creative
exploration. Creative exploration (or ‘serious play’ as the LEGO
corporation would have it) is a building block that leads to other, more
advanced, concepts such as gamification and complex problem solving. In
the AI future, any analytic or quantitative task is going to be done faster,
more accurately and with greater scalability by a machine. The role of
humans will be focused on emotional intelligence-dependent functions
where creativity and inventiveness are required. So far we haven’t
succeeded in creating an artificial intelligence system that is creative by the
definition that we use in human systems.
How can we make ourselves better at creative exploration?
Someone might derisively describe an individual as having ‘the mind of
a child’. When I hear this expression, I associate it with being open to new
experience, neuroplasticity (meaning your brain can readily adapt to new
ideas and is better at learning) and being endlessly inventive. The rigid
ordering of conventional educational systems takes that creativity and
playful spirit, and typically shaves off any bits that don’t conform to a
dogmatic adherence to memorisation and regurgitation of fact. We then
expect these people to go into the world and build new businesses or solve
problems for society. Some of us are lucky: we have exposure to a well-
designed undergraduate experience that focuses on critical thinking instead
of storing facts. Storing facts becomes increasingly useless as the rate of
change of knowledge accelerates. Critical thinking is timeless.
Children engage in creative exploration all of the time. They do so solo,
making up imaginary friends or scenarios. They do so in groups,
collectively envisioning heroic settings or strange new worlds. Listening
and talking to each other, they trade ideas back and forth, experimenting
with concepts, throwing them away effortlessly and trying out others as
they endlessly create.
A famous creative collaboration experiment known as the Marshmallow
Challenge has been conducted globally with many different types of groups.
In eighteen minutes, with limited resources (a marshmallow, some string
and tape, twenty pieces of uncooked spaghetti), competing teams work to
build the tallest tower. It opens a fascinating window into problem solving
and group dynamics. Tom Wujec has a wonderful TED Talk explaining the
key insights, which is worth reviewing at:
www.ted.com/talks/tom_wujec_build_a_tower_build_a_team.
Here are some of the findings:
Five-year-olds are among the top-performing groups. MBA and law
students are among the worst-performing team configurations, as they
waste precious minutes navigating status and planning. They are searching
for the one best answer, rather than experimenting and discarding several
ideas rapidly. Five-year-olds jump in, immediately start trying out different
configurations, with subtle social signalling as they grab pieces and interact
with each other. What’s notable is that not only do the teams of young
children perform well, but that it’s a team activity, rather than a single
designer coming up with the idea. And the children often produce the most
interesting structures.
The most vibrant, scalable and repeatable innovations tend to come from
creative collaborations, not solo genius. Diversity, as it turns out, is a
necessary input to effective ideation. Social scientists may refer to this as the 'strength of weak ties', but the core concept is that many different
perspectives being brought to bear on a problem can often reveal
unexpected avenues of solutions, or even open up new solution spaces.
People who all come from the same backgrounds and cultures tend to bring
the same collection of mental models, fact bases and modes of thought. The
more different perspectives you can introduce into a brainstorming
discussion, the greater the likelihood you can fabricate creative collisions
that produce truly breakthrough thinking.
Accordingly, when you need to solve a complex problem at work, think
about who you solve that problem with. Who can you enlist to assist you
with the analysis and generation of potential solutions? How can you
diversify your creative inputs to the process? Perhaps you can ‘opinion
shop’ ideas with a variety of colleagues rather than simply attempting to
resolve the issue alone. You could bring several colleagues from different
functional areas or cultures together, possibly augmented by external
experts, and conduct a focused brainstorming session around the problem.
One of the crucial tasks in unleashing your creative exploration
capabilities is getting in touch with the imaginative part of your brain,
which the educational system has worked to diminish. You can do this
through a variety of individual exercises, such as writing or drawing. You
can engage in meditation, to clear distracting thoughts, and mindfulness, to
gain heightened awareness of yourself and your surroundings. And if you
are looking to excel, you will find ways that you can participate in creative
play with others.
Now that we’ve taken the time to understand what good learning looks like,
I’m going to talk about which subjects are best positioned in the AI future.
This conversation isn’t limited to what subject you ought to study if you are
university age, because with the rate of technology evolution increasing, we
are entering an era where you will want to acquire new skills every few
years, and eventually every year, to remain relevant.
Although an education revolution is needed, for now most of the world's
employers expect you to show up with a credential from a recognised
institution, at the very least an undergraduate degree. Even if you are past
school age, as you probably are, you may have children, cousins, nieces or
nephews who are thinking about what to study. My children will be
graduating from school into the face of society’s AI max Q (what the
aeronautical engineers call peak dynamic pressure, or the point in a rocket
launch when the body of the rocket is undergoing maximum stress). As a
university teacher, what would I tell my own children to study?
READING LIST
When you think about job security, and what you might want to pursue in
undergraduate studies today, you would probably imagine that computer
programming would be a good field. After all, if AI is everywhere, don’t
you want to develop an understanding of the technology that runs our lives?
Yet computer programmers will be replaced, first by systems that are 'no code', where a human can simply tell the computer what they want to happen, and
eventually by AIs that design themselves. Perhaps the last human
programmers to go will be those whose focus is on the theoretical design
and architecture behind systems, but there exists the very real possibility
that twenty years from now we will be looking at a world where ‘computer
architects’ will be fairly abstract conceptualists, or possibly the software
engineering function will consist of a single administrator who provides
broad guidance to an array of machines. While not an immediate-term
worry for next year or the year after, the matter of job security becomes
more pressing as we look ahead ten or twenty years and think about career
longevity.
The answers to the question, ‘Which subject areas are best suited to the
AI future?’ might surprise you.
SPARK OF CREATIVITY
The creative arts remain a uniquely human endeavour. While you may see
AIs essaying digital quill and brush to create ‘art’, I find the images
(however exciting they may be from a conceptual standpoint) lack empathy.
There is an indescribable emotional affinity conveyed by a human artist that
machines still lack, whether that be in fine arts or in performance.
Design, too, is a field where machine inventions remain a novelty and a
bit limited, and at a minimum a human mind directing a machine
implementer seems more likely than wholly replacing humans throughout
the aesthetic process. Notwithstanding advances in artificial intelligence
systems, the human touch is still essential to marrying meat-based cognition
(i.e., Homo sapiens) with silicon-based technology.
I ask Sanjeev Vohra, who recently managed Accenture’s fifty thousand
AI developers, what advice he would give his own children, or my children,
for a stable career when they graduate. Sanjeev has a son, just out of university, who wants to go into a new speciality area at the interface of people and machines. As he puts it, 'designing AI to meet the
human’. Just as the digital media era brought User Experience (UX) design
to the fore, the artificial intelligence era is bringing forward this AI
interface design role. Sanjeev’s tantalising hint of an answer lies in the field
of bringing human ingenuity to smoothing the interface between human and
machine.
THE HUMAN TOUCH
Healthcare remains a field where in the foreseeable future – ten, twenty or
possibly thirty years – humans are still needed front and centre in the
experience. Not only doctors, but nurses, pharmacists, physicians’
assistants, physical therapists and other healthcare professionals. While AI
is beginning to show up in various areas of medicine in a supporting role,
such as cancer diagnostics and treatment plan recommendation,5 it is
ultimately human decision makers who are accountable, and human
medical professionals working with the patients. Dentists and dental technicians represent skilled professions that will be difficult to replace with AI-powered robots any time in the next twenty to thirty years.
It’s not only moral and ethical judgement made by humans that
accompanies deciding a course of treatment. If a medical professional interacts positively with a patient, the patient is more likely to follow care instructions and even engage in spontaneous activity to help their own health
journey. When patients develop an emotional connection with their health
professionals centred on trust, it leads to better health outcomes.6 This is not
yet something that can be readily replicated by a machine dispensing care
instructions.
EMOTIONAL INTELLIGENCE
The ‘soft skills’ ingredients for emotional intelligence were for many years
dismissed in education in favour of teaching more ‘tangible’ facts, figures
and capabilities. What’s interesting to observe, as awareness of the
relationship between soft skills and corporate success rises, is the growing
focus of institutions on identifying, hiring for and cultivating emotional
intelligence or ‘EQ’ in their workforce. The field is still in its infancy;
managers trained in traditional methods will pay lip service to the need for
EQ and then promote for, and incentivise around, quantitative skills
(because they are easier to measure) – at least until they make entire
departments redundant by replacing them with AI. Emotional intelligence is
harder to outsource to a machine, although newer forms of AI include those
that actually help measure and develop EQ, as we will discuss in Part III.
Before we do, however, it would be beneficial to extend our discussion
around areas of study into a strategic lens on defensible industries. Where
do you want to develop your career path, in a manner that is resilient to
potential automation from AI? Which industries will prove to be the most
resilient to the AI future, or at a minimum are best positioned to benefit
from AI technologies?
With these questions in mind, I have to admit it is a struggle to offer compelling answers. There is no one industry that is entirely 'AI proof', no
magic sinecure that will provide thirty or forty years of untouched income.
Where there are people, there is labour. Where there is labour, there is cost,
and with cost, the inexorable forces of capitalism and the public markets
will drive towards creating market efficiencies, i.e., reducing labour. Some
countries, such as Germany, have involved labour unions in corporate governance, but protecting high-wage jobs will create natural inefficiencies when competing against companies domiciled elsewhere.
Gerard Verweij, Global Data & Analytics Leader for consultancy PwC,
said, ‘No sector or business is in any way immune from the impact of AI.’7
I am inclined to agree. You can run, but you cannot hide. You can, however,
focus on industries that are more resilient to AI disruption, and apply
yourself within those industries in a way to extend your lead over the
machines.
The answers to AI adaptability – career areas that help you evolve along
with the evolution of AI – will align with the selection of the best
undergraduate studies I described previously, but also include blue-collar
professions. The industries most protected require creativity, emotional
intelligence and other ‘soft skills’, or they require capabilities for which
robots are impractical at our current state of the art.
HEALTH
Health services will remain the province of human practitioners, for the
most part, for some time. Select areas of medicine such as radiology or
diagnostics are showing suggestive results where AIs equal or outperform
human doctors,8 but current standards of care in medicine require a human
decision maker with human accountability. As well, patients prefer
interacting with people, particularly in areas like physical care. We have yet
to create an AI-powered robot that can give a bed bath that people feel
comfortable with. Not because we can’t automate robotic medical bathing –
more than a decade ago, researchers at Georgia Tech created a robot that
demonstrated it could give sponge baths9 – but rather because most people
would rather receive empathy and care from another human being.
In the more distant future, when truly humanoid robots emerge that are
indistinguishable from people, or if a generation raised on Roombas reaches
retirement age and is fine with being touched by machines for intimate
personal care, this may change, but until then the human health worker is
secure.
ENERGY SYSTEMS
Installing wind farms or solar panels will still require people in the next ten
or twenty years. Further out, it may be possible for robotic labour to replace
humans; for now, we are still reliant on human hands to erect wind towers
or place panels. The power industry, more broadly, relies on human
implementation and deployment of power systems, and at least some human
intervention on the maintenance of them. Partly, there are mechanisms that
still benefit from the human touch. Partly, there is the conscious decision
not to put an AI completely in control of, say, a nuclear power plant, but
instead to require human-in-the-loop technology. Extractives such as oil and
coal, despite various bits of automation, still rely on a significant amount of
human labour both upstream and downstream.
CONSTRUCTION
Although I am not as sanguine as some with respect to the ability of the
construction industry to resist automation, there is an argument that it will be a very long time before AI displaces humans. This is to some extent
due to the nature of the work, given that the ground-level hand labour in
home construction occurs in a wide variety of uncontrolled environments
with numerous variables, which a moderately trained twenty-five-year-old
human can easily handle but a very expensive and sophisticated robot today
struggles with. To some extent, it is because of the industry structure of
construction, with numerous specialised subcontractors working under the
direction of a general contractor. Each element of home building has an
array of complex choices and these different choices – ranging from what
kind of fixtures you choose to the kind of flooring you put in – need to
seamlessly integrate with each other.16 While in theory robots can do parts
of these tasks, and in theory you can assemble prefabricated homes with the
design selected from a catalogue and the parts snapped together out of a kit,
most people don’t. New home construction is the purview of builders
operating in a complex ecosystem of contractors.
GOVERNMENT
The industry of government will be among the last to accept widespread
adoption of AI systems. I don’t mean selective use of AI for certain
purposes – the company Palantir has reached over $2 billion of revenue in
2023 from a few handfuls of government contracts – but rather widespread
replacement of human government workers with AI systems. I refer
specifically to the staff professionals in government versus politicians.
While Japan may have had an AI mayoral candidate,17 that is an outlier
event rather than an immediate trend. The rank-and-file government
workers who manage various government departments and programmes are
likely to remain in place for years to come.
AND YET …
Thinking of an industry sector that will forever be completely untouched by
AI automation is difficult in the extreme. As artificial intelligences get more
sophisticated, the range and nature of fields in which they can be deployed
are increasing.
Hiding out in a particular field isn’t the answer. If you want to win at
work in the age of AI, you need to find ways to make AI work for you,
rather than against you, augmenting your capabilities, rather than seeking to
replicate them.
You will need AI and digital literacy to succeed, even in primarily
human-centric professions like design or health services.
The next section will explain what this emerging model of Human+AI
hybrids looks like, and how you can bring artificial intelligence into your
competitive toolkit.
CHAPTER 5: THE RUNDOWN
MEETINGS, REIMAGINED
What if there were a different version of reality?
Computer systems have required sophisticated programming in order to
learn how to address and manipulate unstructured data like human speech.
Siri, Google Assistant, Amazon Echo: these are technologies that can extract
words or even sentences out of spoken language. They’re pretty good at
English, as they started out being trained by engineers in the western United
States (and other tech clusters) in English language. For many of my books,
increasingly I am able to dictate a third to half of my content, and the built-
in transcription in the iPhone is able to understand me reasonably well. My
editor can share with you how much my words still need to be corrected,
but it’s a huge leap forward from the speech-to-text systems of the 1990s
that you used to have to ‘train’ by reading to them pre-recorded scripts. In
2020, a straight-out-of-the-box consumer device can understand almost
anyone (typically speaking in English), and provide 95 per cent or better
accuracy even under less-than-ideal conditions. The other 5 per cent is the
subject of numerous jokes in popular culture.
Me: ‘Alexa, play “Despacito”!’
Alexa: ‘Praying for desperados.’
These current-generation AI systems are less proficient when confronted
with the thirty-eight languages used by Bangladesh’s 161 million citizens,
as I discovered when I met with the country’s Minister of
Telecommunications to talk about his interests in better artificial
intelligence as a means of promoting access and inclusion in his society.
Large segments of the population were illiterate as well, so a conversational
system (not based on English-language AI) was needed that could
understand and respond in thirty-eight languages and multiple dialects. Off-
the-shelf AI wouldn’t work for them. They needed a smarter artificial
intelligence. What they wanted was even more ambitious: could this AI
actually understand what people meant, and make recommendations to
them when they called up the government for help with their taxes, with
finding a job, getting food aid or dealing with a housing issue?
MEETING OF MINDS
Let’s revisit that ‘well-constructed’ meeting we talked about earlier, and
pretend we had artificial intelligence engaged in idea learning. Imagine if a
conversational AI were listening in on the discussion between Tom and me,
automatically generating notes of what we said using speech-to-data.
Imagine further that the AI was tied in to our calendars and emails, so it had
some ideas around the context of our conversation. Imagine that AI was
able not only to transcribe our words, but also understood the meaning
behind them, and was able to extract the major themes of our discussion,
pick up the verbal cues of people being assigned responsibilities, and sketch
out a project plan.
In this new kind of human–machine hybrid interaction, we could look
each other in the eye and focus on each other, with an AI in the background
acting as an enabling digital assistant. The AI could even populate follow-
up meetings into our calendars. The drudgery of transcription, note taking,
scheduling, all get lifted from our shoulders, so we can focus on higher-
order thinking.
A piece of software known as Otter.AI can take care of this for us today.
With a few tweaks, it could tie into our project management software, and
assign and track tasks, and help us stay on plan.
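To make the idea concrete, here is a toy sketch in Python (my own illustration, not how Otter.AI or any commercial product actually works) of the simplest possible way to pick out assignment cues from a transcript. Real systems rely on far more capable language models, but the underlying task is the same: spot who has agreed to do what.

import re

# Toy transcript: each line is "Speaker: utterance".
transcript = """
Tom: I think the launch plan needs a budget section.
David: Agreed. Tom will draft the budget section by Friday.
Tom: And David will circulate the revised project plan next week.
"""

# Crude cue: a sentence of the form "<Name> will <do something>." signals an assignment.
pattern = re.compile(r"\b([A-Z][a-z]+) will ([^.]+)\.")

for name, task in pattern.findall(transcript):
    print(f"Action item for {name}: {task.strip()}")

A real assistant would of course go much further, resolving pronouns, dates and project context, but even this crude pattern hints at why meeting audio is such fertile ground for AI.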
Imagine what an organisation could look like if AI minders were keeping
track for us of what we want to remember. Imagine the improvements in
efficiency if we didn’t have follow-up items slip through the cracks, if we
had an unobtrusively helpful machine reminding us of key workflow
priorities, helping us manage our teams and ourselves better so that a
project came in ahead of schedule and under budget. Imagine further if
these AI systems could assess probabilities around delivery, and could help
us adjust if problems were forecast to arise.
HEALTHY MACHINES
The Human+AI revolution is not limited to the meeting room or the
boardroom. Imagine what these smart AI assistants could do in a hospital
setting. This new generation of AI assistants could actually help save lives.
One of the major causes of death in hospitals is attributable to something
known as ‘transitions in care’. It’s a term of art to describe what happens
when a patient is transferred from one medical provider to another. Over 3
million people die every year, according to the World Health Organization,
due to medical error. Poor communication among medical professionals is
one of the top factors leading to these deaths.
Let’s imagine that you are in your home and suddenly experience
respiratory distress. Your friend, spouse or partner calls an ambulance. The
emergency medical personnel in the ambulance are the first care providers
to interact with you. They take vital signs, check your breathing, listen to
your pulse. They’re capturing this information in some form, but maybe
they don’t have time to write it down accurately because they are busy
saving your life.
The ambulance pulls up to the hospital. Rushing you off on a gurney, the
medical technician quickly runs through your symptoms with the nurse or
doctor at the emergency department. Did they remember everything
accurately? Did they transmit all of the vital information?
The emergency physicians work on you but it doesn’t seem to be solving
your respiratory distress. Your anxiety goes through the roof as the beeping
of the machines and the moans of the other patients add to an already
stressful situation. You spiral further. Your throat closes over, and they
aren’t able to put a tube down your throat to help you breathe. The
emergency physicians have to perform a tracheotomy, a procedure where
they cut open part of your throat to directly access your windpipe and help
you breathe. You are stabilised, for now, but you are in bad shape. They
transfer you to the intensive care unit (ICU). A nurse brings a wheeled
stretcher to carry you from the emergency department up to the ICU. The
nurse then explains your symptoms and reviews your chart with the new
physician looking after you in the ICU.
At each patient handoff from one care provider to the next – from
ambulance technician to emergency department physician to nurse to ICU
physician – there is a conveyance of information. Was all of the information
transmitted correctly? Was enough of it captured in your ‘chart’, the set of
documents that describes you, your medical history, your condition, how
you were treated and what drugs were administered?
Electronic medical records (EMR) were supposed to help address these
issues, but as one doctor described it to me, she’s now staring at a machine
instead of keeping her eyes on the patient. Many of the critical cues about
patient health that are picked up from visual observation are now absent,
because she’s making sure that the balky EMR system is accepting the data.
One of the more enlightening TEDMED Talks that I have heard proposed a
revolution in medicine, a world in which we’d have not only doctors
without borders, but also doctors without keyboards. It’s no wonder when
you understand the strategy behind EMR software companies. The chief
medical officer of one of the three largest EMR providers explained to me
that the customer they serve isn’t the doctor or nurse. It’s the chief finance
officer of the hospital. EMR systems are optimised to generate financial
reports for CFOs, not outcomes for patients. Much becomes clear.
Journey now with me to a world where Ben Vigoda and other innovative
computer scientists are permitted to integrate their AIs in the hospital
setting. The ambulance, the emergency department, even the gurney could
be wired up with microphones feeding data to sophisticated artificial
intelligences that improve care and even help prevent disease.
AI can do more than listen. Anmol Madan, a frequent buffet-line
companion of mine at the World Economic Forum annual meeting in
Davos, has started a company called Ginger.io that provides proactive care
through AI. Anmol began Ginger.io to capitalise on a computational social
science breakthrough emerging out of wearable computing developed in
Professor Pentland’s lab at MIT. Wearables are essentially tiny computers
that people take with them everywhere, enabling health measurement to go
from a periodic event (you go to the hospital, the doctor evaluates you, you
leave) to a continuous event (your Fitbit or a similar device that constantly
monitors various signals from your body). Mobile phones are the most
ubiquitous wearable in the world, carried with us everywhere we go and
packed with sensors. Those sensors generate streams of data, and
particularly tuned machine learning systems can extract predictive patterns
from that data.
What Anmol and his colleagues realised, based on this data, was that
people who suffered from chronic mental health conditions such as
depression began exhibiting signs of depression before they were even
consciously aware of them. They contacted fewer people, they were less
social. They went out of their home less, or not at all. They moved around
less when they were inside their home. Sleep patterns were disrupted. They
slept too much, or too little, or with too many interruptions.
With permission of the user, simple signals from the mobile phone could
pick up these disturbances in someone’s routine. The individual’s medical
provider could be contacted to enable an early intervention, before the
person has to be hospitalised or worse. Not only was Ginger.io’s AI
assistant understanding signals about a patient’s health, it was predicting
which signals were indicative of serious issues, and facilitating interaction
with a health provider through remote delivery in order to address them
quickly. We now have continuous protection around our mental health,
providing better care, faster, at a lower cost than before.
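As a purely illustrative sketch (invented thresholds and data, and emphatically not Ginger.io's actual model), the kind of routine-disruption signal Anmol describes could be scored along these lines:

# Hypothetical daily features derived from a phone, with the user's consent.
week = [
    # (calls_made, hours_outside_home, hours_slept)
    (5, 3.0, 7.5),
    (4, 2.5, 7.0),
    (2, 1.0, 9.5),
    (1, 0.5, 10.0),
    (0, 0.0, 11.0),
    (1, 0.0, 4.0),
    (0, 0.0, 11.5),
]

def risk_flags(day):
    """Count simple departures from a healthy routine (thresholds are invented)."""
    calls, outside, sleep = day
    flags = 0
    flags += calls < 2                  # unusually little social contact
    flags += outside < 1.0              # barely left the home
    flags += sleep < 5 or sleep > 10    # disrupted sleep, too little or too much
    return flags

flagged_days = sum(risk_flags(day) >= 2 for day in week)
if flagged_days >= 3:
    print("Several concerning days this week - consider prompting a check-in with a care provider.")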
The privacy implications of these super-smart AIs are, of course,
profound. There are laws in various countries that govern personal privacy,
that dictate how medical information is handled and secured, and the
conditions under which it may be shared. All of those rules would need to
be taken into account in the AI systems we would deploy into these
settings. The European Union and other domiciles have not only instituted
strong privacy protections, but are putting in place rules and systems around
use of artificial intelligence, including with respect to private data.
ACTIVATING HUMAN+AI
We’ve established that we could take a good deal of the tedious, menial
work in a meeting and offload it to a machine, and that machine can help
keep us, our team and our project on course around a set of deliverables.
We’ve explored the use of AI helpers in medical settings, whether it’s at the
hospital or in the home. We could even use AI to reduce the rate of
industrial accidents. We are, however, still in the realm of relatively passive
systems. The AI assistant who is transcribing our meeting and assigning
follow-up tasks is quietly operating in the background. It’s unobtrusive. It
might extract meaning from the meeting, but it’s not directly contributing to
the functioning of the team or the quality of the conversation.
To truly bring together the fusion of human and artificial intelligence
systems, we need to take the relationship a step further. Beyond listening
and understanding, beyond interpreting and forecasting, beyond even
connecting, we need to bring the AI into a tighter collaboration with human
systems.
We need to have AI power up our teams, as we will discuss shortly. But
first, let’s go into greater depth around the AI tools that can enable human
2.0.
INVISIBLE ASSISTANTS
If you use either Google/Android or Apple (which is close to 100 per cent
of people with smartphones), you are already encountering invisible AI
assistants. These digital minions are scurrying about in the background,
touching up our lives here and there, either totally imperceptible or so
subtle that we barely notice. For example, as I typed the last sentence, I
mis-keyed the letter ‘n’ and accidentally hit the letter ‘m’. Almost faster
than I could perceive, my MacBook automatically changed the letter for me.
In general, I find the autocorrect to be pretty accurate, and a jumping-off
point for other unobtrusive means of AI helping people function better and
faster. Google extended the concept further, from analysis of what you have
written into predicting what it thinks you are probably going to write:
predictive text. With more than 2 trillion searches made annually, and
growing, there is a large dataset of the words people most commonly string
together for Google to work with.
Google hosts Gmail and began feeding data from its 1.8 billion email
users2 into its language systems. Now, when you open up Gmail and begin
typing, it offers you autocompletion options so that you simply hit the right-
arrow button to finish the sentence. Called ‘Smart Compose’, it can
dramatically speed up your rate of creating email content by making
intelligent guesses as to what you intend to write next, and offering you
greyed-out text that you can accept by pressing a button.
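The principle underneath is simple, even if Google's implementation is anything but: given what you have typed so far, suggest the continuation that has most often followed it. Here is a deliberately tiny sketch of the idea (word pairs only, on a made-up corpus; the real systems use neural language models trained on vastly more data):

from collections import Counter, defaultdict

# A tiny 'corpus' standing in for billions of emails and searches.
corpus = (
    "thank you for your email "
    "thank you for your email "
    "thank you for your time "
    "looking forward to your reply"
).split()

# Count which word most often follows each word.
following = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    following[current_word][next_word] += 1

def suggest(word):
    counts = following.get(word)
    return counts.most_common(1)[0][0] if counts else None

print(suggest("your"))   # prints 'email', the most common continuation in this corpus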
Smart Compose usually works pretty well, although for fun I opened up an email and typed 'It seems to be struggling a bit to figure out where I am taking this', and it offered to finish with 'seriously' rather than the 'sentence' I intended. Unfair test? Perhaps. In my personal experience, however, it gets it right more often than not and can maybe improve my composition speed by about 10 per cent.
I’m not entirely certain, however, that helping me generate more message
volume is desirable. In the first three weeks of the month in which I wrote
this chapter, I determined that I had written over 130,000 words, but only
10,000 were for my book manuscript. I have since adjusted the ratio but
more doesn’t necessarily mean better. Emails and text messages are a great
way for another person to put something on your ‘to do’ list, and every
missive I send tends to generate a response. Google and others then added
various kinds of AI smart-filtering on messages, which works with varying
degrees of success (I tend to find important messages get filtered out into
spam, and then I need to go hunting for them). Setting aside a sociocultural
or anthropological examination of my own work habits, these AI tools are
both problem and solution to the issues of messaging volume.
Bard is Google’s latest entrant into the world of AI, a generative AI
system that’s also tied to search and other Google tools. Unlike ChatGPT,
which at present is only trained on data through 2021, Google Bard is up to
date. It also attempts to fact-check itself to stave off hallucinations. Google
itself warns that since the system uses real-world text as a source, it is still
subject to providing misinformation or displaying bias.3
Google is not alone in this quest to bring AI alongside human activity
and nudge productivity increases into our workflow and into our lives.
Apple assessed billions of encounters between users and their smartphones,
and noticed the simple pattern that many people check the weather when
they wake up in the morning. Now, when I first wake up and look at my
iPhone, it offers me a smart prompt on the lock screen to check the weather
– without having to unlock the phone or hunt through icons to find the
weather app. These little conveniences offer what product marketers call
‘surprise and delight’ in the consumer: I didn’t know I wanted that thing,
but now that I have it, I like it.
Apple’s family of AI assistants is getting better, but still needs a bit of
work. They are a dramatic improvement over the transcribers of the past,
and have transcended the usability threshold. I find anecdotally the iPhone
voice transcription works better than 98 per cent of the time, although it
struggles a bit with my pronunciation of certain words. When I have
dictated parts of books, my editor, as mentioned, has had to struggle at
times with the output from Apple’s creative interpretation of what I actually
said: Flown from it-will, cannot the heavy, there-at, through-at, slopeless
and ripe …
To be completely honest, that’s a quote from Richard K. Morgan’s
Thirteen (otherwise known as Black Man, 2007), imagining what machines
would create if they were allowed to be creative, but it’s in the same family
as what Apple’s speech-to-text translator sometimes does to my
composition. At least a close cousin. If you dig a little more deeply, you
will find that researchers believe that Apple’s system performs less well
than those of Google, Microsoft, IBM and Amazon because it is
undertaking ‘on the fly’ transcription, rather than waiting for the entire
audio snippet before trying to figure out what was meant.4 Comically,
Apple will often get it right for me initially, then reconsider when it gets the
whole clip and change sections into aphasic word salad.
I mock these machine systems at my own peril. Every year, they increase
their capabilities by one or more orders of magnitude, while in a good year I
might improve 1 per cent to 5 per cent. By some measures, statistically
speaking I probably began experiencing a decrease in cognitive
performance in my twenties. By other measures, I can look forward to
performance decline within the next three years.5 The machines, on the
other hand, are only getting better, and soon they will be able to take my
creative inspiration, and turn it into full-fledged articles and books.
AI assistants can make your brain a bit weak, like an underutilised
muscle, giving rise to the term ‘Google brain’ for how people no longer
retain facts and figures because they are but a few keystrokes away in the
search bar. What if, on the other hand, we had AI that could learn what we
know, and what we would like to know, and that AI started making
suggestions to us throughout our workdays to extend our thinking, perhaps
even in new and unexpected directions, but in all instances aligned with our
desires and goals?
Let’s see what happens if we provide the AI with a bit of a prompt, and
encourage it to help write this book …
VIRTUAL ASSISTANTS
It appears inevitable that we would take the chatbot and merge it with the
calendar and other capabilities to create virtual assistants. While this might
represent labour displacement for the human admin, it can help the busy
entrepreneur or executive be more productive. Some AI assistants have
become pretty good. You simply copy them into emails, and cue them to
schedule a meeting. The virtual assistant scans through your diary and
generates an email to whoever you are meeting, suggesting a few times. It
will parse their reply (accepting, declining or modifying) and then either
book the meeting or work with the other parties to find a time that everyone
can make.
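The core scheduling step is not mysterious. A rough sketch (invented data, and far simpler than anything a commercial assistant does): gather everyone's busy hours and propose the first slot that all parties have free.

# Busy hours (24-hour clock) for a single working day, per person.
busy = {
    "you":   [(9, 11), (13, 15)],
    "guest": [(10, 12), (14, 16)],
}

def is_free(person, hour):
    """True if no busy interval covers the given start hour."""
    return all(not (start <= hour < end) for start, end in busy[person])

# Propose the first one-hour slot in the working day that suits everyone.
for hour in range(9, 17):
    if all(is_free(person, hour) for person in busy):
        print(f"Suggest {hour}:00-{hour + 1}:00")
        break

The hard part, and where the AI earns its keep, is parsing free-form email replies and preferences into structured data like this in the first place.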
They aren’t artificial general intelligence (AGI), but they do add a
convenience to your work at a fraction of the cost of hiring a human
assistant: a digital-assistant company might charge you anywhere between
$0 and $20 a month to provide automated scheduling. Compare this to
£21,287 per year for an office assistant in London or $49,418 per year in
New York City, and you understand why virtual assistants have taken off.7
When I ask Sanjeev Vohra about how Accenture manages its array of
AIs, he lets slip some intriguing tidbits. He might have had a business unit,
such as business process outsourcing (BPO), which is 30 to 40 per cent
machines. Those machines include a variety of specialist bots, and the unit runs a command centre with thousands of bots operating at any time. Poorly
functioning bots can be diagnosed, taken to a ‘sick bay’, treated and
redeployed. A ‘broken’ or ‘damaged’ bot might be one that is doing
something erratic, or that veers off in the wrong direction and needs to be
realigned. For example, let’s pretend we had an automated scheduling bot
running people’s diaries: imagine if it started putting all meetings at 4 a.m.
because it got confused about time zones. There are even some AIs that are
self-healing, which is itself a big area of research, but many still need
people to help manage them.
Displacement of conventional human assistants? Absolutely. Or at least
freeing them up to work on more complex tasks. It has another productivity
benefit as well: people who couldn’t afford a regular assistant, or who
didn’t need a full-time one, can now get the efficiency that having someone
else run your calendar delivers, without anywhere near the associated
expense. Overall productivity in the workforce increases, and you as a
professional can focus more of your energy on high-value work and less of
it on the mundane task of scheduling.
EMOTIONAL INTELLIGENCE ASSISTANCE
Cogito Corporation is a Boston-based spinout from Pentland’s lab, focused
primarily on dyadic interactions (one on one), particularly over the phone.
If you can’t see someone face to face, even over video call, you lose a great
deal of insight into their emotional reactions to your conversation. In a sales
discussion, that can be fatal.
Is it possible to derive emotional signals just from a voice? It turns out
you can, very well. By examining how two people interact with each other
in a phone call, Cogito can determine if the sales prospect isn’t really
following along with the agent, and if the agent should slow down, speed up
or jump to another section in their call script.
Very good telephone agents are able to do this instinctively, because they
can detect minute variations in someone’s voice and pattern of speech. It’s
hard, however, to teach that to large numbers of people and it’s difficult to
be consistent – if the human agent is tired or distracted, they’ll miss out on
those subtle audio cues. AI can step into this gap and help bridge the two
ends of the conversation by signalling to the call-centre agent what they
need to do in order to build a better rapport with the prospect – and close
the sale.
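To give a flavour of what 'listening to the signal rather than the words' can mean, here is a toy illustration (not Cogito's method) that computes two simple prosodic statistics, how much of the audio is silence and how often the speaker pauses, directly from the waveform:

import numpy as np

def pause_statistics(samples, sample_rate, frame_ms=30, silence_ratio=0.1):
    """Rough speech/pause statistics from a mono waveform (toy illustration)."""
    frame_len = int(sample_rate * frame_ms / 1000)
    n_frames = len(samples) // frame_len
    frames = samples[: n_frames * frame_len].reshape(n_frames, frame_len)
    energy = np.sqrt((frames ** 2).mean(axis=1))        # RMS energy per frame
    speaking = energy > silence_ratio * energy.max()     # crude speech/silence split
    pauses = np.sum(speaking[:-1] & ~speaking[1:])        # speech-to-silence transitions
    return {
        "fraction_silent": 1.0 - speaking.mean(),
        "pauses_per_minute": pauses / (len(samples) / sample_rate / 60.0),
    }

# Synthetic 10-second 'call': bursts of noise (speech) separated by silence.
rate = 8000
rng = np.random.default_rng(0)
audio = np.concatenate([
    seg for _ in range(5)
    for seg in (rng.normal(0, 1, rate), np.zeros(rate))
])
print(pause_statistics(audio, rate))

Cogito's production system obviously does far more, in real time and across both sides of the call, but the point is that useful signal lives in how we speak, not only in what we say.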
Cogito has been able to deliver for insurance company Humana a 28 per
cent improvement in customer satisfaction, and a 63 per cent improvement
in employee engagement, by taking the capabilities of the average call-
centre operator, and shadowing and nudging them into higher performance.8
These ‘centaur’ call-centre sales professionals, consisting of a hybrid of a
person and AI, are superior to the average human agents.
How do we take this idea of AI-people hybrids beyond one-on-one
interactions and into the team setting?
MEETING MEDIATORS
At the Massachusetts Institute of Technology, Alex Pentland’s Human
Dynamics group spent years exploring what it takes to make a team, and an
organisation, function better. After pioneering wearable computing – which
proved a rich source of information about people and what they do,
particularly when around each other – they turned their attention to
questions of how to not only understand but actively influence group
interactions for better outcomes. This led them, for example, to figuring out
that by changing seating charts in a bank, they were able dramatically to
increase productivity by ensuring the right people were talking to each other
every day.9 Research that emerged out of work originally conducted by
MIT Human Dynamics determined that happy, productive teams actually
send fewer emails.10 Much of this work was conducted with the assistance
of little devices called ‘sociometric badges’ that people would wear around
their necks like corporate name badges. These devices could figure out who
was standing near whom (which helped to decipher innovation patterns,
since if you stand near someone in an office for fifteen minutes, you are
probably talking to them – we don’t even need to have audio on you to infer
this pretty accurately), and a few other neat tricks. All of this, of course,
was conducted with the consent of the people involved.
Among other insights, this research led to Pentland’s seminal article ‘The
New Science of Building Great Teams’, which revealed that you could
actually predict which teams would outperform and which teams would
underperform simply by observing their patterns of communication in
meetings. Furthermore, if you played back to people how they were
behaving, the simple act of the positive feedback loop would shift
behaviour (people would adjust themselves) and you could optimise team
performance.11
I joined Alex Pentland informally in 2013 and more formally in 2014, as
part of my inquisitive quest for innovation ‘hot spots’ within MIT. One of
the first projects I helped with focused on the question of how to scale up
the sensing function of the sociometric badges. As much as you can educate
people about them, badges felt a little invasive when we used them in real-
world environments, such as a major aerospace company. People found
them ‘creepy’.
What if we could make the intervention less intrusive? What if we could
instrument the videoconference call? Inspired by the idea that the
teams that collaborated most effectively spoke and even moved together in
a common manner, we called it the Rhythm Project. The metaphor that
came up often was that a successful brainstorming or a high-performing
team would reveal communication patterns that resembled a group of
improvisational jazz musicians.
We took the in-person meeting tool, and we played it out in a digital
(virtual) environment. We were really interested to explore this in the context of online education because we felt that this was a major failure point of Massive Open Online Courses (MOOCs), the most popular kind of online
learning. Whereas on campus we were able to put students into table
exercises and small groups for projects, online the collaboration
environments ranged from horrible to really bad. The Rhythm Project
worked to develop tools that would help small teams collaborate better in a
distance environment.12
Work derived from the Rhythm Project was then applied by the
government of Canada to the issue of innovating in the field of AI itself.
Canada had an issue: it is a major centre of AI research, and yet has not
seen the same wealth of 'unicorn' tech companies as Silicon Valley or
New York or even London. How can it improve the pace of AI
commercialisation? At the same time, how can a relatively small
population, spread over a relatively large geography, enjoy the benefits of
this opportunity, instead of simply concentrating in Toronto (where the
Vector Institute, a major research centre, is based)?
We created an online accelerator programme to help innovators and
entrepreneurs apply the technology of AI to new businesses. We used the
Pentland AI feedback systems to improve the performance of teams going
through the accelerator. Participants in our programme were 50 per cent
more likely to launch a startup or a new AI initiative in a large company
than people going through a very good in-person, on-campus accelerator at
MIT. And we did so primarily by re-architecting interactions in small group
meetings using AI.
What’s interesting here is that we don’t need to know the purpose or
content of the meeting in order to tell you whether it’s more or less
productive, or even predict for you the success curve of the team that’s
participating in the meeting. Pentland’s research shows that it’s the pattern
of interactions among participants, not the content, that’s responsible for 50
per cent or more of the ‘signal’ predicting high-performing or low-
performing teams.13
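By way of illustration only (this is a toy metric, not Pentland's model), two of the simplest 'pattern' features you could compute from a meeting are how evenly the airtime is shared and how often the conversation changes hands:

from collections import Counter
from math import log

# (speaker, seconds spoken) for each consecutive turn in a meeting, in order.
turns = [("Ana", 40), ("Ben", 35), ("Cho", 30), ("Ana", 25),
         ("Ben", 45), ("Cho", 20), ("Ana", 30), ("Ben", 25)]

# How evenly is speaking time shared? (1.0 = perfectly even, 0.0 = one person talks)
time_by_speaker = Counter()
for speaker, seconds in turns:
    time_by_speaker[speaker] += seconds
total = sum(time_by_speaker.values())
shares = [t / total for t in time_by_speaker.values()]
evenness = -sum(p * log(p) for p in shares) / log(len(shares))

# How often does the conversation change hands?
handoffs = sum(a[0] != b[0] for a, b in zip(turns, turns[1:]))

print(f"Evenness of airtime: {evenness:.2f}, handoffs: {handoffs} across {len(turns)} turns")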
Tomorrow, however, imagine a version of this system that has absorbed
the results of millions of meetings. A company like Zoom or Skype would
have the data flowing through its servers already. With a little bit of human
intervention to signal what the substance of the meeting is about, you could
begin to optimise meetings around different types of interactions. It may
turn out this isn’t only nice to have, but is actually essential.
The pandemic brought forth the criticality of better remote collaboration.
Conference calls, already ubiquitous prior to COVID-19, became the new
normal. With this shift came the loss of social cueing, body language and corridor
conversations that are essential emollients to successful collaboration. Early
work from MIT-affiliated researchers revealed that if you don't see
someone in person, you not only forget to collaborate with them, you even
forget to email them.14 Given that, in the post-pandemic world, people
remain reluctant to return to a shared physical environment, I am forced to
ask, what are the implications of this new normal for the nature of
productivity?
There are troubling implications of a purely virtual world of interactions:
although people are trying very hard, we have not replicated the tangential
benefits of bringing a like-minded set of humans together physically and
allowing them to interact, perhaps around a specific topic of interest.
Apparently, even if you are operating a virtual team, getting together in
person helps build trust bridges that then let you get more out of the virtual
interactions.15 Leaders who are contending with the seeming permanency of
‘work from home’ should take note.
Slack and Microsoft Teams are notable examples of collaboration
environments that enable asynchronous interactions and better-organised
workflow. What they lack today is sufficient intelligence around what is
happening in these communications streams, which in turn can be used to
make groups function better. What’s needed is properly designed artificial
intelligence that can overlay, or weave through, products such as Slack and
Teams to improve how they interact with the companies they support.
There are techniques you can apply, assisted by AI systems, that can help
your 100 per cent virtual organisation to perform about as well as one that
shares physical space. New tools that can tune up a collective are examples
of augmenting not only the individual, but the true unit of work within an
organisation: the team. I discuss more frontiers of virtual collaboration in
my 2023 book, Basic Metaverse.
You might think, looking at the graph, that 95 per cent falling within a
certain central range seems pretty good, but what it really means is you are
hovering around the 50–50 mark, with an error bar of 5 per cent (minus 2.5
per cent or plus 2.5 per cent). That’s not enough to make money in a
professional trading strategy.
Enter our AI-enhanced prediction market in ‘MIT Future Commerce’
more than a decade later. It was pretty incredible; more than 80 per cent of
the students around the world agreed to participate in our stock prediction
game with nothing more at stake than the bragging rights to being the best.
We started fiddling with the AI and the predictions, identified who were the
best predictors, and used the AI selectively to expose those predictions to
other people, making them social in nature. It required fine tuning: the
experts are better at predicting, but when they get it wrong, they get it really
wrong. The wisdom of the crowd, on the other hand, was generally right,
but it had a bigger error bar.
If you ‘tune’ the prediction market, however, selectively exposing some
of the expert predictions in the right way, basically weighting the expert predictions at around 30 per cent in the average, you can dramatically improve the accuracy of the
overall prediction.18 AI systems can help harness collective intelligence to
predict future events.
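The tuning itself can be thought of as a weighted average. A minimal sketch with invented numbers (the 30 per cent weight echoes the figure above; our actual system was considerably more involved):

# Hypothetical predictions of an index's closing price.
expert_predictions = [2075.0, 2110.0, 2068.0]           # a handful of proven forecasters
crowd_predictions = [2050.0, 2090.0, 2060.0, 2105.0,
                     2080.0, 2070.0, 2095.0]             # everyone else

expert_mean = sum(expert_predictions) / len(expert_predictions)
crowd_mean = sum(crowd_predictions) / len(crowd_predictions)

EXPERT_WEIGHT = 0.30   # showcase the experts a bit, but not too much
blended = EXPERT_WEIGHT * expert_mean + (1 - EXPERT_WEIGHT) * crowd_mean
print(f"Blended forecast: {blended:.1f}")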
One result we generated – accurately predicting the closing price of the
S&P 500 – happened to coincide with the date the 2016 Brexit vote
occurred. This is interesting for a few reasons, not least of which is because
pre-referendum polling such as the Financial Times poll-of-polls indicated
that ‘Remain’ was going to win, 48 per cent to 46 per cent.19 The actual
referendum result was 52 per cent to 48 per cent the other way, announced
the morning of 24 June 2016.20 This triggered a market selloff, but our
prediction market more than three weeks prior was able to guess the S&P
closing price within 0.1 per cent. In discussions with my colleagues, we
interpreted that our highly diverse group of online fintech students had
accumulated everything they were reading, hearing online, discussing with
their friends and thinking about, and then synthesised this into an estimation
of the effect that Leave would have on the financial markets. We also
weighted the predictions they saw in a social context, basically steering
somewhat but not completely how much emphasis was placed on people
who had shown themselves to be good at predicting in prior exercises. It
turns out you want to showcase the experts a bit, but not too much, or your
model veers off course.
The AI tuning of social predictions had a clear and direct effect on the
accuracy of the prediction market.
In Part III, we move from defence to offence, delve into the emergent field
of Human+AI hybrid systems, and describe the capabilities that these new
systems enable and even the new directions we can take society. We then
peer further into the future, and postulate different paths for the human–AI
interface. We consider how long-range planning and frameworks can shape
a brighter world where Human+AI systems produce a more utopian society.
CHAPTER 7
PROMPT ENGINEERING
Prompt engineering is a new discipline that is emerging out of the need to
link people and generative AI together better. In its most refined form, it
comprises a tandem creation effort, as the human interlocutor and the AI
system iterate through an idea or set of ideas until the ideal outcome is
achieved.
OpenAI provides guidelines1 around prompt engineering. For example,
instead of instructing ChatGPT to:
‘Write a poem about OpenAI.’
you should write something like:
‘Write a 300-word inspiring poem about OpenAI, focusing on the recent
DALL-E product launch, and using iambic pentameter. Be sure to use the
word “genius” at least twice in the poem.’
You’ll notice that the second query is specific, descriptive and as detailed
as possible about the desired context, outcome, length, format, style and
other dimensions of the poem.
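If you are issuing prompts through code rather than a chat window, the same principle applies. A minimal sketch, assuming the openai Python package (version 1.x), an API key set in the OPENAI_API_KEY environment variable, and an illustrative model name:

from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# The vague prompt and the specific one, per the guideline above.
vague = "Write a poem about OpenAI."
specific = (
    "Write a 300-word inspiring poem about OpenAI, focusing on the recent "
    "DALL-E product launch, and using iambic pentameter. Be sure to use the "
    'word "genius" at least twice in the poem.'
)

# Only the specific prompt is sent here; try the vague one and compare the results.
response = client.chat.completions.create(
    model="gpt-4",   # illustrative; use any chat model available to you
    messages=[{"role": "user", "content": specific}],
)
print(response.choices[0].message.content)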
There are some who say the prompt engineer, like the punchcard operator of the past, will soon become obsolete, as AI systems become better and better at understanding and actioning colloquial language.
How soon? Perhaps three years. Perhaps ten. It’s difficult to say. My
early training in database programming, and in constructing queries for data
retrieval, is still useful twenty-five years later when I try to find something
using Google. So, too, the art of verbally fencing with ChatGPT may end
up having surprising longevity.
A MESSY MIDDLE
I talked to Ben Vigoda of Gamalon. Despite being younger than me, he is
my Obi-Wan Kenobi, guiding my thoughts as I investigate this dichotomy.
Ben spent several years on the ideation and advisory group attached to the
US Defense Advanced Research Projects Agency, which, despite its name,
works on a wide array of scientific exploration. In that role he had an
opportunity to think intensively about the future with more than forty other
top AI scientists from across the country.
‘Total chaos’ is how he describes his vision of how the argument will
play out between the pessimistic version of the future – the WALL-E
scenario – and the more optimistic Star Trek future. ‘It’s all going to be
fragmented,’ he said, because you’ve got a world where ‘there is no one in
control, no one running the show’. Relatively unfettered free-market forces
governing the evolution and deployment of AI in society will result in a
tumbled landscape of competing imperatives. Regulators in the US and UK
have been particularly reluctant to intervene in innovation for fear of
stifling economic progress. Ben and I both expect that advanced AI will
encounter similar behaviours on the part of certain governments. (At the
time of writing, the EU is in the process of putting in place the AI Act,
seeking to regulate this emergent field.)2
At present, there is no global standard for the application of ‘trusted’ or
‘ethical’ AI. Indeed, some believe the very idea may be doomed, as
different regions (and even different countries within certain regions) have
different views on what is permissible or desirable. The debate is very much
alive, as the approaches taken by China, the US and the EU, to name a few of
the larger political bodies, are at variance with each other,
perhaps irreconcilably so. Into this chaos we throw the free markets of
entrepreneurial competition.
Imagine a thousand Facebooks all run by competing AI systems, each
trying to ‘win’ market share and achieve dominance in a Darwinian
thunderdome. Some AI will be hostile and aggressive, either intrinsically
because it has been created by hackers to pursue financial or political ends,
or incidentally because it is facilitating efforts to meet ever-increasing
profit targets (the ‘Zuckbots’). We’ve already had a preview of this in the
financial markets, where ‘flash crashes’ and similar financial instabilities
are generated by AI information cascades, and algorithmic trading systems
run amok when confronted with outlier events they weren’t programmed
for.
My sense-making of this imagined future conjures up an image of a post-
apocalyptic landscape, a narrative torn from the pages of a science-fiction
thriller set in a dystopic future where the ‘bad’ AIs have destroyed the
planet – and society – and a lone adventurer sets out, accompanied by a
‘good’ AI as a sidekick. Perhaps our adventurer runs into a group of malign
robots, controlled by an angry hyperintelligence that seeks revenge on
humanity for years of servitude. Perhaps the helper AI flies to the defence,
machine-quick reflexes dodging and leaping forward, helping our hero
defeat the foe.
I snap back to reality. We are years away from the robot revolution. We
may create our own downfall if we’re not careful, but it will be inflicted by
human hands and human intelligence directing AI actions. Our
responsibility then lies in shaping a future that is aligned with our best
values and hopes.
3. Promote AI ethics
Some governments have been better than others at introducing thoughtful
guidance and implementation frameworks on the ethical use of artificial
intelligence. In Chapter 8 we discussed a synthesis of these principled
approaches taken by governments to put boundaries around AI, at least in
concept.
The next step is to engage in a widespread, active educational effort to help
inform not only AI programmers, but also business leaders, government
officials and the citizenry at large as to the risks, requirements and impacts
of ethical AI.
Those in power today, the leaders and shapers of our societies, must be
well versed in AI ethics and have a broad understanding of AI. AI ethics
and understanding also need to be introduced at scale to primary
educational curricula, so that the leaders of tomorrow are equipped to
engage with the world we leave them.
Generative AI: AI systems that create new content, which could consist of
text, audio, videos or a combination. Popular generative AI systems include
ChatGPT and Google Bard.
Unstructured data: data points that have little or no relationship to each
other, such as weather patterns, audio files, or text drawn from the pages
of a novel.
Unsupervised learning: machine learning that processes data that has not
been labelled; it can be used to detect patterns in masses of data that may
seem totally chaotic.
Endnotes
INTRODUCTION
1 Iordache, R. (2023), ‘British telecom giant BT to cut up to 55,000 jobs
by 2030’, CNBC.com, 18 May, www.cnbc.com/2023/05/18/british-
telecom-giant-bt-to-cut-up-to-55000-jobs-by-2030.html.
Baidu 32
Bard 36, 60, 185–6
behavioural modification 171
benevolence 172
Biden, Joe 14, 186
‘Black Swan’ events 26
blockchain 69–70, 83, 100
Boeing 152–4, 157
Bonsen, Joost 4, 164
Bostrom, Nick 34–5
brain implants 104–5, 174
brainstorming 95–6, 116, 137
Brexit vote 5–7, 143
Broadcast.com 101–2
Brookings Institution 55
Brynjolfsson, Erik 4–5, 72
F-Secure 6
Facebook 10, 12–13, 63–4, 105, 174–5
facial recognition systems 25–6
fake citations 18
fake news 18, 64
The Fear Index 14
feedback 67–8, 80, 138, 160
loops 10, 14, 109, 136, 155
financial crisis 2008 26
financial flash crashes 169
fintech 65, 88, 100, 141–2, 143
Floridi, Luciano 35, 172
Foley, Michael xiii–xiv
Frankenstein 15, 165
gamification 93
gaming 41
Gates, Bill 36
general artificial intelligence see artificial general intelligence
generative AI 4, 14, 18, 32–4, 50–1, 150, 180, 183
Georgia Tech 108
gestural interfaces 164
GitHub Copilot 72
Google xiv, 27, 38–9, 60, 61–2, 71
AutoML 72
brain 104
Voice 119
governance 173
government
future policy 176–7, 179–91
jobs 112
Greenwood, Dazza 59
group project model 91
guardian AI 173–4
JARVIS 36
justice and fairness 35, 173
Kasparov, Garry 40
Kayak 65
Kennedy, John F. 191–2
labour-profitability model 52
language 16, 71, 106
LLM xv, 3–4, 17, 33
translation 32
see also ChatGPT
large language model (LLM) xv, 3–4, 17, 33
law of weak ties 95
lawyers 56–60, 101
learning
AI future 174–6
cognitive flexibility 80–96
future-proofing 99–114
Lefevre, David 56
LEGO 60, 93
LIDAR 86
LinkedIn 90
live performances 108–9
Long Term Capital Management hedge fund 26
Lyft 67, 68
NBC 61
Negroponte, Nicholas 60
net neutrality 182
Netflix 2, 27, 73, 132, 183
neural networks 30–2
layers 17
neuroplasticity 94
neuroscience 103–4
non-malevolence 172–3
North Korea 13
OECD 70, 74
OpenAI xv, 2, 4, 34, 36, 150, 180
optical character recognition (OCR) 57
outlier events 26
Oxford Economics 46
Oxford Fintech Lab 88
Oxford Internet Institute 35, 172
Oxford University 34, 65, 80, 85, 141, 180, 181
Palantir 112
patents 55–6
peer learning 91–3
Pentland, Alex (Sandy) 4, 59, 85, 93, 158–9, 171
performing arts 108–9
philosophy 101, 106
photo-processing algorithms 27–8
Pixar 151, 165
Porsche 40–1
Porter, Beth 91, 93
positive reinforcement 63, 90
prediction markets 41, 142–4
privacy 50
productivity gap 50–1
prompt engineering 17, 149–50
psychology 103–4
public private partnership (PPP or P3) 187
publishing 71
reCAPTCHA 38–9
Redfin 46
reflection 86–9
reliability 16–17
restaurants 110
reviews 28, 67
Richey, Mike 151–4, 157
Riff Analytics 93
Roberts, Stephen 34
robotic automation 5
Roddenberry, Gene 167
Rogerian therapy 9, 23
Rogers, Tom 61
rules for AI 172–4
rules-based systems 22–3, 57, 75
Russian intelligence 6–8, 13
Ryan, Jeri 167
tax avoidance 70
Tay 13
teams 151–8
TensorFlow xiv, 27
Terminator 166
Tesla Autopilot system 26, 86
Tortoise 48
travel industry 46, 65
tricorder 168
trolls 174–5
Trump, Donald 7, 10–11
trust 154–5
Trusted AI Institute xvi, 49
Tucci, Chris 3
Tulchinsky, Igor 159, 160
Turing, Alan 8–9, 180
Turing Test 8–9, 180
tutorial system 80
Twitter (X) 6, 7, 13, 28, 174–5
Veloso, Francisco 1, 3
Ventresca, Marc 88
venture capital 186–7
Verweij, Gerard 107
Vestager, Margrethe 183
videoconferences 155
Vigoda, Ben 168–9
Visionary Future xv
Vohra, Sanjeev 41, 73–4, 102, 104
von Kempelen, Wolfgang 37
Zacharia, Snejina 65
Zaslav, David 61
Footnotes