Why we have co-evolved with technology

By David Robson, Features correspondent
Stone tools are rarely considered technology, but they predate our species (Credit: Getty Images)

Our relationship with technology has a far deeper history than it seems, argues the philosopher and author Tom Chatfield in his book Wise Animals.

The computer scientist Alan Kay once observed: "Technology is anything that was invented after you were born; everything else is just stuff."

There's some truth to this: when asked to name a world-changing technology, many people might reach for something recent like AI, smartphones or the internet. Older technologies, such as stone tools, come less readily to mind.

However, if we want to understand today's technology better, we need to consider our long-running relationship with it, says philosopher and author Tom Chatfield. In his new book, Wise Animals: How Technology Made Us Who We Are, Chatfield argues we have co-evolved with technology over the last few million years. The entire human story, he says, is entwined with the innovations we create.

Chatfield (who also contributes to BBC.com) spoke to science writer David Robson about our ancestors' relationship with tech, how it has become an extension of our minds, and the insights that we can draw from taking a longer perspective on today's developments.

Computers are still considered technology, but now that they also exist inside cars, domestic appliances and more, their presence is often taken for granted (Credit: Getty Images)

What inspired you to write Wise Animals?

As someone who loves technology, I've been very frustrated by the way that it’s so often talked about – as though people are just consumers who are interested in buying technologies with a bunch of features that are going to fix their problems. Technology is implicated in everything we do; it's bound up with our politics, our personal relationships, the education of our children and the future of our planet. So it feels very important to me that we have a rich, multi-faceted way of talking and thinking about technology that gets the full spectrum of human activities into these discussions.

As part of your research, you looked deep into prehistory and human evolution. What should we know about our ancestors' relationship with technology?

The crucial point for me is that we can't put our finger on a single moment in which technology emerged, but we can say that technological culture predates the evolution of anything like Homo sapiens. When our ancient ancestors started crafting fine stone tools, and later began building survival strategies around fire and the harnessing of artificial heat, they reached a point where technological expertise and bodies of knowledge were non-optional – where survival was predicated upon intergenerational knowledge, an adaptation through nurture rather than through nature.

You suggest that technology is now so entwined with who we are that it's become an extension of our minds. What ethical issues does this bring?

I borrow this concept from the philosophers Andy Clark and David Chalmers. They made the point that, in many ways, human minds are literally extended into aspects of the environment surrounding them. So when I write things down, or even when I use my fingers to count, I'm outsourcing an aspect of cognition. The same occurs when I use a computer to manipulate shapes on a screen to see whether they will or won't fit together or fit into a certain space. I am almost becoming a coupled system with that device in order to perform a cognitive act. Their paper predated the rise of the smartphone, and the thesis was so radical at the time that they had trouble finding a publisher, but it has now come to seem almost like common sense. If your phone battery runs out, you feel like a little bit of your mind is missing.

We need a knowing and informed negotiation with the technologies around us – Tom Chatfield

A large part of your mental or cognitive capacities is bound up with the human-made world, and the ethics of these things and the values embedded in them are very important. For example, I could theoretically outsource a lot of child-raising to automated systems. I could set up surveillance cameras with climate control and AI. My children could be monitored in their rooms with facial recognition, and when they're sad, it could produce voices that cheer them up, or it could tell them a story when they're ready for bed. Fairly obviously, that is a horrific scenario, because, by outsourcing these things, I completely withdraw from a mutually caring, loving relationship with my children.

The model for me is that we need a knowing and informed negotiation with the technologies around us – a conversation about what we want from the human-made world, why we want it, and which of its offerings might not be aligned with human thriving. These are incredibly important conversations. We mustn't be afraid to use the language of values and sentiment and morality when it comes to technology.

We readily outsource wayfinding to an app, but how much else of our cognition should we lend to technology? (Credit: Getty Images)

As a prosaic personal example, I'm happy to offload navigational skills to my phone, but I hate it when my phone starts auto-suggesting answers to people's messages. I don't really want to offload my social cognition to a computer – I'd rather engage in real communication from my mind to another person's.

Precisely. The question is, which tasks are so dangerous, dull, demeaning or repetitive that we're delighted to outsource them, and which do we feel it's important to do ourselves or have done by other humans? If I were going to be judged in a trial, I wouldn't necessarily want an algorithm to pass a verdict on me, even if the algorithm were demonstrably very fair, because there's something about the human solidarity of people in society standing in judgement of other people. At work, I might prefer to have a relationship with human colleagues – to talk to and explain myself to other people – rather than just getting the work done more efficiently.

Technology may have evolved with us, but it's not alive. Yet many of the latest technologies, especially artificial intelligences, can appear to act as though they have minds, tricking us into perceiving some kind of sentience. You describe this as the "anthropomorphic delusion". What is it? And why is it dangerous?

There's a double danger to anthropomorphism. The first is that we treat machines like people, and project personalities, intentions and thoughts onto artificial intelligences. Although these systems are extraordinarily sophisticated, they don't possess anything like a human sense of the world, and it's very dangerous to act as though they do. For a start, they don't have a consistent worldview; they are miraculously brilliant forms of autocomplete, working on pattern recognition and prediction. This is very powerful, but they tend to hallucinate and make up details that don't exist, and they will often contain various forms of bias or exclusion based upon a particular training set. But an AI can respond fast and plausibly to anything, and as human beings, we are very predisposed to equate speed and plausibility with truth. And that's a very dangerous thing.


Similarly, we might overlook the very large corporations that lie behind these entities, who have their own agendas, their own modes of profit, their own issues around privacy, and so on. So anthropomorphism gets in the way of something really important, which is the well-informed, critically engaged process of debating what these systems are, what they can do for us, what their risks are, and how we should deploy and regulate them.

The other danger of anthropomorphising technology is that it can lead us to think of and treat ourselves like we're machines. But we are nothing like large language models: we are emotional creatures with minds and bodies who are deeply influenced by our physical environment, by our bodily health and well-being. Perhaps most importantly, we shouldn't see [a machine's] efficiency as a model for human thriving. We don't want to optimise ourselves as perfectible components within some vast consequentialist system. The idea that humans can have dignity and autonomy and potential is very ill-served by the desire to optimise, maximise and perfect ourselves.

Tom Chatfield's book, Wise Animals: How Technology Made Us Who We Are, is published by Picador.

*David Robson is an award-winning science writer. His next book is The Laws of Connection: 13 Social Strategies That Will Transform Your Life, to be published by Canongate (UK) and Pegasus Books (USA & Canada) in June 2024. He is @d_a_robson on X and @davidarobson on Instagram and Threads.
