Fatal Abstraction: ‘managerial software’ is the problem! A new book by Darryl Campbell

https://wwnorton.com/books/fatal-abstraction

A tech insider explains how capitalism and software development make for such a dangerous mix.

{This item is from the website of WWNorton, the publisher}

Software was supposed to radically improve society. Outdated mechanical systems would be easily replaced; programs like PowerPoint would make information flow more freely; social media platforms like Facebook would bring people together; and generative AI would solve the world’s greatest ills. Yet in practice, few of the systems we looked to with such high hopes have lived up to their fundamental mandate. In fact, in too many cases they’ve made things worse, exposing us to immense risk at the societal and the individual levels. How did we get to this point?

In Fatal Abstraction, Darryl Campbell shows that the problem is “managerial software”: programs created and overseen not by engineers but by professional managers with only the most superficial knowledge of technology itself. The managerial ethos dominates the modern tech industry, from its globe-spanning giants all the way down to its trendy startups. It demands that corporate leaders be specialists in business rather than experts in their company’s field; that they manage their companies exclusively through the abstractions of finance; and that profit margins take priority over developing a quality product that is safe for the consumer and beneficial for society. These corporations rush the development process and package cheap, unproven, potentially dangerous software inside sleek and shiny new devices. As Campbell demonstrates, the problem with software is distinct from that of other consumer products, because of how quickly it can scale to the dimensions of the world itself, and because its inner workings resist the efforts of many professional managers to understand it with their limited technical background.

A former tech worker himself, Campbell shows how managerial software fails and, when it does, what sorts of disastrous consequences ensue, from the Boeing 737 MAX crashes to a deadly self-driving car to PowerPoint propaganda, and beyond. Yet just because the tech industry is currently breaking its core promise does not mean the industry cannot change, or that the risks posed by managerial software should necessarily persist into the future. Campbell argues that the solution is tech workers with actual expertise establishing industry-wide principles of ethics and safety that corporations would be forced to follow. Fatal Abstraction is a stirring rebuke of the tech industry’s current managerial excesses, and also a hopeful glimpse of what a world shaped by good software can offer.

You can read a sample on Amazon (click the image below), but try not to buy it from them!

The rich are becoming a separate species: specifically, apex predators

Apex predators preying on YOU.

Many species have subspecies, often determined by geography or climate. Gulls are one example, cichlids another, not forgetting Darwin’s finches: isolated from their parent populations on remote volcanic islands, they speciated.

I contend that the rich are similarly speciating. They meet some if not all of the criteria: they occupy different environments from the rest of us, they consume different nutrition, and they breed only within their own kind (that last point isn’t completely true, but nor is it completely true of lions and tigers, or of gulls).

And they decide via AI what is important and what is perceived.

https://www.wired.co.uk/article/abeba-birhane-ai-datasets

“AI Is Steeped in Big Tech’s ‘Digital Colonialism’
Artificial intelligence continues to be fed racist and sexist training materials and then distributed around the world.”

Apex predators preying on YOU.

“It has been said that algorithms are ‘opinions embedded in code.’ Few people understand the implications of that better than Abeba Birhane. Born and raised in Bahir Dar, Ethiopia, Birhane moved to Ireland to study: first psychology, then philosophy, then a PhD in cognitive science at University College Dublin.

“During her doctorate, she found herself surrounded by software developers and data science students—immersed in the models they were building and the data sets they were using. But she started to realize that no one was really asking questions about what was actually in those data sets.

“Artificial intelligence has infiltrated almost every aspect of our lives: It can determine whether you get hired, diagnose you with cancer, or make decisions about whether to release prisoners on parole. AI systems are often trained on gargantuan data sets, usually scraped from the web for cost-effectiveness and ease. But this means AI can inherit all the biases of the humans who design them, and any present in the data that feeds them. The end result mirrors society, with all the ugliness baked in.

“Failing to recognize this risks causing real-world harm. AI has already been accused of underestimating the health needs of Black patients and of making it less likely that people of color will be approved for a mortgage.

“Birhane redirected her research toward investigating the data sets that are increasingly shaping our world. She wants to expose their biases and hold the giant corporations that design and profit from them to account. Her work has garnered global recognition. In October 2022, she even got the opportunity to talk about the harms of Big Tech at a meeting with the Dalai Lama.

“Often, Birhane only has to scratch the surface of a data set before the problems jump out. In 2020, Birhane and colleague Vinay Prabhu audited two popular data sets. The first is “80 Million Tiny Images,” an MIT set that’s been cited in hundreds of academic papers and used for more than a decade to teach machine learning systems how to recognize people and objects. It was full of offensive labels—including racist slurs for images of Black people. In the other data set, ImageNet, they found pornographic content, including upskirt images of women, which ostensibly did not require the individuals’ explicit consent because they were scraped from the internet. Two days after the pair published their study, the MIT team apologized and took down the Tiny Images dataset.

“These problems come from the top. Machine learning research is overwhelmingly male and white, a demographic world away from the diverse communities it purports to help. And Big Tech firms don’t just offer online diversions—they hold enormous amounts of power to shape events in the real world.”

Apex predators preying on YOU.

And coincidentally up pops this perfect paradigmatic example…

https://jessicawildfire.substack.com/p/were-running-out-of-everything

If you’re rich and bored, don’t worry about drought; bully your neighbours instead!

Glimmers of AGI Are Just an Illusion, Scientists Say

https://futurism.com/glimmers-agi-illusion

The only hopeful result of the dash for brains will be if AGI acts as a mirror of human-ness and reminds us what is human and what is machine; that has always been the question, at least since the invention of the thermionic diode in 1904.